Parameterized tests in JUnit can be very useful when writing tests based on tabular data. These types of tests can save you from writing a lot of duplicate or boilerplate code. While there is a fair number of articles on the subject on the Internet, I wasn’t able to find a code sample that you can simply copy into your project and execute. So, here it goes.
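Here is a self-contained sketch of the idea. To keep the snippet runnable stand-alone, JUnit itself is left off the classpath: a static method supplies rows of tabular data (exactly what a `@Parameterized.Parameters` method returns), and the test body runs once per row. The `add()` method under test is made up for illustration.

```java
import java.util.Arrays;
import java.util.List;

public class ParameterizedSketch {
    // Hypothetical method under test.
    static int add(int a, int b) { return a + b; }

    // Mirrors a @Parameterized.Parameters method: each Object[] is one
    // test case of the form {operandA, operandB, expectedSum}.
    static List<Object[]> data() {
        return Arrays.asList(new Object[][] {
            { 1, 1, 2 },
            { 2, 3, 5 },
            { -1, 1, 0 },
        });
    }

    public static void main(String[] args) {
        for (Object[] row : data()) {
            int a = (Integer) row[0], b = (Integer) row[1], expected = (Integer) row[2];
            int actual = add(a, b);
            // Mirrors the readable title produced by @Parameters(name = "{0} + {1} = {2}")
            String title = a + " + " + b + " = " + expected;
            if (actual != expected) {
                throw new AssertionError("Failed: " + title + " (got " + actual + ")");
            }
            System.out.println("PASS: " + title);
        }
    }
}
```

In the real JUnit 4 test you would annotate the class with `@RunWith(Parameterized.class)`, mark the `data()` method with `@Parameterized.Parameters(name = "{0} + {1} = {2}")`, and receive one data row per test instance through the constructor; the runner then does the iteration for you.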
A Few Notes
The name attribute of the @Parameterized.Parameters annotation is available since JUnit 4.11. It is used to generate more readable test titles that your IDE will display when executing tests, which makes it easier to see which test case failed without looking into the detailed test trace.
It seems that this version has still not propagated to the Maven repositories, so I added the 4.11-beta-1 version of JUnit as a Maven dependency in order to get all of this working. If you are using an earlier version of JUnit, simply omit the name attribute. If you would still like to improve how the test title is reported, you can take a look at a few alternatives proposed in this Stackoverflow question:
Grails is a par-excellence platform for implementing applications in the Domain-Driven Design (DDD) style. At the center of the Grails approach are Domain Classes, which drive the whole development process. As you are probably guessing, the choice of the word domain in Grails is not just a coincidence.
You start by defining your Domain Classes, and then you can let Grails do all the heavy lifting of providing persistence and generating the GUI. It’s worth noting that the DDD book was written before Grails and similar frameworks were created, so many of the problems dealt with in the book concern issues since resolved or greatly reduced by the framework.
Some DDD concepts addressed by Grails
I will use the DDD pattern summaries to address the different DDD elements (quoted and italicized in the text below).
The domain model is structured through Domain Classes, Services, Repositories, and other DDD patterns. Let’s take a look at each of these in detail.
“When an object is distinguished by its identity, rather than its attributes, make this primary to its definition in the model”
These are Domain Classes in Grails. They come with persistence already resolved through GORM, and the model can be finely tuned using the GORM DSL. Take a look at the hasOne vs. belongsTo properties: they can be used to define the lifecycle of entities and their relationships. belongsTo will result in cascading deletes to related entities, while hasOne will not. So, if you have a Car object, you can say that a Motor “belongsTo” a Car; in that case, Car is the Aggregate Root and Motor a part of the aggregate.
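To make the lifecycle point concrete, here is a framework-free sketch in plain Java (Car, Motor, and CarStore are illustrative names, not Grails API): the Motor lives and dies with its owning Car, just as a GORM belongsTo association cascades deletes from the aggregate root.

```java
import java.util.HashMap;
import java.util.Map;

// Motor "belongs to" Car: it has no independent lifecycle and no store of its own.
class Motor {
    final String type;
    Motor(String type) { this.type = type; }
}

// Car is the aggregate root; external code reaches the Motor only through it.
class Car {
    final String id;
    private final Motor motor;
    Car(String id, Motor motor) { this.id = id; this.motor = motor; }
    Motor getMotor() { return motor; }
}

// A toy in-memory store keyed by the root's identity.
class CarStore {
    private final Map<String, Car> cars = new HashMap<>();
    void save(Car car) { cars.put(car.id, car); }
    Car find(String id) { return cars.get(id); }
    // Deleting the root implicitly deletes the whole aggregate: the Motor
    // becomes unreachable once its Car is gone (a "cascading delete").
    void delete(String id) { cars.remove(id); }
}
```

With GORM, declaring the association on the Motor domain class achieves the same cascade at the persistence level.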
“When you care only about the attributes of an element of the model, classify it as a VALUE OBJECT. Make it express the meaning of the attributes it conveys and give it related functionality. Treat the VALUE OBJECT as immutable. Don’t give it any identity…”
In Grails, you can use the embedded property in a GORM mapping to manage a value object. A value object can be accessed only through the entity it belongs to, does not have its own ID, and is mapped to the same table as that entity. Groovy also supports the @Immutable annotation, but I am not sure how it plays with Grails.
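In plain Java terms (Money is a made-up example class, not Grails API), a value object is immutable, has no identity of its own, and is compared by its attributes alone:

```java
import java.util.Objects;

// A value object: no ID, immutable, equality defined by attributes alone.
final class Money {
    private final long amount;   // in minor units, e.g. cents
    private final String currency;

    Money(long amount, String currency) {
        this.amount = amount;
        this.currency = currency;
    }

    // Related functionality returns a new instance instead of mutating state.
    Money plus(Money other) {
        if (!currency.equals(other.currency)) {
            throw new IllegalArgumentException("currency mismatch");
        }
        return new Money(amount + other.amount, currency);
    }

    @Override public boolean equals(Object o) {
        if (!(o instanceof Money)) return false;
        Money m = (Money) o;
        return amount == m.amount && currency.equals(m.currency);
    }

    @Override public int hashCode() { return Objects.hash(amount, currency); }
}
```

Two Money instances with the same amount and currency are interchangeable, which is exactly the “distinguished by attributes, not identity” property from the quote.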
“When a significant process or transformation in the domain is not a natural responsibility of an ENTITY or VALUE OBJECT, add an operation to the model as a standalone interface declared as a SERVICE. Make the SERVICE stateless.”
Just like Entities, Services are natively supported in Grails. You place your Grails Service inside the services directory of your Grails project. Services come with the following out of the box:
- Dependency Injection
- Transaction Support
- A simple mechanism for exposing services as web services, so that they can be accessed remotely.
“Choose MODULES that tell the story of the system and contain a cohesive set of concepts.” The Grails plug-in mechanism provides this and much more: a very simple way to install and create plugins, a definition of how an application can override plugins, etc.
“Cluster the ENTITIES and VALUE OBJECTS into AGGREGATES and define boundaries around each. Choose one ENTITY to be the root of each AGGREGATE, and control all access to the objects inside the boundary through the root. Allow external objects to hold references to the root only.”
I already mentioned some lifecycle control mechanisms. You can use Grails Services and the language’s access-control mechanisms to enforce access control. You can have a Grails Service play the role of a DDD Repository that permits access through the Aggregate Root only. While Controllers in Grails can access GORM operations on Entities directly, I’d argue that for a better layered design, Controllers should be injected with services that delegate to the GORM Active Record operations.
“Shift the responsibility for creating instances of complex objects and AGGREGATES to a separate object, which may itself have no responsibility in the domain model but is still part of the domain design.”
Groovy builders are an excellent alternative for constructing complex objects through a rich DSL. In DDD, Factory is a looser term that does not translate directly to the GoF Abstract Factory or Factory Method patterns; Groovy builders are a DSL implementation of the GoF Builder pattern.
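Groovy’s builder DSLs are hard to reproduce in plain Java, but the underlying GoF Builder idea can be sketched as follows (Order and OrderBuilder are illustrative names): assembly of a complex object is moved out of the object itself into a dedicated builder with a fluent interface.

```java
import java.util.ArrayList;
import java.util.List;

// The complex object: immutable once built.
class Order {
    final String customer;
    final List<String> items;
    Order(String customer, List<String> items) {
        this.customer = customer;
        this.items = items;
    }
}

// The GoF Builder: accumulates parts, validates, then produces the Order.
class OrderBuilder {
    private String customer;
    private final List<String> items = new ArrayList<>();

    OrderBuilder customer(String name) { this.customer = name; return this; }
    OrderBuilder item(String sku) { this.items.add(sku); return this; }

    Order build() {
        if (customer == null) throw new IllegalStateException("customer required");
        return new Order(customer, List.copyOf(items));
    }
}
```

A Groovy builder takes the same idea one step further by letting nested closures describe the object graph declaratively.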
“For each type of object that needs global access, create an object that can provide the illusion of an in-memory collection of all objects of that type. Set up access through a well-known global interface. Provide methods to add and remove objects, which will encapsulate the actual insertion or removal of data in the data store. Provide methods that select objects based on some criteria and return fully instantiated objects or collections of objects whose attribute values meet the criteria, thereby encapsulating the actual storage and query technology. Provide repositories only for AGGREGATE roots that actually need direct access. Keep the client focused on the model, delegating all object storage and access to the REPOSITORIES.”
A Grails Service can be used to implement a dedicated Repository object that simply delegates its operations to GORM. Persistence is resolved with GORM magic: each Domain Class provides a set of dynamic methods that cover the typical CRUD operations, including ad hoc querying.
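In plain Java terms, such a Repository is little more than the illusion of an in-memory collection in front of the data store. The sketch below uses illustrative names (Account, AccountRepository); in a Grails Service each method body would delegate to GORM dynamic methods instead of the HashMap.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// An aggregate root with identity (illustrative).
class Account {
    final long id;
    final String owner;
    Account(long id, String owner) { this.id = id; this.owner = owner; }
}

// DDD Repository: add/remove encapsulate insertion and removal in the store,
// and criteria queries return fully instantiated objects.
class AccountRepository {
    private final Map<Long, Account> store = new HashMap<>();

    void add(Account a) { store.put(a.id, a); }
    void remove(long id) { store.remove(id); }
    Account byId(long id) { return store.get(id); }

    // A criteria-style query, analogous to a GORM dynamic finder.
    List<Account> byOwner(String owner) {
        List<Account> result = new ArrayList<>();
        for (Account a : store.values()) {
            if (a.owner.equals(owner)) result.add(a);
        }
        return result;
    }
}
```

The client stays focused on the model; whether the storage is a HashMap, GORM, or something else is an implementation detail hidden behind the Repository.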
“State post-conditions of operations and invariants of classes and AGGREGATES. If ASSERTIONS cannot be coded directly in your programming language, write automated unit tests for them.”
- Take a look at the Groovy @Invariant, @Requires, and @Ensures annotations; these can be used to declare DbC-style invariants and pre- and postconditions.
- When you create your domain classes with the Grails command line, test classes are created automatically, and these are another mechanism for expressing assertions about your domain.
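What those DbC annotations express can be approximated in plain Java with explicit checks (BankAccount is an illustrative class): a precondition guards the inputs, a postcondition checks the result, and an invariant must hold after every operation.

```java
// Plain-Java approximation of Design-by-Contract style assertions.
class BankAccount {
    private long balance; // invariant: balance >= 0

    void deposit(long amount) {
        // Precondition (what @Requires would declare).
        if (amount <= 0) throw new IllegalArgumentException("precondition: amount > 0");
        long old = balance;
        balance += amount;
        // Postcondition (what @Ensures would declare).
        assert balance == old + amount : "postcondition violated";
        // Class invariant (what @Invariant would declare).
        assert balance >= 0 : "invariant violated";
    }

    long getBalance() { return balance; }
}
```

The annotations simply move these checks out of the method bodies into declarative metadata, which keeps the contract visible in the class’s interface.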
Declarative Style of Design
“A supple design can make it possible for the client code to use a declarative style of design. To illustrate, the next section will bring together some of the patterns in this chapter to make the SPECIFICATION more supple and declarative.”
This is where Grails excels, thanks to the dynamic nature of the Groovy language and its builder support for creating custom DSLs.
This comes “out of the box” with Grails through the proposed “Convention over Configuration” application structure, in the form of a layered, MVC-based implementation.
*Originally published as an answer on Stackoverflow: http://bit.ly/mWtLFc
I was puzzled when our client asked us to put our web flow definition files into the database. We used these files to define basic navigation flows in our web application. To put it simply, our homemade framework was conceptually very similar to Spring Web Flow. Such a flow definition is a first-class programming artifact, and putting it into a database doesn’t make much sense. You do not need to maintain application logic the way you maintain application business entities, since modifying it amounts to deploying a new version of the application. What’s more, the client was asking for the file to be saved in the database in unstructured form, in a textual field.
Actually, storing the flow file in the database provokes more than one problem. For example, performing a rollback becomes very unreliable, since you need to keep the rest of the artifacts synchronized with the flow definition, whereas otherwise all you need to do is deploy the old WAR. If the database fails, the application will not be able to perform even the simplest navigation. Then there are security issues (you cannot sign the code anymore), probable performance issues, etc. Even after being confronted with these issues, the client was still adamant that he needed this feature.
What was behind this request? As it happens, our client was working in a large corporation with strict, rigid policies in the IT department. Those policies defined a number of steps that had to be performed before a new application could be put into production, and the process was terribly slow. Even the simplest change to an application could take more than a month to perform. Our client, working in a certain department, had to follow corporate policies he was unable to change, but needed to move much faster.
So how would placing the file in the database help? Well, since the application files would stay intact, he could simply update the database to change the application, “no questions asked”. Of course, this means outwitting policies that were put in place to supposedly provide quality and security to IT operations. In this case, the client is at the mercy of corporate policies, and pushing the application logic into the database is the only way for him to do his job.
Another client has a similar story. A number of cryptic flags in the database are used to “configure” core business rules. There is no versioning or rollback strategy in place, security is poor (binaries are not signed), the logic is difficult to understand, etc. As in the case above, the procedures for getting a new version of the code base into production are quite complicated. There is a separate DB Admin Department in charge of performing Q.C. on a proposed DB structure; in reality, Q.C. consists of verifying that certain cryptic prefixes are used when naming fields and tables. In this case, the team is much more empowered and could probably fix the deployment procedures. Unfortunately, the idea that the deployment pipeline could be automated and made fast enough to render the configurable logic mechanisms useless looks completely unrealistic to them.
Using configuration to store application logic has a number of downsides:
- Cryptic code that is difficult to interpret and maintain
- Poor security, since configuration code is generally not signed and access to modify it is less restricted
- Generally no versioning or roll back mechanisms in place
- Probable performance issues
Conclusion? Fix and automate your development pipeline. Make use of tools and techniques like Continuous Integration and Deployment and Automated Testing. Make the development and deployment process work for you, not against you.
Finally: “Make your configuration complex enough and you will end up implementing a new programming language – poorly!” a wise man once said. (Or maybe it was just me?).
- Analyst receives the requirement and enters it into the flow
- The requirements committee assesses the requirement and produces an estimate:
- Non-requirements are sent to help desk or refused
- Requirements are classified as projects or simple requirements
- Information is sent to all departments in order to obtain their estimates (effort and delivery date)
- The estimation total ($ and delivery date) is produced by summing all the individual estimates and is sent to the client
- After the client accepts the estimate, a Project Manager is assigned to the requirement.
- The Project Manager’s role is to track the individual tasks the requirement was broken into and to prod and beg departments to deliver their tasks first. As far as I could see, there is no transparent mechanism for managing task priorities. The initial estimation date seems to be the primary factor, with more priority given to those who are furthest behind the deadline. Project managers can then “escalate” their requests, occasionally reaching the CEO himself. In the end, it is employee seniority that decides the task priority.
- Different “factories” implement the tasks once they can get their hands on them, according to the prioritization I just explained.
- Since each factory is specialized and works on the single application layer, there is often some integration work to be done. This work is (reasonably enough, right?) performed by the Integration department.
- After all this, the requirement reaches the UAT department and, if it is not turned back (as often happens), it is ready for production.
- There are a lot of defects caught by QA (or “UAT”)
- Producing the initial estimate takes more than a month on average! This is a constant complaint from the customer.
- There is a high number of “frozen” projects: projects that are put on hold at some moment in time. It is not clear how many of these are ever resumed (I doubt that many are).
- There is a high number of projects that see poor (or even no) use once they reach production.
- While the client can opt out of a project in the very early phase, this is not so once the project is set in motion and man-hours have been spent on it.
No matter where you look, everyone talking about CMMI will sooner or later mention the “c” word. Even Google, when you enter “CMMI” as a search term, will try to help you by presenting “CMMI Certification” as a related search.
No wonder, then, the surprise of learning that no such thing as a CMMI Certification exists! The Software Engineering Institute (SEI) of Carnegie Mellon University issues a meager “appraisal”. Well, you might say: appraisal, certification, what’s the difference? In the end, it’s the same thing and no reason to make a fuss about it.
Let’s see what the SEI has to say about it:
“The SEI does not certify the results of any appraisal nor is there an official accreditation body for CMMI. True certification of appraisal results would involve the ongoing monitoring of organizations’ capabilities, a shelf life for appraisal results, and other administrative elements. When an organization is appraised against the CMMI model, their Lead Appraiser’s findings may indicate that the organization is operating at a particular “maturity level.” The SCAMPI appraisal method maturity ratings are 1 through 5.
The SEI does not have a defined requirement for periodic follow-up after appraisals, nor does it accept legal responsibility for the performance of appraised organizations. All of these characteristics are required for a program that would provide certification of appraisal results. However, CMMI Appraisal results do expire after a period of three years.”
Source: SEI website
While the difference is subtle, it is by no means innocuous, as the SEI website explains clearly. The SEI does not accept any legal responsibility for the performance of the companies it has appraised, nor does it monitor them directly.
As you can see, the choice of words was not random, but carefully made. No reason, then, to be surprised by reports (and here) of companies that get CMMI “Certified” using only one department or team in order to get the logo, of companies that soon go back to their old ways after certification, of companies whose only motivation is winning a contract or a tender, etc. Ever heard of the LCPBCs? If not, you can google it; I will not spoil the fun.
In the end, one cannot but wonder how some agile circles were unable to avoid the certification trap, when even some of the organizations that inspired the whole movement were smarter than that.
I just had my say in a discussion in the LIDNUG group on LinkedIn. The question put up for discussion was “Is it OK to cut corners to meet a deadline?”. Most of the answers (or at least the way I interpreted them) say that you generally shouldn’t, but that sometimes you just might have to compromise, especially if it can be justified from a business point of view. I think that from an agile point of view, the reply is quite obvious. Here is what I had to say:
It’s NOT OK to cut corners, but it’s OK to cut features.
I guess that sums up a great deal of what agile development is all about! You do very short iterations, but you do them properly (no cutting corners). Once you start a new iteration, your client can invent new features, eliminate old ones, reprioritize everything, etc. (This is what I actually mean by “cutting features”.) As it happens, most of the time the client will realize that some features are not needed, that others are, and will accept that he can live without the “nice to haves” as long as the core features are done right and without bugs. As a matter of fact, it is difficult (not to say impossible) to know all the features you will need right at the start of a project. So why should you cut corners to deliver a feature you are not even sure is needed? This doesn’t necessarily have to do with the 80-20 rule; it’s more about looking at the software as a “work in progress”. You can release the first version once you have implemented the minimum set of core features that does something useful.
Once you and the client shift the mentality from “features initially put into the contract” to “real business value delivered”, you will have no need to cut corners. Switching to short iterations and having users participate in planning and in accepting the results of each feature has had a profound effect on how my team operates and has enabled us to really uphold the quality aspect of our software.
Take a look at this post at lostechies.com.
Does this say anything about refactoring adoption among .NET developers? Maybe. It definitely says something about the state of the art of Microsoft’s refactoring tools (thanks, Jeff). Refactoring support in Visual Studio 2010 lags miles behind the refactoring support in free tools like Eclipse or NetBeans. JustCode features here.
After hearing UML touted as the “next big thing” in Visual Studio 2010, I must admit I was less than elated. Since I am hardly a “new kid on the block”, I freely admit that I remember the quirky diagramming tool called Visual Modeler that shipped with Visual Studio 6.0. (“Ten years after” already?) Had it been 1999, I guess UML might even have sounded, well… intriguing.
Fortunately, I recently came across this video:
TDD with Visual Studio
Believe it or not, the 2010 version of Visual Studio should finally provide a lot less friction for TDD developers. It can generate class and member stubs based on client code. More surprisingly, there is an integrated test runner that does not “fall apart” (just pick your song!) if you have tests written in some third-party unit testing framework. (In the video, Karen shows executing MbUnit tests with the VS test runner.)
TDD is hardly news these days, but it doesn’t feel nearly as passé as UML!