bad.robot

good robots do what they're told

JDK7 Previewed

Oracle put out the preview release of JDK7 last month. I guess they felt they had to. So, it’s not what was once heralded (will 8 see lambdas?) but still has one or two interesting language features. A few that caught my eye include…

Type Inference on Generic Object Creation

This allows a little brevity amid the garrulity of the language, at least for generic object instantiation where the type can be inferred. For example,

private Map<Size, List<Shoe>> stock = new HashMap<Size, List<Shoe>>();

can be reduced to

private Map<Size, List<Shoe>> stock = new HashMap<>();

Logging is evil but…

Logging is a nightmare. I don’t mean that conveying information about exceptional circumstances is a nightmare; I mean the combination of over-eager developers and [insert your current logging framework here] is a recipe for disaster. We’ve all seen too much of

Logger log = Logger.getLogger(ThisSucks.class);
...
try {
    somethingRisky();
} catch (SomethingVeryBadException e) {
    log.error(e);
   throw e;
}

which is just one example of where the exception handling policy for the system (it’s a system-wide concern, remember) is muddled at best. Nothing guarantees that the same exception isn’t logged elsewhere, that it’s handled correctly, or that the right people are notified. It’s not OK to just log and rethrow; every single time we go to declare a new logger, we should think twice.
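As a sketch of the alternative: add context and rethrow, leaving logging (and the notification policy) to a single system-wide handler. All the names below (processOrder, PaymentFailedException and so on) are made up for illustration.

```java
public class ExceptionPolicy {

    // a hypothetical exception that carries context up the stack
    static class PaymentFailedException extends RuntimeException {
        PaymentFailedException(String context, Throwable cause) {
            super(context, cause);
        }
    }

    static void processOrder(String orderId) {
        try {
            somethingRisky(orderId);
        } catch (IllegalStateException e) {
            // no logger here; add context and rethrow so the
            // exception handling policy lives in exactly one place
            throw new PaymentFailedException("processing order " + orderId, e);
        }
    }

    static void somethingRisky(String orderId) {
        throw new IllegalStateException("gateway timeout");
    }

    public static void main(String[] args) {
        try {
            processOrder("42");
        } catch (PaymentFailedException e) {
            // the one place the system logs and notifies
            System.out.println(e.getMessage() + ": " + e.getCause().getMessage());
        }
    }
}
```

The point isn’t the specific exception type; it’s that only the top-level handler gets to log, so the same failure can’t be reported three times in three formats.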

Pairing Honestly

Recently we had a particularly good retrospective where the team were able to admit that each of us has had different experiences of pairing. We were honest in saying that despite having “done pairing” we’d all done different amounts of pairing and that, at times, we weren’t even sure what we were supposed to get out of it.

There can be a fair amount of peer pressure to pair but if the pair don’t know what they can get out of it, it’s unlikely to succeed. We should be honest about that. What makes a good pair (see a previous post) and how do we know that we’re getting something out of it?

Lambdas vs. Closures

When writing Java in a functional style, things tend to get very verbose. We often create a bunch of anonymous implementation fragments and pass them around akin to a function in functional languages. These fragments often get called closures or lambdas, but what’s the difference between the two terms?
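In Java 7 terms, those fragments are anonymous inner classes. A minimal sketch (Callable and the names here are just for illustration) of a fragment that closes over a local variable - the capturing behaviour is what earns it the name “closure”:

```java
import java.util.concurrent.Callable;

public class ClosureExample {

    // returns a Callable that closes over the method's local variable;
    // in Java 7 the captured variable must be final
    static Callable<Integer> priceAfterDiscount(final int discount) {
        return new Callable<Integer>() {
            @Override
            public Integer call() {
                return 100 - discount; // 'discount' is captured from the enclosing scope
            }
        };
    }

    public static void main(String[] args) throws Exception {
        System.out.println(priceAfterDiscount(10).call()); // prints 90
    }
}
```

A lambda, strictly speaking, is just the anonymous function literal; it becomes a closure when it captures variables from its enclosing scope, as above.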

Growing Team Skills

I love the idea of growing teams. Team dynamics are really important and we have to be conscious of how we work together if we want to be able to reflect on how well the team are doing. It’s useful to understand the strengths and weaknesses of the team, together and individually, so that we can a) improve and b) match people to tasks, and teams to projects. It all contributes to a happy working environment.

Often, new or junior developers worry about the barrage of new and unfamiliar technologies. In my opinion, technologies are such a small part of what we do, but it can be useful to be explicit about individual competencies to reassure the bedazzled new developer. In this post, I present a tool I came up with to help, a kind of competency depth distribution chart.

Changing Test Gears

Good poker players know when to change gears. They know when to alter their playing style from cautious to aggressive as the game changes and players drop out. They look at how the odds change as the game progresses and react appropriately. It’s the same with testing, you gotta know when to change gears.

To put it in development terms, good developers know when to change gears. They know when to change their testing style from cautious to aggressive as the code evolves.

Let’s pretend there are just three types of testing: unit, integration and acceptance. In the interest of stereotyping, we’ll define them simplistically as

  • Unit - single object tests, no collaborations (strict I know, but bear with me)
  • Integration - testing object collaborations; for the purposes of this article, let’s assume end-to-end tests slot into this bracket
  • Acceptance - leaning towards end-to-end but key here is that they are customer authored. As such, to convince the customer these will likely be relatively coarse grained and start outside the system boundary
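To make the first two brackets concrete, here’s a minimal sketch using plain asserts rather than a real test framework; Size and Stock are hypothetical names, the point is the shape of each test.

```java
import java.util.HashMap;
import java.util.Map;

public class TestGears {

    static class Size {
        final int value;
        Size(int value) { this.value = value; }
        // behaviour that can be unit tested with no collaborators
        boolean fits(int foot) { return foot == value; }
    }

    static class Stock {
        private final Map<Integer, Integer> shoesBySize = new HashMap<Integer, Integer>();
        void add(Size size, int quantity) { shoesBySize.put(size.value, quantity); }
        int countFor(Size size) {
            Integer count = shoesBySize.get(size.value);
            return count == null ? 0 : count;
        }
    }

    // unit: a single object, no collaborations
    static void sizeShouldFitMatchingFoot() {
        assert new Size(9).fits(9);
    }

    // integration: Stock collaborating with Size
    static void stockShouldReportQuantityBySize() {
        Stock stock = new Stock();
        stock.add(new Size(9), 3);
        assert stock.countFor(new Size(9)) == 3;
    }

    public static void main(String[] args) {
        sizeShouldFitMatchingFoot();
        stockShouldReportQuantityBySize();
    }
}
```

Changing gears, then, is about shifting effort between these brackets as the code matures, not about writing more of all three.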

Generate Concordion Overviews

Customer authored acceptance tests are great. Getting your users to tell you exactly what they want and don’t want in the form of a specification can be liberating. You’ll thrash out the details and come up with examples that can be exercised against the running system. Everybody wins.

I can’t really comment on some of the BDD targeting frameworks like JBehave, EasyB or Cucumber, but I do like using Concordion. We try to use it in such a way as to fit in with a BDD approach; it’s actually flexible enough to use with almost any approach.
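For flavour, a hypothetical Concordion specification might look something like this - plain HTML instrumented with Concordion’s concordion:set and concordion:assertEquals attributes (the stock example and the countFor method are made up):

```html
<!-- a hypothetical spec; the prose is the customer's, the
     instrumentation hooks it up to the running system -->
<html xmlns:concordion="http://www.concordion.org/2007/concordion">
<body>
    <p>
        When a customer asks for size
        <span concordion:set="#size">9</span> shoes,
        there are <span concordion:assertEquals="countFor(#size)">3</span>
        pairs in stock.
    </p>
</body>
</html>
```

A matching Java fixture, run with Concordion’s JUnit runner, would implement countFor to exercise the system under test.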

Its most frequently cited comparator is probably Fit, but that’s a little unfair. As I say, Concordion tries really hard not to tie you into a particular approach, whereas Fit invariably leads you down a certain route. So comparing it against the less flexible Fit isn’t really fair.

Anyway, the point to this post isn’t really to comment on Concordion but to advertise a little Ant task I wrote to help auto-generate Concordion-friendly summary pages for your existing Concordion tests.

Objectives

It’s objective setting time for the guys at work. Aligning what’s really good for you with what’s good for the company shouldn’t be difficult, right? You’re generally heading in the same direction and have similar interests at heart. Why then do we end up with generic, meaningless objectives like “attend a XXX course”? I think you can get the most out of the objective setting exercise by thinking about what you want personally out of your career; don’t settle for the bland, phrase your objectives so there is real value in them. So, if the company think of them in terms of

S.M.A.R.T

I like to think of them as

S.M.A.R.T.Y

Where the Y is all about YOU; make it personal.

Un/Marshalling

This post should probably be titled “Why I don’t like unmarshalling frameworks”. They just hack me off. I mean really, who wants to have to use the Java objects that they make you use? It’s not that they don’t represent the unmarshalled data as objects; they do. It’s more that the underlying data may or may not match your system’s view of the domain. I don’t want to have to run some process to generate Foo.java only to convert it to MyFoo.java; I’d rather go straight to MyFoo.

I was using Castor to do the former some time ago. It sounded like a good idea, and I got going really quickly. However, I quickly found I didn’t like the objects it produced, so I had to modify the mapping.xml to “tailor” the marshalling. This was OK for a while, but soon enough I descended into a kind of Castor mapping hell. I seriously spent days trying to tweak things and had to compromise the model in the end.

After some reflection, I decided to dump Castor. I replaced the marshalling with merging to a Freemarker template and produced cleaner XML in under an hour. The unmarshalling I replaced with manually parsing the XML and inserting elements directly into objects. I couldn’t believe how much less pain I was feeling.
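The manual unmarshalling side can be as simple as the JDK’s own DOM parser feeding your domain object directly, skipping the generated-class round trip. MyFoo and the XML shape below are made up; it’s a sketch of the approach, not the actual code from the project.

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class ManualUnmarshalling {

    // the domain object we actually want, not a generated one
    static class MyFoo {
        final String name;
        final int quantity;
        MyFoo(String name, int quantity) {
            this.name = name;
            this.quantity = quantity;
        }
    }

    static MyFoo unmarshal(String xml) throws Exception {
        Document document = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        // pull elements straight into the domain object
        String name = document.getElementsByTagName("name").item(0).getTextContent();
        int quantity = Integer.parseInt(
                document.getElementsByTagName("quantity").item(0).getTextContent());
        return new MyFoo(name, quantity);
    }

    public static void main(String[] args) throws Exception {
        MyFoo foo = unmarshal("<foo><name>widget</name><quantity>2</quantity></foo>");
        System.out.println(foo.name + " x " + foo.quantity); // prints widget x 2
    }
}
```

A dozen lines of parsing you own outright, against a mapping file you fight with; that was the trade that worked for me.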

For me, the overhead of maintaining these more “manual” approaches is far less than working around any framework mismatches. If you’re lucky enough to find a marshalling framework that can grow with you, fair play, but I’m heavily biased towards a more manual approach these days. Down with [insert framework here].

Setter vs Constructor Injection

So what is the argument for / against? It can be a tough one to describe as I recently discovered.

You can say that constructor injection forces an object to have its dependencies set explicitly and that setter injection is open to forgetfulness or misuse, but how powerful an argument is that really? Surely the tests would catch it if you miss a set call? Constructor injection does say upfront “this is what I need”, so there’s no “first call this, then this and don’t forget this” - it’s explicit and you don’t need to know about the internals that setters expose. That’s useful. But in the small, how complex do the combinations of set calls get?
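The contrast in the small looks something like this (ShoeShop and Stock are hypothetical names): the constructor version can’t even be built without its dependency, while the setter version compiles and constructs happily half-wired.

```java
public class InjectionStyles {

    interface Stock {
        int countFor(int size);
    }

    // constructor injection: "this is what I need", stated up front
    static class ShoeShop {
        private final Stock stock;
        ShoeShop(Stock stock) {
            this.stock = stock;
        }
        boolean hasInSize(int size) {
            return stock.countFor(size) > 0;
        }
    }

    // setter injection: legal to construct and use before setStock is called
    static class ForgetfulShoeShop {
        private Stock stock;
        void setStock(Stock stock) {
            this.stock = stock;
        }
        boolean hasInSize(int size) {
            return stock.countFor(size) > 0; // NullPointerException if we forgot
        }
    }

    public static void main(String[] args) {
        Stock stock = new Stock() {
            public int countFor(int size) { return size == 9 ? 3 : 0; }
        };
        System.out.println(new ShoeShop(stock).hasInSize(9)); // prints true
        try {
            new ForgetfulShoeShop().hasInSize(9);
        } catch (NullPointerException e) {
            System.out.println("forgot to call setStock"); // the forgetfulness argument
        }
    }
}
```

Note the constructor version also gets a final field for free, so the dependency can’t be swapped out from under the object later.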

If you use your IDE effectively, constructor injection has better refactoring support. There’s no way an IDE will interpret adding a field to mean calling a setter for it. However, if you add a constructor parameter, the IDE can push a default value out to all usages.

I don’t think that a large number of constructor arguments justifies defecting to the setter camp, as the real smell here is often that the class is doing too much and/or has too many dependencies. It’s often claimed that the reason there are lots of setters is that a particular dependency injection framework leads you in that direction, but why compromise (I should probably say, comply) because you’re told to? What if that compromises other design goals?

How about the way the system would grow using constructors vs setters? I think this is where the real argument lies. You can move forward perfectly happily with either approach, content with the fact that dependencies are isolated. Testing becomes simpler, more focused on the objects under test, and the earth continues to orbit the sun. It’s only much later, when the system is all but grown up, that you can look back and reflect. Have setters contributed to creating a system that is assembled in a complex, disheveled way? Are you leaning on external (ahem, XML) configuration to manage this? Alternatively, has a constructor-centric approach left you with a more concise assembly strategy? Which has more noise? You tell me…