Monday 28 June 2010

Unit testing and training; a day to reflect and update.

I can't believe how long it has taken me to get around to converting the JUnit 3 tests to JUnit 4, and to moving the exercises on our training courses across to Eclipse. Sad, I know, but until an appropriate lull in other work coincided with a forthcoming course that would benefit from the change, I never got around to doing the updates.
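The heart of the conversion is that JUnit 3 found tests by naming convention (methods prefixed "test" on a TestCase subclass), whereas JUnit 4 finds them by annotation. As a sketch of that shift, here is a toy annotation and reflection-based runner, hand-rolled stand-ins rather than the real JUnit API, so it runs on the standard library alone; the StringTests class is an invented example.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

public class AnnotationRunnerSketch {

    // Stand-in for JUnit 4's @Test: marks a method as a test case.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface Test {}

    // Note the method names no longer need the JUnit 3 "test" prefix,
    // because the annotation marks them instead.
    public static class StringTests {
        @Test
        public void concatenationWorks() {
            if (!("foo" + "bar").equals("foobar")) throw new AssertionError();
        }

        @Test
        public void lengthIsCorrect() {
            if ("foobar".length() != 6) throw new AssertionError();
        }

        public void helperMethod() {} // not annotated, so never run
    }

    public static void main(String[] args) throws Exception {
        int run = 0, failed = 0;
        Object instance = StringTests.class.getDeclaredConstructor().newInstance();
        // Discover tests by annotation, the JUnit 4 way, rather than by name.
        for (Method m : StringTests.class.getDeclaredMethods()) {
            if (m.isAnnotationPresent(Test.class)) {
                run++;
                try {
                    m.invoke(instance);
                } catch (Exception e) {
                    failed++;
                }
            }
        }
        System.out.println("run=" + run + " failed=" + failed);
    }
}
```

In practice the conversion is mostly mechanical: drop the TestCase superclass, annotate each test method with @Test, and swap setUp/tearDown for @Before/@After.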

It didn't even take that long, but I'm off to PNG at the end of the week, and it seemed a good opportunity to update the pracs for the unit testing exercises. Previously, these were arranged to run from the command line using Ant. It was a simple transfer to Eclipse, with my main concern being how Eclipse managed the project.

If any of the classpath definitions were absolute rather than relative, then anything I set up in a project would fail once I transferred Eclipse from my PC to another. In the end, the .classpath and .project files look simple, with every path resolved relative to the project root.
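For illustration, a minimal .classpath of the kind described, with every entry relative to the project root (the folder layout and the JUnit jar version here are illustrative, not taken from the actual project):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<classpath>
	<classpathentry kind="src" path="src"/>
	<classpathentry kind="src" path="test"/>
	<classpathentry kind="con" path="org.eclipse.jdt.launching.JRE_CONTAINER"/>
	<classpathentry kind="lib" path="lib/junit-4.8.2.jar"/>
	<classpathentry kind="output" path="bin"/>
</classpath>
```

The "con" entry points at the workspace's default JRE rather than a machine-specific install path, which is what keeps the project portable between PCs.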

This way, I can take my install of Eclipse, transfer it to memory sticks along with the exercises, and not worry about configuration problems.

As I only look at Java a few times a year, I end up ramping up over time to get back to the same level of comfort. The tool integration provides a great opportunity to demonstrate further approaches with greater ease: easier refactoring, clearer code coverage reports, and greater relevance to attendees' work experience.

With that in mind, I am reviewing the code coverage tools I added as plugins, in the hope that there is something as sophisticated as Clover but that costs the same as a coffee; at the moment, most of the tools cost the same as a coffee machine.

Ideally I'd like to have centralised training servers with SVN and CI tools running, but without a regular need it's hard to justify.

Now I've made the change with the tools, I can look at enhancing the content: more on mock objects, more on refactoring, and demonstrating the ease with which you can write unit test cases, rather than having to assist students in editing code using Notepad.
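On the mock objects front, the core idea can be shown without any mocking framework at all. The MailService interface and Notifier class below are invented for illustration; the point is the pattern: the mock records the calls it receives, so the test verifies the collaboration rather than any real side effect.

```java
import java.util.ArrayList;
import java.util.List;

public class MockObjectSketch {

    // The collaborator we don't want to invoke for real during a test.
    interface MailService {
        void send(String to, String body);
    }

    // Class under test: depends on MailService, injected via the constructor.
    static class Notifier {
        private final MailService mail;
        Notifier(MailService mail) { this.mail = mail; }
        void alert(String user) { mail.send(user, "Build failed"); }
    }

    // The hand-rolled mock: implements the interface but only records
    // what it was asked to do.
    static class MockMailService implements MailService {
        final List<String> calls = new ArrayList<>();
        @Override
        public void send(String to, String body) {
            calls.add(to + ":" + body);
        }
    }

    public static void main(String[] args) {
        MockMailService mock = new MockMailService();
        new Notifier(mock).alert("alice");
        // Verify the interaction, not a real outcome like a sent email.
        System.out.println("calls=" + mock.calls);
    }
}
```

A library such as EasyMock or Mockito generates this kind of recording implementation for you, but hand-rolling one first makes the idea much easier to teach.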

Friday 25 June 2010

skulim tisa wokim studens amamas, or 'How to keep students happy'

Off to PNG in a few days to do some CSTP and developer training. One of the most interesting aspects is the genuine and honest nature of the folks there. They are incredibly polite, to the point that it can be hard to elicit information from them when they are placed as students in a classroom. The poorly translated title is my question.

It may well be that I'm not reading the situation correctly, but I'm interpreting their reluctance to speak as respect for the teacher, or perhaps a product of the way education is delivered in PNG. I like to engage with students for a number of reasons.
  1. To confirm they are assimilating the information.
  2. To identify areas, concepts and topics they are not comfortable with.
  3. To ensure they are within their comfort zone, and are able to ask me further questions.
  4. To be able to gauge when a break is required.
  5. To understand if the material presented is relevant to the role they perform.
  6. To keep my ego inflated.
The last point stems from the perspective of the trainer as an entertainer. In order to provide the right level of energy, and to pace the delivery correctly, I must understand how the students learn, and what makes them tick.

Without any feedback, this becomes almost impossible.

This will be particularly hard when I have to run one training session on a Saturday. This presents two issues: I won't be able to go diving, and I face a far greater challenge than normal to ensure the students are engaged and excited about the training.

Ice-breakers, energisers and other techniques for engagement are OK, but . . . actually, typing those last few words has pinpointed perhaps what I should be looking for.

They are not comfortable voicing their opinion, so perhaps mime might be the way to go. One of the easiest ways to engage people is through humour. Good comedians make you laugh by holding up a mirror to an embarrassing event or situation that you, the audience, have faced, casting themselves as the protagonist at the centre of it. Through exaggeration of the circumstances, and self-deprecation, they allow us to laugh at them, and also to relate to being embarrassed by a situation we might have come across ourselves.

So humour is the weapon to break down the silence, mime is the vehicle, and I must be the target.

Simple. Bugger.

Wednesday 9 June 2010

Risk Analysis

I've got my head immersed in risk assessments at the moment. The organisation I'm currently working with is developing a risk-based approach for testing and auditing, which will affect a large number of organisations and individuals in the domain, and the consequences in some areas reach into high safety integrity territory.

The standards and processes around risk identification itself, though, are the hard part. There are multitudinous ways to identify risk: FMEA, FTA, HAZOP and CHAZOP (the Chas and Dave of the risk world?), SWIFT, etc. etc.

  • Which one do you choose?
  • Which one would you choose if your life depended on it?
  • Which one would you choose if you were a vendor developing a system and didn't want to spend much money?
So I'm having to evaluate all the risk assessment standards and approaches. Thank goodness for ISO. In fact, thank goodness for the AS/NZS standards bodies, as AS/NZS 4360 was adopted as the basis of the new ISO 31000 standard for risk management. The accompanying standard, the catchily named ISO 31010, goes into some wonderful detail about the many approaches you might take to identify risks, including weighing up the pros and cons of each approach.

Yet there isn't one size that fits all. While I'm not surprised by this, my single model for risk assessment is turning into a three- or four-part model.

Step 1 - Generic high level top down approach using risk catalogue.
Step 2 - Mandatory analysis of the standard.
Step 2a - Choose from FMEA, SWIFT or CHAZOP.
Step 3 - Optional. Do a specialist risk assessment such as Privacy Impact Assessment.
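To make Step 2a concrete, here is the standard scoring step inside one of the listed techniques, FMEA's Risk Priority Number: RPN = severity x occurrence x detectability, each conventionally rated 1-10, with the highest scores addressed first. The failure modes and ratings below are invented examples, not from the actual assessment.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class FmeaSketch {

    static class FailureMode {
        final String name;
        final int severity, occurrence, detection; // each rated 1-10
        FailureMode(String name, int severity, int occurrence, int detection) {
            this.name = name;
            this.severity = severity;
            this.occurrence = occurrence;
            this.detection = detection;
        }
        // Risk Priority Number: the classic FMEA prioritisation score.
        int rpn() { return severity * occurrence * detection; }
    }

    public static void main(String[] args) {
        List<FailureMode> modes = new ArrayList<>();
        modes.add(new FailureMode("Sensor reads stale value", 8, 3, 6));
        modes.add(new FailureMode("Log file fills disk", 4, 7, 2));
        modes.add(new FailureMode("Operator mistypes ID", 6, 5, 4));

        // Highest RPN first: that's the order mitigation effort goes in.
        modes.sort(Comparator.comparingInt((FailureMode m) -> m.rpn()).reversed());
        for (FailureMode m : modes) {
            System.out.println(m.rpn() + " " + m.name);
        }
    }
}
```

The arithmetic is trivial; the value of the method is in forcing the three ratings to be argued over and recorded consistently, which is exactly the repeatability problem the multi-part model is trying to solve.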

For once the standards bodies have got it right, in recognising that no one approach is satisfactory under all conditions. While that is a blindingly obvious statement, many standards documents contradict this viewpoint, and appear to be fairly rigid in their structure. On a side note, this is all the more surprising considering how few SW testing tools adopt IEEE 829 terminology and instead come up with all kinds of weird shizzle.

This has led to an interesting consulting cul-de-sac. Most of the time we are able to recommend one particular course of action, based on the information we have. In this instance I am having to recommend that you can do all sorts of things. The challenge is to put enough structure in place that whoever evaluates an approach against the target employs a consistent, repeatable method that delivers the same outcome time after time.

Two things I'd like to invest in: cloning and foresight.