
Lessons from BIBIFI


I haven’t posted in a while since I’ve been very busy: Coursera classes on computer security, cloud computing and geospatial technologies, corporate initiatives, even an Amazon Web Services certification. It adds up. The capstone of the computer security specialization was a Build-It, Break-It, Fix-It (BIBIFI) competition. It was a 6-week effort for working professionals with a wide variety of skills and experience, and it was still pretty stressful, albeit for different reasons.

It was not my finest hour. But that’s okay, since falling flat on your face (well, it wasn’t THAT bad) is when you learn the most. I didn’t learn anything genuinely new, but until now I had only read about these lessons in books. This time I lived them. More precisely, I saw in retrospect just how much of a difference following them would have made.

(Note: these are general management issues. I’ll probably discuss the security aspects later.)

A clear specification is still critical

Many of us have gotten sloppy since agile methodologies were introduced. We’re used to working closely with product owners and business analysts, and time spent nailing down specifications that will have changed by the end of the sprint is considered wasted.

There are a lot of benefits to this approach, but the BIBIFI round reminded me that we still need the ability to write clear specifications and to quickly identify ambiguities in the ones we’re given.

In the competition the spec was ambiguous in spots, but the problems weren’t fully recognized until long after development had started. By then it was no longer possible to make changes since there were approximately 60 teams working during their limited free time.

An executable specification is a huge improvement

Many of us are familiar with test-driven development (TDD). In brief, the tests are written first and the code is implemented only far enough for the tests to compile but fail. Development proceeds only once the tests have been written and stops once all tests pass.
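
As a concrete sketch of that starting point in Java with TestNG, the class name and the password-length rule below are entirely hypothetical, made up for illustration:

// A minimal TDD-style sketch: the tests exist first, and the class under test
// is stubbed only far enough for the tests to compile. Both tests fail until
// the real logic is written. All names here are hypothetical.
import static org.testng.Assert.assertFalse;
import static org.testng.Assert.assertTrue;

import org.testng.annotations.Test;

public class PasswordPolicyTest {

    // Stub of the (not yet written) implementation.
    static class PasswordPolicy {
        PasswordPolicy(int minimumLength) { }
        boolean isAcceptable(String candidate) {
            throw new UnsupportedOperationException("not implemented yet");
        }
    }

    @Test
    public void rejectsShortPasswords() {
        assertFalse(new PasswordPolicy(12).isAcceptable("short"));
    }

    @Test
    public void acceptsLongPasswords() {
        assertTrue(new PasswordPolicy(12).isAcceptable("correct horse battery staple"));
    }
}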

Tests are still expected to pass before code is checked in, or at most by the end of each sprint.

An executable specification is similar. Acceptance tests are written against the user interface; these tests can use SoapUI, Selenium, etc. Internal libraries may only need to pass unit and functional tests. The system is initially implemented only far enough for the tests to compile and run. Development stops once all tests pass.
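
A UI-level acceptance test might look something like the sketch below, using Selenium WebDriver with TestNG. The URL, element ids, and page title are hypothetical, not from any real system:

// Sketch of a browser-level acceptance test (hypothetical application).
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.Assert;
import org.testng.annotations.AfterClass;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.Test;

public class LoginAcceptanceTest {
    private WebDriver driver;

    @BeforeClass
    public void openBrowser() {
        driver = new ChromeDriver();
    }

    @Test
    public void userCanLogIn() {
        driver.get("https://app.example.com/login");
        driver.findElement(By.id("username")).sendKeys("alice");
        driver.findElement(By.id("password")).sendKeys("s3cret");
        driver.findElement(By.id("submit")).click();

        // This assertion stays red until the login feature is actually built.
        Assert.assertTrue(driver.getTitle().contains("Dashboard"));
    }

    @AfterClass
    public void closeBrowser() {
        driver.quit();
    }
}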

The key difference is that some tests will remain red until the end. That looks very odd to many of us, or at least it should, so it’s important to remember that these tests serve a different purpose: they show how much you have left to do, not whether you have introduced defects with your most recent changes. Some people believe these tests shouldn’t even be visible to developers: we need to see them during sprint planning, but in their view they’re nothing but a distraction during development.

In the competition a submission was required to pass several acceptance tests before it would be accepted for the ‘break-it’ phase, but those tests were far from complete. Someone who should have known better (ahem) relied on them instead of writing his own test suite based on the specification. That was a mistake: key functionality was overlooked.

(One of my recommendations for future rounds is that the acceptance tests be beefed up and that anything passing them be deemed correctly implemented. The uncertainties in our specification and the scoring led many teams to focus on ‘bug hunts’ (with guaranteed points) instead of the much harder and riskier effort of examining code for security flaws.)

Product owner specifications are best

Behavior-driven development (BDD) is superficially similar to test-driven development. One big difference is that it puts an emphasis on writing specifications in terms that are understood by the product owner. The onus is on the developer to work with this format, ideally in an automated fashion.
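
As an illustration, here is a hypothetical Cucumber-JVM example (Cucumber is my choice for the sketch, not something from the competition). The scenario wording is what the product owner reads; the step definitions are the developer’s side of the contract:

// The feature file the product owner would read might look like:
//
//   Scenario: Locking an account after repeated failures
//     Given a user "alice" with password "s3cret"
//     When she enters the wrong password 3 times
//     Then her account is locked
//
// Matching (hypothetical) step definitions:
import static org.testng.Assert.assertTrue;

import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;

public class AccountLockoutSteps {

    // Minimal stand-in for the real domain class.
    static class Account {
        private final String password;
        private int failures;
        Account(String name, String password) { this.password = password; }
        void authenticate(String attempt) {
            if (!password.equals(attempt)) failures++;
        }
        boolean isLocked() { return failures >= 3; }
    }

    private Account account;

    @Given("a user {string} with password {string}")
    public void aUser(String name, String password) {
        account = new Account(name, password);
    }

    @When("she enters the wrong password {int} times")
    public void wrongPasswordAttempts(int attempts) {
        for (int i = 0; i < attempts; i++) {
            account.authenticate("not-the-password");
        }
    }

    @Then("her account is locked")
    public void accountIsLocked() {
        assertTrue(account.isLocked());
    }
}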

In the competition a lot of information was provided in JSON files. I understand JSON, and these were simple files, so I thought I could just work from the raw files. In retrospect I often lost the big picture in the minutiae.

I became much more productive after I spent a little time writing a tool that converted between the JSON files and Excel® files and could run the tests directly. Unfortunately it was too late to make much of a difference.
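
That tool isn’t shown here, but a minimal sketch of the JSON-to-spreadsheet direction, assuming Jackson for the JSON parsing and Apache POI for the .xlsx output (the post doesn’t say which libraries were used, and the field names are hypothetical), might look like this:

// Sketch: turn a JSON array of test cases into one row per case in an .xlsx file.
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.poi.ss.usermodel.Row;
import org.apache.poi.ss.usermodel.Sheet;
import org.apache.poi.ss.usermodel.Workbook;
import org.apache.poi.xssf.usermodel.XSSFWorkbook;

import java.io.File;
import java.io.FileOutputStream;

public class JsonToXlsx {

    public static void main(String[] args) throws Exception {
        // Assumes tests.json is a JSON array of objects with name/input/expected fields.
        JsonNode tests = new ObjectMapper().readTree(new File("tests.json"));

        try (Workbook workbook = new XSSFWorkbook();
             FileOutputStream out = new FileOutputStream("tests.xlsx")) {

            Sheet sheet = workbook.createSheet("tests");
            int rowNum = 0;
            for (JsonNode test : tests) {
                Row row = sheet.createRow(rowNum++);
                row.createCell(0).setCellValue(test.path("name").asText());
                row.createCell(1).setCellValue(test.path("input").asText());
                row.createCell(2).setCellValue(test.path("expected").asText());
            }
            workbook.write(out);
        }
    }
}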

What is the best format for specifications?

Easy – the one you will use.

A bit more seriously, the format needs to be structured yet easily used by product owners and non-technical business analysts. In many cases that means Excel® files. They’re flexible without being completely free form (like Word® documents), familiar to everyone, and easily tracked in SharePoint® and the other tools so beloved by the suits.

Excel® can read and write CSV files, which are easily parsed, but for various reasons I lean towards the native .xlsx format and a library like Apache POI to read and write the files. It requires a little more effort on our side, but the files will be much easier for the PO/BA to maintain. Note: LibreOffice can also read and write these files, and I routinely use it on Linux in preference to Excel® on Windows®.

The implementation is fairly straightforward: each sheet is associated with a test class and contains one set of test data. The class knows how to map each column to expected inputs and outputs. A basic system simply reads the Excel® file; a more advanced system can produce an Excel® report as well.
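
A minimal sketch of that mapping, using a TestNG data provider and Apache POI. The sheet name, column layout, and the normalize() stand-in are hypothetical:

// Sketch: one sheet feeds one test class via a TestNG data provider.
import static org.testng.Assert.assertEquals;

import org.apache.poi.ss.usermodel.Row;
import org.apache.poi.ss.usermodel.Sheet;
import org.apache.poi.ss.usermodel.Workbook;
import org.apache.poi.ss.usermodel.WorkbookFactory;
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

import java.io.File;
import java.util.ArrayList;
import java.util.List;

public class SpreadsheetDrivenTest {

    @DataProvider(name = "rows")
    public Object[][] rows() throws Exception {
        try (Workbook workbook = WorkbookFactory.create(new File("spec.xlsx"))) {
            Sheet sheet = workbook.getSheet("normalize");
            List<Object[]> data = new ArrayList<>();
            for (Row row : sheet) {
                if (row.getRowNum() == 0) continue;          // skip the header row
                data.add(new Object[] {
                        row.getCell(0).getStringCellValue(), // input column
                        row.getCell(1).getStringCellValue()  // expected-output column
                });
            }
            return data.toArray(new Object[0][]);
        }
    }

    // Stand-in for the real system under test.
    private String normalize(String input) {
        return input.trim().toLowerCase();
    }

    @Test(dataProvider = "rows")
    public void normalizes(String input, String expected) {
        assertEquals(normalize(input), expected);
    }
}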

The TestNG framework makes it easy to nest test classes so the workbook is opened once, each of the tests is run, and then the workbook is closed and any report is written and published.
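
One way to get that open-once, close-once lifecycle is with TestNG’s suite-level configuration methods. The sketch below simplifies the nested-class arrangement described above; the file names and the report step are assumptions:

// Sketch: open the workbook once for the whole suite, close it and write any
// report at the end (hypothetical file names).
import org.apache.poi.ss.usermodel.Workbook;
import org.apache.poi.ss.usermodel.WorkbookFactory;
import org.testng.annotations.AfterSuite;
import org.testng.annotations.BeforeSuite;

import java.io.File;
import java.io.FileOutputStream;

public class WorkbookLifecycle {

    static Workbook workbook;   // shared by the spreadsheet-driven test classes

    @BeforeSuite
    public void openWorkbook() throws Exception {
        workbook = WorkbookFactory.create(new File("spec.xlsx"));
    }

    @AfterSuite
    public void closeWorkbookAndPublishReport() throws Exception {
        // Individual tests may have written results back into report cells.
        try (FileOutputStream out = new FileOutputStream("report.xlsx")) {
            workbook.write(out);
        }
        workbook.close();
    }
}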

Final note

I don’t know if Apache, LibreOffice, TestNG, and others want the ® annotation. I’m pretty sure Microsoft® does!

