Specification By Example (Book Review & Summary)

I was lucky enough to be a reviewer on Specification By Example by Gojko Adzic, and the final version was recently released to print by Manning. I was stoked to see not only my name in the acknowledgements but also my quote on the cover of the book. The following is my brief review and my notes from the book.

Review

“I love this book. This is testing done right.” That is my quote on the back cover of the book, and I meant every word of it. Having been a quality advocate in the agile space for a few years now, this is the first book I have read in a long time which had me nodding my head all of the way through, as it resonated with my ideas on how development teams need to reconsider specifications and testing.

The book starts out by summarising why specification by example is so important and outlines some key patterns for success. Then, through examples throughout the book, it steps through the patterns, pointing out the warning signs along the way. The key steps are to ensure the culture is ready, approach specification in a collaborative manner, illustrate specifications using examples, automate validation, and finally evolve a living document / specification.

I really appreciated the fact that the examples were not just the run of the mill greenfield Java web applications that are used in most books. There is a good sampling of different organisations, most of which are using this technique on existing legacy applications on a variety of different platforms. The book is an easy read for the entire team, which means it can (and should) be required reading for the developer, tester, analyst and project manager. I have encouraged many of my teams to take a look at the book, and a couple of my colleagues have indicated this book helped convince and reinforce why this approach is so valuable.

My only concern when reviewing was that the title of this book may not stand out to testers and developers (perhaps not as much as Acceptance Test Driven Development or ATDD might). The community currently has a number of similar approaches with similar names, although I must acknowledge that the specification by example tag has grown on me over the last few months.

The book deliberately does not expend much effort talking about tools in this space. I think this makes the book more readable and accessible to a wider audience, but it also suggests to me that there is still a gap for a good text that matches specification by example to particular tools like Concordion, Fitnesse and the like.

Overall, this book is a definite must read for any team (particularly agile teams) trying to find a decent, balanced approach to specifications and testing. It is a good mix of patterns and real case studies on how testing and specifications should be approached in an agile world. It would make my list of Top 5 must read testing books and Top 10 must read agile books. And now I know the proper name for the cat's eyes that are embedded in the freeway!

Finally, I had some other suggestions for cover quotes that did not make it to print, but they are just as representative of my feelings about the book:

  • “One of the best Agile related books I have ever read. Buy it, read it, recommend it to your colleagues.”
  • “This book sums up the right way to attack requirements and testing while delivering to your customer. A must read for all agile teams.”
  • “I loved this book. I could not stop raving about it to my colleagues. It’s testing done right.”

Summary

Here are my key notes from the book:

  • a people problem, not a technical one
  • building the product right and building the right product are two very different things, we need both to be successful
  • living documents – fundamental – a source of information about system functionality that is as reliable as the programming language code but much easier to access and understand
  • allows easier management of product backlogs
  • proceed with specifications only when the team is ready to start implementing an item, e.g. at the start of an iteration
  • derive scope from goals – the business communicates the intent and the team suggests a solution
  • verbose descriptions over-constrain the system – how something should be done rather than just what is to be done
  • traditional validation – we risk introducing problems if things get lost in translation between the business specification and technical automation
  • an automated specification with examples, still in a human readable form and easily accessible to all team members, becomes an executable specification (see the sketch after this list)
  • tests are specifications, specifications are tests
  • consider living documentation as a separate product with different customers and stakeholders
  • may find that Specification By Example means that UAT is no longer needed
  • changing the process – push Specification By Example as part of a culture change, focus on improving quality, start with functional test automation, introduce a new tool, use TDD as a stepping stone
  • changing the culture – avoid agile terminology, management support, Specification By Example a better way to do UAT, don’t make automation the end goal, don’t focus on a tool, leave one person behind to migrate legacy scripts (batman), track who is/isn’t running automated tests, hire someone who has done it before, bring in a consultant, introduce training
  • dealing with signoff and traceability – keep specifications in a version control system, get signoff on living documentation, get signoff on scope not specifications, get signoff on slimmed down use cases, introduce use case realisations
  • warning signs – watch out for tests that change frequently, boomerangs, test slippage, just in case code and shotgun surgery
  • F-16 – asked to be built for speed, but the real problem was escaping enemy combat – still very successful 30+ years later
  • scope implies solutions – work out the goals and collaboratively work out the scope to meet them
  • people tell you what they think they need, and by asking them ‘why’ you can identify new implicit goals they have
  • understanding why something is needed, and who needs it, is crucial to evaluating a suggested solution.
  • discuss, prioritise and estimate at goals level for better understanding and reduced effort
  • outside-in design – start with the outputs of the system and investigate why they are needed and how the software can provide them (comes from BDD)
  • one approach is to get developers to write the “I want” part of the storycard
  • when you don’t have control of scope – ask how something is useful, ask for an alternative solution, don’t only look at lowest level, deliver complete features
  • collaboration is valuable – big all team workshops, smaller workshops (three amigos), developers and analysts pairing on tests, developers review tests, informal conversations
  • business analysts are part of the delivery team, not customer representatives
  • the right level of detail is when picking up a card makes you say ‘I’m not quite sure’ – it pushes you to have a conversation
  • collaboration – hold introductory meetings, involve stakeholders, work ahead to prepare, developers and testers review stories, prepare only basic examples, overprescribing hinders discussion
  • one of the best ways to check if the requirements are complete is to try to design black-box test cases against them. If we don’t have enough information to design good test cases, we definitely don’t have enough information to build the system.
  • feature examples should be precise (no yes/no answers, use concrete examples), realistic (use real data, get realistic examples from customers), complete (experiment with data combinations, check for alternate ways to test) and easy to understand (don’t explore every combination, look for implied concepts)
  • whenever you see too many examples or very complicated examples in a specification, try to raise the level of abstraction for those descriptions
  • illustrate non-functional requirements – get precise performance requirements, use low-fi prototypes for UI, use the QUPER model, use a checklist for discussions, build a reference example for things that are hard to quantify (such as fun) to compare against
  • good specifications – should be precise and testable, not written as a script, not written as a flow
  • watch out for descriptions of how the system should work, think about what the system should do
  • specifications should not be about software design – not tightly coupled with code, work around technical difficulties, trapped in user interface details
  • specifications should be self explanatory – descriptive title and short paragraph of the goal, understood by others, not over-specified, start basic and then expanded
  • specifications should be focussed – use given-when-then, don’t explicitly detail all the dependencies, put defaults at the technical layer but don’t rely on them
  • define and use a ubiquitous language
  • starting with automation – try a small sample project, plan upfront, don’t postpone or delegate, avoid automating existing manual scripts, gain trust with UI tests
  • managing test automation – don’t treat it as second-grade code, describe validation, don’t replicate business logic, automate along system boundaries, don’t check business logic through the UI
  • automating user interfaces – specify interaction at a higher level (logging rather than filling out the login page), check UI functionality with UI specifications, avoid record and playback, setup context in a database
  • test data management – avoid using pre-populated data, use pre-populated reference data, pull prototypes from the database
  • Botts’ dots are the lane markers on the road that alert you when you move out of your lane; continuous integration has that function in software – run it with Specification By Example and you have continuous validation
  • reducing unreliability – find most annoying thing and fix it, identify unstable tests, setup dedicated validation environment, automated deployment, test doubles for external systems, multi-stage validation, execute tests in transactions, run quick checks for reference data, wait for events not elapsed time, make asynchronous processing optional, don’t use specification as an end to end validation
  • faster feedback – introduce business time, break long tests into smaller modules, avoid in-memory databases for testing, separate quick and slow tests, keep overnight tests stable, create a current iteration pack, parallelise test runs
  • managing failing tests – sometimes you can’t fix tests – create a known regression failures pack, automatically check disabled tests
  • easy to understand documentation – avoid long specifications, avoid lots of small specifications for a single feature, look for higher level concepts, avoid technical automation concepts
  • consistent documentation – evolve a ubiquitous language, use personas, collaborate on defining language, document building blocks
  • organize for easy access – by stories, functional areas, UI navigation routes, business processes, use tags instead of URLs
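
To make the executable specification idea concrete, here is a minimal sketch of a Given-When-Then example expressed as a plain JUnit 4 test. The free-delivery rule and the Customer and Order classes are hypothetical stand-ins invented for illustration; in practice a tool such as Concordion, Fitnesse or Cucumber would bind a human-readable specification document to steps like these, keeping the interaction at this higher level rather than scripting the UI.

    import org.junit.Test;
    import static org.junit.Assert.assertFalse;
    import static org.junit.Assert.assertTrue;

    // A minimal sketch of an executable specification: each test is one
    // concrete example, readable as Given / When / Then. The domain is
    // hypothetical.
    public class FreeDeliverySpecification {

        // Given a VIP customer, when they order five or more books,
        // then the order qualifies for free delivery.
        @Test
        public void vipCustomerOrderingFiveBooksGetsFreeDelivery() {
            Customer customer = Customer.vip();           // Given
            Order order = customer.order(5);              // When
            assertTrue(order.qualifiesForFreeDelivery()); // Then
        }

        // A concrete counter-example pins down the boundary of the rule.
        @Test
        public void regularCustomerOrderingFiveBooksDoesNotGetFreeDelivery() {
            Customer customer = Customer.regular();
            Order order = customer.order(5);
            assertFalse(order.qualifiesForFreeDelivery());
        }

        // --- Hypothetical domain stubs so the sketch is self-contained ---
        static class Customer {
            private final boolean vip;
            private Customer(boolean vip) { this.vip = vip; }
            static Customer vip() { return new Customer(true); }
            static Customer regular() { return new Customer(false); }
            Order order(int books) { return new Order(vip, books); }
        }

        static class Order {
            private final boolean vip;
            private final int books;
            Order(boolean vip, int books) { this.vip = vip; this.books = books; }
            boolean qualifiesForFreeDelivery() { return vip && books >= 5; }
        }
    }

Because each example is precise and concrete, a failing example points straight at the rule that changed, which is what makes the document a living specification rather than a regression suite bolted on afterwards.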

Agile Australia 2011: The Speed to Cool – Valuing Testing & Quality in Agile Teams

My presentation from Agile Australia 2011 called “The Speed to Cool – Valuing Testing & Quality in Agile Teams” is available on SlideShare.

Ensuring that the approach to testing and quality is understood and appropriately valued in an Agile world can be a struggle for many organisations, especially when resources are limited and our customers are expecting business value in a timely manner. In this session Craig Smith will define what quality means, share a number of tools for measuring it, discuss approaches to improving the skills, empowerment and role of testing in the organisation, and share why testing is the coolest role on the team and why it is everyone’s responsibility.

Some of the comments on Twitter included:

@AgileAcademy: Good luck today with your presentation at 11.30am on The Speed to Cool: Valuing testing and quality in Agile teams @smithcdau #agileaus

@vivierose: Waiting for room to fill at @smithcdau was standing room only last year! #agileaus

@adrianlsmith: #agileaus @smithcdau Software is a crime – Testers are detectives

@vivierose: Testers think ‘Everything is guilty until proven innocent’ @smithcdau #agileaus

@adrianlsmith: #agileaus @smithcdau discusses increasing technical skills of testers http://t.co/WPhbDsV

@mrembach: seeing an excellent talk by @smithcdau at the qualityInn #agileaus

@vivierose: Everyone likes to be seen as valuing quality, just like they love kittens, but it’s the 1st thing to be dumped #agileaus @smithcdau

@stephlouisesays: #agileaus loving @smithcdau challenging how well we apply the manifesto to testing. Card wall stages look remarkably like a waterfall… Hmm

@adrianlsmith: #agileaus @smithcdau applies agile manifesto to testing practices – great analogies

@AgileRenee: @smithcdau use the quality assessment tool now avail on the @AgileAcademywebsite http://lockerz.com/s/111080989

@stephlouisesays: #agileaus acceptance test development = specification by example… Beats heavy documentation any day @smithcdau

@timechanter: Fabulous presentation by Craig smith on agile testing. Liking the specification by example stuff. #agileaus

@stephlouisesays: #agileaus awesome session by @smithcdau quality and testing – relevant content, interesting slides (love the pics!) and fab speaker #newfave

@SMRobson: #agileaus @craigsmith finally!! Well done!

@AgileAcademy: 150+ watching great talk by @smithcdau on Valuing testing & quality in Agile teams. Terrific energy & passion. #agileaus #yam

@AgileAcademy: A tester is like Robocop – part man/woman; part machine but all tester! @smithcdau #agileaus #testing #quality #yam

@AgileAcademy: Thanks for the mention about the Agile Quality Practices sheet on our website. @smithcdau agileacademy.com.au #agileaus #yam

@seat_paul: #agileaus very good talk by Craig smith. As he says testers can be very cool!!

@mrembach: @smithcdau great talk Craig. Lots of take-aways

@smamol: Really cool #agileauspresso – beautiful slides: The Speed to Cool – Valuing Testing in Agile Teams http://t.co/1m1jwBL #in

ANZTB SIGIST Brisbane: Agile Testing & How We Need To Change

A couple of weeks ago I had the opportunity to present at the ANZTB SIGIST Brisbane September meeting with my colleague Rene Maslen. Our talk was “Agile Testing and How We Need To Change” and the slides are available on Slideshare.

Some of my other colleagues also presented on the night, including Ben Sullivan and Brent Acworth, who spoke on BDD and some work they are doing on an open source framework for JBehave, and Craig Aspinall, who spoke on Automated Black Blob Testing.

Alister Scott had some nice words to say about my presentation on his blog and was also nice enough to take some pictures.

AAFTT Workshop 2010 (Orlando)

This year, I again had the great opportunity to attend the Agile Alliance Functional Testing Tools Workshop (AAFTT), the day before the Agile 2010 conference in Orlando. Organised by Jennitta Andrea, Elisabeth Hendrickson, Aimee Keener and Patrick Wilson-Welsh, it once again drew a wide variety of participants with a passion for testing and testing tools.


It was also awesome to have Rachel Davies (co-author of Agile Coaching) facilitate the session.

First of all, the agenda was set.

We then went around the circle stating our name, role, claim to fame and hopes for the workshop. Due to the high percentage of Canadians in the workshop, our locations quickly turned into Canada-location jokes (OK, you had to be there…).


Rachel then asked us to re-check the goal of the workshop. She suggested that we should always check this with the group, even though it is usually set before the workshop, and asked the organisers to clarify the goals before re-checking what was in and out of scope.

The goal was: To reflect on the state of the practices and to identify ways to move forward.


In scope for the workshop was ATDD, marketing, pre-requisites, how we teach, effective ways we talk about terms and use of tools to support practice. Out of scope was TDD, terminology / ontology and test after.

Pleasures and Pains of ATDD

We then broke into a number of groups to draw pictures of our pleasures and pains in relation to ATDD.



ATDD Retrospective

We then did an ATDD retrospective, which revealed a rich history!


Open Space

After lunch, we conducted 2 open space sessions.


Sales and Adoption Strategies

The first session I attended was on Sales and Adoption Strategies, hosted by Mark Levison:

  • Elisabeth Hendrickson suggested that ATDD does not get traction because we need to bring everyone to the table, however it can give power to a struggling agile adoption, we need to understand the fears and concerns of people and we need to have skills to talk to all of the audiences
  • Jim Cornelius said that managers think they want BDD for everything, yet we do not want to express everything in Given When Then format
  • I told the story of using ATDD on non-software development teams, as well as some of our challenges in getting buy-in from our testing teams for tools such as Concordion
  • Patrick Wilson-Welsh told a story about introducing story testing to a team with no skills in testing, yet they were a rock star development team; they used JBehave and a thin TeamCity wrapper to convince them of the value
  • it was suggested that continuous integration needs to be in place to give ATDD adoption teeth
  • Andreas Ebbert-Karroum told a story of success when his team lost track of what they wanted to show at the end of the sprint; the team then wanted to do it. The barrier has been a steep technical learning curve – they are using Robot Framework but are always finding pieces missing (and wonder why, in this day and age, they seem to be the first to find these issues)
  • we then heard the example of a simple DSL lookup screen (the one where you type in a phone number to see if you can get DSL coverage) at BellSouth, which had an extremely simple user interface and lots of business logic; they used Fitnesse, as it was hard to test manually
  • Bob Galen suggested the need for effective grooming sessions to start collaboration, create a shared vision at the organisation level not just at the developer level, and the fact you need grass roots and top level support
  • Mark Levison told all of his executives to read Agile Testing and now he is getting buy-in and questions about how it might work
  • Jason Montague suggested his success has been through just in time communication – you need teams to get it working and sell the results for you


Pleasure and Pain with Tools

The second session I attended was on Pleasure and Pain with Tools hosted by Elisabeth Hendrickson:

  • Express expectations in English and relate to underlying specifications
  • unit testing successful because it is in my language, my idea, nobody else involved – all you need to do (sounds easy) is to change the way of thinking, and it works for both test first and after
  • Robot Framework allows a test to be written in text, you can link from the wiki to the version control system and read directly from the trunk
  • test from a photo of the whiteboard using ApprovalTests – put a whiteboard photo in the test directory, get the system under test to create a bitmap, manually match the photo at first, and once the customer is happy get the system to diff automatically going forward (see the sketch after this list)
  • Sikuli uses images for tests – push a button that looks most like the bitmap in the test
  • ATDD frameworks facilitate collaboration, like Cucumber, then you need a driver
  • there was some discussion of Fitnesse versus Slim and the fact that the GPL2 code prevents adoption
  • Jason Huggins suggested that Concordion / Robot Framework / Fitnesse are too much bondage, as a developer all I need is assert
  • the question of testing Android was raised, with Jason Huggins suggesting Robotium
  • White is a .Net test framework
  • Given When Then loses detail if business people are solely thinking that way
  • it is awesome that most of the tools now support tables, given when then and other more recent test tool approaches
  • too many tools are solely web focussed, Robot Framework is one exception although there are others
  • wikis are not comfortable for non-technical users (Elisabeth Hendrickson called this the last mile problem), many tools do not have command completion or refactoring support and output from the tools is not always management friendly (who usually want to know number of tests passed / failed)
  • to get ATDD tool buy in you need to get from scratch to test infected in 20 minutes
  • need to discover you are done sooner than you expect
  • a good analogy is why we have brakes on a car, not so you can stop but so you can go fast, the same stands for tests
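
The whiteboard-photo workflow above is a golden-master (approval) style of test. Here is a minimal sketch of the idea in plain Java, deliberately not reproducing the real ApprovalTests API; the file names and the rendered output are hypothetical.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.Arrays;

    // Sketch of "approve once, then diff automatically": the first run writes
    // a *.received file for a human to inspect and approve; later runs compare
    // bytes against the approved baseline.
    public class GoldenMaster {

        static void verify(String testName, byte[] actual) throws IOException {
            Path approved = Paths.get(testName + ".approved");
            Path received = Paths.get(testName + ".received");
            if (!Files.exists(approved)) {
                Files.write(received, actual);
                throw new AssertionError("No approved baseline for " + testName
                        + "; inspect " + received + " and rename it to approve.");
            }
            byte[] expected = Files.readAllBytes(approved);
            if (!Arrays.equals(expected, actual)) {
                Files.write(received, actual);
                throw new AssertionError(testName
                        + " differs from the approved baseline; see " + received);
            }
        }

        public static void main(String[] args) throws IOException {
            // Hypothetical system under test producing a rendered screen as bytes.
            byte[] rendered = "pretend-this-is-a-bitmap".getBytes();
            verify("dashboard-screen", rendered);
        }
    }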

Lightning Talks

We then kicked off a round of lightning talks:


Every Acceptance Test Should Look Like An iPhone Commercial (Jason Huggins)

I had seen this last year at AAFTT, but it was good to see Hugs was still passionate about it and had some updated examples this year.

How Does ATDD Work with Kanban (Matt Philip)

Principle of TD (Llewellyn Falco)

  • need to do something and then verify it (do verify)
  • benefits of a test – specifications, feedback, regression, granularity

Structure of Scalable & Maintainable Test Suite (With Robot Framework) (Andreas Ebbert-Karroum)


A copy of the model is available in the blog post: How to Structure a Scalable and Maintainable Acceptance Test Suite

  • if I have stable platform and libraries, tests should be stable
  • the resource layer is the layer that should be variable (in the middle)
  • Given When Then (GWT) tests are grouped into themes; import files shield tests from business object changes

Testing Circle (Llewellyn Falco)


  • discuss story on whiteboard
  • then becomes a story in written form
  • then becomes code
  • result then looks like whiteboard

WWII & ATDD (Brian Marick)


  • places with good infrastructure had it pounded to rubble in World War II, but the USA did not; therefore we have no fast trains, no fast Internet, bridges that fall down, etc.
  • all of XP is just what Ward Cunningham does naturally
  • spend too much time figuring out how to survive with legacy code, we should go back to the small!

Open Space Summaries

    • Why Have We Not Yet Discussed ATDD (J. B. Rainsberger) – came to the conclusion that it is the same as BDD; see How Test Driven Development Works (And More!)
    • Adoption and Sales Strategies (Mark Levison) – small pilots under the covers, ping pong developer tester ATDD, automation directors and managers and developers have different perspectives, talk at an enterprise level, semantic patterns of needing to slow down versus switching everyone all at once, how do we maintain momentum, always feeling we are the first person tackling the issue
    • Facilitating Conversations (George Dinwiddie) – describing the problem rather than the solution, getting the language right, personas, myth of single product owner
    • Coding Dojo (Llewellyn Falco) – wrote tests for Yahtzee
    • Start a Business Example to Be Used for Many Tools (Mark Levison) – looked at example that Brian Marick wrote in Ruby, but still needs work
    • Fixtureless Testing (Declan Whelan) – fixtures break cadence in flow, especially when a tester needs to talk to a developer, wiki storage format not optimal as not easy to refactor, should be stored as code so we can refactor and reuse, want language to be close between system under test and acceptance test (use the domain language)
    • Pleasure and Pain with Tools (Elisabeth Hendrickson) – see above for notes
    • Workflow – Activities and Deliverables – built up a workflow


Wrap Up

We then wrapped up with a discussion on the future of the AAFTT program, led by Jennitta Andrea.


All in all it was a great day. Lisa Crispin was taking some video, so if the quality is good I will help her get it uploaded. I also have more pictures up on Flickr for those who are interested.

Agile Alliance Brisbane: Changing Role of a Tester

The inaugural Agile Alliance Meetup in Brisbane kicked off on May 13 at the Hilton, Brisbane with a talk by Kristan Vingrys (Global Test Lead for ThoughtWorks) on the changing role of a tester. I have had the good fortune to work with Kristan a couple of times this year, so I was looking forward to catching up again as well as seeing this talk.

Among other announcements, Robin Mack is the Brisbane liaison for Agile Alliance Australia and they are currently looking for volunteers, sponsors and speakers.

The following are my notes from the session, the slides are also available online:


  • testing – find out what the system is and what it does
  • testing is not about putting a quality assurance stamp on software before it goes out the door, it’s about understanding the quality of the application
  • traditional testing – V model – independent, based on a feature complete system, exit criteria from each phase
  • collaboration – more related to agile – single velocity for entire team (not just developers), co-located and interspersed team, loose boundary of roles, work closely with developers (not throwing information over the wall)
  • should not be iterative testing (testing 2-3 iterations behind), independent or separate velocity
  • stop finger pointing, no longer a gatekeeper
  • tester should be able to check out code, try it out whenever they want to
  • tester should have direct contact with business to break down direct animosity with the developer
  • measured by value added to the team not by defects found or number of tests written
  • test early – make sure the story is testable, do all testing earlier (including non-functional testing such as UAT)
  • Acceptance Test Driven Development – building quality in, not testing it in, this is not just about automation
  • handover – developer should call tester over to ensure that the requirement has been met, give the tester the keyboard for 2 minutes to see if they can break it before check-in
  • exploratory testing – the difference between exploratory and ad-hoc is that it should have a plan; allows you to execute test cases while learning the system, a good approach to test the tests, looking for different ways to learn about and break the system
  • automation – a good way to continually do regression testing, it is not a test strategy, is not just tests (can also be used for automated assisted testing, trawling through log files, etc), automation is code so treat it like code (not using good behaviours will end up as brittle code)
  • unit testing – benefits the tester, know what comes from the developers has been tested to a certain level
  • continuous integration – push the regression suite so that it becomes the responsibility of the team
  • pairing – way to transfer knowledge, to and from the tester, focus on higher value testing
  • co-location – can talk directly to the team about issues or further understanding
  • standups – understanding what the team is doing, what they are working on, know areas of system to spend time on or stay away from
  • automated functional testing – not running the same test again and again and again
  • ratio of testers to developers – hard to gauge, depends on the skills, not everybody on the testing team needs to be a tester
  • test analysis – what are we going to test, understand requirements, architecture, test execution (by a human or a machine), environment management (testing the right thing, a stable environment)
  • “you can’t find a defect if it doesn’t exist”
  • troubleshooting – being able to track down a problem, find the artefacts and hand it off to the correct team to fix
  • every role on the team should contribute to testing in some way
  • metrics that you collect change when you move from waterfall to agile testing
  • generating a test report should not take very long – should be an indication as to whether someone needs to have a conversation with me, answers questions about the state of the test team
  • people make assumptions on vanilla metrics

Metrics:

  • points in test over time – number of stories that are in the testing column on the storywall, indicate if the team is keeping up with testing activities
  • outstanding defects over time – how defects are tracking over time
  • number of bounces per story point – how often a story bounces back and forth between the development and testing columns (see the sketch after this list)
  • not all defects need to be logged – only needs to be captured if it is not going to be fixed straight away, better to spend time on automated test to check defect will not come back rather than logging it
  • be aware of fake defect debt – when a story gets pushed through and defect/s get raised to be fixed later
  • add value by getting team focused on quality
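
As a worked example of the bounce metric, here is a small sketch that counts how often a story moves from the testing column back to development and normalises by story points. The Story type and column names are hypothetical; in practice the history would come from your storywall or tracking tool.

    import java.util.List;

    public class BounceMetric {

        // A story's size and the sequence of storywall columns it passed through.
        record Story(int points, List<String> columnHistory) {}

        // A bounce is any move from the testing column back to development.
        static int bounces(Story story) {
            int count = 0;
            List<String> history = story.columnHistory();
            for (int i = 1; i < history.size(); i++) {
                if ("testing".equals(history.get(i - 1))
                        && "development".equals(history.get(i))) {
                    count++;
                }
            }
            return count;
        }

        public static void main(String[] args) {
            Story story = new Story(3, List.of(
                    "development", "testing", "development", "testing", "done"));
            double perPoint = (double) bounces(story) / story.points();
            System.out.printf("%.2f bounces per story point%n", perPoint); // 0.33
        }
    }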

Then there were some questions from the crowd:

  • environments – want enough environments to avoid test scheduling (where possible), happy to do UAT in an earlier environment if it means you are getting early feedback that is useful
  • tester velocity – should be focused on the velocity of the team and the definition of done, investigating using Kanban techniques to set a limit on a column (perhaps for testing to enable team flow)
  • automated tests – depends on where the business logic is; for AJAX you need to do more UI tests, the main issue is getting tests running as quickly as possible and getting fast feedback, prefer not to test via the UI because it is very fragile
  • skillsets – pulling a team together will look at troubleshooting and may need to hire somebody just for this task; a team of strong-minded developers makes it hard for a tester to get their opinions across
  • test plans – does not necessarily need to be a document, the important thing is that there is a shared understanding, should never be signed off because it should be constantly revisited
  • skilled team of testers and developers – who should write the automated tests – look at skills; they should not be written by those who do not understand programming or test analysis – answer what I want to test and how do I go about developing it
  • regression test definition of done – business need to articulate the level of risk they are willing to accept, understand where the business value is in the system, finding a defect may encourage you to spend more time

 

Continuous Integration

Earlier in the year, I gave an internal videocast to my colleagues in IT on continuous integration. I finally got around to posting it online and the presentation is now available on Slideshare.

Dave Thomas on Maximum Software Productivity – Breaking The Rules!

Dave Thomas paid a visit to Brisbane to present his talk Maximum Software Productivity – Breaking The Rules! at the Microsoft office at Waterfront Place. He set the scene by suggesting that the last time he gave this talk he was inundated with hate mail from agilistas and objectas! Dave promised the slides would be posted, but I have not been able to locate them. Here are my notes from the session:

  • tired of “need to get agile” and “need to get objects” – both good, but we got nothing from it; where is the value?
  • objects good, agile good, but are we any better off?
  • most software late, bloated, poor to maintain
  • business does not define requirements well, don’t engage or talk to the customers (NEHITO visits – “Nothing Ever Happens In The Office”), insist on the industry norms because that is what everybody else is using
  • storypoints – training wheels for people who don’t know how to estimate
  • IT does not estimate well, do not build continuously or automatically test (need to be prepared to write a big cheque), also fixed in technology
  • if not ready to do TDD, just stop! Scrum will only make you feel good
  • need to change the rules to be competitive
  • living in legacy due to the OO mud ball – legacy is code where there are no tests (see Brian Foote – Big Ball Of Mud)
  • objects too hard for normal people, first thing to be voted off the island should be Hibernate, objects don’t work for lots of things (queries, rules, transformations, etc…)
  • 80% of objects are CRUD – no objects except for the junk in the middle – just data, no simulations or data model – taking a solution and making it complicated and slowing it down
  • frameworks – so many to choose from, new versions, latest things, lots of dependencies
  • dependency injection is proof of how much we have screwed up objects
  • object libraries are unstable and languages are complex (attributes, generics, concurrency)
  • very few people use interfaces properly and keep them stable release to release
  • little reuse – promise not a reality
  • serialization carries all of the baggage of the objects contained within
  • objects suck up performance and memory – bulky and computationally expensive
  • objects cumbersome and slow for multi-core processors to run efficiently
  • objects are sequential, not parallel
  • Java application will be 4 to 5 times slower than PHP, because objects are slower
  • can we shorten software value chain – shorter, faster, cheaper – Facebook built in PHP but the money rolls in
  • agile shows predictability and productivity but zip about quality
  • agility is good, but in Java it is very difficult to change the code quickly
  • scrum increases morale (everybody feels better), but makes no difference to quality unless doing TDD
  • lean thinking – software waste – if not prepared to spend $2 million to do continuous integration and TDD then they are not serious about their software output and quality
  • don’t go to meetings unless it is increasing the bottom line – will it help ship code?
  • how to make zero defect code? Don’t ship anything!
  • lean – simple – why am I doing this? – do we need a new framework? – NO!
  • staff a team with people who have shipped software (have a track record)
  • fix price your consultants and enforce that they make delivery with acceptance tests – pay when they pass
  • reward people on delivery, not how long they work
  • tangible requirements – story on the front, acceptance criteria on the back, start with acceptance tests, they are more valuable
  • envisioning – what the developer sees is not what the customer wants
  • agile great for prototyping – building small requirements on the fly
  • a backlog filled with lumpy requirements will burn out designers and product owners – original Scrum paper said sprint 0 should be 3 months – envisioning
  • architects – not a job, a role – should be able to code
  • some companies pay senior developers same as VP
  • extreme design – four hours to design software and hardware and cost it – after a few, you can get close and find out quickly what you don’t understand
  • serious engineering needs design
  • API first – design the architecture, should always be able to get architecture from the code (push a button)
  • need APIs versioned in the code
  • want to close gap between needs and solution
  • a picture is 1,000 words, a table 200 and a diagram 50
  • table driven programming – easily understood, easy to refactor, easy to consistency check (look for missing data), easy to version and diff (just data), and data driven logic can be changed live in a running instance (see the sketch after this list)
  • integration – to talk to old systems use an RSS/ATOM feed (almost all old systems will give a feed for each transaction) so you can talk to them without custom APIs, REST/JSON your services or use ODBC as a simple interface (it’s not just for databases), use mashup tools to deliver an integrated application view
  • use scripts to save time & dollars – most software gets thrown away even before it is turned on, C# and Java are too heavyweight; Ruby, Python, PHP, Groovy and Clojure can easily leverage cloud services and existing services
  • productive languages – LINQ and Reactive LINQ (Haskell underneath), Erlang (good for switching and moving traffic, highly efficient), F#, Ruby, Scala (a better Java), Kleisli (bioinformatics), Clojure
  • Everybody should read “The Wizard Book”, Structure and Interpretation of Computer Programs by Abelson, Sussman, and Sussman
  • hardware is cheap, cloud is cheaper – all interesting data is in memory, databases are just journals and don’t really exist, Google does everything by brute force search (eg. translation), speed now means complicated stuff can be done easier and cheaper
  • data driven – massive storage means we can store it all very cheaply, run smart algorithms to determine best value customers because we have all of the data, recommendation engines (NetFlix), Net Promoter, complex event processing (open intelligence), real time financials (no end of month financials, know state of company all of the time)
  • query oriented programming (QOP), Greenplum, Q, Aleri (basically extended SQL dialects with functions) and other functional languages
  • array VMs – better than object VMs – always boxed, simpler garbage collection, support all data types (an array of stuff is an array of stuff), the VM can be small (just an interpreter), arrays are column stores already and trivially serialised, take less space, and programs are concise and compact
  • best practices for functional programming – agile, refactoring tools, FindBugs / Lint
  • challenges for functional programming – get out of math phobia, different way of thinking, need to write literate and understandable code, think in functions
  • yesterday and tomorrow is always wrong – if you get more bang for buck and competitive advantage, why use existing technologies, if you believe IT is strategic you need to do something strategic with it and dare to be different
  • lots of old things work, lots of new things work well, get out of the agile and OO box
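
As an illustration of the table-driven style Dave advocated, here is a minimal Java sketch in which the business rules live in a data table and the code is a single lookup. The discount rules are hypothetical; the point is that the table is easy to read, diff, version and consistency-check without touching any logic.

    import java.util.HashMap;
    import java.util.Map;

    public class DiscountTable {

        // customer type -> discount percentage: editing this table changes
        // behaviour without changing code, and missing rows are easy to spot.
        private static final Map<String, Integer> DISCOUNTS = new HashMap<>();
        static {
            DISCOUNTS.put("standard", 0);
            DISCOUNTS.put("silver", 5);
            DISCOUNTS.put("gold", 10);
            DISCOUNTS.put("vip", 15);
        }

        static int discountFor(String customerType) {
            Integer discount = DISCOUNTS.get(customerType);
            if (discount == null) {
                // Consistency check: missing data is detected immediately.
                throw new IllegalArgumentException(
                        "No discount rule for " + customerType);
            }
            return discount;
        }

        public static void main(String[] args) {
            System.out.println(discountFor("gold")); // prints 10
        }
    }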

AAFTT Workshop 2009 (Chicago)

I had the great pleasure to attend the Agile Alliance Functional Testing Tools (AAFTT) workshop on the Sunday before the Agile 2009 conference in Chicago, and share discussion with some of the best minds in the testing community from around the world.

The location was right across the road from the Willis Tower (better known by its previous name, the Sears Tower), and the workshop once again attracted many notable attendees from the testing community.

There were at least four tracks to choose from; these are my notes from the sessions I participated in.

Screencasting

This was a small group discussion led by Jason Huggins about a different way of thinking about test artefacts (basically, producing an iPhone commercial).


  • the Rails screencast sold Rails because it sold the idea and then the product sold itself
  • now, with YouTube, etc, we have the tools available
  • it used to be RTFM, now it is WTFV
  • ideal is to produce automated tests like the iPhone commercial, instead of a test report
  • use the “dailies” concept, like in the movies
  • perhaps the movie should be at a feature level, because the video should be interesting
  • best suited for happy path testing, is a way to secure project funding and money, remember that the iPhone commercial does not show the AT&T network being down
  • there is a separation between pre-project and during testing
  • tools currently exist, including the Castanaut DSL
  • part of the offering of Sauce Labs, currently recording Selenium tests
  • from the command line utility vnc2swf, an API called Castro was created
  • at the moment you need to clean up the screens that are recorded
  • the advantage, being VNC, is that you can use all sorts of hardware, including the iPhone
  • suggest that you use something like ulimit to stop runaway videos, especially when being run in an automated test, to limit the size of the directory or the length of the video
  • suggest make a rule that no test is longer than five minutes
  • given the current tools are written in Python, DocTest is good for testing

Lightning Talks on Tools

I came in mid-way through this session, but caught some of the tools being discussed at the end

  • some tools are quick to set up, but too hard to get past the basic level
  • tests are procedural, engineers tend to over-engineer

Robot IDE (RIDE)

  • most tools have a basic vocabulary to overcome
  • IDE is worth looking at
  • Robot has a Selenium plugin, but it is easy to write your own framework

Twist

  • specify tests as requirements, looks like a document, stored as text, write whatever you want
  • refactoring support as a first level concept
  • out of the box support for Selenium and Frankenstein (Swing)
  • write acceptance test – brown shows not implemented, allows developer to know what to implement, turns blue when done
  • refactoring concept “rephrase”
  • supports business rule tables (ie. Fitnesse for data driven tests)
  • support to mark a test as manual and generate the same reports
  • commercial software, licenced in packs
  • plugins to Eclipse, but don’t need to be familiar with this unless you are developing the automation

WebDriver

  • been around for three years

UltiFit

  • Ultimate Software, internal currently; allows you to select Fitnesse tests, setup and teardown, close browser windows, nice GUI, etc…
  • uses TestRunner under the covers

SWAT

  • been around for two years, more traction now that Lisa Crispin works for Ultimate Software
  • simple editor for SWAT (& somewhat Fitnesse)
  • has a database access editor
  • uses Fitnesse syntax
  • there is a recorder, only good for teaching, people get lazy and don’t refactor
  • can take screenshots, borrowed from WatiN
  • can’t run SWAT when Fitnesse is running as a server
  • SWAT is a C# library at its core
  • can run macros, tests from other tests
  • run script – write script (eg. JavaScript) to help things that are hard to test

High Performance Browser Testing / Selenium

Jason Huggins led this conversation which was more a roundtable debate than anything else. The group discussed how we can get tests running quicker and reduce feedback times considerably.

This discussion led to a couple of the quotes of the workshop from Jason Huggins:

  • “Selenium IDE is the place to start with Selenium, but it is Selenium on training wheels”
  • “Record/playback testing tools should be clearly labeled as ‘training wheels’”
  • “What to do with the Selenium IDE? No self respecting developer will use it.” He is thinking of renaming the IDE to Selenium Trainer.
  • “Amazing how many people in the testing community are red-green colour blind”

When Can / Do You Automate Too Much?

This started as a discussion on testing led by Brandon Carlson…

  • get your business people to write the tests – they will understand how hard it is; one outcome seen was that the amount of scope was reduced because they had to do the work

…but it ended up as a great discussion on agile approaches and rollout, with a number of war stories led by Dana Wells and Jason Montague from Wells Fargo:

  • still early in their agile deployment
  • wish to emulate some of the good work done by some of the early agile teams
  • estimate in NUTs (Nebulous Units of Time)

Miscellaneous and Other Links

Some other miscellaneous observations from the workshop:

  • a number of sessions were recorded
  • of those using Windows laptops, a large percentage were running Google Chrome
  • Wikispaces is good to setup a quick wiki

A number of posts about the workshop have been published since.

And you can view the photos that I took from the event at: http://www.flickr.com/photos/33840476@N06/sets/72157622521200928/

Wrap up from CITCON Brisbane

I attended the CITCON (Continuous Integration and Testing Conference) in Brisbane last weekend and had an awesome time discussing a range of topics with the most passionate in this field.


I have added my notes to the conference wiki, but my takeaways from the sessions I attended are:

Elements of Enterprise Continuous Integration

Jeff Frederick led a discussion based around the Elements of Continuous Integration maturity model:

  • for teams that are already doing continuous integration, it gives you a target to attain
  • is obnoxious after insane (where to for teams that are already at the top level)?
  • tooling makes continuous integration trivial now (when Cruise Control was released many people thought it crazy that you might build on every release; now it’s a given)
  • the model was developed because people assume what is possible is based around their personal experiences
  • the model shows the industry norms and targets, and if your team is not at these levels you are behind the curve

The discussion branched out around the following ideas:

  • scrum does not prescribe continuous integration, but continuous integration is a good development technique
  • that it should be acknowledged that there is a difference between project builds and full product builds (which can take days)
  • I raised the idea that perhaps there should be an element around team principles, and that things like performance (and more importantly, the team realisation that performance should be monitored and improved) should be an indicator to maturity (there was much debate about this!)
  • a number of industries potentially have their continuous integration processes audited, such as defence, gaming and financial organisations that have Sarbanes-Oxley requirements
  • it was acknowledged that most large organisations have teams at different levels on the maturity scale (this is certainly my experience)
  • dynamic languages don’t really build or deploy. This raised discussion that dynamic languages are not compiling as opposed to not building, and that in many cases one-man consultancies can manage their deployment process in a much more lightweight manner
  • parallel to CMMI, is there a payoff to getting to insane?
  • maturity is often determined when we move from dropping code to testers to testing the development build (where testers are writing tests while the code is being developed)
  • where is the line that determines that the build is complete? It should be the entire team, not just the developers or the QA team
  • the QA team is traditionally where much of the auditing happens, therefore many testers are reluctant to change as they have built up processes to deal with audits over a number of years

For the record, the cutting-edge agile teams I have worked with over the last few years were at the following levels:

  • Building (Intermediate)
  • Deploying (Intermediate)
  • Testing (Insane)
  • Reporting (Intermediate)

We still have work to do!

Virtualisation & CI

I used the “law of two feet” during this session, but was interested to hear that many people are using virtualisation very effectively in their test labs, and that it makes getting environments and data ready for testing much easier.

Long Build Times

The discussion was well-established by the time I got to this session, but some of the key points for me from the discussion were:

  • question as to when static analysis checks should be run in the build – the consensus that running them first means you get the quickest feedback
  • longer builds should be run nightly so as not to hold up developers (see the sketch after this list)
  • prioritising build queues or using different machines sounds like a good idea, but nobody is doing it
  • you can reuse functional tests for performance tests, but targeting specific tests seems to work better
  • Atlassian use JMeter for performance tests and have a variety of Maven and Ant builds, but use Maven for managing repositories
  • Ant is still well regarded, IDEA support is awesome, and many people do not understand the power of custom Ant tasks or the Ant idioms
  • the build should be regarded as part of your code
  • discussion about using a Java build tool such as Hammer, and why we can’t articulate why it seems wrong
  • not enough people understand Maven, usually there is “one guy” on the team
  • Vizant is a good tool to graph the build
  • EasyAnt combines all of the Ant idioms plus Ivy
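
One concrete way to keep the quick build fast while pushing the long-running tests to a nightly run is JUnit 4's categories; a minimal sketch, with hypothetical test and suite names, is below.

    import org.junit.Test;
    import org.junit.experimental.categories.Categories;
    import org.junit.experimental.categories.Categories.ExcludeCategory;
    import org.junit.experimental.categories.Category;
    import org.junit.runner.RunWith;
    import org.junit.runners.Suite.SuiteClasses;

    // Marker interface used to tag long-running tests.
    interface SlowTests {}

    class OrderTests {
        @Test
        public void totalIsSumOfLineItems() {
            // quick check: runs in every continuous build
        }

        @Category(SlowTests.class)
        @Test
        public void endToEndOrderRegression() {
            // slow check: run in the nightly build only
        }
    }

    // The continuous build runs this suite, excluding anything tagged slow;
    // a separate nightly suite would include the SlowTests category instead.
    @RunWith(Categories.class)
    @ExcludeCategory(SlowTests.class)
    @SuiteClasses(OrderTests.class)
    public class QuickBuildSuite {}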

Is Scrum Evil?

Jeff Frederick led a discussion that he has led at previous CITCONs around the world.

The team first debated why Scrum is evil. During this discussion I really thought the whole agile movement was done for. Jeff asked the group to finish the sentence “Scrum is evil because…”:

  • it becomes an excuse
  • that’s not Scrum
  • touted as a silver bullet
  • hides poor personal estimation
  • master as dictator, project manager
  • two days to agile master certification
  • daily standup equals agile
  • agile by the numbers
  • is dessert first
  • you lose the baby with the bathwater
  • Scrum teams don’t play well with others, including customers
  • it has certification
  • is the new RUP

Jeff then proposed a way to think about Scrum adoption as outlined in Geoffrey Moore’s “Crossing The Chasm”. The early adopters had success while the early majority are putting their faith in training everybody as Certified Scrum Masters (a problem that appears to be a far greater issue in Europe than Australia).

Then, just as though all hope had gone, Jeff asked the group to finish the sentence “Scrum is good because…”:

  • people can get it
  • an easy introduction
  • a good starting point
  • it is better than a cowboy shop
  • people can actually follow it
  • improves visibility
  • blockers are highlighted
  • testers can start work early
  • provides a forum for communication
  • can engage customers in a much richer way
  • states there should be a facilitator
  • results focussed
  • makes everybody responsible for end result
  • better communication from end result

The key outcome from the group was “Scrum is not evil… people are evil”.

This was a great way of teasing out the issues and advantages of using an agile process, and one that we may be able to use in the enterprise with teams who have been through training but appear resistant to change.

Seeding Test Data

A good discussion about ways to seed test data

  • Erik Petersen introduced the group to GenerateData.com, a free site that generates realistic addresses and other data a random number of times based on factors you select, which you can then inject using SQL – the site looks awesome!
  • others in the group mentioned LiquiBase that can be used to version the database, is designed for database management but can be used to seed data
  • Unitils is used to setup data scripts
  • one suggestion was to build a reset database function into the system
  • HSQL (Hypersonic) is a good way to create in-memory databases from Hibernate (see the sketch below)
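
As a minimal sketch of the in-memory approach, here is how a test run might seed a throwaway HSQLDB database over plain JDBC, assuming the hsqldb jar is on the classpath. The schema and rows are hypothetical; in practice the DDL might come from a tool like LiquiBase and the data from a generator like GenerateData.com.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class SeedTestData {
        public static void main(String[] args) throws Exception {
            // "mem:" gives a database that vanishes with the JVM, so every
            // test run starts from a known state.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:hsqldb:mem:testdb", "sa", "");
                 Statement stmt = conn.createStatement()) {

                stmt.execute("CREATE TABLE customer (id INT PRIMARY KEY, name VARCHAR(50))");
                stmt.execute("INSERT INTO customer VALUES (1, 'Ada Lovelace')");
                stmt.execute("INSERT INTO customer VALUES (2, 'Alan Turing')");

                try (ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM customer")) {
                    rs.next();
                    System.out.println("Seeded " + rs.getInt(1) + " customers");
                }
            }
        }
    }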

The discussion got a little more generic and talked about:

  • Roo, a Java Grails-like platform
  • WebObjects by Apple is a lot better than it used to be

Extending CI Past Traditional Dev & Release Process

I led this discussion, and whilst it focussed mainly on different usages that I have been involved with (with assistance from Paul O’Keeffe and Paul King), we also had a good discussion about Tableaux and the build process at Atlassian.

Conclusions

A great open conference attended by people passionate enough to give up their Saturday to talk about continuous integration and testing.