BDD In Action (Book Review & Summary)

I have known John Ferguson Smart for a number of years, bumping into him at various conferences since 2009. When I heard he was writing a book on BDD I was both excited and apprehensive – a number of people have attempted to cover BDD over the years with varying levels of success. I must say I was pleasantly surprised with the outcome – “BDD In Action”.

I have long been an advocate of Test Driven Development (TDD). I have long subscribed to the definition of TDD being developer-driven tests at the code level, ATDD being at the story card level, and BDD being the approach of testing behaviour (most popularly using the Given… When… Then… format). I do understand this is not quite the definition used by Dan North, Gojko Adzic or John Smart (for whom the terms are largely interchangeable), but one thing I have appreciated more in recent years (particularly after spending time with Gojko) is the importance of the conversation.

Review

BDD In Action kicks off with an in-depth explanation of what BDD is and why it is important, before moving on to a good end-to-end example. One of the highlights of the book for me was chapter 3, which has some good coverage of different techniques for requirements analysis. Whilst it could be argued that these are not really BDD, they are included as good examples of how to ensure you are working on requirements that add business value. The approaches include Feature Injection, Impact Mapping and the Purpose-Based Alignment Model, as well as Real Options and Deliberate Discovery.

John has always extended my definition of the core roles on the development team (the three amigos), and this is described within the book as well. The second section of the book explains requirements gathering, through to specifications and then basic automation of a DSL at the scenario level. It seems to imply that the tester should be able to write the executable specification and the base automation, although this does not match my experience of most testers in the field.

Section three of the book covers the coding aspects of BDD, including the automation of tests at the UI level as well as the unit testing level. It goes into a fair amount of detail on how different tools work in different languages, as well as a fairly comprehensive overview of TDD. The final section of the book introduces living documentation, mainly using tools like Thucydides, as well as how BDD fits into the continuous delivery world.

The book is full of diagrams as well as comprehensive code examples that are well explained and relevant. One of the main advantages of this book is that it is not aimed at any one tool – in fact it covers a number of tools and languages in a reasonable amount of detail. The other standout books on the subject have either covered the process in great detail (Specification By Example) or a tool in detail (The Cucumber Book). This book does a very reasonable job of both.

This advantage is also its disadvantage – I would hope testers and business analysts would read this book, but they may be discouraged by the amount of code examples, which start very early in the book. On the flip side, there is good coverage of requirements tools at the beginning of the book that may discourage some developers. I hope that in this world of cross-functional teams this is not the case.

Overall this is a very well written book that covers the full spectrum of BDD (and TDD, ATDD and SBE). It is also good to see a book that has Australian examples in it for a change, including the Sydney train system and the Queensland Health payroll project.

My full book review and interview with John is available on InfoQ.

Summary

Here are my notes from the book:

The Basics

  • BDD was born – It was a response to a triple conundrum: programmers didn’t want to write tests; testers didn’t want programmers writing tests; and business stakeholders didn’t see any value in anything that wasn’t production code (quote from Dan North)
  • BDD is a mechanism for fostering collaboration and discovery through examples – the real goal is to use software to create business impact
  • 2011 edition of the Standish Group’s annual CHAOS Report found that 42% of projects were delivered late, ran over budget, or failed to deliver all of the requested features and 21% of projects were cancelled entirely
  • BDD practitioners use conversations around concrete examples of system behavior to help understand how features will provide value to the business
  • Queensland Health Department – initial budget for the project was around $6 million, cost the state over $416 million since going into production and would cost an additional $837 million to fix. This colossal sum included $220 million just to fix the immediate software issues
  • “When the terrain disagrees with the map, trust the terrain” (Swiss Army proverb)
  • one important benefit of BDD is that it provides techniques that can help you manage this uncertainty and reduce the risk that comes with it
  • Behavior-Driven Development (BDD) is a set of software engineering practices designed to help teams build and deliver more valuable, higher quality software faster
  • North observed that a few simple practices, such as naming unit tests as full sentences and using the word “should,” can help developers write more meaningful tests, which in turn helps them write higher quality code more efficiently. When you think in terms of what the class should do, instead of what method or function is being tested, it’s easier to keep your efforts focused on the underlying business requirements (see the sketch at the end of this list).
  • Acceptance-Test-Driven Development (ATDD) is now a widely used synonym for Specification by Example
  • A feature is a tangible, deliverable piece of functionality that helps the business to achieve its business goals
  • In Gherkin, the requirements related to a particular feature are grouped into a single text file called a feature file. A feature file contains a short description of the feature, followed by a number of scenarios, or formalized examples of how a feature works. Each scenario is made up of a number of steps, where each step starts with one of a small number of keywords (Given, When, Then, And, and But).
  • Given describes the preconditions for the scenario and prepares the test environment. When describes the action under test. Then describes the expected outcomes. The And and But keywords can be used to join several Given, When, or Then steps together in a more readable way.
  • Executable specifications are about communication as much as they are about validation
  • Don’t write unit tests, write low-level specifications
  • Benefits – Reduced waste, Reduced costs, Easier and safer changes, Faster releases
  • Disadvantages – BDD requires high business engagement and collaboration, works best in an Agile or iterative context, doesn’t work well in a silo, poorly written tests can lead to higher test-maintenance costs
  • Spock is a lightweight and expressive BDD-style testing library for Java and Groovy applications. You write unit tests in the form of “specifications,” using a very readable “given … when … then” structure similar to that used in the JBehave scenarios. The >> sign in Spock is shorthand for saying “when I call this method with these parameters, return these values.”
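
To make the “should”-style test naming mentioned earlier in this list concrete, here is a minimal sketch of such a unit test in Java with JUnit. The FrequentFlyerAccount class and its point rules are assumptions invented for the example, not code from the book.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // The class and method names read as sentences about behaviour:
    // "When earning frequent flyer points ... should add points for an economy flight".
    public class WhenEarningFrequentFlyerPoints {

        // A deliberately tiny, hypothetical domain class so the example is self-contained.
        static class FrequentFlyerAccount {
            private int points;
            void earnPointsForEconomyFlight(int kilometres) { points += kilometres / 10; }
            int getPoints() { return points; }
        }

        @Test
        public void should_start_with_a_zero_points_balance() {
            assertEquals(0, new FrequentFlyerAccount().getPoints());
        }

        @Test
        public void should_add_points_based_on_distance_for_an_economy_flight() {
            FrequentFlyerAccount account = new FrequentFlyerAccount();
            account.earnPointsForEconomyFlight(1000);
            assertEquals(100, account.getPoints());
        }
    }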

Starting at Requirements

  • Business Analysts will find it useful to identify four things: 1.  Why is the software being built (what is the project’s vision statement)? 2.  How will the project deliver value to the organization (what are the project’s business goals)? 3.  What stakeholders are involved in the project, and how will the project affect them? 4.  What high-level capabilities should the software provide for stakeholders to enable them to achieve their business goals more effectively?
  • Feature Injection in three steps: 1.  Hunt the value. 2.  Inject the features. 3.  Spot the examples
  • In his book Crossing the Chasm: Marketing and Selling High-Tech Products to Mainstream Customers, Geoffrey A. Moore proposes a template for a good product vision statement
  • You can also write goals using the following “In order to … As a … I want to …” format: In order to increase ticket sales by 5% over the next year, As the Flying High Sales Manager, I want to encourage travellers to fly with Flying High rather than with a rival company
  • Some business managers use the SMART acronym: business goals should be Specific, Measurable, Achievable, Relevant, Time-bound
  • The goals of most commercial organizations are, by definition, ultimately financial in nature. In fact, almost all business goals can be grouped into one of the four following categories: Increasing revenue, Reducing costs, Protecting revenue, Avoiding future costs
  • An impact map is a mind-map built during a conversation, or series of conversations, between stakeholders and members of the development team. The conversation centers around four types of questions: Why? Who? How? What?
  • Purpose Based Alignment Model – a feature will fall into one of four categories: Differentiating, Parity, Partner, Minimum impact
  • BDD places great emphasis on building “software that matters”: it defines several processes for turning client requirements into something that developers can use to code against and that accurately reflects the core value of the software the client wants, the value the project is meant to deliver, and the features that will be able to deliver this value.
  • The aim of Feature Injection is to flesh out the minimum set of features that will provide the most benefit to stakeholders in terms of achieving their business goals
  • Project vision, a short statement that provides a high-level guiding direction for the project
  • As a software developer, your job is to design and build capabilities that help the business realize these goals. A capability gives your users the ability to achieve some goal or fulfill some task. A good way to spot a capability is that it can be prefixed with the words “to be able to”
  • Features are what you actually build, and they’re what deliver the value
  • Business goal succinctly defines how the project will benefit the organization or how it will align with the organization’s strategies or vocation
  • Repeatedly ask “why” until you get to a viable business goal. As a rule of thumb, five why-style questions are usually enough to identify the underlying business value (known as “popping the why stack”)
  • “Don’t tell people how to do things, tell them what to do and let them surprise you with their results”. George S. Patton
  • Not all features are equal. Some features will be areas of innovation, requiring specialized domain knowledge and expertise and adding significant value. Others, such as online payment with credit cards, might be necessary in a market, but won’t distinguish your product from the competition

Building Features

  • Dan North, “Introducing Deliberate Discovery” (2010), http://dannorth.net/2010/08/30/introducing-deliberate-discovery.
  • In BDD terms, a feature is a piece of software functionality that helps users or other stakeholders achieve some business goal
  • User story is a way of breaking the feature down into more manageable chunks, user stories are essentially planning artifacts
  • Features are expressed in business terms and in a language that management can understand. If you were writing a user manual, a feature would probably have its own section or subsection
  • Dan Goodin, “Anatomy of a hack: even your ‘complicated’ password is easy to crack,” http://www.wired.co.uk/news/archive/2013-05/28/password-cracking
  • Real Options in three simple points: Options have value. Options expire. Never commit early unless you know why
  • Deliberate Discovery is the flip side of Real Options – starts with the assumption that there are things you don’t know. Real Options help you keep your options open until you have enough information to act; Deliberate Discovery helps you get this information
  • “Three Amigos” – three team members (a developer, a tester, and a business analyst or product owner) get together to discuss a feature and draw up the examples

Executable Specifications

  • A scenario starts with the Scenario keyword and a descriptive title
  • The Then step is where the testing takes place—this is where you describe what outcome you expect. A common anti-pattern among new BDD practitioners is to mix the When and Then steps
  • Tables can be used to combine several similar examples more concisely in a single scenario, or to express test data or expected results in a more succinct way
  • Scenarios are organized in feature files
  • One of the core concepts behind BDD is the idea that you can express significant concrete examples in a form that’s both readable for stakeholders and executable as part of your automated test suite
  • Scenarios are stored in simple text files and grouped by feature. These files are called, logically enough, feature files
  • At the top of a feature file is a section where you can include the description of the corresponding feature
  • The title should describe an activity that a user or stakeholder would like to perform
  • Dan North’s article, “What’s in a story,” for some interesting tips on writing well-pitched stories and scenarios: http://dannorth.net/whats-in-a-story/
  • In JBehave, the Narrative keyword is used to mark the start of an optional, free-form description
  • In Gherkin, you use the Feature keyword to mark the feature’s title. Any text between this title and the first scenario is treated as a feature description
  • The Given step describes the preconditions for your test – be careful to only include the preconditions that are directly related to the scenario
  • The When step describes the principal action or event that you want to perform
  • The Then step compares the observed outcome or state of the system with what you expect
  • In both Gherkin and JBehave, any of the previous steps can be extended using And
  • Good habit to keep “Given … When … Then” clauses concise and focused. If you’re tempted to place two conditions in the same step, consider splitting them
  • In Gherkin, you can insert a comment, or comment out a line, by placing the hash character (#) at the start of a line. In JBehave, a comment line starts with !--
  • Having a lot of similar scenarios to describe a set of related business rules is a poor practice; the duplication makes the scenarios harder to maintain
  • Data from a table is passed into each step via the field names in angle brackets
  • Presenting data in tabular form can make it easier to spot patterns
  • Good scenarios are declarative, not imperative. They describe the requirements in terms of what a feature should do, not how it should do it
  • The Background keyword lets you specify steps that will be run before each scenario in the feature. You can use this to avoid duplicating steps in each scenario, which also helps focus attention on the important bits of each scenario. In JBehave, you can do something similar with the GivenStories keyword
  • In JBehave, feature files conventionally use the .story suffix, whereas the Gherkin-based tools use the .feature suffix
  • The role of a scenario is to illustrate a feature, and you place all the scenarios that describe a particular feature in a single file, usually with a name that summarizes the feature
  • Useful to relate a feature or an individual scenario back to the corresponding issue, both for information and so that reporting tools can use this data to create a link back to the corresponding issue. In JBehave, you can do this using the Meta keyword.
  • Some BDD tools (Cucumber, in particular) also let you write hooks—methods that will be executed before or after a scenario with a specific tag is executed (a minimal sketch follows this list)
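
As a minimal sketch of the tagged hooks mentioned at the end of the list, here is what a Cucumber-JVM hook class might look like in Java. The package names follow the older cucumber.api.* convention (newer Cucumber releases use io.cucumber.java instead), and the TestDatabase helper is a hypothetical stand-in for your own setup code.

    import cucumber.api.java.After;
    import cucumber.api.java.Before;

    public class DatabaseHooks {

        // A hypothetical stand-in for whatever resets your test database.
        static class TestDatabase {
            static void reset() { /* drop and recreate the schema */ }
            static void deleteTestData() { /* remove rows created by the scenario */ }
        }

        // Runs before every scenario tagged @database, so each one starts from a known state.
        @Before("@database")
        public void resetTestDatabase() {
            TestDatabase.reset();
        }

        // Runs after every scenario tagged @database, cleaning up anything it created.
        @After("@database")
        public void cleanUpTestData() {
            TestDatabase.deleteTestData();
        }
    }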

Automation

  • Tools like JBehave and Cucumber can’t turn a text scenario into an automated test by themselves; they need your help.
  • Step definitions are essentially bits of code that interpret the text in feature files and know what to do for each step
  • The test automation library will read the feature files and figure out what method it should call for each step
  • Step definitions interpret the scenario texts and call the test automation layer to perform the actual tasks
  • The test automation layer interacts with the application under test
  • If all of the steps succeed, then the scenario will succeed. If one of the steps fails, then the scenario will fail
  • Thucydides (http://thucydides.info) is an open source library that adds better-integrated and more-comprehensive reporting capabilities to conventional BDD tools such as JBehave and Cucumber. The specialty of Thucydides is taking the test results produced by BDD tools like JBehave and turning them into rich, well-integrated living documentation
  • JBehave (http://jbehave.org) is a popular Java-based BDD framework that was originally written by Dan North. In JBehave, you write step definition methods in Java or in other JVM languages such as Groovy or Scala.
  • Easiest way to build and run a JBehave/Thucydides test suite is to use Maven
  • JBehave step definitions are just annotated Java methods that live in ordinary Java classes. JBehave uses an @Given, @When, or @Then annotation (see the sketch after this list)
  • Cucumber is a very popular BDD tool from the Ruby world
  • Cucumber-JVM is a more recent Java implementation of Cucumber, which allows you to write step definitions in Java and other JVM languages
  • For pure Python solutions, there are currently three tools available: Lettuce (http://pythonhosted.org/lettuce), Freshen (https://github.com/rlisagor/freshen), and Behave (http://pythonhosted.org/behave). Behave is the most stable, best documented, and most feature-rich of the three.
  • For a .NET environment, your best option for BDD is SpecFlow (http://specflow.org). SpecFlow is an open source Visual Studio extension that provides support for Gherkin scenarios
  • Unit testing is well supported in JavaScript, and low-level BDD unit-testing libraries like Jasmine and Mocha are widely used. For scenario-level BDD there is Cucumber-JS (https://github.com/cucumber/cucumber-js), which is probably the best known of the JavaScript BDD libraries, and Yadda (https://github.com/acuminous/yadda), an alternative to Cucumber-JS that allows more flexibility in the scenario wording. Cucumber-JS relies on Node.js and npm
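
To illustrate how step definitions glue scenario text to code, here is a minimal JBehave-style sketch in Java. The scenario wording, class name and point arithmetic are assumptions made up for the example rather than taken from the book.

    import org.jbehave.core.annotations.Given;
    import org.jbehave.core.annotations.Then;
    import org.jbehave.core.annotations.When;
    import static org.junit.Assert.assertEquals;

    // Matches steps such as:
    //   Given a traveller with 100 frequent flyer points
    //   When the traveller earns 50 extra points
    //   Then the traveller should have 150 frequent flyer points
    public class FrequentFlyerSteps {

        private int points;

        @Given("a traveller with $initialPoints frequent flyer points")
        public void aTravellerWithPoints(int initialPoints) {
            points = initialPoints;
        }

        @When("the traveller earns $extraPoints extra points")
        public void theTravellerEarnsPoints(int extraPoints) {
            points = points + extraPoints;
        }

        @Then("the traveller should have $expectedPoints frequent flyer points")
        public void theTravellerShouldHavePoints(int expectedPoints) {
            assertEquals(expectedPoints, points);
        }
    }

In a real project the step methods would call a test automation layer (page objects, REST clients and so on) rather than holding state themselves.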

Coding

  • Most reliable way to set up your test database is to automatically reinitialize the database schema before each test. The next-best way to prepare test data is to automatically reinitialize the database schema every time you run the test suite. This is faster than reinitializing the database before each scenario, but it means that each scenario is responsible for deleting any test data that it creates, which isn’t without risk
  • most BDD tools provide “hooks” that allow you to perform actions before and after each scenario and at other strategic points in the test suite lifecycle.
  • When you write automated acceptance criteria, using layers can help you isolate the more volatile, low-level implementation details of your tests from the higher level, more stable business rules
  • Gojko Adzic, “How to implement UI testing without shooting yourself in the foot,” http://gojko.net/2010/04/13/how-to-implement-ui-testing-without-shooting-yourself-in-the-foot-2/
  • The Business Rules layer describes the requirement under test in high-level business terms – scenario in a feature file using either a table or a narrative structure
  • Business Flow layer. This layer represents the user’s journey through the system to achieve a particular business goal
  • The Technical layer represents how the user interacts with the system at a detailed level—how they navigate to the registration page, what they enter when they get there, how you identify these fields on the HTML page, and so forth
  • Page objects—classes that hide the technical details about HTML fields and CSS classes behind descriptively named methods
  • Only need a web test for two things: Illustrating the user’s journey through the system and illustrating how a business rule is represented in the user interface
  • Screenshots from automated web tests can be a valuable aid for testers, and they’re also a great way to provide illustrated documentation describing how the application behaves.
  • Selenium WebDriver provides good support for Page Objects. The Page Objects design pattern can help make automated web tests more readable and easier to maintain (see the first sketch after this list)
  • Mobile apps can be tested effectively using Appium (http://appium.io/), a WebDriver-based automation library for mobile apps, and the Page Objects pattern is applicable for any type of GUI.
  • It’s virtually impossible to do good BDD-style acceptance testing with Record-Replay tools
  • HtmlUnit for Java (http://htmlunit.sourceforge.net), Webrat for Ruby (https://github.com/brynary/webrat), and Twill for Python (http://twill.idyll.org) send HTTP queries directly to the server, without having to start up an actual web browser
  • PhantomJS (http://phantomjs.org) provides a more accurate browser simulation, because it renders the HTML like a real browser would, but does so internally
  • HtmlUnit uses the Rhino JavaScript implementation, which isn’t used by a real browser. PhantomJS uses WebKit, which may have different behavior than Firefox or Internet Explorer
  • Several open source libraries for different platforms that can help you build on WebDriver to write web tests more efficiently and more expressively, including Thucydides, Watir, WatiN, and Geb
  • There are many open source and commercial load-testing tools, and most can be scripted. Popular open source options in the Java world include SoapUI (www.soapui.org), JMeter (http://jmeter.apache.org/), and The Grinder (http://grinder.sourceforge.net)
  • A number of more BDD-flavored unit-testing tools have emerged over recent years that make these techniques easier and more intuitive to practice. Tools like RSpec, NSpec, Spock, and Jasmine
  • There are two main flavors to fluent assertions. The first typically uses the word “assert,” whereas the second uses terms like “should” or “expect.” The first approach comes from a more traditional unit-testing background and focuses on testing and verification. The second is more BDD-centric: the words “should” and “expect” describe what you think the application should do, regardless of what it does currently, or if it even exists (see the second sketch after this list)
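
Here is a minimal sketch of the Page Objects pattern with Selenium WebDriver, as mentioned above. The URL, element ids and CSS class are hypothetical, and a real page object would normally add waits and error handling.

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;

    // Hides the HTML details of a hypothetical registration page behind
    // descriptively named methods, so tests read in business terms.
    public class RegistrationPage {

        private final WebDriver driver;

        public RegistrationPage(WebDriver driver) {
            this.driver = driver;
        }

        public void open() {
            driver.get("https://example.com/register");   // hypothetical URL
        }

        public void registerAs(String name, String email) {
            driver.findElement(By.id("name")).sendKeys(name);
            driver.findElement(By.id("email")).sendKeys(email);
            driver.findElement(By.id("register-button")).click();
        }

        public String confirmationMessage() {
            return driver.findElement(By.cssSelector(".confirmation")).getText();
        }
    }

A test or step definition then calls registerAs(...) instead of poking at HTML fields directly, which keeps the churn from markup changes in one place.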
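
And to show the two assertion flavours side by side, here is a small Java sketch contrasting a traditional assert-style check with a fluent, BDD-flavoured one. AssertJ is used as one example of the “should/expect” family; Hamcrest and similar libraries follow the same spirit.

    import static org.assertj.core.api.Assertions.assertThat;
    import static org.junit.Assert.assertEquals;

    public class AssertionFlavours {

        public void traditionalAssertStyle(int earnedPoints) {
            // Focus on verification: assert that expected equals actual.
            assertEquals(100, earnedPoints);
        }

        public void fluentBddStyle(int earnedPoints) {
            // Reads closer to a specification of what the value should be.
            assertThat(earnedPoints).isEqualTo(100);
        }
    }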

Living Documentation

  • BDD reporting completes the circle that started with the initial conversations with business stakeholders
  • Testers also use the living documentation to complement their own testing activities, to understand how features have been implemented, and to get a better idea of the areas in which they should focus their exploratory testing
  • In BDD terms, a feature can be considered ready (or done) when all of its acceptance criteria pass
  • Cucumber Reports (www.masterthought.net/section/cucumber-reporting) provides more presentable reports
  • Thucydides provides feature-level reports, either directly with JBehave or with test results imported from other tools such as Cucumber, SpecFlow, and Behave
  • Organize living documentation to reflect the requirements hierarchy of the project, organize cross-functional concerns by using tags
  • Some automated acceptance-testing tools such as FitNesse and Concordion (http://concordion.org) give you even more flexibility in how you organize the living documentation. FitNesse (http://fitnesse.org/) uses wiki pages to let the team, including business analysts and even users, write their own acceptance criteria in a tabular format

Continuous *

  • Each executable specification should be self-sufficient. Executable specifications should be stored in version control. You should be able to run the executable specifications from the command line, typically using a build script
  • Executable specifications shouldn’t depend on other specifications to prepare test data or to place the system in a particular state. Each specification should be able to run in isolation, and each specification should set up the test environment in the initial state it requires
  • Automated build process needs to be able to run the right set of executable specifications for a given version of the application (see the sketch after this list)
  • Automated acceptance criteria should be considered a form of source code and stored in the same source code repository as your application code
  • There are many build-scripting tools, and your choice will typically depend on the nature of your project. In the Java world, you might use Maven, Gradle, or Ant. For a JavaScript-based project, you could use Grunt or Gulp. In .NET, it might be MSBuild or NAnt, and so on
  • CI relies heavily on good automated testing: without automated testing, a CI server is little better than an automated compilation checker
  • For teams practicing BDD, a CI server also acts as a platform for automatically building and publishing living documentation
  • the build pipeline is typically made up of a series of quality gateways, which run different sorts of tests and checks
  • The simplest way to publish up-to-date living documentation is to store it directly on the CI build server. Almost all of the BDD tools we’ve discussed in this book produce HTML reports, and most CI servers let you store HTML reports produced as part of a build
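
As one possible way to wire this into a build, here is a minimal sketch of a Cucumber-JVM JUnit runner that a CI job could invoke (for example via mvn verify). The feature path, glue package and @smoke tag are assumptions, and the package names follow the older cucumber.api.* convention; newer Cucumber versions use io.cucumber.junit and a slightly different tag-expression syntax.

    import cucumber.api.CucumberOptions;
    import cucumber.api.junit.Cucumber;
    import org.junit.runner.RunWith;

    // Running this class executes only the executable specifications tagged @smoke,
    // so different pipeline stages can run different subsets of the specifications.
    @RunWith(Cucumber.class)
    @CucumberOptions(
            features = "src/test/resources/features",   // where the .feature files live
            glue = "com.example.steps",                  // package containing the step definitions
            tags = {"@smoke"}                            // select the subset for this stage
    )
    public class SmokeTestSuite {
    }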

Interview and Book Review: BDD In Action

“BDD In Action” is a book that aims to cover the full spectrum of BDD practices from requirements through to the development of production code backed by executable specifications and automated tests.

Source: Interview and Book Review: BDD In Action

Agile Australia 2012 Day 2 Review

Day 2 of Agile Australia 2012, and another busy day of MC’ing and attending sessions.

The first (hastily rescheduled) keynote session was from Roy Singham from ThoughtWorks.


The second keynote was supposed to be Mark “Bomber” Thompson from the Essendon Football Club, but he was an unexplained no-show. After an impromptu thank-you speech from me and sending the conference off to an early break, James Hird arrived to substitute and gave an impromptu talk. As a result of the scheduling changes, I unfortunately did not get to see much of either session.


How Lonely Planet Used Agile With SAP and Delighted Customers

I sat in the back of this session delivered by Ed Cortis from Lonely Planet. His slides are available here.

  • failed, needed twice as many people after implementation
  • ran net promoter scores internally, -40!
  • attempted Agile customer management – planning meetings took 3 hours, attendance dropped, SAP team became prioritisers
  • NPS dropped to about -35
  • changed team structure and in-sourced, positive NPS
  • got agile working – 4 week sprint, 40 minute presentation, stakeholders turn up because if you are not there you don’t get prioritised
  • developed a prioritisation matrix – business value versus effort, colour coded cards for skillset, sets order for prioritisation
  • pre work is required for the meeting – know how many points of effort for every available person
  • prioritisation board – built the backlog as part of the session
  • no spreadsheets!

The Trouble With Time Machine

I was MC for this session delivered by Matthew Hodgson from Zen Ex Machina. He gets extra marks for working Doctor Who and bow tie references into the talk. His slides are available here.

  • UX people are time travelers
  • time machine pattern – work an iteration or more ahead of the development team
  • UX is primarily about design, we are in two different worlds
  • embed the time machine pattern within Scrum
  • personas – focus on the pragmatic face of our users (David Hussman) – synthesise what we understand at the moment
  • added to GWT… Given I am a <role> AND I VALUE <value>, When… Then…
  • grooming is the forgotten ceremony
  • involved the users in planning poker – got clear perspective in the context of their environment
  • demo became a cognitive walk through

Emerging Paradigms in Software Testing

I was MC for this session delivered by Kristan Vingrys from ThoughtWorks. I have known Kristan for a number of years, and I resonate very closely with his views on testing and testers. His slides are available here.

  • you have to build quality into the product
  • ATDD is a good way to break down the barriers between developers and testers
  • need to change focus to preventing defects rather than finding defects – measure yourself that more defects is bad
  • fast feedback – embrace continuous integration, automation and the test pyramid
  • involve everyone – crowd source your problems, tests are an asset, version control your test cases
  • change focus from how I prevent this going into production to how I get this into production
  • build pipeline – stage the build to run different tests in different stages of the pipeline
  • tester needs to inform the team of quality, not be responsible for quality
  • target testing to things that are changing, not just scatter gun
  • it’s about the principles, not the practices
  • test code is code – treat it like any other code
  • it’s important to know what you are not covering, more than what you are covering (Model Based Testing)

Design Eye For A Dev Guy

I was MC for this session delivered by Julian Boot from Majitek. This was one of the highlight sessions that I attended at the conference and as I remarked when thanking Julian, it reaffirmed how much I don’t know about good design. His slides are available here.

  • you gotta love it, you gotta be able to do it and it needs to deliver a bag load of cash
  • people now expect a fit and finish, design is now expected
  • people over process, not everyone is a good designer so let people play to their strengths as weaknesses get in the way of excellence – need to understand it though
  • design is related to visual processing – what we see is what we design, design can be taught
  • highlight individual items – contrast, colour, shape, white space, underlining
  • grouping – proximity, continuity, enclosure, connection
  • proportion, substance and harmony are important
  • subtle changes dramatically affect the visualization
  1. use a grid like CSS Grid and Twitter Bootstrap
  2. focus on data over labels – make the data bigger, keep your headings close to your data so you don’t get lost
  3. hierarchy of actions, but use them properly
  4. colour – use a designer, but if not use 3 colours in one shade and two others (the best pro tip is three greys plus two other colours)
  5. let design be your brand, don’t overuse the brand

Agile Executive: The Naked Truth!

I was MC for this session led by Kelly Waters from ThoughtWorks and author of the All About Agile blog. I unfortunately did not get to see much of this presentation, the slides for which are available here.


Agile Development on Large Legacy Architecture

I was MC for this session delivered by Tony Young from Integrated Research. This session was designated as “Expert” but there was nothing in it that I could see that warranted that level. His slides are available here.

  • teams find it hard to focus at 7-8 people and they saw parallel development, sweet spot was 5+/- 1
  • changed because competitors moving faster and customers questioned our quality
  • used agile guidelines, not rules – had must dos and bendys
  • product team delivers using Scrum and hands over to a QA team that uses Kanban!
  • the peer pressure to try is key
  • use Lego board for backlog to see resource impacts

Other Stuff

One of my colleagues who presented a talk on day 2 was Colin McCririck (who is the Executive Manager of a team I coached for some time) and he spoke on Leadership Secrets for Agile Adoption.

Rosie X recorded an interview with me during the conference which was a lot of fun.

Renee Troughton and I took some time out from talks to record a podcast interview with Ilan Goldstein for the Agile Revolution.

Renee also recorded a podcast with Kim Ballestrin on Cynefin.

We also recorded a wrapup podcast.

I also did some short interviews for InfoQ, which resulted in a wrap-up story.

Episode 9: Day 4 at Agile 2011 Salt Lake City

The Agile Revolution Podcast

Craig spent the day milling around a number of presentations and talks about how technologies link together, delighting customers, visualisations, ATDD for start-ups, Jeff Patton’s User Story Mapping and flirting with your customer.

TheAgileRevolution-9 (7 minutes)


AAFTT Workshop 2011 (Salt Lake City)

The Agile Alliance Functional Testing Tools Workshop (AAFTT), held the day before the Agile 2011 conference in Salt Lake City, was once again one of the highlights of the conference. Organised by Jennitta Andrea and Elisabeth Hendrickson, it brought together, as always, a wide variety of participants with a passion for testing and testing tools. Here are my notes from the day, held on August 7, 2011.



Energy Kickoff & Networking

The session was facilitated by Ainsley Nies, and all of the official session notes are stored on the AAFTT wiki: http://aaftt.agilealliance.org:8080/display/AAFTT/agile2011.


We started the day with some networking and sharing some areas of passion. Some of these included:

The theme of the AAFTT is: “Advancing the state of the art and the practice of Acceptance Test Driven Development”.


Ainsley started walking the circle to explain the day and how open space works, but frankly it made me feel a little dizzy! She went on to explain that Harrison Owen invented the open space idea as he noticed the real content at conferences was the passionate conversations. The rules of open space are:

  • whoever shows up are the right people
  • do not hang on to pre-conceived ideas
  • it starts when it starts
  • discussion does not need to be over until it’s over
  • wherever it happens is the right place

The law of mobility and responsibility (also known as the law of two feet) is if you are not learning or contributing where you are, go some place where you will. Also, butterflies and bumblebees cross pollinate ideas.


Finally, we were warned to be prepared to be surprised.


Developers are testers and testers are developers – how do we dissolve and combine the roles 

This was the first session that I attended.

  • there are two mindsets – offence and defence, testers are defence
  • job is not to find defects but to prevent defects – build quality in
  • define quality and what does it mean to us
  • startups don’t often have the problem – multiple skills required
  • what is the biggest impediment – are we missing the skill
  • there is no team of quality anymore – drive quality through the organization
  • functional testers tend to exploratory test and drive from the UI, technical analysts tend to multiple-skill
  • you need to have a team focus and a product focus
  • don’t start with practices but start with a common vision (eg. zero defects)
  • fear of losing identity if you dissolve roles
  • understanding the historical roles sometimes helps you understand why things are the way they are
  • need time – Lisa Crispin mentioned that in her company they were going out of business because the system was not good quality, so management were smart to support the initiative
  • helps if everybody on the team has experienced the entire value chain and needs to understand the value of everybody’s piece of the chain – tendency to optimise the piece of the chain you understand
  • developers often underestimate the precision of data and scenarios and developers underestimate the difficulty of some requests
  • personality issues often get in the way
  • mostly about having the right people – need to let some people go
  • we assign labels to roles which create barriers – break down on teams but need to break down at the HR level
  • payroll is also an issue – need to compensate for people taking on more responsibility
  • need to put queue limits on the testing queue to drive behaviours
  • pairing with developer if they do not understand the scenarios
  • some people have the questioning mindset, some have the practical focus – need both to make sure you ship a quality product
  • mini waterfall problem – long tail feedback loop, change workflow that developer needs to work with tester, avoid lean batching problem

ATDD Patterns

Jennitta Andrea led this session about the work so far in this space.

Last Mile Tools

Elisabeth Hendrickson led this session on tools that are attempting to solve the problem at the last mile.

  • NUnit – Liz Keogh – were using FitNesse but it added another level of complication, wrote a DSL that separates tests to make them easier to read, WiPFlash is the automation tool, examples are on the website, can call the fixtures from another testing tool like FitNesse, capture scenarios on a wiki first to get the best out of the automation tool
  • SpecFlow – Christian Hassa – similar to Cucumber, scenarios written as steps that are bound to execution, uses Gherkin parser (this is a plus as a number of tools use this)
  • SpecLog – maps of your product backlog, capture results of collaboration with the business (Jeff Patton’s story maps), data stored in a single file, stories are initially mapped to a feature file but ultimately get linked to a feature tree
  • SpecRun is under development currently, not bound to SpecFlow or test runner/execution, currently Windows only
  • Limited Red – Joseph Wilk – uses the probability of failure to run those tests first in Cucumber, can then get failure statistics at a feature level, working on a refactoring tool at the moment
  • Relish – publish Cucumber features to a website
  • The Smallest Federated Wiki – Ward Cunningham – JSON for data scrubbing, thin columns to display well on mobile, refactoring is the number one edit so allow it to drag and drop refactor, fit for any analytic or outcome-oriented endeavor, sponsored by Nike, under very early development, meant to take spreadsheet data to the next level

Business Rules

Mary Gorman led this discussion.

  • business rules – conference website has rules, such as group pack for 5 registrations, what happens to the sixth person, what if someone pulls out
  • need to capture these to describe what our system does
  • business rules manifesto – Mary gives a copy to everyone she works with
  • separation of concerns – keep a rule separate from the action; mixing them makes the process more brittle and more difficult to test
  • rules are a form of requirements and live beyond the building
  • one process is to extract the rules of a legacy system and then the regression tests – code archaeology
  • the business does not always know the rules of the system or how they got there – rules get added to the system over time or evolve and documentation is unlikely to get updated
  • one insurance company had spent $100 million to bring in a business rule engine; it returned the investment in two years due to being able to look for conflicting rules
  • put analysis of rules in the hands of developers for way too long
  • simplest part of business rules is having a glossary
  • a rules engine enables our rules in production, and we use examples to ensure the engine works correctly
  • testing could look like this – given this data when these rules are applied then I expect this output
  • you need both rules and examples to test them – you need enough examples for now; they need to cover different paths, decision points and inflection points rather than different values
  • examples are not as expressive as arithmetic, but arithmetic is not as understandable
  • lots of rules that we do not think of as business rules because they are baked into the process eg. security access, database schemas
  • “business logic is not” (Martin Fowler)
  • you can’t read English as if it were rules, so we need to use examples
  • the worst systems are the ones that do not have a manual override, humans are usually the best at determining this
  • lots of business rules change due to jurisdiction
  • something will always fall to the bottom – rules need to be valued on risk and value – where is the tipping point
  • rules are the expression of intent
  • Mars issue – crashed, six week window too costly to fix
  • guts to keep it simple – reporting system (Ward Cunningham) – resisted urge to put in a formula system, wait for requests from users, got 6 requests, sold system based on simplicity of the system

Other Sessions

As with any conference, there are always sessions you would have liked to have got along to.

Richard Lawrence led a discussion on Static Analysis for Gherkin which turned into a discussion on design patterns for Cucumber.


George Dinwiddie led a discussion about conversations between roles.


My mate Jason Montague led a session on Building Conditions Conducive for ATDD Adoption.


Closing Circle

We shared some takeaways in the closing circle; here are some that stood out to me:

  • issues with dealing with people were a theme
  • what are good ways to express a large amount of test data
  • challenge to get corporations over the hump to release data, plus have good tests and examples around the rules
  • testing needs to be a nation, not just a community
  • it’s time we got more respect in our organisations, it’s time we show more respect to those we work with
  • teams need to be dependent on the production of the build
  • federated wikis could help solve the test ownership problem

As for me, my comment was the day had renewed my energy again. ATDD is hard, and as a community we need to try harder.

Podcast

Finally, I recorded a short audio podcast for The Agile Revolution wrapping up AAFTT.

Specification By Example (Book Review & Summary)

I was lucky enough to be a reviewer on Specification By Example by Gojko Adzic, and the final version was recently released to print by Manning. And I was stoked to see not only my name in the acknowledgements, but that my quote made it to the cover of the book. The following is my brief review and notes from the book.

Review

“I love this book. This is testing done right.” That is my quote on the back cover of the book, and I meant every word of it. Having been a quality advocate in the agile space for a few years now, this is the first book I have read in a long time which had me nodding my head all of the way through, as it resonated with my ideas on how development teams need to reconsider specifications and testing.

The book starts out by summarising why specification by example is so important and outlines some key patterns for success, and then, through examples throughout the book, steps through the patterns pointing out the warning signs along the way. The key steps are to ensure the culture is a fit, then approach specification in a collaborative manner, use examples and automate, and finally evolve a living document / specification.

I really appreciated the fact that the examples were not just the run of the mill greenfield Java web applications that are used in most books. There is a good sampling of different organisations, most of which are using this technique on existing legacy applications on a variety of different platforms. The book is an easy read for the entire team, which means it can (and should) be required reading for the developer, tester, analyst and project manager. I have encouraged many of my teams to take a look at the book, and a couple of my colleagues have indicated this book helped convince and reinforce why this approach is so valuable.

My only concern when reviewing was the fact that the title of this book may not stand out to testers and developers (not perhaps as much as Acceptance Test Driven Development or ATDD might). Currently the community has a number of similar approaches with similar names, although I must acknowledge that the specification by example tag has grown on me over the last few months.

The book does not expend much effort talking about tools in this space, by design. I think this makes the book more readable and accessible to a wider audience, but that said, it suggests to me that there is still a gap for a good text that matches specification by example to particular tools like Concordion, Fitnesse and the like.

Overall, this book is a definite must read for any teams (particularly agile teams) who are trying to balance or find a decent approach to specifications and testing. It is a good balance of patterns and real case studies on how testing and specifications should be approached in an agile world. It would make my list of Top 5 must read testing books and Top 10 must read agile books. And now I know what the proper name is for the cat’s eyes that are embedded in the freeway!

Finally, I had some other suggestions for summaries for the book that did not make it to the cover, but they are just as reflective of my feelings about the book:

  • “One of the best Agile related books I have ever read. Buy it, read it, recommend it to your colleagues.”
  • “This book sums up the right way to attack requirements and testing while delivering to your customer. A must read for all agile teams.”
  • “I loved this book. I could not stop raving about it to my colleagues. It’s testing done right”

Summary

Here are my key notes from the book:

  • a people problem, not a technical one
  • building the product right and building the right product are two very different things, we need both to be successful
  • living documents – fundamental – a source of information about system functionality that is as reliable as the programming language code but much easier to access and understand
  • allows easier management of product backlogs
  • proceed with specifications only when the team is ready to start implementing an item eg. at the start of an iteration
  • derive scope from goals – business communicate the intent and team suggest a solution
  • verbose descriptions over-constrain the system – how something should be done rather than just what is to be done
  • traditional validation – we risk introducing problems if things get lost in translation between the business specification and technical automation
  • an automated specification with examples, still in a human readable form and easily accessible to all team members, becomes an executable specification
  • tests are specifications, specifications are tests
  • consider living documentation as a separate product with different customers and stakeholders
  • may find that Specification By Example means that UAT is no longer needed
  • changing the process – push Specification By Example as part of a culture change, focus on improving quality, start with functional test automation, introduce a new tool, use TDD as a stepping stone
  • changing the culture – avoid agile terminology, management support, Specification By Example a better way to do UAT, don’t make automation the end goal, don’t focus on a tool, leave one person behind to migrate legacy scripts (batman), track who is/isn’t running automated tests, hire someone who has done it before, bring in a consultant, introduce training
  • dealing with signoff and traceability – keep specifications in a version control system, get signoff of living documentation, get signoff on scope not specifications, get signoff on slimmed down use cases, introduce use case realisations
  • warning signs – watch out for tests that change frequently, boomerangs, test slippage, just in case code and shotgun surgery
  • F16 – asked to be built for speed but real problem was to escape enemy combat – still very successful 30+ years later
  • scope implies solutions – work out the goals and collaboratively work out the scope to meet goals
  • people tell you what they think they need, and by asking them ‘why’ you can identify new implicit goals they have
  • understanding why something is needed, and who needs it, is crucial to evaluating a suggested solution.
  • discuss, prioritise and estimate at goals level for better understanding and reduced effort
  • outside-in design – start with the outputs of the system and investigate why they are needed and how the software can provide them (comes from BDD)
  • one approach is to get developers to write the “I want” part of the storycard
  • when you don’t have control of scope – ask how something is useful, ask for an alternative solution, don’t only look at lowest level, deliver complete features
  • collaboration is valuable – big all team workshops, smaller workshops (three amigos), developers and analysts pairing on tests, developers review tests, informal conversations
  • business analysts are part of the delivery team, not customer representatives
  • right level of detail is picking up a card and saying ‘I’m not quite sure’, it pushes you to have a conversation
  • collaboration – hold introductory meetings, involve stakeholders, work ahead to prepare, developers and testers review stories, prepare only basic examples, overprescribing hinders discussion
  • one of the best ways to check if the requirements are complete is to try to design black-box test cases against them. If we don’t have enough information to design good test cases, we definitely don’t have enough information to build the system.
  • feature examples should be precise (no yes/no answers, use concrete examples), realistic (use real data, get realistic examples from customers), complete (experiment with data combinations, check for alternate ways to test) and easy to understand (don’t explore every combination, look for implied concepts)
  • whenever you see too many examples or very complicated examples in a specification, try to raise the level of abstraction for those descriptions
  • illustrate non-functional requirements – get precise performance requirements, use low-fi prototypes for UI, use the QUPER model, use a checklist for discussions, build a reference example for things that are hard to quantify (such as fun) to compare against
  • good specifications – should be precise and testable, not written as a script, not written as a flow
  • watch out for descriptions of how the system should work, think about what the system should do
  • specifications should not be about software design – not tightly coupled with code, work around technical difficulties, trapped in user interface details
  • specifications should be self explanatory – descriptive title and short paragraph of the goal, understood by others, not over-specified, start basic and then expand
  • specifications should be focussed – use given-when-then, don’t explicitly detail all the dependencies, put defaults at the technical layer but don’t rely on them
  • define and use an ubiquitous language
  • starting with automation – try a small sample project, plan upfront, don’t postpone or delegate, avoid automating existing manual scripts, gain trust with UI tests
  • managing test automation – don’t treat as second-grade code, describe validation, don’t replicate business logic, automate along system boundaries, don’t check business logic through the UI
  • automating user interfaces – specify interaction at a higher level (logging rather than filling out the login page), check UI functionality with UI specifications, avoid record and playback, setup context in a database
  • test data management – avoid using pre-populated data, use pre-populated reference data, pull prototypes from the database
  • Botts’ dots are the lane markers on the roads that alert you when you move out of your lane, continuous integration has that function in software, run it with Specification By Example and you have continuous validation
  • reducing unreliability – find most annoying thing and fix it, identify unstable tests, setup dedicated validation environment, automated deployment, test doubles for external systems, multi-stage validation, execute tests in transactions, run quick checks for reference data, wait for events not elapsed time (see the sketch after this list), make asynchronous processing optional, don’t use specification as an end to end validation
  • faster feedback – introduce business time, break long tests into smaller modules, avoid in-memory databases for testing, separate quick and slow tests, keep overnight tests stable, create a current iteration pack, parallelise test runs
  • managing failing tests – sometimes you can’t fix tests – create a known regression failures pack, automatically check disabled tests
  • easy to understand documentation – avoid long specifications, avoid lots of small specifications for a single feature, look for higher level concepts, avoid technical automation concepts
  • consistent documentation – evolve an ubiquitous language, use personas, collaborate on defining language, document building blocks
  • organize for easy access – by stories, functional areas, UI navigation routes, business processes, use tags instead of URLs
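
As an illustration of “wait for events, not elapsed time” from the list above, here is a small Java sketch using Selenium’s WebDriverWait. The element id and timeout are assumptions, and the WebDriverWait constructor shown is the older seconds-based one (newer Selenium versions take a Duration).

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.support.ui.ExpectedConditions;
    import org.openqa.selenium.support.ui.WebDriverWait;

    public class WaitForEventsExample {

        // Instead of sleeping for a fixed time, poll until the element is visible (or time out),
        // which is faster on average and far less flaky.
        public String readConfirmationMessage(WebDriver driver) {
            WebDriverWait wait = new WebDriverWait(driver, 10);   // timeout in seconds
            return wait.until(ExpectedConditions.visibilityOfElementLocated(By.id("confirmation")))
                       .getText();
        }
    }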