Episode 194: Business Agility Sparks at Lithespeed with Sanjiv Augustine

The Agile Revolution Podcast

Renee and Craig are at Agile2019 in Washington, DC and catch up with Sanjiv Augustine, author of “Managing Agile Projects” and “Scaling Agile” and founder and CEO of LitheSpeed:

  • Craig’s InfoQ interview with Sanjiv
  • “Leadership is about managing change while management is about managing complexity… you need to be both”
  • LitheSpeed = flexible speed
  • “Reinventing Organisations” by Frederic Laloux and the Morning Star model
  • The Agile community is excited about the benefits of the teal level but not the responsibility
  • Business Agility Sparks – need to shift left from IT to the business and shift right to DevOps and shift up into leadership
  • What makes you successful at the executive level is working with others
  • Agile Value Management Office (VMO) is a cross functional leadership team that manages the flow of work from end to end
  • PMO is more oriented towards best practices process, whereas…



Agile 2009: Agile Tool Hacking – Taking Your Agile Development Tools To The Next Level

I'm speaking at Agile 2009


The presentation by myself and Paul King from Agile 2009 called “Agile Tool Hacking – Taking Your Agile Development Tools To The Next Level” is available on SlideShare.

Agile 2009: How To Make Your Testing More Groovy

I'm speaking at Agile 2009


The presentation by Paul King and myself from Agile 2009 called “How To Make Your Testing More Groovy” is available on SlideShare.

Agile 2009 Day 4 Review

The last main day of talks at Agile 2009, and once again I lost the morning to preparing and presenting a talk with Paul King.

Here is an overview of the sessions I got to:

Agile Tool Hacking – Taking Your Agile Development Tools To The Next Level

The session I presented with Paul King drew close to a full house, and the session feedback forms were overwhelmingly positive. The slides are available in a separate post.


The Kanban Game

A full house for this session, run by Tsutomu Yasui, is some validation that Kanban is gaining traction in the agile community. All the details and materials for the game are available at http://www.yattom.jp/trac/public/wiki/ProjectGames/TheKanbanGameEn. I only sat in on the first half of the session so I could fit in some other last minute talks.

Agile 2009 Kanban Game

Agile User Experience Design Emergent Practices

I had an aim to get to at least one talk by Jeff Patton (especially for bragging rights for one of my work colleagues, Teale Shapcott)! I actually got to have a brief conversation with Jeff later in the evening which was awesome.

Agile 2009 Jeff Patton

  • adapting to agile is difficult for UX practitioners – Jeff Patton came late to usability but early to agile usability
  • five stages of agile adoption (salesforce.com): anger, denial, bargaining, depression, acceptance

Homonyms

  • design – agile (how to build product), designer (what to build based on user needs)
  • iteration – agile (short time box to build software), usability (build representation of product idea for evaluation and change)
  • story – agile (short description of what user might want built), usability (agile design for goal)
  • customer – agile (someone who writes a user story), usability (a person who buys a product)
  • small bit of software – agile (developer can build in a few days), usability (something a user can complete in a single sitting)
  • test – agile (means complete and meets acceptance criteria), usability (user can use the software and it meets their needs)

Then:

  • usability practitioners view of design and development – understand business need, understand user need, personas, create and validate high level design, create and validate UI design, create and communicate design specification, develop software, usability test finished product
  • you could do all that for a sprint right? – agile changes usability practice but does not have to threaten it
  • patterns have emerged as usability practitioners have adapted – had to go postal or figure it out – great idea is not a pattern, great idea that multiple people use is a pattern (at least 3 companies)

The emerging best practices are:

  1. usability designers are part of product owner or customer team – in the driver’s seat, part of the planning, part of the product owner team or the product owner. Product owners already take multiple roles, product owners are thinking about this release and the next release
  2. research, model, design up front (but only just enough) – learnt how to cut up work, high level design but just enough, task model (but agile people think they are stories), usability people need to be connected to backlog, own and leverage it
  3. chunk your design work – break up design work to perform incrementally throughout development, organise story into a map that helps communicate structure of the system (see The new user story backlog is a map on Jeff’s blog), organise the backlog (don’t just prioritise – communicate with user about what we are seeing)
  4. parallel track development to work ahead and follow behind (see Lynn Miller – Case Study of Customer Input for a Successful Product on agileproductdesign.com) – time machine essential for product owner and usability team, design and coded features pass back and forth between tracks (design ahead, look at stuff already built and stuff that is being built now)
  5. buy design time with complex engineering stories – product owners responsible for scheduling, sometimes highest value is to put a story that is easy to design but hard for developers to build to buy time! (Lynn Miller talks about SketchUp File – Save As as easy to design but took ages for developers to develop)
  6. cultivate pool of users for continuous user validation – (see Heather Williams – The UCD Perspective on agileproductdesign.com), Salesforce have a person that coordinates this, keep feedback fresh by rotating every few months
  7. schedule continuous user research in a separate track from development – Kitchen Stories, a silly Swedish movie, has usability connotations, research is continuous, not just a phase, schedule visits with users ahead of knowing exactly why we want to be there
  8. leverage time with users for multiple activities – do some user interviewing, do prototyping, show and review current software (one-man band), use RITE (rapid iterative testing and evaluation) to iterate the UI (see numerous RITE articles on agileproductdesign.com), use time before sprint to refine design, test something and fix it to burn down failures
  9. prototype in low fidelity – prototype in public so people can see what you are doing, look at Balsamiq as a tool
  10. treat prototype as a specification – have a discussion
  11. designers and developers iterate design outside the development iteration (eg. CSS, HTML and visual design), “art is never finished, only abandoned” (Da Vinci)
  12. become a design facilitator – designers do collaboration and facilitation, practices like design studio and sketchboard technique to get developers involved, sick of developers armchairing their design (get them to sketch it out, developers get to weigh in good ideas, developers get their design ripped apart, usability people get people to read their designs)

Finally, most usability designers won’t go back after doing agile!

Agile By The Numbers: What People Are Really Doing In Practice

I was keen to go and see Scott Ambler speak; you can view the session or view the data. According to Scott, this is what people are doing in practice, and this talk explores some myths.

Agile 2009 Scott Ambler

Majority of organisations doing agile?

Majority of teams doing agile?

  • in 76% of organisations, 44% of project teams doing agile
  • BUSTED
  • numbers claiming to be doing agile, can’t test this theory, expect number is high
  • how do you measure agile?

Pretty much all development in agile?

  • agile practices that are most effective – CI (65%), daily standup (47%), TDD (47%), iteration planning, refactoring, retrospectives, pair programming, stakeholder participation, shippable software, burndown tracking
  • practices teams want to adopt – almost all technical – acceptance and developer TDD at top of list
  • PLAUSIBLE

Agile is just for small teams?

  • 1-5 and 6-10 success, starts to taper off for teams 11 and up, but success at all sizes of teams
  • BUSTED

Does not apply to regulatory situations?

  • 33% need to comply with legislation
  • BUSTED

Agile and CMMI don’t work together?

  • yes 9%, only small amount of people doing it
  • no statistical difference between CMMI and non-CMMI agile projects
  • BUSTED

Agile process empirical?

  • teams collect and act on metrics: 51% collect but do it manually (according to Scott Ambler, don’t trust manual metrics as they lag behind and get altered to tell a better story and meet bureaucracy), 26% no and 19% majority automated
  • CONFIRMED

Agile teams doing greenfield development?

Becoming a Certified Scrum Master is a good idea (2 days)?

  • 78% think certification is meaningless
  • nobody respects this, shame on certification trainers, better way to earn a living, step up, 2 days on a business card is not a good idea, preach less and act more
  • BUSTED
  • Ambler certified for a good laugh

Most agile teams are co-located?

  • 42% co-located – good thing, reduces risks, 17% same building, 13% driving distance, 29% very distant
  • a third of teams have geographic distribution issues
  • BUSTED, majority of teams distributed in some way

Agile teams don’t provide up front estimates?

  • majority of teams do up front estimates
  • need an estimate to convince senior management to get the project off the ground
  • 36% reasonable guess by experienced person
  • BUSTED

Agile teams just start coding

  • on average takes almost 4 weeks to warm up – modeling, set up environment, design, …
  • BUSTED

Agile follows common development guidelines?

  • practice in XP
  • 49% project / enterprise conventions (19% enterprise level conventions)
  • 22% UI convention, 25% data conventions, expect lower than development because not as cool as code
  • PLAUSIBLE (but borderline) – room for improvement

Rights and responsibilities are part of agile culture?

  • 58% defined for development team vs 35% for stakeholders
  • PLAUSIBLE

Agile test often and test early?

  • developer TDD 71%, 52% still doing reviews / inspections, 45% end of lifecycle testing, acceptance TDD 40%, a third of teams have an independent test team that looks at the system separately
  • CONFIRMED – doing testing throughout lifecycle

Agile don’t do up front requirements modelling?

  • 76% do this, need to come up with stack of cards now
  • 52% capture in word processor, 45% capture as tests
  • BUSTED

Agile don’t do upfront architecture?

  • 70% high level architecture modeling
  • metaphor is a total waste of time
  • organising a conference is just like organising a conference…
  • BUSTED

Agile write interim documentation?

  • 56% yes
  • CONFIRMED

Agile produce supporting documentation?

  • 70% write these, a minimal amount of stuff that needs to be developed
  • CONFIRMED
  • sometimes when compared, agile write more

Agile works better than traditional?

  • hell yes!
  • all approaches reasonably close 65% vs 80%
  • quality much better
  • functionality delivered higher
  • make money – good, but hacking better off
  • time much better
  • so similar, but better way to spend money wisely
  • CONFIRMED

Finally:

Open Space – Scrum Is Evil…

Jeffrey Fredrick ran his Scrum Is Evil session that I had first seen at CITCON in Brisbane earlier in the year. It was interesting to see that the outcomes were exactly the same half way around the world!

Agile 2009 Scrum Is Evil

Conference Banquet & Keynote User Interface Engineering

It’s very hard to take notes in a banquet with the lights dimmed, but Jared M. Spool gave a very entertaining keynote on User Interface Engineering, including some iPod vs Zune bashing and an old Apple video on future design.

Here is another post I found from this session: http://www.agilitrix.com/2009/09/user-interface-engineering-agile-2009-banquet/

The night was finished off with a Chicago Blues band and some conversation late into the night at the hotel bar!

Agile 2009 Day 3 Review

One of the problems of presenting a double session at Agile 2009 is that you miss out on a bunch of the great talks going on at the conference at the same time. Added to that, the (very) last minute preparations I was doing with Paul King meant that I only got to sit in on one session (apart from our own).

Automated Deployment with Maven & Friends – Going The Whole Nine Yards

This was a good overview by John Smart of using Maven as a build tool as well as how you might use tools such as Cargo and Liquibase and scripting languages like Groovy to automate your deployment process. I was hoping John would have the silver bullet to linking the Jira release button to a deployment script, however it appears the only way of doing this still is via a plugin for Bamboo.

How To Make Your Testing More Groovy

The session I presented with Paul King drew a reasonable turnout for a technical double session, and the session feedback forms were overwhelmingly positive. The slides are available in a separate post.

Agile 2009 Groovy Testing Paul King
Agile 2009 Groovy Testing Craig Smith

Dinner with Manning & John Hancock

I had the pleasure of having dinner with Todd Green from Manning, Greg Smith (co-author of Becoming Agile) and Paul King (co-author of Groovy In Action). As the technical proof-reader for Becoming Agile and knowing Paul King, I also got an invite for traditional deep-dish Chicago pizza.

Afterwards, Paul and I trekked up the “Magnificent Mile” and up 95 floors to the Signature Room in the John Hancock Center (Chicago’s fourth tallest building but best observation deck according to the locals). The views were amazing (the pictures don’t do justice to the city lights that carry on into the distance!)

Chicago John Hancock Signature Room

Agile 2009 Day 2 Review

Day 2 of Agile 2009, and Johanna Rothman welcomed everybody to the conference and advised that they had 1,350 participants this year from 38 countries. Furthermore, they had 1,300 submissions that they brought down to 300 presentations.

The sessions I attended on Day 2 were as follows:

Keynote: I Come To Bury Agile, Not To Praise It

Alistair Cockburn kicked off his keynote with live bagpipes, you can view the session or download the slides.

Agile 2009 Keynote Alistair Cockburn

  • software development is a competitive game – positions, moves, strategies
  • conflicting subgoals – deliver software, setup for next game (refactor, document) – moves are invent, decide, communicate
  • situations almost never repeat
  • as number of people double, communications change fundamentally (crystal clear project classification scale)
  • Jeff Patton suggests to video the whiteboard design, rich, 5-7 minutes sweet spot
  • always trying to simulate two people at a whiteboard
  • distance expensive – 12k per year penalty
  • speed – can people detect issues, people care to fix it, can they effectively pass information
  • craft teaches us to pay attention to skills and medium (language)
  • programming changes every 5 years, need to keep up with cycle
  • learn skills in 3 stages – shu (learn a technique, most people learn by copying, one shu does not fit all!, kick people out of shunning box), ha (collect techniques, look for clues) and ri (invent / blend techniques, help guide with ri level responses)
  • everybody is waiting on a decision, looks like a manufacturing queue
  • continuous flow, small batches of work
  • lean – watch queues, not enough resources
  • you want knowledge to run ahead of cost – start of project grow knowledge and reduce risk then business value, need to balance
  • at end of project, trim tail to deliver or delay to get better
  • Tom DeMarco – Slack (agile organization)
  • don’t like end of project retrospectives, too late, inside the project you can change anything, after delivery, 2 weeks can be too often because nothing has changed

Release Planning (The Small Card Game)

I had been recommended by numerous people to get along to this tutorial being run by Chet Hendrickson and Ron Jeffries (one of the original XP’ers and both authors of the purple Extreme Programming Installed) and I wasn’t disappointed.

Agile 2009 Release Planning Game

The session ran sort of like this:

  • we ask the product ogres to put information onto cards
  • this is an important project – managers in clouds who have managers in clouds have stated it must be done in six months
  • sort cards into 6 columns, need all 45 cards done in six months
  • Round 1 – plan out the project for 6 months – our team just made 6 columns and laid the cards out evenly (8, 8, 8, 7, 7, 7), some teams went a little light at the beginning and end, another team decided to do everything in 4 months, another team everything in 1 month!
  • Round 2 – nature (Chet) said we got 5 out of 8 cards done, so replan the next 5 months (this number was different for different tables). We asked if all stories were of equal effort, but nature did not know
  • Round 3 – nature said we got 6 cards done, so, now, how long will the project take? What if you were told that the number on the upper right hand corner is effort and you can get 10 done per month (we had a total of 90)
  • At this point, some teams put small stories at the end of each iteration and put more valuable stories at the beginning (customer value, we were told, was the number in the lower left)
  • Round 4 – now we need to decide which month to ship (we chose two months)
  • Round 5 – given we now know the value, we were told not to replan, but to take the total and show it on a burnup chart to see the burn
  • Round 6 – replan using cost and value (we did some maths and got 6:1, 4:1 and 3.5:1; maximum value in column 1 was 75, then 45 and then 30)
  • the team that ships every month gets the same value sooner
  • fewer products than you realise are unable to ship this way
  • how long will it take us and how much is it worth are the fundamentals
  • value is simple if you use simple values (we used 3, 6, 9)
  • dependencies are far less common than we believe
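The cost-and-value replanning in Round 6 boils down to a ratio sort. Here is a rough sketch of that arithmetic; the story names, values and efforts below are invented for illustration, not the actual game deck:

```python
# Sketch of the release planning arithmetic from the card game:
# order stories by value-to-effort ratio, then fill each month up
# to a fixed effort capacity. All numbers are invented.

def plan_by_ratio(stories, capacity_per_month):
    """Greedy plan: highest value-to-effort ratio first."""
    ordered = sorted(stories, key=lambda s: s["value"] / s["effort"], reverse=True)
    months, current, used = [], [], 0
    for story in ordered:
        if used + story["effort"] > capacity_per_month:
            months.append(current)       # month is full, start the next one
            current, used = [], 0
        current.append(story)
        used += story["effort"]
    if current:
        months.append(current)
    return months

stories = [
    {"name": "A", "value": 9, "effort": 1},
    {"name": "B", "value": 6, "effort": 2},
    {"name": "C", "value": 3, "effort": 3},
    {"name": "D", "value": 9, "effort": 3},
    {"name": "E", "value": 3, "effort": 1},
]
plan = plan_by_ratio(stories, capacity_per_month=5)
for month, batch in enumerate(plan, start=1):
    print(month, [s["name"] for s in batch])
```

The point of the exercise survives the simplification: shipping the high-ratio stories in month one banks most of the value early, which is why the team that ships every month gets the same value sooner.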

Facilitation Patterns & Antipatterns

This was a workshop led by Steven “Doc” List from ThoughtWorks and involved some great playing cards that I am still hoping may get sent my way one day.

UPDATE 13/10/2009: About 12 hours after posting this, a deck of cards arrived in the post at work. Many thanks Steven and ThoughtWorks for keeping your promise and sending the cards through!

  • facilitation about leading the group not running the group
  • want to enable decisions
  • leave bias, prejudice, opinions at the door, otherwise get somebody else to do it
  • meetings should be collaborative and enjoyable, but must have an agenda

Patterns (these are behaviours not identities)

  • Switzerland – neutrality, whether facilitator or participant you need to decide if you are being neutral, a participant that is neutral is not adding value, but it is good value as a facilitator
  • Guide – show the way, avoid potholes and pitfalls, help move through the process by the way I interact with the group and help the group interact
  • Curious George – always asks questions for no particular purpose
  • Sherlock Holmes – seeking data and information to reach a conclusion, passion for information
  • Benevolent Dictator – always for own good, but always taking control, believe have more experience than the rest of team, always believe they know best but with a good heart (like relatives)
  • Repetitor – the more he tells you, the more likely you are to get it
  • Professor Moriarty (the evil genius) – manipulating other people to do work for him, coerce other people to ask questions, manipulation
  • Gladiator – all about combat, being right is more important than what they are right about, enjoy getting into an argument, always one on one so rest of group usually disengages, loud, active, don’t give up easily
  • Superhero – here to rescue rather than show how to do things, brings special skills, knowledge and powers so you obviously want to use them, will always stand up for or represent you whether you need them to or not
  • Orator – champion of not being done, wants to be heard all of the time
  • Conclusion Jumper – smart, mean well, want to move on quicker, jump to what they believe is the conclusion

To deal with these behaviours, do the facilitation four step:

  1. Interrupt – what is relevant to controlling the group
  2. Ask – “Make it a question, do you mind if I ask Charlene…”
  3. Redirect – redirect the conversation
  4. Commit – live up to the commitment
  • ground rules – work agreements, how we choose to behave, usually get 5 or 6 when you ask the group, put them on a wall, need group to be self managing, don’t want to be a policeman, unless you have to
  • starfish – keep doing, start doing, stop doing, do more of, do less of – look for idea clusters, useful anytime not just retrospectives, useful because there is no room for many roles because people are writing things down
  • circle of questions – around in a circle, ask a question to the person next to you, usually have to cut it as it will keep going, eliminates domination as everybody gets to ask and answer, pre-emptive or remedial
  • Margolis wheel – an inner circle of chairs facing outward and an outer circle facing inward; the inside people give answers. Each person gets input from 6 people and gives input to 6 people, can be lengthy
  • parking lot – facilitator does not own it (can’t determine what goes in or out), should ask “should we park this”, must be dealt with before the end of the meeting (see Collaboration Explained – Jean Tabaka)


Finally, from some of the questions at the end:

  • remote facilitation is harder, Jean Tabaka has a virtual seating chart, the 4 step always works
  • antipattern – people expect the boss to run a meeting, but they always have an opinion or axe to grind

I also found the following blog post on this session: http://www.selfishprogramming.com/2009/08/31/agile-2009-facilitation-patterns-and-antipatterns/

Can You Hear Me Now… Good!

This session was on ways to deal with distributed project teams and was delivered by Mark Rickmeier.

One problem on distributed projects – communication breakdown

  • developers assume requirements
  • testers assume
  • sloppy handoffs
  • waste
  • people working on wrong things or different things
  • management decide on incorrect data
  • breakdown in relationships (people on team make it successful)

Agile processes can solve these issues – distributed requires more effort but agile team and communication processes mitigate the risks

How to organise teams

  • dysfunctional when skills are together in different locations
  • functioning slightly better – developers and testers together and customers and analysts together
  • most effective – cross functional teams in both locations (expensive and difficult to do)

Five p’s of communication

  • purpose – dialogue vs discussion – what is purpose of discussion – ideas or to make a decision
  • preparation – plan ahead, agree core hours and don’t schedule outside of that without warning, understand key dates
  • process – have IM fallback options because phone systems fail, announce a roll call so you know who is on the other end of the phone
  • participation – know, see, hear your audience, interact and share the same data
  • capture next steps, send reminder to ensure agreements are met (cultural wording can cause problems)

Tools

  • IM – extremely useful
  • star phone for speakerphone
  • video conference – two camera, one on audience and one on whiteboard
  • web conferencing multi-view
  • interactive whiteboard – skype to take control in blank powerpoint page

All tools improve communication

Distributed release planning – don’t do it distributed, try and get at least a subset of the team together

  • share vision from stakeholders and build trust in the release plan
  • get people together to share context and get to know everybody
  • the challenge is that it is expensive to get people to travel – always do at the outset if that is all you can afford

Iteration planning – planning poker distributed? – planningpoker.com

Sign up for the iteration as a team, and use an online tool like Mingle to update card statuses prior to standup
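The mechanics of a distributed planning poker round are simple enough to sketch in a few lines. The names and votes below are invented; the tool Mark pointed to (planningpoker.com) handles the simultaneous reveal for you:

```python
# Toy sketch of one planning poker round: everyone votes in secret,
# the cards are revealed at once, and the low and high outliers
# explain their reasoning before the team votes again.
# Names and estimates are invented for illustration.

def outliers(votes):
    """Return (low_voter, high_voter) who should explain their estimates."""
    low = min(votes, key=votes.get)
    high = max(votes, key=votes.get)
    return low, high

round_one = {"Ana": 3, "Ben": 8, "Caz": 5, "Dee": 5}
low, high = outliers(round_one)
print(f"{low} and {high} explain their thinking, then everyone votes again")

round_two = {"Ana": 5, "Ben": 5, "Caz": 5, "Dee": 5}
if len(set(round_two.values())) == 1:          # consensus reached
    print("estimate agreed:", round_two["Ana"])
```

The value isn't the arithmetic, of course; it's the conversation the outliers have before the second vote, which is exactly what the distributed tools try to preserve.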

Daily standup – local participant can see reactions of people and can see the card wall

  • have a local team standup and distributed cross team huddles with end of day handoff
  • distributed team standup, cross team huddle and end of day huddle
  • distributed daily standup – use camera, remember that it is about issue identification not remediation
  • challenge that overlap times are not good, beware of the personal cost of people
  • information from standup feeds the entire team

Retrospective

  • hard, can have many us vs them issues
  • worst thing you can do is one location or nothing at all
  • individual retrospectives better if ideas are shared
  • best is collaborative using CardMeeting or Google Spreadsheet – multiple tabs for likely topics, use tagcloud to capture popular topics in Google Docs, get people to write cards ahead of time to save valuable time

Closing thoughts

  • look at staffing
  • get good communications infrastructure
  • kick off team in one location
  • get to know people to move them from them to us

More details can be found at offshore.thoughtworks.com

ThoughtWorks Open Office

My original plan for Tuesday night was to attend the Chicago Groovy User Group with Paul King (but I mixed up the times and did not catch Paul in the corridors), so I decided to get along to the ThoughtWorks open office instead (at their offices on the 25th floor of the Aon Center, the third tallest skyscraper in Chicago).

Agile 2009 Thoughtworks Open Office

Martin Fowler and Jim Highsmith both spoke, and the Agile PMI community was launched. I got to marvel at the original Cruise Control instance that was still running after all these years, and some great conversation was had with the rest of the Australian (and expatriate Australian) attendees.

Agile 2009 Day 1 Review

Once again I was extremely lucky to get two talks accepted at Agile 2009 (with Paul King) and the support from Suncorp to send me along to speak. Whilst it has been quite a number of weeks since the conference, I wanted to ensure that I posted my notes and comments. This year, being my second attendance, I found the hallway discussions all the more valuable and had many awesome conversations with friends made last year as well as new friends just met. Added to this, Chicago exceeded my expectations as the host city.

Once again, the number of simultaneous sessions made the decisions extremely difficult on what to attend.

The sessions I attended on day 1 were as follows:

Using the Agile Testing Quadrants to Plan Your Testing Efforts

This session on the testing stage was delivered by Janet Gregory, one of the authors of Agile Testing. The slides are available on the Agile 2009 site.

Testers should be part of release planning and think about:

  • scope
  • test infrastructure and test tools / automation
  • how much documentation, is it too much, can I extract it from somewhere

Iteration planning:

  • plan for done, acceptance tests
  • priorities of stories, which stories to do first, connect with developers
  • budget for defects unless you are a high performing team

Need to acceptance test the feature, not just the story.

We then did a collaboration tools exercise, and some of the tools used by the audience were:

  • desk check / show me – when a developer thinks they have finished coding, get together and take a look
  • wikis, conference calls, GreenHopper, etc
  • daily standup – share when things are done, if you find them ineffective
  • project cards – used for story management and documenting conditions for acceptance
  • sticky notes and pens for a co-located team
  • demonstration every week or end of every iteration
  • FIT tool, used for demos
  • walking and talking
  • pairing
  • generated artefacts from the CI server
  • instant messaging
  • puzzle / chocolates on desk to encourage talk, “free to developers if they come and ask a question”
  • rolling desks on wheels, so they can switch configuration
  • rolling whiteboards
  • JIT (Just In Time) meetings as required
  • mind mapping software that hooks up to Jira
  • retrospectives
  • team review story and write tests together
  • nobody said “email” – no email!
  • recorded chat room, so conversation is recorded

Waterfall test pyramid, upside down, very unstable – Functional Tests –> API Tests –> Unit Tests (heavy functional tests based on GUI, very few unit tests).

Automated test pyramid (Mike Cohn) – unit tests / component tests are the base layer, require testable code that we can hook into below the GUI at API layer, GUI tests are most brittle because UI changes so do as few of these as possible, right at the top you might need a handful of manual tests.
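To illustrate the base of that pyramid, here is a minimal sketch of the kind of small, fast, below-the-GUI unit test it calls for; the `shipping_cost` function is invented for illustration, not from the session:

```python
# A base-of-the-pyramid unit test: tiny, fast, hooked in below the
# GUI and API layers. The shipping_cost domain is made up.

def shipping_cost(weight_kg: float) -> float:
    """Flat rate up to 1 kg, then a per-kg surcharge above that."""
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    return 5.0 if weight_kg <= 1 else 5.0 + 2.0 * (weight_kg - 1)

def test_flat_rate_up_to_one_kg():
    assert shipping_cost(0.5) == 5.0

def test_surcharge_above_one_kg():
    assert shipping_cost(2.0) == 7.0

def test_rejects_non_positive_weight():
    try:
        shipping_cost(0)
        assert False, "expected ValueError"
    except ValueError:
        pass

# No GUI, no database, milliseconds to run: the kind of test that
# belongs at the wide base of the pyramid.
test_flat_rate_up_to_one_kg()
test_surcharge_above_one_kg()
test_rejects_non_positive_weight()
```

Because tests like these exercise logic directly rather than through the UI, they stay stable when screens change, which is the return-on-investment argument for keeping the bulk of automation at this layer.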

Agile testing quadrants change the way you think about testing – use to classify tests, what the purpose of the test is (why are we writing these tests), tests will cross boundaries.

Agile testing quadrant – can be used as a collaboration tool (developers will understand how they can help), emphasizes the whole team approach (no “pass this to the QA team”, the whole team is responsible for testing), use to define doneness (use for planning, what needs to be done, has the estimate allowed for the amount of testing we wish to complete).

Quadrant 1 – technology facing tests that support the team, TDD supports the design of the team, tester has feeling of comfort

  • unit tests test the developer intent, individual tests on a method, small chunks of code, fast feedback mechanism, code is doing what it should do
  • TDD tests internal code quality, if developers test correctly it flows all the way through and makes it easier to test functionally
  • base for regression suite, if you are going to spend any time on automation, “put it here”, return on investment is better the lower you go in the pyramid

Quadrant 2 – where the acceptance tests live, supporting the team in natural language, helping the team deliver better software, use paper prototypes to talk to customers rather than a big GUI, acceptance tests upfront help define the story, use examples to elicit requirements (easiest way to get clarification from the customer, always ask “not sure what you mean” or “give me an example”), pair testing (ask for feedback as soon as possible)

  • the examples can become your tests, write upfront and ensure that developer makes them pass when they develop code, use tools such as Fit / Fitnesse, Cucumber, Ruby / Watir
  • examples help customer achieve advance clarity, focus on external quality (facing the business), want the tests to spark a conversation with the developers
  • BDD use of given (preconditions), when, then as opposed to tabular formats in Fitnesse, useful for workflows
  • Janet polled the room and only about a dozen people in the room give their acceptance tests to the developers prior to the story being developed
  • if no automation tool, write up a manual sheet, give it to the developers and have a conversation before the card starts
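The given / when / then shape mentioned above can be sketched as plain code rather than in Cucumber or Fit/Fitnesse; the cart domain below is invented for illustration:

```python
# BDD's given / when / then structure written as a plain test
# function. The Cart class is a made-up example domain.

class Cart:
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)

def test_cart_total_is_sum_of_item_prices():
    # Given a cart holding two items
    cart = Cart()
    cart.add("book", 30.0)
    cart.add("pen", 5.0)
    # When we ask for the total
    total = cart.total()
    # Then it is the sum of the item prices
    assert total == 35.0

test_cart_total_is_sum_of_item_prices()
```

Written upfront, a test in this shape is the concrete example that sparks the conversation with the developer before the card starts, whether or not a tool like Cucumber executes it.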

Quadrant 3 – user acceptance testing, critiquing the product, getting the customer to look at the system

  • exploratory testing – time box these sessions to reassess about how far you wish to go, following instincts and smells with a purpose, touring (eg. the money tour) as defined by James Whittaker and James Bach (in the book Exploratory Software Testing), this is where you find majority of bugs so testers should spend the majority of their time here (which is why you need a good base of automated tests)
  • collaboration testing – forge a relationship with the developers so you know what they are developing
  • remember your context to determine how much testing is enough (eg. mission critical software vs an internal application)
  • attack stories using different personas – Brian Marick likes to create evil personas (eg “pathological evil millionaire”) or use impatient internet user vs grandma who clicks every link on the internet

Quadrant 4 – non-functional tests, which should be part of every story (eg. is there a security or performance aspect?): ility testing, security testing, recovery, data migration, infrastructure testing; do as much as possible upfront, although sometimes you will need environments that will not be available until the end

  • non-functional requirements may be higher priority than functional ones (eg. an Air Canada seat sale might make performance critical)
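A performance constraint like the seat-sale example above can be pinned to a story as a small automated check. This is a minimal Python sketch; the service, flight code and 500 ms budget are all invented for illustration:

```python
import time

# Hypothetical seat-availability lookup standing in for a real service call.
def lookup_seat_availability(flight):
    time.sleep(0.05)  # simulate the work a real backend would do
    return {"flight": flight, "seats": 42}

# A story-level non-functional check: the lookup must answer within 500 ms.
start = time.time()
result = lookup_seat_availability("AC101")
elapsed = time.time() - start

assert result["seats"] >= 0
assert elapsed < 0.5, f"too slow: {elapsed:.3f}s"
print(f"lookup answered in {elapsed * 1000:.0f} ms")
```

Checks like this make the non-functional requirement visible in the story rather than deferring it to the end.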

Test plan matrix – big picture of testing against functions for release, usually on a big whiteboard, use colours (stickies) to show progress, benefit is in the planning in what we need to do testing wise but also appeases management because they like to see progress, gives idea of where you are going

Can use a lightweight plan, put risks on a white page, 35 of the 37 pages of the IEEE test plan are static, so put that information somewhere else

Test coverage – think about it so the team knows when the testing is done; a burn down chart will be enough if you test story by story; when thinking about risk ensure you include the customer (they may have a different opinion of risk)

Summary:

  • think big picture – developer following a GPS only needs to know next 2 weeks, but tester is a navigator and needs the map
  • include the whole team in planning and test planning
  • use the quadrants as a checklist (put them on the wall)
  • consider the simplest thing, especially in relation to documentation
  • think about metrics – one man team might be good enough to just know they passed
  • visible, simple, valuable

I also stumbled across a related blog post on this session at: http://agile2009.blogspot.com/2009/08/agile-testing-quadrants.html

What Does an Agile Coach Do?

This session was delivered by Liz Sedley & Rachel Davies, authors of the new book Agile Coaching. The slides are available on the Agile 2009 site.

This was a hands-on workshop and involved some good discussions on how to deal with different coaching scenarios.

Zen & the Art of Software Quality

This session was delivered by the legendary Jim Highsmith. The slides are available on the Agile 2009 site.

  • “There Is No More Normal” – John Chambers, Cisco CEO, Business Week, 2009
  • business strategy needs to be more adapting to change than performing to plans
  • mixed messages – be flexible but conform to a plan – dilemma faced by many agile teams
  • “Artful Making” – Rob Austin – describes a $125 million software failure
  • in 1994, 82% of software projects were failures; in 2009, 68% (success defined as on time, on budget, all specified features) – Standish is measuring the wrong thing, not a good measure
  • cancellation of a project should not be a failure, it is a good thing
  • current environment – schedule is more important than value
  • “Beyond Budgeting” – Hope/Fraser – not a good book, but good ideas
  • “Measuring & Managing Performance in Organisations” – Austin – all measurements are dysfunctional, get a different outcome than you expected
  • if the budget is 100 and you achieve 100, that is rated better than a budget of 120 where you achieve 110 – which would a performance management system reward? (the 100, even though the latter is the better achievement)
  • beyond budgeting – make people accountable for customer outcomes, create high performance climate based on relative success amongst others
  • trust, honesty and intentions are better than measurements
  • performance tends to improve while people figure out the system, but under pressure people focus on measurement goals rather than outcomes
  • earned value (time + cost) has nothing to do with value, does not have anything to do with what is delivered to the customer
  • we need to move from scope/cost/schedule to value/quality/constraints (where constraints = scope/cost/schedule)
  • core benefit from agile has been value and quality
  • everybody comes to work to do good quality, but never well defined
  • “Zen & The Art of Motorcycle Maintenance” – Pirsig – quality ideas
  • is quality objective or in the eye of a beholder, people have different ideas
  • need extrinsic quality (value) and intrinsic quality (so you can deliver quality tomorrow)
  • “Applied Software Measurement” – Capers Jones – a 95% defect removal rate is the sweet spot for quality
  • experience is doubling staff quadruples the number of defects – BMC were able to kick this trend using agile
  • difficult errors take time to find – the longer they take, the worse the quality of the code
  • first year of product release the quality might be OK, but then adding new features more important than fixing software debt, over time the cost of change increases and accumulated technical debt harder to fix, but the more debt the higher the pressure to deliver
  • strategies – do nothing, replace (high cost/risk), incremental refactoring, commitment to innovate – the best way but hard to sell politically – turns the vicious cycle into a virtuous cycle (55% said it was easier to support agile developed products)
  • productivity is in the features you don’t do – 64% of software features are never used; what if we put 25% of that money into refactoring or learning agile?
  • agile value curve – if we do the high value work first we can ask the question: do we have enough to release the product?
  • need to reduce the marginal value of our stories
  • if you don’t have time to estimate value, you don’t have time to estimate cost
  • philosophy – value is an allocation not a calculation (cost is a calculation), so use value points and allocate from the top down – value points need more thought than ranking – additional information when you look at 25 story point card worth only 2 value points, also demonstrates that value is important, should be able to do this fairly quickly
  • value and priority are different – a low value card high on priority might be a guide, pick a cap for the value
  • value points, like story points, are relative
  • story point is calculation of cost, value point is allocation of revenue
  • Intel has 17 standard measures of value, help to determine as a guide
  • value in Chinese means smart/fast
  • value – is product releasable – always ask the business owner or product manager that question – example that a product could be released when it was 20% complete
  • parking lot diagram – emphasizes the capabilities we are delivering to the customer in their own language; shows progress and value delivered by the number of stories done / done
  • Gantt chart shows task complete to a schedule
  • questions – can we release, what is value-cost ratio (do we need to continue or do something else that is higher value), what is product quality, are we within acceptable constraints
  • how do you determine if you are in a technical debt hole – using qualitative measures in your code
  • ask the question – do you know why it takes 3 months to make a change? explain the technical debt curve and start to show people that quality matters (eg. automated testing becomes a time accelerator)
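The value point / story point ideas above can be illustrated with a small, hypothetical backlog calculation; the stories, story point costs and value point allocations are all invented (the 25-point / 2-value card echoes the example in the notes):

```python
# Hypothetical backlog: (story, story points = cost, value points = allocated value)
backlog = [
    ("checkout rewrite", 25, 2),   # high cost, low value – worth questioning
    ("search", 8, 30),
    ("reporting", 13, 10),
]

def value_cost_ratio(story_points, value_points):
    """Higher ratio = more value delivered per unit of estimated cost."""
    return value_points / story_points

# Ordering by the ratio helps answer "should we continue, or do something
# of higher value?"
ranked = sorted(backlog, key=lambda s: value_cost_ratio(s[1], s[2]), reverse=True)
for name, cost, value in ranked:
    print(f"{name}: {value_cost_ratio(cost, value):.2f} value per point")
```

This is only a sketch of the idea that value is an allocation and cost a calculation; in practice the value points would be allocated top down by the business, not computed.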

Ice Breaker & Freshers Fair

The Fresher’s Fair at the Ice Breaker had a number of great groups including Kanban, Usability and CITCON. I stumbled across the following poster that was a long way from home…

Agile 2009 CITCON Brisbane

AAFTT Workshop 2009 (Chicago)

I had the great pleasure to attend the Agile Alliance Functional Testing Tools (AAFTT) workshop on the Sunday before the Agile 2009 conference in Chicago, and to share discussions with some of the best minds in the testing community from around the world.

The location was right across the road from the Willis Tower (better known by its previous name, the Sears Tower).

There were at least 4 tracks to choose from; these are the notes from the ones I participated in.

Screencasting

Small group discussion led by Jason Huggins about a different way of thinking about test artefacts (basically producing an iPhone commercial)

  • the Rails screencast sold Rails because it sold the idea and then the product sold itself
  • now, with YouTube, etc, we have the tools available
  • it used to be RTFM, now it is WTFV
  • ideal is to produce automated tests like the iPhone commercial, instead of a test report
  • use the “dailies” concept, like in the movies
  • perhaps the movie should be at a feature level, because the video should be interesting
  • best suited for happy path testing, is a way to secure project funding and money, remember that the iPhone commercial does not show the AT&T network being down
  • there is a separation between pre-project and during testing
  • tools currently exist, including the Castanaut DSL
  • part of the offering of Sauce Labs, currently recording Selenium tests
  • from the command line utility vnc2swf, created an API called Castro
  • at the moment you need to clean up the screens that are recorded
  • the advantage, being VNC, is that you can use all sorts of hardware, including the iPhone
  • suggest that you use something like ulimit to stop runaway videos, especially when being run in an automated test, to limit the size of the directory or the length of the video
  • suggest make a rule that no test is longer than five minutes
  • given the current tools are written in Python, DocTest is good for testing
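As a small illustration of the doctest point, examples embedded in a docstring double as executable documentation; the `frame_rate` function is invented here purely to show the mechanism:

```python
def frame_rate(frames, seconds):
    """Frames per second of a hypothetical screencast recording.

    The examples below are both documentation and tests:

    >>> frame_rate(300, 10)
    30.0
    >>> frame_rate(45, 3)
    15.0
    """
    return frames / seconds

if __name__ == "__main__":
    import doctest
    doctest.testmod()
```

Running the module verifies that the documented examples still hold, which suits Python-based tools like Castro.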

Lightning Talks on Tools

I came in mid-way through this session, but caught some of the tools being discussed at the end

  • some tools are too hard to get past the basic level, but quick to set up
  • tests are procedural, engineers tend to over-engineer

Robot IDE (RIDE)

  • most tools have a basic vocabulary to overcome
  • IDE is worth looking at
  • Robot has a Selenium plugin, but it is easy to write your own framework

Twist

  • specify tests as requirements, looks like a document, stored as text, write whatever you want
  • refactoring support as a first level concept
  • out of the box support for Selenium and Frankenstein (Swing)
  • write acceptance test – brown shows not implemented, allows developer to know what to implement, turns blue when done
  • refactoring concept “rephrase”
  • supports business rule tables (ie. Fitnesse for data driven tests)
  • support to mark a test as manual and generate the same reports
  • commercial software, licensed in packs
  • plugins to Eclipse, but don’t need to be familiar with this unless you are developing the automation

WebDriver

  • been around for three years

UltiFit

  • Ultimate Software, internal currently, allows to select Fitnesse tests, setup and teardown, close browser windows, nice GUI, etc…
  • uses TestRunner under the covers

SWAT

  • been around for two years, more traction now that Lisa Crispin works for Ultimate Software
  • simple editor for SWAT (& somewhat Fitnesse)
  • has a database access editor
  • uses Fitnesse syntax
  • there is a recorder, only good for teaching, people get lazy and don’t refactor
  • can take screenshots, borrowed from WatiN
  • can’t run SWAT when Fitnesse is running as a server
  • SWAT is a C# library at its core
  • can run macros, tests from other tests
  • run script – write script (eg. JavaScript) to help things that are hard to test

High Performance Browser Testing / Selenium

Jason Huggins led this conversation which was more a roundtable debate than anything else. The group discussed how we can get tests running quicker and reduce feedback times considerably.
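One common way to cut feedback time, in the spirit of this discussion, is to run independent suites in parallel. This sketch uses Python's thread pool, with sleeps standing in for real browser runs; the suite names and timings are invented:

```python
from concurrent.futures import ThreadPoolExecutor
import time

# Hypothetical stand-ins for slow browser suites; each sleep simulates the
# I/O wait a real run would spend driving a browser.
def run_suite(name):
    time.sleep(0.2)
    return (name, "passed")

suites = ["login", "checkout", "search", "profile"]

start = time.time()
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(run_suite, suites))
elapsed = time.time() - start

# Four 0.2 s suites finish in roughly 0.2 s in parallel instead of 0.8 s serially.
print(results, f"{elapsed:.2f}s")
```

Because browser tests are I/O-bound, threads (or distributing across machines, as Selenium Grid does) give close to linear speedups.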

This discussion led to a couple of the quotes of the workshop from Jason Huggins:

  • “Selenium IDE is the place to start with Selenium, but it is Selenium on training wheels”
  • “Record/playback testing tools should be clearly labeled as ‘training wheels’”
  • “What to do with the Selenium IDE, no self respecting developer will use it.” Thinking of renaming the IDE to Selenium Trainer.
  • “Amazing how many people in the testing community are red, green colour blind”

When Can / Do You Automate Too Much?

This started as a discussion on testing led by Brandon Carlson…

  • get your business people to write the tests – they will understand how hard it is, have seen outcome that amount of scope reduced because they have to do the work

…but ended up as a great discussion on agile approaches and rollout, discussing a number of war stories led by Dana Wells and Jason Montague from Wells Fargo

  • still early in their agile deployment
  • wish to emulate some of the good work done by some of the early agile teams
  • estimate in NUTs (Nebulous Units of Time)

Miscellaneous and Other Links

Some other miscellaneous observations from the workshop:

  • a number of sessions were recorded
  • of those using Windows laptops, a large percentage were running Google Chrome
  • Wikispaces is good to setup a quick wiki

And you can view the photos that I took from the event at: http://www.flickr.com/photos/33840476@N06/sets/72157622521200928/