Day 2 at Agile Australia 2010 and another full day of sessions and hallway discussions.
Keynote: Martin Fowler
Martin Fowler gave a “suite” of keynote presentations, including a great impersonation of “Uncle Bob” Martin! He wrote about his trip on his blog.
- the Agile Manifesto has been far more successful than its authors expected when they wrote it
- semantic diffusion – the message of agile has changed from the original thoughts, but that is the price of success
- The New Methodology
- agile is about adaptive planning and people first
- predictive planning requires stable requirements – very rarely happens as requirements change
- “a late change in requirements is a competitive advantage” – Mary Poppendieck
- plan should be a current understanding of where things are
- adaptive planning needs evolutionary design – the technical side is important
- Frederick Winslow Taylor – the father of factory planning, invented scientific management as a way to control factory workers, who were considered lazy and stupid
- if integrating hurts, do it more often
- every developer should commit to the mainline at least once a day – avoid the big scary merge
- Martin’s Continuous Integration article and the Continuous Integration book by Paul Duvall
- continuous delivery
- commit tests need to run in less than 10 minutes – need to stub out databases and look out for slow connections (see the sketch after this list)
- acceptance tests are more end-to-end but run much slower, so you don’t want to wait for them at commit time
- all environments should be automatically configured, binaries should be stored in an artefact repository
- deployment pipeline stages: commit > acceptance > performance > deploy
- deploying to production should not really be treated any differently from other environments – sometimes you may need a manual deploy step, but that should be a business decision, because the software should always be production ready
- Continuous Delivery book by Jez Humble and continuousdelivery.com
- notion of tradeable quality – trading quality for speed / cost – makes sense when buying a car but software is different
- external quality and internal quality – internal quality is invisible to customers, customers can choose external quality because they can see it
- design stamina hypothesis – can trade design for speed but only up to a certain point – this hits you in weeks not months
- the technical debt term speaks to non-technical people – pay off the debt to reduce the interest costs
- technical debt cards at least help you have the conversation
- inevitable that you will look at your code in future and think you should have done it better – this is still technical debt
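Fowler’s point about stubbing out the database at the commit stage is easier to see in code. Below is a minimal sketch of the idea, not something from the talk: the test depends only on a repository interface and drives it with an in-memory stub, so it runs in milliseconds with no database connection. All the names (InvoiceRepository, InvoiceService and so on) are hypothetical.

```java
// Hypothetical sketch of a fast commit-stage test, not an example from the keynote.
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import java.util.HashMap;
import java.util.Map;
import org.junit.Test;

public class InvoiceServiceCommitTest {

    // In production this interface would be backed by a real database.
    interface InvoiceRepository {
        int outstandingAmountFor(String customerId);
    }

    // In-memory stub: no connection or schema setup, so the commit test stays fast.
    static class StubInvoiceRepository implements InvoiceRepository {
        private final Map<String, Integer> amounts = new HashMap<String, Integer>();

        void add(String customerId, int amount) {
            Integer existing = amounts.get(customerId);
            amounts.put(customerId, existing == null ? amount : existing + amount);
        }

        public int outstandingAmountFor(String customerId) {
            Integer amount = amounts.get(customerId);
            return amount == null ? 0 : amount;
        }
    }

    // The class under test only knows about the interface, never the database.
    static class InvoiceService {
        private final InvoiceRepository repository;

        InvoiceService(InvoiceRepository repository) {
            this.repository = repository;
        }

        boolean isOverCreditLimit(String customerId, int creditLimit) {
            return repository.outstandingAmountFor(customerId) > creditLimit;
        }
    }

    @Test
    public void flagsCustomersOverTheirCreditLimit() {
        StubInvoiceRepository repository = new StubInvoiceRepository();
        repository.add("ACME", 1200);

        InvoiceService service = new InvoiceService(repository);

        assertTrue(service.isOverCreditLimit("ACME", 1000));
        assertFalse(service.isOverCreditLimit("ACME", 2000));
    }
}
```

The slower end-to-end acceptance tests would then exercise the real database further down the pipeline, which is exactly why they don’t belong at the commit stage.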
You Can Take Your Agile Maturity Assessment And…
I was asked to be the MC for this session and had the privilege of introducing Marina Chiovetti and Jason Yip from ThoughtWorks. Their slides are available here.
- easy to get started in agile – invest in training, coaches and experienced practitioners – but after a year or two, how do you know if you are investing in the right thing
- look for what’s really happening – which projects are hiding behaviours that we want to fix
- you can look agile and game the results – what are you looking for, practices are not everything
- create a culture for problem solving and learning – why don’t we have maturity assessments for these things?
- a poster-child project so people can look at it and see that it really works
- a successful project and everybody is happy – but Prozac is cheaper, so are you actually delivering?
- data is important but more important to understand the facts – verify the numbers reflect reality
- spider charts – results depend on who is assessing; self-assessment is only against what you know, so new teams score themselves high because they only know what they know, and experienced assessors look at things differently, so you will get different results
- disconnect between doing agile and getting to profit – what are we using agile to get better at?
- tick boxes versus understanding the situation – teams will only change for the assessment, or won’t understand why they are making the change, so the change won’t stick
- uncover the behaviours you don’t want to see, even if the teams are successful
- different teams need different measures of success – measure the drivers of success which will be different for each team
- different assessors will yield different results
- agile maturity assessments get used like a stick – use them across the organization to improve, not to manage people
- describe your true north (lean) – provide direction so you can determine your next step
- what does the organization value in measures
- make work visible – window into the process
- celebrate failure
- enterprise 2.0 bullseye
Monster Builds
I also had the privilege of introducing Chris Mountford from Atlassian; his slides are available here:
- manager was developer – mxd
- long builds cause low velocity – context switching, task entanglement, build breakage confusion
- tame monster builds through optimization – measure so you know what to fix, look at network and disk access, use dependency repositories like Maven, co-locate servers, then measure again
- matrix culling – remove things you don’t want to support – browsers, operating systems, databases, discontinued user editions, ancient application servers, old versions of Java – Atlassian cut one build from 60 hours to 18
- power to wait ratio – unit to functional tests – be selectively continuous
- parallelise your tests – Amdahl’s Law bounds the benefit of running more processes in parallel (see the worked example after this list)
- false negatives are expensive – run builds against known good versions of the system to check to see if the infrastructure is broken
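Chris’s Amdahl’s Law point rewards a quick worked example. Assuming, purely for illustration (these are not Atlassian’s numbers), that 80% of a build’s wall-clock time is test execution that can be spread across agents, the best possible speedup from n agents is bounded by:

```latex
% Amdahl's Law: p = parallelisable fraction of the build, n = number of agents
S(n) = \frac{1}{(1 - p) + \frac{p}{n}}

% With an assumed p = 0.8 (illustrative figure, not from the talk):
S(4) = \frac{1}{0.2 + 0.8/4} = 2.5, \qquad
S(16) = \frac{1}{0.2 + 0.8/16} = 4.0, \qquad
\lim_{n \to \infty} S(n) = \frac{1}{0.2} = 5
```

The serial 20% quickly dominates, which is why culling the build matrix and fixing slow serial steps matters at least as much as adding more agents.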
Building Quality Into The Lifecycle – All The Way Through
Sharon Robson delivered this presentation; her slides are available here:
- agile makes it real, now, here
- nobody knows what testing is, it’s hard to define
- in agile you are all testers – you just don’t know it yet
- ISO 9126 defines software quality – her favourite standard
- cannot define done until you can define what is good
- need to define the what if questions up front, and we also need the answers – what are the things the product should not do
- a good metric for government departments – “will it make the newspaper” or even worse “will it make the front page”
- technical debt – “don’t apply for the mortgage”
- testing is a growth industry – the amount of regression testing grows every iteration
- testing is about information, risk and making sure it is good
- testing is pervasive – we should be testing everything all the time – process, people, tools, requirements
- everyone in an agile team is a tester and knows what is right and wrong – just need to empower them
- test all your development artefacts – look at everything with your tester eyes on
- agile testing techniques are no different to traditional testing techniques
- story cards are not relevant as artefacts – tests should tell you what the system does (see the sketch after this list)
- we live in a world of regression, it is the responsibility of the entire team
- automation is a must have – don’t automate everything but you need technology to allow you to cover a lot of ground very quickly
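Sharon’s line that tests, not story cards, should tell you what the system does is the sort of thing a small example makes concrete. Here is a minimal sketch of my own (not from her talk, and all names hypothetical) of a test written in story terms, so that reading it tells you the behaviour without needing the card:

```java
// Hypothetical sketch of a behaviour-documenting test, not an example from the session.
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class OverdueAccountAcceptanceTest {

    @Test
    public void anAccountMoreThanThirtyDaysInArrearsIsSuspended() {
        // Given a customer with an invoice 45 days overdue
        Account account = new Account();
        account.addOverdueInvoice(45);

        // When the suspension policy is applied
        account.applySuspensionPolicy();

        // Then the account is suspended
        assertEquals(Account.Status.SUSPENDED, account.status());
    }

    // Just enough production-style code to keep the sketch self-contained.
    static class Account {
        enum Status { ACTIVE, SUSPENDED }

        private Status status = Status.ACTIVE;
        private int oldestOverdueDays = 0;

        void addOverdueInvoice(int daysOverdue) {
            oldestOverdueDays = Math.max(oldestOverdueDays, daysOverdue);
        }

        void applySuspensionPolicy() {
            if (oldestOverdueDays > 30) {
                status = Status.SUSPENDED;
            }
        }

        Status status() {
            return status;
        }
    }
}
```

Because tests like this run on every build, they also keep growing into the regression suite she talks about, which is where automation earns its keep.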
Open Space
I sat in for a while on the open space topic of testing in agile, which was led by Rene Maslen. Some good conversation and discussion in this group.
Panel: Paying Down Technical Debt
It was great to be asked to be on a technical panel, especially with the likes of Martin Kearns, Adam Boas and Andy Marks, with Marina Chiovetti looking after the moderation. Probably the most unnerving part was seeing Martin Fowler sitting in the front row!
Marina initially set the scene for technical debt. Ward Cunningham first drew the comparison between technical complexity and debt in a 1992 experience report: “Shipping first time code is like going into debt. A little debt speeds development so long as it is paid back promptly with a rewrite… The danger occurs when the debt is not repaid. Every minute spent on not-quite-right code counts as interest on that debt. Entire engineering organizations can be brought to a stand-still under the debt load of an unconsolidated implementation, object-oriented or otherwise.” The Agile community has faced a lot of hard questions about how a methodology that breaks development into short iterations can maintain a long-term view on issues like maintainability. Does Agile unintentionally increase the risk of technical debt?
We then kicked into a number of questions:
- What does technical debt look like? How would I know if I saw it?
- What are some of the common causes of technical debt on teams you’re working with?
- How do you prioritise technical debt against new feature development? How do you construct a roadmap for paying back technical debt?
- What is the cost of tech debt? How can we monetise it?
- I’ve heard of one team that had a technical debt backlog that was prioritised beside the product backlog. The CEO set a target that the tech debt backlog couldn’t grow beyond 10% of the size of the product backlog, and as soon as it got bigger than 10% the team had to start focusing on paying down the tech debt. So, how do you monitor technical debt and manage it so that it doesn’t spiral out of control? Can you give us some examples or techniques you’ve used?
- Technical debt sounds pretty, well, techie. How can we expect business/product owners to understand its impacts? And how can business owners align their objectives to managing tech debt?
Distributed Agile Projects
After the close of the conference, I had a nice long chat with Siva Dorairaj from Victoria University of Wellington about his PhD research into Distributed Agile Projects. I look forward to seeing the outputs from his research.