Agile 2009 Day 1 Review

Once again I was extremely lucky to have two talks accepted at Agile 2009 (with Paul King), and the support from Suncorp to send me along to speak. Whilst it's been quite a number of weeks since the conference, I wanted to ensure that I posted my notes and comments. This year, my second attendance, I found the hallway discussions all the more valuable and had many awesome conversations with friends made last year as well as new friends just met. Added to this, Chicago exceeded my expectations as the host city.

Once again, the number of simultaneous sessions made deciding what to attend extremely difficult.

The sessions I attended on day 1 were as follows:

Using the Agile Testing Quadrants to Plan Your Testing Efforts

This session on the testing stage was delivered by Janet Gregory, one of the authors of Agile Testing. The slides are available on the Agile 2009 site.

Testers should be part of release planning and think about:

  • scope
  • test infrastructure and test tools / automation
  • how much documentation, is it too much, can I extract it from somewhere

Iteration planning:

  • plan for done, acceptance tests
  • priorities of stories, which stories to do first, connect with developers
  • budget for defects unless you are a high performing team

Need to acceptance test the feature, not just the story.

We then did a collaboration tools exercise, and some of the tools used by the audience were:

  • desk check / show me – when a developer thinks they have finished coding, get together and take a look
  • wikis, conference calls, GreenHopper, etc
  • daily standup – share when things are done (reassess the format if you find them ineffective)
  • project cards – used for story management and documenting conditions for acceptance
  • sticky notes and pens for a co-located team
  • demonstration every week or end of every iteration
  • FIT tool, used for demos
  • walking and talking
  • pairing
  • generated artefacts from the CI server
  • instant messaging
  • puzzle / chocolates on desk to encourage talk, “free to developers if they come and ask a question”
  • rolling desks on wheels, so they can switch configuration
  • rolling whiteboards
  • JIT (Just In Time) meetings as required
  • mind mapping software that hooks up to Jira
  • retrospectives
  • team review story and write tests together
  • nobody said “email” – no email!
  • recorded chat room, so conversation is recorded

Waterfall test pyramid, upside down, very unstable – Functional Tests –> API Tests –> Unit Tests (heavy functional tests based on GUI, very few unit tests).

Automated test pyramid (Mike Cohn) – unit tests / component tests are the base layer, require testable code that we can hook into below the GUI at API layer, GUI tests are most brittle because UI changes so do as few of these as possible, right at the top you might need a handful of manual tests.

Agile testing quadrants change the way you think about testing – use them to classify tests by the purpose of the test (why are we writing these tests); tests will cross boundaries.

Agile testing quadrants – can be used as a collaboration tool (developers will understand how they can help), emphasize the whole team approach (no “pass this to the QA team”, the whole team is responsible for testing), and can be used to define doneness (use for planning what needs to be done, and whether the estimate has allowed for the amount of testing we wish to complete).

Quadrant 1 – technology facing tests that support the team, TDD supports the design of the team, tester has feeling of comfort

  • unit tests test the developer intent, individual tests on a method, small chunks of code, fast feedback mechanism, code is doing what it should do
  • TDD tests internal code quality, if developers test correctly it flows all the way through and makes easier to test functionally
  • base for regression suite, if you are going to spend any time on automation, “put it here”, return on investment is better the lower you go in the pyramid
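As a sketch of what lives at the base of that regression suite: a fast, isolated unit test. Python's unittest is used here purely for illustration, and the function under test is hypothetical.

```python
import unittest

def apply_discount(price, percent):
    """Small chunk of code under test: price less a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100.0), 2)

class ApplyDiscountTest(unittest.TestCase):
    """Developer-intent tests: one method, fast feedback, no GUI or database."""

    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.0, 10), 90.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(49.95, 0), 49.95)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)
```

Tests like these run in milliseconds, which is exactly why the return on automation investment is highest at this layer of the pyramid.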

Quadrant 2 – where the acceptance tests live, supporting the team in natural language, helping the team deliver better software; use paper prototypes to talk to customers rather than a big GUI, acceptance tests upfront help define the story, use examples to elicit requirements (the easiest way to get clarification from the customer – always ask “not sure what you mean” or “give me an example”), pair testing (ask for feedback as soon as possible)

  • the examples can become your tests, write upfront and ensure that developer makes them pass when they develop code, use tools such as Fit / Fitnesse, Cucumber, Ruby / Watir
  • examples help customer achieve advance clarity, focus on external quality (facing the business), want the tests to spark a conversation with the developers
  • BDD use of given (preconditions), when, then as opposed to tabular formats in Fitnesse, useful for workflows
  • Janet polled the room and only about a dozen people in the room gave their acceptance tests to the developers prior to the story being developed
  • if no automation tool, write up a manual sheet, give it to the developers and have a conversation before the card starts
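The “examples become your tests” idea can be sketched without any tool at all. Here is a hypothetical “free shipping over $50” story written as given/when/then in plain Python (in practice Fit/Fitnesse or Cucumber would drive this; all names and numbers are invented):

```python
# Given/when/then sketched as plain functions: a hypothetical
# "free shipping over $50" story, written before the code it specifies.

def given_a_cart_with_items(prices):
    # precondition: a cart represented as a list of line prices
    return {"items": list(prices)}

def when_the_customer_checks_out(cart):
    total = sum(cart["items"])
    cart["total"] = total
    cart["free_shipping"] = total >= 50.0
    return cart

def then_shipping_is_free(cart):
    assert cart["free_shipping"], "expected free shipping at %.2f" % cart["total"]

# The example elicited from the customer becomes the test:
cart = given_a_cart_with_items([30.0, 25.0])
cart = when_the_customer_checks_out(cart)
then_shipping_is_free(cart)
```

The point is that the customer's concrete example ($30 + $25 should ship free) is the specification, and the developer's job is to make it pass.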

Quadrant 3 – user acceptance testing, critiquing the product, getting the customer to look at the system

  • exploratory testing – time box these sessions to reassess about how far you wish to go, following instincts and smells with a purpose, touring (eg. the money tour) as defined by James Whittaker and James Bach (in the book Exploratory Software Testing), this is where you find majority of bugs so testers should spend the majority of their time here (which is why you need a good base of automated tests)
  • collaboration testing – forge a relationship with the developers so you know what they are developing,
  • remember your context to determine how much testing is enough (eg. mission critical software vs an internal application)
  • attack stories using different personas – Brian Marick likes to create evil personas (eg “pathological evil millionaire”) or use impatient internet user vs grandma who clicks every link on the internet

Quadrant 4 – non functional tests; these should be part of every story (eg. is there a security or performance aspect?), “ility” testing, security testing, recovery, data migration, infrastructure testing; do as much as possible upfront, although sometimes you will need environments that will not be available until the end

  • non functional requirements may be higher priority than functional (eg. an Air Canada seat sale might need critical performance)

Test plan matrix – big picture of testing against functions for release, usually on a big whiteboard, use colours (stickies) to show progress, benefit is in the planning in what we need to do testing wise but also appeases management because they like to see progress, gives idea of where you are going

Can use a lightweight plan, put risks on a white page, 35 of the 37 pages of the IEEE test plan are static, so put that information somewhere else

Test coverage – think about it so the team knows when the testing is done; a burn down chart will be enough if you test story by story; when thinking about risk ensure you include the customer (they may have a different opinion of risk)

Summary:

  • think big picture – developer following a GPS only needs to know next 2 weeks, but tester is a navigator and needs the map
  • include the whole team in planning and test planning
  • use the quadrants as a checklist (put them on the wall)
  • consider the simplest thing, especially in relation to documentation
  • think about metrics – one man team might be good enough to just know they passed
  • visible, simple, valuable

Janet also mentioned the following throughout the session:

I also stumbled across a related blog post on this session at: http://agile2009.blogspot.com/2009/08/agile-testing-quadrants.html

What Does an Agile Coach Do?

This session was delivered by Liz Sedley & Rachel Davies, authors of the new book Agile Coaching. The slides are available on the Agile 2009 site.

This was a hands-on workshop and involved some good discussions on how to deal with different coaching scenarios.

Zen & the Art of Software Quality

This session was delivered by the legendary Jim Highsmith. The slides are available on the Agile 2009 site.

  • “There Is No More Normal” – John Chambers, Cisco CEO, Business Week, 2009
  • business strategy needs to be more adapting to change than performing to plans
  • mixed messages – be flexible but conform to a plan – dilemma faced by many agile teams
  • “Artful Making” – Rob Austin – describes a $125 million software failure
  • in 1994 there were 82% software failures, 68% in 2009 (success defined as on time, on budget, all specified features) – Standish is measuring the wrong thing, not a good measure
  • cancellation of a project should not be a failure, it is a good thing
  • current environment – schedule is more important than value
  • “Beyond Budgeting” – Hope/Fraser – not a good book, but good ideas
  • “Measuring & Managing Performance in Organisations” – Austin – all measurements are dysfunctional, you get a different outcome than you expected
  • if the budget is 100 and you achieve 100, that scores better than a budget of 120 where you achieve 110 – which would a performance management system reward? (the 100, even though the latter is the better achievement)
  • beyond budgeting – make people accountable for customer outcomes, create high performance climate based on relative success amongst others
  • trust, honesty and intentions are better than measurements
  • performance tends to improve while people figure out the system, but under pressure people focus on measurement goals rather than outcomes
  • earned value (time + cost) has nothing to do with value, does not have anything to do with what is delivered to the customer
  • we need to move from scope/cost/quality to value/quality/constraints (scope/cost/schedule)
  • core benefit from agile has been value and quality
  • everybody comes to work to do quality work, but quality is never well defined
  • “Zen & The Art of Motorcycle Maintenance” – Pirsig – quality ideas
  • is quality objective or in the eye of the beholder? people have different ideas
  • need extrinsic quality (value) and intrinsic quality (so you can deliver quality tomorrow)
  • “Applied Software Measurement” – Capers Jones – 95% defect removal rate is the sweet spot for quality
  • experience shows doubling staff quadruples the number of defects – BMC were able to buck this trend using agile
  • difficult errors take time to find – the longer they take, the worse the quality of the code
  • first year of product release the quality might be OK, but then adding new features more important than fixing software debt, over time the cost of change increases and accumulated technical debt harder to fix, but the more debt the higher the pressure to deliver
  • strategies – do nothing, replace (high cost/risk), incremental refactoring, commitment to innovate – the best way but hard to sell politically – moving from a vicious cycle to a virtuous cycle (55% said it was easier to support agile developed products)
  • productivity is in the features you don’t do – 64% of software features are never used, what if we put 25% of that money into refactoring or learning agile?
  • agile value curve – if doing high value first we can ask the question: do we have enough to release the product?
  • need to reduce the marginal value of our stories
  • if you don’t have time to estimate value, you don’t have time to estimate cost
  • philosophy – value is an allocation not a calculation (cost is a calculation), so use value points and allocate from the top down – value points need more thought than ranking – there is additional information when you look at a 25 story point card worth only 2 value points, it also demonstrates that value is important; you should be able to do this fairly quickly
  • value and priority are different – a low value card high on priority might be a guide, pick a cap for the value
  • value points, like story points, are relative
  • a story point is a calculation of cost, a value point is an allocation of revenue
  • Intel has 17 standard measures of value, help to determine as a guide
  • value in Chinese means smart/fast
  • value – is product releasable – always ask the business owner or product manager that question – example that a product could be released when it was 20% complete
  • parking lot diagram – emphasizes the capabilities we are delivering to the customer in their own language, shows progress and value delivered by the number of stories done / done
  • Gantt chart shows task complete to a schedule
  • questions – can we release, what is value-cost ratio (do we need to continue or do something else that is higher value), what is product quality, are we within acceptable constraints
  • how do you determine if you are in a technical debt hole – using qualitative measures in your code
  • ask the question – do you know why it takes 3 months to make a change? explain the technical debt curve, start to show people that quality matters (eg. automated testing becomes a time accelerator)
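The allocation-versus-calculation distinction above can be sketched with a hypothetical backlog: story points are estimated bottom-up per card, while value points are allocated top-down from a fixed pool. All names and numbers here are invented for illustration.

```python
# Hypothetical backlog: story points are calculated (estimated) per card,
# value points are allocated top-down from a fixed pool (here 100 total).
backlog = {
    "search by order number": {"story_points": 25, "value_points": 2},
    "one-click reorder":      {"story_points": 8,  "value_points": 40},
    "export to CSV":          {"story_points": 5,  "value_points": 8},
}

def value_cost_ratio(card):
    """High ratio = lots of value for little cost; a 25-point card worth
    only 2 value points stands out immediately."""
    return card["value_points"] / card["story_points"]

ranked = sorted(backlog, key=lambda name: value_cost_ratio(backlog[name]),
                reverse=True)
for name in ranked:
    print("%-25s ratio %.2f" % (name, value_cost_ratio(backlog[name])))
```

Ranking by the value–cost ratio surfaces exactly the question Highsmith raises: do we already have enough value delivered to release, or should we do something else that is higher value?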

Ice Breaker & Freshers Fair

The Fresher’s Fair at the Ice Breaker had a number of great groups including Kanban, Usability and CITCON. I stumbled across the following poster that was a long way from home…

Agile 2009 CITCON Brisbane

AAFTT Workshop 2009 (Chicago)

I had the great pleasure to attend the Agile Alliance Functional Testing Tools (AAFTT) workshop on the Sunday before the Agile 2009 conference in Chicago, and share discussion with some of the best minds in the testing community from around the world.

The location was right across the road from the Willis Tower (better known by its previous name, the Sears Tower). Some of the notable attendees amongst many others included:

There were at least 4 tracks to choose from, these are the notes from the ones I participated in.

Screencasting

Small group discussion led by Jason Huggins about a different way of thinking about test artefacts (basically producing an iPhone commercial)


  • the Rails screencast sold Rails because it sold the idea and then the product sold itself
  • now, with YouTube, etc, we have the tools available
  • used to be RTFM, now it is WTFV
  • ideal is to produce automated tests like the iPhone commercial, instead of a test report
  • use the “dailies” concept, like in the movies
  • perhaps the movie should be at a feature level, because the video should be interesting
  • best suited for happy path testing, is a way to secure project funding and money, remember that the iPhone commercial does not show the AT&T network being down
  • there is a separation between pre-project and during testing
  • tools currently exist, including the Castanaut DSL
  • part of the offering of Sauce Labs, currently recording Selenium tests
  • Castro, an API created on top of the command line utility vnc2swf
  • at the moment you need to clean up the screens that are recorded
  • the advantage, being VNC, is that you can use all sorts of hardware, including the iPhone
  • suggest that you use something like ulimit to stop runaway videos, especially when being run in an automated test, to limit the size of the directory or the length of the video
  • suggest make a rule that no test is longer than five minutes
  • given the current tools are written in Python, DocTest is good for testing
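On that last point, Python's doctest turns the examples embedded in a docstring into runnable tests. A toy illustration (the function and its behaviour are hypothetical):

```python
import doctest

def slugify(title):
    """Turn a video title into a file-safe slug.

    >>> slugify("Login Happy Path")
    'login-happy-path'
    >>> slugify("  Checkout  ")
    'checkout'
    """
    return "-".join(title.lower().split())

# Run the examples embedded in the docstrings as tests:
failures, attempted = doctest.testmod(verbose=False)
assert failures == 0
```

The appeal for a screencasting toolchain is that the documentation and the regression tests are literally the same text.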

Lightning Talks on Tools

I came in mid-way through this session, but caught some of the tools being discussed at the end

  • some tools are quick to set up, but too hard to get past the basic level
  • tests are procedural, engineers tend to over-engineer

Robot IDE (RIDE)

  • most tools have a basic vocabulary to overcome
  • IDE is worth looking at
  • Robot has a Selenium plugin, but it is easy to write your own framework

Twist

  • specify tests as requirements, looks like a document, stored as text, write whatever you want
  • refactoring support as a first level concept
  • out of the box support for Selenium and Frankenstein (Swing)
  • write acceptance test – brown shows not implemented, allows developer to know what to implement, turns blue when done
  • refactoring concept “rephrase”
  • supports business rule tables (ie. Fitnesse for data driven tests)
  • support to mark a test as manual and generate the same reports
  • commercial software, licensed in packs
  • plugins to Eclipse, but don’t need to be familiar with this unless you are developing the automation

WebDriver

  • been around for three years

UltiFit

  • Ultimate Software, internal currently; allows you to select Fitnesse tests, setup and teardown, close browser windows, nice GUI, etc.
  • uses TestRunner under the covers

SWAT

  • been around for two years, more traction now that Lisa Crispin works for Ultimate Software
  • simple editor for SWAT (& somewhat Fitnesse)
  • has a database access editor
  • uses Fitnesse syntax
  • there is a recorder, only good for teaching, people get lazy and don’t refactor
  • can take screenshots, borrowed from WatiN
  • can’t run SWAT when Fitnesse is running as a server
  • SWAT is a C# library at its core
  • can run macros, tests from other tests
  • run script – write script (eg. JavaScript) to help things that are hard to test

High Performance Browser Testing / Selenium

Jason Huggins led this conversation which was more a roundtable debate than anything else. The group discussed how we can get tests running quicker and reduce feedback times considerably.

This discussion led to a couple of the quotes of the workshop from Jason Huggins:

  • “Selenium IDE is the place to start with Selenium, but it is Selenium on training wheels”
  • “Record/playback testing tools should be clearly labeled as ‘training wheels’”
  • “What to do with the Selenium IDE, no self respecting developer will use it.” Thinking of renaming the IDE to Selenium Trainer.
  • “Amazing how many people in the testing community are red-green colour blind”

When Can / Do You Automate Too Much?

This started as a discussion on testing led by Brandon Carlson…

  • get your business people to write the tests – they will understand how hard it is, have seen outcome that amount of scope reduced because they have to do the work

…but ended up as a great discussion on agile approaches and rollout, discussing a number of war stories led by Dana Wells and Jason Montague from Wells Fargo

  • still early in their agile deployment
  • wish to emulate some of the good work done by some of the early agile teams
  • estimate in NUTs (Nebulous Units of Time)

Miscellaneous and Other Links

Some other miscellaneous observations from the workshop:

  • a number of sessions were recorded
  • of those using Windows laptops, a large percentage were running Google Chrome
  • Wikispaces is good to setup a quick wiki

A number of posts about the workshop have been posted since including:

And you can view the photos that I took from the event at: http://www.flickr.com/photos/33840476@N06/sets/72157622521200928/

Agile Academy Inaugural Meetup

I attended the first meetup of the Agile Academy tonight to hear Jeff Smith (CIO of Suncorp) share his story of the agile journey.

Here are my takeaways:

  • students have passion, conviction & coherence of thought, but it is not necessarily youth that gives them this – it is more an accessible learning environment
  • the premise behind agile is simplicity
  • the job of a leader is not to motivate people, it is to inspire them, your staff should come to work with a passion
  • there was not much in the way of agile training out there, so decided to develop within and then share with the community
  • intention is to get others to contribute to the Agile Academy, to make it better and fill the gaps
  • have learnt there is value in coaches
  • good leadership is scarce

Key success factors according to Jeff:

  • use passion and ideas, not threats and bureaucracy
  • tell a narrative of who you are and what you want to become
  • entrepreneurship is not for the few
  • need resilience and humility, have the courage to change our minds (never give up the right to be wrong) and provide an environment for this to happen
  • create a good listening organisation, you have to ask!
  • change almost never fails when it is too early
  • connect and challenge people for the collective wisdom – the one thing you can’t copy is culture
  • aim for success not perfection
  • do what you believe in and paint a good picture of it (then people will follow)

From the Q&A sessions:

  • needed a simple way to describe agile to the business and the board, painted a picture and a simple message. Need to train the business and seed successes in different places and put a support structure behind it. You also need the courage to stop projects.
  • “…better to piss people off in the beginning than try and make them happy in the end”
  • listen to the leaders (they are not necessarily at the top)
  • you need to balance people, technology and processes
  • the challenge for the first year was to change people’s belief processes, now Suncorp needs to mature
  • Scrum can be used anywhere in the organisation (it was used to roll out eLearning by the HR area)
  • a project over 6-7 months means that you lose visibility of what you are trying to achieve
  • the less people on a project the better it works
  • have to accept what you have and not use that as an excuse for not inventing
  • decided the best approach was to aggregate people by business people skill, by technical skill and then by city (as opposed to one centralised testing centre for example)
  • the entire team needs to be part of a connected ecosystem

Barcamp Brisbane III: The Search For Flock Wrapup

Here is my better late than never wrapup of Barcamp Brisbane III (held last weekend at the East Brisbane Bowls Club), a worthwhile meetup of locals willing to share their skills with others.

Discussion at Barcamp Brisbane III


From the lightning talks that I attended:

Speed Intro

A new concept was the speed introductions, one minute to introduce yourself to someone you don’t know. Worked quite well, met a bunch of new people and got a good vibe of the different passions of the attendees.

Favourite Cloud Applications

Session presented by Michael Rees.

None of the applications were particularly new to me, but these were my takeaways for more of a look:

  • Gliffy – online diagram and flowchart editor
  • Slideshare – for uploading of presentations
  • Prezi – requires Flash, but is a big space to show text and is a new and interesting way to show presentations
  • Delicious – not new, but Michael uses it as a home page
  • Archive.org – from the guys that brought us the wayback machine, they have a service that allows you to upload audio and video maintaining the original resolution (although the upload is quite slow apparently)
  • Evernote – I have used OneNote for convenience of late, but a place to store and search for notes on multiple platforms and phones, this is a service that Michael pays for as well
  • Online Storage – LiveMesh is a favourite, gives you 5GB that syncs anywhere. He also mentioned Live Sync P2P, SkyDrive and Amazon S3

Introduction to Git

Attended two sessions on Git, a discussion and then an online overview.

  • A local repository, distributed
  • Competitors are Mercurial and Bazaar
  • Git is not as good on Windows environments right now
  • Github can be used for public hosting
  • git gui is one interface to Git, amongst many others

This is certainly the next generation of version control, but I have concerns on how to get this working in the enterprise especially since I have enough trouble convincing people to commit let alone to commit often. Can see its potential for open source and independent or small developers however.

Groovy, Griffon & Grails

Paul King and Bob Brown gave a good introduction to the G3 technologies. I was especially interested in Griffon, since I hadn’t spent any time looking at it previously.

Government 2.0

Was interested to listen into the discussion about PublicSphere / Government 2.0 by Des Walsh, and the opportunities it may present. Des has posted a more in-depth post here: http://deswalsh.com/2009/07/19/government-2-0-at-barcamp-brisbane/

Drupal Hosting

Discussion about Drupal Hosting:

  • Open Atrium – intranet website incorporating wiki, forum, internal-twitter – theme around existing modules
  • Aegir – Drupal Hosting System
  • Suspect that much like Linux, we will see many distributions in future
  • Acquia are the Red Hat of the Drupal world
  • Drupal 7 has gone full TDD with 80% coverage

Fish Shell

An online demonstration of the Fish Shell, which can be best described as a shell that adds a bunch of added functionality to bash, such as better history and visualisations.

Conclusion

A good way to get a launch into some new and interesting technologies and meet some new people. There is apparently another planned before the end of the year.

Wrap up from CITCON Brisbane

I attended CITCON (the Continuous Integration and Testing Conference) in Brisbane last weekend and had an awesome time discussing a range of topics with the most passionate in this field.

Deep in CITCON discussion


I have added my notes to the conference wiki, but my takeaways from the sessions I attended are:

Elements of Enterprise Continuous Integration

Jeff Frederick led a discussion based around the Elements of Continuous Integration maturity model:

  • for teams that are already doing continuous integration, it gives you a target to obtain
  • is obnoxious after insane (where to for teams that are already at the top level)?
  • tooling makes continuous integration trivial now (when Cruise Control was released many people thought it crazy that you might build on every release, now it’s a given)
  • the model was developed because people assume what is possible is based around their personal experiences
  • the model shows the industry norms and targets, and if your team is not at these levels you are behind the curve

The discussion branched out around the following ideas:

  • scrum does not prescribe continuous integration, but continuous integration is a good development technique
  • that it should be acknowledged that there is a difference between project builds and full product builds (which can take days)
  • I raised the idea that perhaps there should be an element around team principles, and that things like performance (and more importantly, the team realisation that performance should be monitored and improved) should be an indicator to maturity (there was much debate about this!)
  • a number of industries potentially have their continuous integration processes audited, such as defence, gaming and financial organisations that have Sarbanes-Oxley requirements
  • it was acknowledged that most large organisations have teams at different levels on the maturity scale (this is certainly my experience)
  • dynamic languages don’t really build or deploy. This raised discussion that dynamic languages are not compiling as opposed to not building, and that in many cases one-man consultancies can manage their deployment process in a much more lightweight manner
  • parallel to CMMI, is there a payoff to getting to insane?
  • maturity is often determined when we move from dropping code to testers versus testing the development build (where testers are writing the code while the code is being developed)
  • where is the line that determines that the build is complete? It should be the entire team, not just the developers or the QA team
  • the QA team is traditionally where much of the auditing happens, therefore many testers are reluctant to change as they have built up processes to deal with audits over a number of years

For the record, the cutting-edge agile teams I have worked with over the last few years were at the following levels:

  • Building (Intermediate)
  • Deploying (Intermediate)
  • Testing (Insane)
  • Reporting (Intermediate)

We still have work to do!

Virtualisation & CI

I used the “law of two feet” during this session, but was interested to hear that many people are using virtualisation very effectively in their test labs, and that it makes getting environments and data ready for testing much easier.

Long Build Times

The discussion was well-established by the time I got to this session, but some of the key points for me from the discussion were:

  • question as to when static analysis checks should be run in the build – the consensus was that running them first means you get the quickest feedback
  • longer builds should be run nightly so as not to hold up developers
  • prioritising build queues or using different machines sounds like a good idea, but nobody is doing it
  • you can reuse functional tests for performance tests, but targeting specific tests seems to work better
  • Atlassian use JMeter for performance tests and have a variety of Maven and Ant builds, but use Maven for managing repositories
  • Ant is still well regarded, IDEA support is awesome, many people do not understand the power of custom Ant tasks or the Ant idioms
  • the build should be regarded as part of your code
  • discussion about using a Java build tool such as Hammer, and why we can’t articulate why it seems wrong
  • not enough people understand Maven, usually there is “one guy” on the team
  • Vizant is a good tool to graph the build
  • EasyAnt combines all of the Ant idioms plus Ivy
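The "static analysis first" consensus above is really about ordering build stages by feedback speed. A toy sketch of that idea (the stage names, timings and driver are all hypothetical; a real build would shell out to the actual tools):

```python
# Hypothetical staged build: the cheapest checks run first, so a failure
# gives feedback before the expensive stages are even attempted.
def run_build(stages):
    """stages: list of (name, cost_seconds, passes) tuples, in run order."""
    completed = []
    for name, cost_seconds, passes in stages:
        completed.append(name)
        if not passes:
            return {"failed_at": name, "ran": completed,
                    "feedback_after": sum(c for _, c, _ in stages[:len(completed)])}
    return {"failed_at": None, "ran": completed,
            "feedback_after": sum(c for _, c, _ in stages)}

# Static analysis first (seconds), long-running suites last (minutes/nightly).
stages = [
    ("static analysis",  30,   False),  # e.g. lint/Checkstyle: fails here...
    ("unit tests",       120,  True),
    ("functional tests", 1800, True),
]
result = run_build(stages)
print(result)  # feedback after 30s, not after the full 1950s
```

Reversing the stage order would mean waiting through the entire functional suite before learning about a trivial static-analysis failure, which is the whole argument for this ordering.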

Is Scrum Evil?

Jeff Frederick led a discussion that he has led at previous CITCONs around the world

The team first debated why Scrum is Evil. During this discussion I really thought the whole agile movement was done for. Jeff asked the group to finish the sentence Scrum Is Evil because…:

  • it becomes an excuse
  • that’s not Scrum
  • touted as a silver bullet
  • hides poor personal estimation
  • master as dictator, project manager
  • two days to agile master certification
  • daily standup equals agile
  • agile by the numbers
  • is dessert first
  • you lose the baby with the bathwater
  • Scrum teams don’t play well with others including customers
  • it has certification
  • is the new RUP

Jeff then proposed a way to think about Scrum adoption as outlined in Geoffrey Moore’s “Crossing The Chasm”. The early adopters had success while the early majority are putting their faith in training everybody as Certified Scrum Masters (a problem that appears to be a far greater issue in Europe than Australia).

Then, just as though all hope had gone, Jeff asked the group to finish the sentence Scrum is Good because…:

  • people can get it
  • an easy introduction
  • a good starting point
  • it is better than a cowboy shop
  • people can actually follow it
  • improves visibility
  • blockers are highlighted
  • testers can start work early
  • provides a forum for communication
  • can engage customers in a much richer way
  • states there should be a facilitator
  • results focussed
  • makes everybody responsible for end result
  • better communication from end result

The key outcome by the group was “Scrum is not evil… people are evil”

This was a great way of trying to tease out the issues and advantages to using an agile process and one that we may be able to use in the enterprise with teams who have been on training but appear to be resistant to change.

Seeding Test Data

A good discussion about ways to seed test data

  • Erik Petersen introduced the group to GenerateData.com, a free site that generates realistic addresses and data a configurable number of times, which you can then inject via SQL – the site looks awesome!
  • others in the group mentioned LiquiBase that can be used to version the database, is designed for database management but can be used to seed data
  • Unitils is used to setup data scripts
  • one suggestion was to build a reset database function into the system
  • HSQL (Hypersonic) is a good way to create databases from Hibernate, in memory
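A minimal sketch of that in-memory idea, using Python's built-in sqlite3 in place of HSQL/Hibernate (the table and data are invented for illustration; the principle is the same: create fresh, seed, test, throw away):

```python
import sqlite3

# Same idea as an in-memory HSQL database: created fresh per test run,
# seeded with known data, and discarded afterwards -- a built-in "reset
# database" function for free.
def fresh_seeded_db():
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT, city TEXT)")
    seed_rows = [
        (1, "Alice", "Brisbane"),
        (2, "Bob", "Chicago"),
    ]
    conn.executemany("INSERT INTO customer VALUES (?, ?, ?)", seed_rows)
    conn.commit()
    return conn

conn = fresh_seeded_db()
count = conn.execute("SELECT COUNT(*) FROM customer").fetchone()[0]
print(count)  # 2 seeded rows, identical on every run
```

Because the database never outlives the process, tests start from the same known state every time, which sidesteps most of the data-contamination problems the group was discussing.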

The discussion got a little more generic and talked about:

  • Roo, a Grails-like platform for Java
  • WebObjects by Apple is a lot better than it used to be

Extending CI Past Traditional Dev & Release Process

I led this discussion, and whilst it focussed mainly on different usages that I have been involved with (with assistance from Paul O’Keeffe and Paul King), we also had a good discussion about Tableaux and the build process at Atlassian.

Conclusions

A great open conference attended by people passionate enough to give up their Saturday to talk about continuous integration and testing.

ASWEC 2009: Experiences from Agile Projects Great & Small

My presentation from the Australian Software Engineering Conference (ASWEC) 2009 that I delivered with Paul King, called “Experiences from Agile Projects Great and Small”, is available on Slideshare.

Agile 2008 Review

It was a great honour to have two experience report talks accepted at the Agile 2008 conference in Toronto, Canada with my colleague Paul King. Initially Paul was going to attend and present on my behalf, but days before the conference I got the OK from my employer to both present and attend my first international Agile conference (and my first trip overseas from Australia as well!)

Paul and I presented two sessions: “Agile Project Experiences: The Story of Three Little Pigs” and “Technical Lessons Learned Turning the Agile Dials to Eleven!”

I have some more detailed notes buried in a pile somewhere and will post if and when I find them, but this is a retrospective deck I presented when I returned home to a number of internal brown bag forums as well as the Brisbane XP User Group.

Agile 2008 – Technical Lessons Learned Turning the Agile Dials to Eleven!

My presentation with Paul King from Agile 2008 called “Technical Lessons Learned Turning the Agile Dials to Eleven!” is available on Slideshare.

Developer practices for traditional and agile Java development are well understood and documented. But dynamic languages – Groovy, Ruby, and others – change the ground rules. Many of the common practices, refactoring techniques, and design patterns we have been taught either no longer apply or should be applied differently and some new techniques come into play. In this talk, techniques for agile development with dynamic languages are discussed. How should we better apply refactoring techniques? What new aspects do we need to think about?

The session was recorded and is available on InfoQ.

The corresponding paper is available via the IEEE and is also available in full via the Agile Academy.

Agile 2008 – Agile Project Experiences: The Story of Three Little Pigs

My presentation with Paul King from Agile 2008 called “Agile Project Experiences: The Story of Three Little Pigs” is available on Slideshare.

Over the last few years, we have aggressively applied agile practices on a number of projects with success. These successes, however, have not been achieved without challenges and lessons learnt along the way. This experience report specifically highlights examples from three different projects of varying sizes in this period in the same organisation (three little pigs) where in all cases the pigs were well and truly committed.

Some of the key successes from the example projects will also be discussed.

The corresponding paper is available via the IEEE and is also available in full via the Agile Academy.