George Dinwiddie led this session, which turned into a lively discussion! I had proposed what I thought was a related session on Specification By Example, so the two were combined, but the conversation never really had a chance of getting onto that topic!
George expects the business people to be able to read and understand the tests
non-programmers should not be writing automation, it is the programmers’ responsibility
wants to be able to extract working tests into a step definition rather than needing to rewrite in Ruby (George Dinwiddie)
there is a difference between a specification and testing (Christian Hassa), this is a fundamental shift
building a DSL – talk about terminology and how we explore our domain – essential step
you don’t create a DSL, you build it
not a problem with the toolset but our training in thinking in a procedural way rather than an example way of thinking (Corey Haines)
testers new to automation create large scripts because it’s their only hope of creating some sort of repetition (@chzy); it does not take a lot of effort and most business people are open to working this way
enable non-programmers by getting them to come work with us every day (Woody Zuill)
George is helping people make a transition and doesn’t want people to throw away what they have
ideal is not to have step definitions call step definitions, Cucumber community is becoming a community of programmers and are moving away from this
Robot Framework is more keyword driven, more aligned to non-programmers, you can also make a mess, “it is a double edged sword” (Elisabeth Hendrickson)
testers like to test the negative cases, should they be expressed at a high level or expressed as a unit test by pairing developers and testers
if you are a tester and you cannot write simple Ruby scripts, then you have no place on my team (Corey Haines), an opinion that is probably shared by the Cucumber community (George disagreed…)
need to use the same design patterns in both Robot and Cucumber (@chzy)
in an environment that is test centric and BDD, Cucumber is the tool (usually environments with little to no QA); in a business centric environment where you can get the business involved, Robot Framework is your tool
Corey works in environments where there are very few Cucumber specifications per scenario, backed by lots of unit tests
Cucumber came out of environments where the team is predominantly developers, hence the desire to drill down to Ruby code sooner
at a large household name company – they expect testers to be more technical, which is happening more in the industry; eliminated the role of tester due to different pay grades (@chzy)
moving traditional organizations to a collaborative way of working is hard (@chzy)
wants simple refactorings that are a bridge from one place to another (George Dinwiddie)
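The step-definition idea discussed above can be sketched in code. The following is a minimal illustration in Python, not Cucumber itself (Cucumber binds Gherkin steps to Ruby code): a small registry maps plain-language steps onto functions, so business-readable scenarios drive automation written by programmers. The banking domain and step wordings are made up for illustration.

```python
import re

# Registry of (pattern, function) pairs — the "step definition" layer.
STEP_DEFINITIONS = []

def step(pattern):
    """Register a step definition against a regex pattern."""
    def decorator(func):
        STEP_DEFINITIONS.append((re.compile(pattern), func))
        return func
    return decorator

# Shared state between steps, analogous to Cucumber's World/context.
context = {"balance": 0}

@step(r"a customer has an account with \$(\d+)")
def given_account(amount):
    context["balance"] = int(amount)

@step(r"the customer deposits \$(\d+)")
def when_deposit(amount):
    context["balance"] += int(amount)

@step(r"the balance should be \$(\d+)")
def then_balance(amount):
    assert context["balance"] == int(amount)

def run_scenario(steps):
    """Match each plain-language line to a step definition and run it."""
    for line in steps:
        for pattern, func in STEP_DEFINITIONS:
            match = pattern.search(line)
            if match:
                func(*match.groups())
                break
        else:
            raise LookupError(f"No step definition for: {line}")

run_scenario([
    "Given a customer has an account with $100",
    "When the customer deposits $50",
    "Then the balance should be $150",
])
print("scenario passed")
```

The scenario text stays readable by business people while the mapping to code lives in one place — which is also why the notes warn against step definitions calling step definitions: the plain-language layer should stay flat, with reuse happening in ordinary helper functions underneath.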
at a startup Joseph was at, tests were taking up to 8 hours to run and the costs for a distributed architecture were high
Forward Internet (London) – let developers do what they want – by not testing they could be faster and more interactive than their competitors – did testing in Production, a risk that sometimes things could fail – testing should not block deployment
in some situations it is just worth hacking it out, particularly in a lean startup
if it is faster to rewrite rather than maintain it, then don’t write tests (Fred George via Corey Haines)
a big question of this is the skill level of your developers – do you have the skill level to make the choice to not do it (Corey Haines); the primary determinant of success is the skill level of your developers
Scribd – were having trouble with test speed and found out the developers were scared of breaking the PDF (which is the heart of the business) – they separated the PDF out to speed up development (so developers weren’t worried about breaking it)
quick delivery – need the quick feedback cycle to make this work, simulate production
need effective tests – small suite of tests that are 5-10 minutes long
test what you are most scared of
Silicon Valley’s issue is hiring – Facebook is stealing developers from Google because they hire good people and enable them to just hack it out
2 software industries – small companies and large corporations, very different worlds
question everything – can only do this if you have experienced it before and understand it
A couple of years ago I received an awesome opportunity to see James Bach deliver his Rapid Software Testing course in Adelaide. At the time I was working with Sharon Robson from Software Education to help re-develop the Agile Testing course for the Agile Academy, and she thought it might be good for us to sit in the back. The two day course was awesome (one of the best courses I have ever attended), although the animated debate between James and Sharon over breakfast in relation to ISTQB is one I will never forget either.
One of the great things about the course is that the notes are freely available from the Satisfice site (slides and appendices), although it is the insight and passion from James that makes the course extremely worthwhile. Unfortunately I did not earn my “testing stars” from James during this course, but I did learn a lot. I recently dug out my notes from the course and here they are below.
the secret – “watch people test” – then follow the patterns
traditionally testers muddled through, as you got more experienced you just muddled better
there are lots of practices yet to be written about
James is “walking through an orchard ripe with apples”
“nobody expects a tester to be right about anything” – we are in the evidence and inference business
tester tip – did you do “booja booja” testing? Your answer should be “not by that name”
you test under uncertainty and time pressure – if not, you are about to be laid off! organisations keep testers at a minimum number
heuristics – essential to rapid testing, eg. walking into a foreign building – “I’ll know it when I see it”
“creep and leap” – leap is the most outrageous test you can do, creep is to gently shatter the pattern in your mind – creep and leap may fail because you don’t leap far enough or you don’t creep enough
minimum number of cases has no meaning – infinite – no light flashes when you have finished testing / understand the pattern
pattern in the test cases is just the pattern in the test cases, not the program
need to leap beyond imagination
rapid testing is not about techniques – a way of thinking, a set of skills
what do testers do? – they are the “headlights of a project”, don’t need testers in the daylight (no risks)
testers don’t ensure quality of a product, they report the quality of the product
key definitions: quality is value to some person (who matters), a bug is anything about the product that threatens its value
testers represent the people whose opinion matters
defect is a bad word legally; not sure it is a defect when you find it, assumes more than you know (emotional word: bug, issue, incident)
testing and questioning are the same thing
there is a motivating question behind each test (if not, a zombie walk)
first principle – know your mission – allows you to test what matters, gets you more focussed
we are chasing risk
quality criteria – what is important, who are users
curse of expertise – people who know a lot, don’t always see a lot (why you need developers and testers)
need an oracle / result – otherwise you are just touring (an oracle is a principle or mechanism by which you find a problem)
rapid test teams should be a team of superheroes – what is your super power? Seek test teams that have variety
critical thinking – “huh”, “really”, “so” – say these words and you are on the road to critical thinking, you have to make assumptions to get work done
“huh” = what exactly does that mean?
“really” = what are the facts, how do we know it is true?
“so” = does any of this really matter, who cares?
safety language – this desk “appears” brown, have “not yet seen” a number 127 work, when you see this language your brain keeps thinking about the problem (interim conclusion only)
if you have stopped questioning you have stopped testing (and turned yourself into a test tool)
video tape your tests – take notes at timestamps, good for audit when you need that
ask a question without asking a question – make a statement / fact and wait for a reaction
model it differently – look at it in a different way
need to have the ability to slow down your thinking and go step-by-step and explain/examine your steps and inferences
exploratory testing is about trying to de-focus – seeing things in a different way
there is no instruction you can write down that won’t require some judgement from a human
irresponsible to answer a question without knowing some context – allows you to establish a risk landscape
James remembers his testing approach as a heuristic – CIDTESTDSFDPDTCRUSSPICSTMPLFDSFSCURA (his notes go on to explain this one!)
when you hear “high level”, substitute “not really”
HICCUPPS(F) heuristic – a set of consistency patterns testers can use to justify why something might be a problem: History (something has changed), Image (OK, but something makes us look stupid), Comparable products (like another system), Claims (said in a meeting, hallway), User’s expectations (do you understand users), Product (consistency), Purpose (why and what is it trying to accomplish), Statutes (something legal), Familiarity (a familiar feeling)
Oracles – calculator (ON, 2 + 2 = 4; not just that the answer won’t be 5 – it also won’t burst into flames, the number won’t disappear), Word saving files (came up with 37 alternatives), Notepad (this application can break; Microsoft suggested it was not a bug)
Ask for testability – give me controllability (a command line version) and visibility (a text version of the display); when developers say no, send an email so you have documented evidence on why you didn’t test or why it takes so long to test
ask “is there a reason I have been brought in to test this?”
ad-hoc / exploratory does not equal sloppy
testing is not the mechanical act but the questioning process; the only people who have a goal of 100% automated testing are people who hate to test – you don’t hear about automated programming (what about compiling?)
everybody does exploratory testing – creating scripts, when a script breaks, learning after a script runs, doing a script in a different way
exploratory testing acts on itself
“HP Mercury is in the business of avoiding blame”
script – to get the most out of an extremely expensive test cycle, for interactive calculations, auditable processes
mix scripting and exploration – what can we do in advance and what can we do as we go, James always starts at exploratory and moves back towards scripting
use a testing dashboard – break down by key components in the system; all management cares about is a schedule threat, so get to the point; count the number of test sessions (an uninterrupted block of testing time – 90 minutes) as management understands this (session based test management); the key is simplicity – what does management usually ask for / need (usually a different measure); counts give the wrong impression and numbers out of context, such as the number of test cases, are useless; use coverage (0 = nothing, 1 = assessed, 2 = minimum only, 3 = level we are happy to ship) and status (green = no suspected problems, yellow = testers suspect a problem, red = everybody nervous)
equivalence partitioning – you treat differences as if they are the same, models of technology allow us to understand risk (eg. dead pixels on a button), critical tester skill to slow your thinking down (is that a button?)
galumphing – doing something in an intentional, over exuberant way (eg. skipping down the street); some inexpensive galumphing can be beneficial, taking advantage of accidents to help you test better
many people are hired to fake testing – not to find bugs but to point fingers (“we hired testers”)
good testers build credibility
testers question beliefs (we are not in the belief business) – cannot believe anything that the developers tell you
lots of people can test – like surgery in the 14th century
reality steamroller method – maximise expenses from the value that they are going to have – record decisions, do your best to help out, let go of the result, write emails so your hands are clean (helpful, timestamped, documented)
get all of the documentation and create a testing playbook – diagrams, tables, test strategy
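The dashboard scheme in the notes above (coverage on a 0–3 scale, status as a traffic-light colour, sessions as the unit of effort) can be modelled very simply. The sketch below is only an illustration of that scheme — the component names, levels and session counts are made up, not from the course.

```python
# The coverage and status scales described in the notes.
COVERAGE = {0: "nothing", 1: "assessed", 2: "minimum only", 3: "happy to ship"}
STATUS = {
    "green": "no suspected problems",
    "yellow": "testers suspect a problem",
    "red": "everybody nervous",
}

# One row per key component of the system (hypothetical data):
# (component, coverage level, status, completed 90-minute sessions)
dashboard = [
    ("Login",     3, "green",  4),
    ("Reporting", 2, "yellow", 2),
    ("Billing",   1, "red",    1),
]

for component, coverage, status, sessions in dashboard:
    print(f"{component}: coverage {coverage} ({COVERAGE[coverage]}), "
          f"status {status} ({STATUS[status]}), {sessions} sessions")
```

The point of the model is exactly what the notes say: it answers management’s real question (is there a schedule threat?) without drowning anyone in test-case counts.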
When Suncorp started down the path of rolling out its agile program over four years ago, it was viewed by many internally and the industry with much scepticism and angst, yet now it is approaching mainstream adoption in the industry.
One of the key challenges of becoming agile was improving our approach to testing and quality.
In this talk we will discuss why we had to change, why we had to improve the “speed to cool” in relation to testing, our challenges and approach, and our blueprint for the “future tester” at Suncorp.
Like our agile journey, our vision for testing has been regarded as ambitious, so join us to hear why we believe raising the profile, empowerment and skillset of testing is critical to our (and your) future success.
The STANZ (Software Testing Australia New Zealand) 2011 conference was held in Wellington and Melbourne on the last week of August (into September). I was lucky enough to be invited to speak at the Melbourne event by my good friends at Software Education, who were the promoters of the event. I rolled up on the back of a flight from Los Angeles to Brisbane (and then Brisbane to Melbourne) a little jet lagged, but got heaps from the event.
ask questions of the CEO about the vision and what the product is supposed to do, listen to customer support calls, talk to marketing, talk to the developers about what bugs they value
what are the top 10 things people love and hate about your software?
look for efficiency – use checklists instead of test cases, forget about regression testing and use the computer to be more efficient
testing is about creating value for the people who matter most, your customers
people need an emotional attachment to your product – the share market is an example of a product driven by emotion
we need to create value for our customers, but just as importantly for ourselves
we can’t just focus on business value – it’s a big stick that will erode morale
talk to your customers – what do they need, what do they like, dislike, what is missing?
talk to team – what do they like about your work, how can you be better?
self evaluation – what is new in the field, am I enjoying work, what do other team members focus on or find things that I miss?
avoid blame – excuses rather than finding and solving real problems – “we wouldn’t have this problem if we were doing agile”, “management don’t get testing”, etc… – feels good to say but is not constructive
don’t expect tools or processes to rescue you – look out for your own best interests, know the problem you are solving and use the tools/process to solve it and ensure you have a way to measure it
the key to creating value is alignment – people in different jobs or teams often have different goals
leaders – clearly articulate vision and goals to the testing team and how does that align to our goals for the product and company, leadership comes from everyone in the team, leaders need to manage the politics (an organisation with more than one person will have politics)
need to continually inject change and keep people interested
people have skills, they are not resources – find your talents and invest in them
understand your context – every team will be different
tangible quality can be measured by understanding if the stakeholders’ needs are met and if you are meeting ROI; intangible quality is important and not often taken seriously – would you be afraid if your mother used this, would you like your name on the splash screen?
impress the most important stakeholder – you!
most people don’t know what great testing is – you can be shocked and appalled by what most people think is good, strive to be better
tangibly getting better – learn about planning and strategy and exploit the opportunities, write good bug reports as developers really value this, be good at communicating what needs to be done and where we are going, take more responsibility and display competence in basic technical skills
intangibly getting better – be in demand for your testing service, have good problem solving ability
use external communities to develop your testing skills
work as though your favourite person in testing was coming to visit
need to be able to justify your work – is your testing defensible?
use repeatable or intermittent bugs as a clue to something bigger – don’t ignore the anomalies
testing is like journalism – need to do crazy things to get the story, move towards the issues, people need the news today not tomorrow
need to have a technical curiosity about what is going on in the community – what is coming down the pipe, what are the people that have the ability to change things doing?
Overall this was a refreshing session to see a passion in testing and improving skill, with some excellent sound bites along the way.
before you can motivate a team you need to ask yourself how motivated you are – what gets you up in the morning about testing
know your testers – give your testers a testing challenge to understand how they test, also understand what they want to get out of testing
important that your test team knows that you believe in them and that they are being listened to, important that they get excited about testing again
testers are paid to think – test scenarios often go against that
think about, for every test, how it is adding value to the company
testers need to take responsibility – make and defend decisions
you sometimes need to let go of your own goals – the team need to feel empowered
exploratory testing – the tester needs to decide when it is good enough, this is the way testing is and it is hard to estimate - session based test management (SBTM) and Rapid Reporter (enter your charter/objective – time stamps and records test sessions)
I really enjoyed this session, although it reminded me how many organisations still have large separate test teams.
Test Planning for Mobile Application Projects
Jonathon Kohl delivered this session, based on some Techwell articles (part 1 and part 2).
implications – power, display size, portability, connectivity, radios, large number of devices
less power than a PC – multitasking can freeze memory, interactions with O/S can have a big impact, kinetic input (tapping, touching, pinching) can have strange behaviours, needed to test using physical movement to replicate locking
connectivity – strange things happen when moving between WiFi, 3G and 4G, driving also causes issues
distribution – you do not have control of distribution in app stores, read the guidelines and understand the timelines early
mobile project issues – time pressures due to market competition, smaller applications, constant change in environments, handsets and software, very programmer centric environments so planning, testing, etc. is viewed as a boat anchor, lots of competition, high risk if your application does not work as expected
testers need to prove their worth as rigid approaches will leave you behind
key is to focus on test execution rather than planning, because everything is going to change anyway
need a strategy on how you are going to test, what devices you are going to buy, how are you going to manage the devices/cables because they go missing easily (had to chain cables to a hubcap!)
find out the platforms you are targeting so you can procure equipment
emulators are useful for basic testing, but it is better to use a real device of the target platform; the developers would have used the emulator anyway
supporting iOS 3 to iOS 4.1 resulted in 104 combinations across multiple devices, etc. – classification trees are good to explain permutations and combinations
automation is still in its infancy – not as nice as web applications at this point
devices are being exploited to do combined activities so need to exploit this in testing
we use these devices in environments where we do not use a PC – they are addictive and are part of our lives
testing will involve leaving the office and moving around to mimic what the users are doing – determine high value because everybody will want to do this testing!
tricky to get devices that you are targeting - standing in line for the iPhone!
may need to target different carriers and plans as technologies can be different
think about logistics of storage, charging, etc…
ergonomics are an issue when testing mobile devices – shorter work days, can be painful on fingers, people are 25% less productive on these devices than PCs
health is an issue because devices are shared and illness spreads fast – hand sanitizers, wiping devices after use, washing hands frequently
need to factor in training as there are lots of ways to use devices
taking screen shots is a lot more painful than web applications
usability testing – no standards unfortunately, look for user emotions, perceived lack of performance, one of the most important things on these devices
performance testing – no real tools; you can jailbreak iOS, some emulators have rudimentary tools but can affect performance of the device; use stopwatches, spoof the headers, emulate on a machine using small memory footprints and look for speed
security is often a trade-off with performance
can automate using emulator in a browser, tools are rudimentary, vendors are clamouring in the space, Opera has a mobile mode
planning needs to be a parallel activity, do just enough in regulated environments, video can be good to replace test cases, need to meet their intent and needs but rather than giving them what they ask for give them something better
research your customers for your scenario tests – how they will use the app, are they locals or visitors, is it easy to understand outside context (eg. train schedules)
trick – search “[your product] sucks” to find and exploit common problems
allow time to keep up-to-date with platform changes
remember to test technology like GPS, graphics, camera, video, sound, messaging, data
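The combination explosion mentioned in the notes above (104 combinations from supporting iOS 3 to iOS 4.1 across devices) is straightforward to enumerate in the classification-tree spirit: pick the dimensions that matter, cross them, then choose a risk-based subset rather than running everything. The dimensions and values below are illustrative only, not the actual 104 combinations from the talk.

```python
from itertools import product

# Hypothetical classification dimensions for a mobile test matrix.
dimensions = {
    "device":  ["iPhone 3GS", "iPhone 4", "iPad"],
    "os":      ["iOS 3.1", "iOS 4.0", "iOS 4.1"],
    "network": ["WiFi", "3G", "offline"],
}

# Full cross product: 3 * 3 * 3 = 27 combinations.
combinations = list(product(*dimensions.values()))
print(len(combinations), "total combinations")

# A classification tree helps argue for a smaller, risk-based subset;
# here we single out the offline cases as an example of a risky branch.
risky_subset = [combo for combo in combinations if combo[2] == "offline"]
print(len(risky_subset), "offline combinations")
```

Even this toy matrix shows why rigid test planning falls behind on mobile: adding one more OS version or network state multiplies the total, so the plan has to describe how you choose combinations, not list them all.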
My presentation from Agile 2011 that I delivered with Adrian Smith called “The Speed To Cool: Agile Testing and Building Quality In” is available on Slideshare.
Ensuring that the approach to testing and quality is understood and appropriately valued in an agile world can be a struggle for many organisations, especially when resources are limited and our customers are expecting business value in a timely manner. In this session we will define what quality means and share a number of tools for measuring it, discuss approaches to improving the skills, empowerment and role of testing in the organisation, and share why testing is the coolest role on the team and why it is everyone’s responsibility.
Some of the comments on Twitter included:
@BrianGress: We tend to test only what we can see. #agile2011 @adrianlsmith
@tonyrockyhorror: @smithcdau Speed to Cool was best talk I’ve seen all week. It will take a mighty effort to top it. #agile2011
Ainsley started walking the circle to explain the day and how open space works, but frankly it made me feel a little dizzy! She went on to explain that Harrison Owen invented the open space idea when he noticed the real content at conferences was the passionate conversations. The rules of open space are:
whoever shows up are the right people
do not hang on to pre-conceived ideas
it starts when it starts
discussion does not need to be over until it’s over
The law of mobility and responsibility (also known as the law of two feet) is if you are not learning or contributing where you are, go some place where you will. Also, butterflies and bumblebees cross pollinate ideas.
NUnit – Liz Keogh – were using Fitnesse but it added another level of complication, so they wrote a DSL that separates tests to make them easier to read; WiPFlash is the automation tool, examples are on the website; can call the fixtures from another testing tool like Fitnesse; capture scenarios on a wiki first to get the best out of the automation tool
SpecFlow – Christian Hassa – similar to Cucumber, scenarios written as steps that are bound to execution, uses Gherkin parser (this is a plus as a number of tools use this)
SpecLog – maps of your product backlog, capture results of collaboration with the business (Jeff Patton’s story maps), data stored in a single file, stories are initially mapped to a feature file but ultimately get linked to a feature tree
SpecRun is under development currently, not bound to SpecFlow or test runner/execution, currently Windows only
The Smallest Federated Wiki – Ward Cunningham – JSON for data scrubbing, thin columns to display well on mobile, refactoring is the number one edit so allow it to drag and drop refactor, fit for any analytic or outcome-oriented endeavor, sponsored by Nike, under very early development, meant to take spreadsheet data to the next level
I was lucky enough to be a reviewer on Specification By Example by Gojko Adzic, and the final version was recently released to print by Manning. And I was stoked to see not only my name in the acknowledgements, but that my quote made it to the cover of the book. The following is my brief review and notes from the book.
“I love this book. This is testing done right.” That is my quote on the back cover of the book, and I meant every word of it. Having been a quality advocate in the agile space for a few years now, this is the first book I have read in a long time which had me nodding my head all of the way through, as it resonated with my ideas on how development teams need to reconsider specifications and testing.
The book starts out by summarising why specification by example is so important and outlines some key patterns for success and then, through examples throughout the book, steps through the patterns pointing out the warning signs along the way. The key steps are to ensure the culture is fit, then approach specification in a collaborative manner, use examples and automate, and finally evolve a living document / specification.
I really appreciated the fact that the examples were not just the run of the mill greenfield Java web applications that are used in most books. There is a good sampling of different organisations, most of which are using this technique on existing legacy applications on a variety of different platforms. The book is an easy read for the entire team, which means it can (and should) be required reading for the developer, tester, analyst and project manager. I have encouraged many of my teams to take a look at the book, and a couple of my colleagues have indicated this book helped convince and reinforce why this approach is so valuable.
My only concern when reviewing was that the title of this book may not stand out to testers and developers (not perhaps as much as Acceptance Test Driven Development or ATDD might). Currently the community has a number of similar approaches with similar names, although I must acknowledge that the specification by example tag has grown on me over the last few months.
The book does not expend much effort talking about tools in this space, by design. I think this makes the book more readable and accessible to a wider audience, but it also suggests to me that there is still a gap for a good text that matches specification by example to particular tools like Concordion, Fitnesse and the like.
Overall, this book is a definite must read for any teams (particularly agile teams) who are trying to balance or find a decent approach to specifications and testing. It is a good balance of patterns and real case studies on how testing and specifications should be approached in an agile world. It would make my list of Top 5 must read testing books and Top 10 must read agile books. And now I know what the proper name is for the cats eyes that are embedded in the freeway!
Finally, I had some other suggestions for summaries for the book that did not make it to the cover, but they are just as representative of my feelings about the book:
“One of the best Agile related books I have ever read. Buy it, read it, recommend it to your colleagues.”
“This book sums up the right way to attack requirements and testing while delivering to your customer. A must read for all agile teams.”
“I loved this book. I could not stop raving about it to my colleagues. It’s testing done right”
Here are my key notes from the book:
a people problem, not a technical one
building the product right and building the right product are two very different things, we need both to be successful
living documents – fundamental – a source of information about system functionality that is as reliable as the programming language code but much easier to access and understand
allows easier management of product backlogs
proceed with specifications only when the team is ready to start implementing an item eg. at the start of an iteration
derive scope from goals – business communicate the intent and team suggest a solution
verbose descriptions over-constrain the system – how something should be done rather than just what is to be done
traditional validation – we risk introducing problems if things get lost in translation between the business specification and technical automation
an automated specification with examples, still in a human readable form and easily accessible to all team members, becomes an executable specification
tests are specifications, specifications are tests
consider living documentation as a separate product with different customers and stakeholders
may find that Specification By Example means that UAT is no longer needed
changing the process – push Specification By Example as part of a culture change, focus on improving quality, start with functional test automation, introduce a new tool, use TDD as a stepping stone
changing the culture – avoid agile terminology, management support, Specification By Example a better way to do UAT, don’t make automation the end goal, don’t focus on a tool, leave one person behind to migrate legacy scripts (batman), track who is/isn’t running automated tests, hire someone who has done it before, bring in a consultant, introduce training
dealing with signoff and traceability – keep specifications in a version control system, get signoff of living documentation, get signoff on scope not specifications, get signoff on slimmed down use cases, introduce use case realisations
warning signs – watch out for tests that change frequently, boomerangs, test slippage, just in case code and shotgun surgery
F16 – asked to be built for speed but real problem was to escape enemy combat – still very successful 30+ years later
scope implies solutions – work out the goals and collaboratively work out the scope to meet the goals
people tell you what they think they need, and by asking them ‘why’ you can identify new implicit goals they have
understanding why something is needed, and who needs it, is crucial to evaluating a suggested solution.
discuss, prioritise and estimate at goals level for better understanding and reduced effort
outside-in design – start with the outputs of the system and investigate why they are needed and how the software can provide them (comes from BDD)
one approach is to get developers to write the “I want” part of the storycard
when you don’t have control of scope – ask how something is useful, ask for an alternative solution, don’t only look at lowest level, deliver complete features
collaboration is valuable – big all team workshops, smaller workshops (three amigos), developers and analysts pairing on tests, developers review tests, informal conversations
business analysts are part of the delivery team, not customer representatives
right level of detail is picking up a card and saying ‘I’m not quite sure’, it pushes you to have a conversation
collaboration – hold introductory meetings, involve stakeholders, work ahead to prepare, developers and testers review stories, prepare only basic examples, overprescribing hinders discussion
one of the best ways to check if the requirements are complete is to try to design black-box test cases against them. If we don’t have enough information to design good test cases, we definitely don’t have enough information to build the system.
feature examples should be precise (no yes/no answers, use concrete examples), realistic (use real data, get realistic examples from customers), complete (experiment with data combinations, check for alternate ways to test) and easy to understand (don’t explore every combination, look for implied concepts)
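As a sketch of what precise, realistic and complete examples can look like, here is a Cucumber-style scenario outline; the bookshop domain and the concrete figures are invented for illustration:

```gherkin
# Concrete examples instead of yes/no answers: each row is real,
# checkable data (domain and numbers are illustrative only).
Scenario Outline: Delivery cost depends on order size
  Given a customer with <books> books in the cart
  When the customer checks out
  Then the delivery cost is <cost>

  Examples:
    | books | cost  |
    | 1     | $4.00 |
    | 5     | $0.00 |
```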
whenever you see too many examples or very complicated examples in a specification, try to raise the level of abstraction for those descriptions
illustrate non-functional requirements – get precise performance requirements, use low-fi prototypes for UI, use the QUPER model, use a checklist for discussions, build a reference example for things that are hard to quantify (such as fun) to compare against
good specifications – should be precise and testable, not written as a script, not written as a flow
watch out for descriptions of how the system should work, think about what the system should do
specifications should not be about software design – don’t couple them tightly with code, work around technical difficulties in the automation layer, don’t let them get trapped in user interface details
specifications should be self-explanatory – descriptive title and short paragraph of the goal, understood by others, not over-specified, start basic and then expand
specifications should be focussed – use given-when-then, don’t explicitly detail all the dependencies, put defaults at the technical layer but don’t rely on them
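A minimal Gherkin sketch of a focused given-when-then specification (the domain details are invented): the scenario states only what matters to the outcome, leaving dependencies and defaults to the technical layer.

```gherkin
# One behaviour, no incidental detail about screens, data setup
# or dependencies (illustrative domain).
Scenario: Free delivery for VIP customers
  Given a VIP customer with 5 books in the cart
  When the customer checks out
  Then delivery is free
```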
define and use a ubiquitous language
starting with automation – try a small sample project, plan upfront, don’t postpone or delegate, avoid automating existing manual scripts, gain trust with UI tests
managing test automation – don’t treat it as second-grade code, describe validation processes, don’t replicate business logic, automate along system boundaries, don’t check business logic through the UI
automating user interfaces – specify interaction at a higher level (logging rather than filling out the login page), check UI functionality with UI specifications, avoid record and playback, setup context in a database
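One way to sketch “specify interaction at a higher level”: the scenario talks about logging in, while a single helper owns the field-filling details. All class and method names here are hypothetical illustrations; `FakeBrowser` stands in for a real driver.

```ruby
# Hypothetical stand-in for a real browser driver: records fields
# and flips logged_in when the right password is submitted.
class FakeBrowser
  attr_reader :fields, :logged_in

  def initialize
    @fields = {}
    @logged_in = false
  end

  def fill_in(name, value)
    @fields[name] = value
  end

  def click(_button)
    @logged_in = (@fields["password"] == "secret")
  end
end

# High-level action used by a "Given I am logged in" step definition:
# the specification never mentions field names or buttons.
def log_in(browser, user, password)
  browser.fill_in("user", user)
  browser.fill_in("password", password)
  browser.click("Log in")
end

browser = FakeBrowser.new
log_in(browser, "alice", "secret")
```

If the login page changes, only `log_in` changes; every scenario that says “Given I am logged in” is untouched.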
test data management – avoid using pre-populated data, use pre-populated reference data, pull prototypes from the database
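A minimal sketch of “pull prototypes from the database”: start from a known-good reference record and override only the attributes a test cares about. The in-memory `REFERENCE_CUSTOMERS` store is a hypothetical stand-in for a real pre-populated reference table.

```ruby
# Hypothetical reference data, as it might be pulled from the database.
REFERENCE_CUSTOMERS = {
  "standard" => { name: "Jane Example", country: "AU", credit_limit: 1000 }
}

def prototype_customer(kind, overrides = {})
  # merge returns a new hash, so tests never mutate the shared reference row
  REFERENCE_CUSTOMERS.fetch(kind).merge(overrides)
end

# A test that only cares about the credit limit overrides just that field.
vip = prototype_customer("standard", credit_limit: 50_000)
```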
Botts’ dots are the lane markers on roads that alert you when you drift out of your lane – continuous integration has that function in software, and run it with Specification By Example and you have continuous validation
reducing unreliability – find most annoying thing and fix it, identify unstable tests, setup dedicated validation environment, automated deployment, test doubles for external systems, multi-stage validation, execute tests in transactions, run quick checks for reference data, wait for events not elapsed time, make asynchronous processing optional, don’t use specification as an end to end validation
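“Wait for events, not elapsed time” can be sketched as a polling helper that checks a condition until a deadline, instead of a fixed `sleep`. The helper name and parameters are ours, not from any particular library.

```ruby
# Poll a condition until it becomes true, failing after a timeout,
# rather than sleeping for a guessed duration (illustrative helper).
def wait_until(timeout: 2, interval: 0.05)
  deadline = Time.now + timeout
  loop do
    return true if yield
    raise "timed out after #{timeout}s" if Time.now > deadline
    sleep interval
  end
end

# Usage: the block returns true once the asynchronous work completes.
done_at = Time.now + 0.2
wait_until { Time.now >= done_at }
```

The test passes as soon as the event happens, so the suite is both faster and less flaky than one tuned with fixed sleeps.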
faster feedback – introduce business time, break long tests into smaller modules, avoid in-memory databases for testing, separate quick and slow tests, keep overnight tests stable, create a current iteration pack, parallelise test runs
managing failing tests – sometimes you can’t fix tests – create a known regression failures pack, automatically check disabled tests
easy to understand documentation – avoid long specifications, avoid lots of small specifications for a single feature, look for higher level concepts, avoid technical automation concepts
consistent documentation – evolve a ubiquitous language, use personas, collaborate on defining the language, document building blocks
organize for easy access – by stories, functional areas, UI navigation routes, business processes, use tags instead of URLs
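Tag-based organisation might look like this in a Cucumber feature file (the tag names are invented for illustration):

```gherkin
@checkout @ui @current-iteration
Feature: Delivery pricing
  Tags let the same feature be found by functional area (@checkout),
  by automation style (@ui) and by iteration pack (@current-iteration),
  instead of being linked to by URL.
```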
Ensuring that the approach to testing and quality is understood and appropriately valued in an Agile world can be a struggle for many organisations, especially when resources are limited and customers are expecting business value in a timely manner. In this session Craig Smith will define what quality means, share a number of tools for measuring it, discuss approaches to improving the skills, empowerment and role of testing in the organisation, and explain why testing is the coolest role on the team and everyone’s responsibility.
Some of my other colleagues also presented on the night, including Ben Sullivan and Brent Acworth, who spoke on BDD and some work they are doing on an open source framework for JBehave, and Craig Aspinall, who spoke on Automated Black Blob Testing.