The Agile Political Landscape Series: PMI, IIBA, ISTQB and the Right-Leaning Independents

Please note: This is article 8 in a series that explores mapping agile certifications to what Daniel Luschwitz and I have coined the Agile Political Spectrum. The previous blogs in the series are available here:

  1. What if Agile Certifications were a Political Party?
  2. The Agile Political Landscape Series: PRINCE2 Agile and One Nation
  3. The Agile Political Landscape Series: LeSS and The Greens
  4. The Agile Political Landscape Series: DSDM and Katter’s Australian Party
  5. The Agile Political Landscape Series: DevOps and Teal Independents
  6. The Agile Political Landscape Series: Kanban and the Australian Democrats
  7. The Agile Political Landscape Series: ICAgile and the Left-Leaning Independents

A note on our political comparisons: These political comparisons are playful metaphors designed to illustrate philosophical positions on the agile spectrum. No certification body was harmed in the making of this analysis.

Every political spectrum has its right-leaning independents. Not party loyalists, not ideologues, but professionals who built their authority within an established discipline, created a constituency around it, and when the political winds shifted, didn’t abandon their ground. They absorbed the new language, translated it into terms their community already understood, and carried on.

In the agile certification world, that role belongs to PMI, IIBA and ISTQB.

The Project Management Institute (PMI), the International Institute of Business Analysis (IIBA), and the International Software Testing Qualifications Board (ISTQB) weren’t born from the agile movement. Each was built on the conviction that project management, business analysis, and software testing were distinct professional disciplines, each deserving its own body of knowledge, its own structured credential, and its own seat at the table. All three had developed substantial, globally recognised certification architectures on that premise long before agile became the dominant conversation in solutions delivery. And when that conversation became impossible to ignore, all three made the same move: they added agile to what they already offered, on their own terms.

PMI was the earliest and perhaps the most deliberate about it. The Project Management Institute had spent decades building the PMP into arguably the most recognised project management credential on the planet. When agile began reshaping how solutions were delivered, PMI had a problem: the profession it represented was being told, by increasingly loud voices, that its core assumptions were wrong. Rather than engage with that critique, PMI launched the PMI Agile Certified Practitioner, the PMI-ACP, in 2011. The message was clear: agile is a toolkit, and project managers can learn to use it. The PMI-ACP covers a broad range of agile and hybrid approaches, drawing on methodologies from across the spectrum to demonstrate that agile competency is an addition to the project manager’s repertoire, not a replacement for it. The role stayed intact. The credential adapted around it.

The absorption deepened from there. In 2019, PMI acquired Disciplined Agile from Scott Ambler and Mark Lines, and weeks later FLEX from Al Shalloway’s Net Objectives, bringing in some of the more thoughtful independent voices in the agile community. Then came PMBOK 7 in 2021, the most complete move of all. The Guide abandoned the architecture that had defined the profession for decades – ten knowledge areas, forty-nine processes, the full procedural edifice – and restructured itself around twelve principles and eight performance domains, with a new vocabulary of value delivery, stewardship, and tailoring. Every principle could be reconciled with agile, lean, traditional, or hybrid ways of working. Presented as modernisation, it was also a reframing broad enough that almost no position within the delivery community sat outside it. The Guide no longer committed itself to a method; it committed itself to being the container within which methods live.

IIBA and ISTQB followed the same instinct. Because the question agile was really asking, whether specialist roles like the BA and the dedicated tester needed to exist in their traditional form within self-organising teams, was precisely the question neither body had any institutional interest in answering honestly. So they answered a different one. They asked how agile delivery changes the context these disciplines operate in, and built certifications around that. IIBA demonstrated this instinct directly: business analysis became “agile analysis”, the BABOK grew an agile extension, and a new certification emerged to recognise competency in delivering analysis within an agile context. ISTQB followed the same pattern. The vocabulary changed. The disciplines they described did not.

They weren’t alone in this approach. Across the professional credentialling landscape, a number of established bodies followed the same instinct, extending their frameworks to acknowledge agile without disturbing the structures those frameworks were built to protect. The pattern is consistent: take the new vocabulary, demonstrate how your discipline remains relevant within it, and issue a credential that bridges the two worlds. It’s not cynical. It’s what professional bodies do. They exist to conserve something, and they’re good at it.

So who are PMI, IIBA and ISTQB’s political counterparts?

As our discipline-first, reframe-rather-than-reform agile certifications, they map to the right-leaning independents: professionals who enter the political arena not to change the system but to make sure their constituency is protected within it. Independent of the major parties, pragmatic in their dealings, and deeply conservative in the one area that matters most: the continued relevance of the professional community they represent.

The parallels are direct. Right-leaning independents don’t arrive in parliament with a transformation agenda. They arrive with a specific brief: protect these jobs, represent this industry, make sure this community isn’t left behind by whatever the major parties decide to do next. PMI, IIBA and ISTQB carry exactly that brief. Their mandate isn’t to reimagine how solutions are built. It’s to ensure that project managers, business analysts, and testers retain a credentialled, respected place in whatever delivery model their organisations adopt. Agile is the context. It is not the cause.

This is also why the agile content within these certifications tends to feel like an additional module rather than a rewrite of the core. It is worth acknowledging that PMI made genuine strides here: the shift to a principles-based model in their later standards represented real philosophical movement, not just rebranding. But even with that evolution, the agile extensions across all three bodies still sit on top of established role structures; the knowledge domains and the professional boundaries remain structurally intact. Agile is introduced as a context the discipline now operates within, not a lens that re-examines whether the discipline, in its current form, is still the right tool. The project manager still manages. The BA still analyses. The tester still tests. The world changed around them, and the certification acknowledges that. The professional identity at the centre did not move.

Right-leaning independents are notably pragmatic about language. When the political winds shift, they update their messaging before they update their positions. All three bodies demonstrated this instinct. All three speak agile fluently, and they mean it, but the fluency is in service of protecting the ground they already hold. The translation is genuine. The priorities underneath it are unchanged.

For practitioners, that’s not necessarily a problem. A project manager holding the PMI-ACP brings something real to an agile context: breadth across multiple approaches, familiarity with hybrid environments, and a structured lens for managing complexity. An experienced business analyst who understands stakeholder facilitation, business value, and how to navigate complex organisational constraints brings real value to an agile team. A tester with genuine capability in risk, coverage, and quality thinking is an asset in any delivery environment. These certifications offer a structured bridge between deep existing expertise and the agile context it now operates within. That’s worth something, and it shouldn’t be dismissed.

For organisations, understand what you’re investing in: practitioners equipped to apply their discipline within agile delivery, not practitioners equipped to question whether that discipline, as currently structured, is what the team actually needs. In environments where project management, BA, and testing functions are well established and role clarity matters, these may be exactly the right credentials. In environments genuinely rethinking their operating model, the role boundaries these certifications reinforce may be part of what you’re trying to move beyond.

Right-leaning independents serve a real constituency and they serve it honestly. They’re not in parliament to lead a revolution. They’re there to make sure that when the revolution arrives, the people they represent still have a seat at the table. PMI, IIBA and ISTQB do exactly the same thing. They didn’t reshape agile. They made sure agile had room for the professionals who were already in the room. Whether that’s the credential you need depends entirely on whether your goal is to fit agile around your existing structure, or to let agile challenge it.

This article was originally published on LinkedIn by Daniel Luschwitz.

Rapid Software Testing

A couple of years ago I was given an awesome opportunity to watch James Bach deliver his Rapid Software Testing course in Adelaide. At the time I was working with Sharon Robson from Software Education to help re-develop the Agile Testing course for the Agile Academy, and she thought it might be good for us to sit in the back. The two-day course was awesome (one of the best courses I have ever attended), although the animated debate between James and Sharon over breakfast about ISTQB is one I will never forget either.

One of the great things about the course is that the notes are freely available from the Satisfice site (slides and appendices), although it is the insight and passion from James that make the course extremely worthwhile. Unfortunately I did not earn my “testing stars” from James on this course, but I did learn a lot. I recently dug out my notes from the course and here they are below.

  • the secret – “watch people test” – then follow the patterns
  • traditionally testers muddled through – as you got more experienced you just muddled better
  • there are lots of practices yet to be written about
  • James is “walking through an orchard ripe with apples”
  • “nobody expects a tester to be right about anything” – we are in the evidence and inference business
  • tester tip – did you do “booja booja” testing? Your answer should be “not by that name”
  • method of concomitant variation – vary x and observe y (eg. dimmer switches) (John Stuart Mill – A System of Logic)
  • you test under uncertainty and time pressure – if not, you are about to be laid off! Organisations keep testers at a minimum number
  • heuristics – essential to rapid testing, eg. walking into a foreign building – “I’ll know it when I see it”
  • “creep and leap” – leap is the most outrageous test you can do, creep is to gently shatter the pattern in your mind – creep and leap may fail because you don’t leap far enough or you don’t creep enough
  • minimum number of cases has no meaning – infinite – no light flashes when you have finished testing / understand the pattern
  • pattern in the test cases is just the pattern in the test cases, not the program
  • need to leap beyond imagination
  • rapid testing is not about techniques – a way of thinking, a set of skills
  • what do testers do? – they are the “headlights of a project”, don’t need testers in the daylight (no risks)
  • testers don’t ensure quality of a product, they report the quality of the product
  • key definitions: quality is value to some person (who matters), a bug is anything about the product that threatens its value
  • testers represent the people whose opinion matters
  • defect is a bad word legally; not sure it is a defect when you find it, assumes more than you know (emotional word: bug, issue, incident)
  • testing and questioning are the same thing
  • there is a motivating question behind each test (if not, a zombie walk)
  • first principle – know your mission – allows you to test what matters, gets you more focussed
  • we are chasing risk
  • quality criteria – what is important, who are users
  • curse of expertise – people who know a lot, don’t always see a lot (why you need developers and testers)
  • need an oracle / result – otherwise you are just touring (an oracle is a principle or mechanism by which you find a problem)
  • rapid test teams should be a team of superheroes – what is your super power? Seek test teams that have variety
  • critical thinking – “huh”, “really”, “so” – say these words and you are on the road to critical thinking, you have to make assumptions to get work done
  • “huh” = what exactly does that mean?
  • “really” = what are the facts, how do we know it is true?
  • “so” = does any of this really matter, who cares?
  • safety language – this desk “appears” brown, have “not yet seen” a number 127 work, when you see this language your brain keeps thinking about the problem (interim conclusion only)
  • if you have stopped questioning you have stopped testing (and turned yourself into a test tool)
  • videotape your tests – take notes at timestamps, good for audit when you need that
  • The Amazing Colour Changing Card Trick – look from a different angle, view things more than once
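A couple of the ideas above – an oracle as “a principle or mechanism by which you find a problem”, and inexpensive random testing – can be sketched in code. This is my own minimal illustration, not material from the course: `my_sort` is a hypothetical function under test, and Python’s built-in `sorted()` plays the role of a consistency oracle.

```python
import random

def my_sort(xs):
    # Hypothetical function under test - stands in for any implementation
    # whose behaviour we want to question rather than trust.
    return sorted(xs)

def check_against_oracle(trials=100):
    """Use Python's built-in sorted() as a consistency oracle: we don't
    know the 'right' answer for every random input in advance, but any
    disagreement between the two implementations is worth investigating."""
    for _ in range(trials):
        xs = [random.randint(-1000, 1000) for _ in range(random.randint(0, 50))]
        assert my_sort(xs) == sorted(xs), f"disagreement on input {xs}"
    return trials

check_against_oracle()
```

A disagreement doesn’t prove which implementation is wrong – it just raises the “huh?” that starts the questioning.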

  • ask a question without asking a question – make a statement / fact and wait for a reaction
  • model it differently – look at it in a different way
  • need to have the ability to slow down your thinking and go step-by-step and explain/examine your steps and inferences
  • exploratory testing is about trying to de-focus – seeing things in a different way
  • there is no instruction you can write down that won’t require some judgement from a human
  • irresponsible to answer a question without knowing some context – allows you to establish a risk landscape
  • James remembers his testing approach as a heuristic – CIDTESTDSFDPDTCRUSSPICSTMPLFDSFSCURA (his notes go on to explain this one!)
  • when you hear “high level”, substitute “not really”
  • HICCUPPS(F) heuristic – a set of consistency patterns testers can use to justify why something might be a problem: History (something has changed), Image (OK, but something makes us look stupid), Comparable products (like another system), Claims (said in a meeting, hallway), User’s expectations (do you understand users), Product (consistency), Purpose (why and what is it trying to accomplish), Statutes (something legal), Familiarity (a familiar feeling)
  • Oracles – calculator (ON, 2 + 2 = 4; not heuristic: the answer won’t be 5, it won’t burst into flames, a number won’t disappear), Word saving files (came up with 37 alternatives), Notepad (this application can break, Microsoft suggested it was not a bug)
  • Ask for testability – give me controllability (a command line version) and visibility (a text version of the display); when developers say no, send an email so you have documented evidence of why something wasn’t tested or why it takes so long to test
  • ask “is there a reason I have been brought in to test this?”
  • ad-hoc / exploratory does not equal sloppy
  • testing is not the mechanical act but the questioning process, only people who have a goal of 100% automated testing are people who hate to test, don’t hear about automated programming (what about compiling?)
  • everybody does exploratory testing – creating scripts, when a script breaks, learning after a script runs, doing a script in a different way
  • exploratory testing acts on itself
  • “HP Mercury is in the business of avoiding blame”
  • script – to get the most out of an extremely expensive test cycle, for interactive calculations, auditable processes
  • mix scripting and exploration – what can we do in advance and what can we do as we go, James always starts at exploratory and moves back towards scripting
  • use a testing dashboard – break down by key components in the system; all management cares about is a schedule threat, so get to the point; count the number of test sessions (an uninterrupted block of testing time – 90 minutes) as management understand this (session-based test management); the key is simplicity – what does management usually ask for / need (usually a different measure); counts give the wrong impression, numbers out of context mislead, and the number of test cases is useless, so use coverage (0 = nothing, 1 = assessed, 2 = minimum only, 3 = level we are happy to ship) and status (green = no suspected problems, yellow = testers suspect problem, red = everybody nervous)
  • equivalence partitioning – you treat differences as if they are the same, models of technology allow us to understand risk (eg. dead pixels on a button), critical tester skill to slow your thinking down (is that a button?)
  • galumphing – doing something in an intentional, over-exuberant way (eg. skipping down the street), some inexpensive galumphing can be beneficial, takes advantage of accidents to help you test better
  • An Introduction to General Systems Thinking (Gerry Weinberg, 1974) – basic text of software testing
  • many people are hired to fake testing – not to find bugs but to point fingers (“we hired testers”)
  • good testers build credibility
  • testers question beliefs (we are not in the belief business) – cannot believe anything that the developers tell you
  • lots of people can test – like surgery in the 14th century
  • reality steamroller method – maximise expenses from the value that they are going to have – record decisions, do your best to help out, let go of the result, write emails to get your hands clean (helpful, timestamp documented)
  • get all of the documentation and create a testing playbook – diagrams, tables, test strategy
  • The Art of Software Testing (Glenford Myers) – introduced the triangle exercise
  • calendar exercise – visualise your test coverage whenever you can, plot times on a grid, bar chart, wheel
  • choose a number between 1 and 20 – 17, 7, 3 – 20 is the least popular – what about pi, floating points – choose because they look less random
  • bugs with data types (eg. string in JavaScript) and bugs in tables and labels not found by boundary tests – this is when you need to run inexpensive random testing
  • anti-random testing – heuristic – every molecule trying to get away from the other molecule – as every test is trying to do something different
  • Crazy Ivan Testing Manoeuvre – defocussing approach, looking for things you weren’t looking for (The Hunt for Red October)
  • finding bugs – testing exhaustively, focus on the right risk, indulge curiosity, use a defocussing strategy
  • curiosity – urge to learn something you don’t need to know
  • good usability checklist (medical devices) – IEC 60601-1-4
  • base testing on activities (what a user does) rather than on test cases
  • playbook – table – goal, key, idea, motivation, coverage, etc… – is just a list of ideas
  • you can’t check an always – but you can test aggressively for confidence
  • stopping heuristic – piñata heuristic (when you have enough candy), cost vs value (when cost exceeds value), convention (what is expected of you), loss of mission, ship
  • basic boundary testing is not just one over / one under – that is “fairy tale” boundary testing
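To make that last point concrete, here is a minimal sketch (again my own illustration, with a hypothetical `validate_age` function, not course material) of creeping around the boundaries one under / on / one over, and then leaping to outrageous values:

```python
def validate_age(age):
    # Hypothetical function under test: intended to accept
    # integer ages from 0 to 130 inclusive.
    return isinstance(age, int) and 0 <= age <= 130

# "Creep": one under, on, and one over each boundary.
creep_values = [-1, 0, 1, 129, 130, 131]

# "Leap": outrageous values that shatter the pattern in your mind.
leap_values = [-(10**18), 10**18, 0.5, float("nan"), True, None, "42"]

def run_boundary_checks():
    results = {}
    for value in creep_values + leap_values:
        try:
            results[repr(value)] = validate_age(value)
        except Exception as exc:  # a crash is also information
            results[repr(value)] = f"raised {type(exc).__name__}"
    return results
```

Running it shows the creep values behaving as intended, but the leap value `True` slipping through – in Python, `bool` is a subclass of `int` – which is exactly the kind of data-type bug the notes suggest one-over / one-under testing alone won’t find.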