Featured

Inviting Change beats Imposing Change

You know the adage “People resist change.” It is not really true. People are not stupid. People love change when they know it is a good thing.

No one gives back a winning lottery ticket.

What people resist is not change per se, but loss. When change involves real or potential loss, people hold on to what they have and resist the change.

Ron Heifetz in The Practice of Adaptive Leadership

And yet … in complex and far-reaching organisational change, there will inevitably be losers as well as winners. If I am going to be a loser, and change is imposed from above — the usual approach — it is totally understandable for me to resist the change, and blame my oppressors: “This change sucks, it’s making my life worse — I hate you.”

Imposing change is asking for resistance! But there is an alternative …

Inviting change

On the other hand, if leaders wish to invite participation in the change process, they can begin by being transparent and vulnerable about the realities of the situation:

  1. “We need to change because …” ,
  2. “This is what we seek to achieve …”, and
  3. Acknowledge that there is likely no perfect solution that makes everyone better off.

So rather than impose a solution from above, I recommend throwing open the challenge to the whole group — since we will all be affected — to explore possibilities and come up with the best and fairest solution.

  1. We want to get the most creative ideas, and as many wins as possible
  2. Where there are downsides, we want them to be as fairly distributed as possible

In this way, not only does the organisation draw on the wisdom of the crowd, it generates buy-in, and the opportunity to work through the negatives.

Instead of having downside aspects imposed from above, I may end up “taking one for the team”, with much better understanding, and consent. And in some cases compensation may be part of the solution, e.g. if downsizing and redundancy is part of the solution that emerges.

Do leaders shirk responsibility by not designing the outcome? Far from it: their role shifts. Instead of imposing a solution, they

  1. articulate the challenge,
  2. supply additional context and perspective,
  3. engage and empower effective facilitators,
  4. participate in exploring options, and
  5. help hold open the co-creative space in which solutions can emerge.

Open Space

What kind of structure works best for this kind of exercise? I recommend running an Open Space internal unconference.

I highly recommend Prager Consulting to anyone wanting to build or refine their agile processes as well as using the Open Space approach to build your action plan.

Shaun Coppard, Asia Pacific Services Director, LabWare

Open Space is a method for self-organising and running a meeting or conference, where participants are invited to focus on a specific, pressing purpose. In our case: initiating change.

In contrast to conventional conferences, where who will speak at which time is scheduled in advance, Open Space sources its agenda from the participants once they are physically present at the live event venue. In this sense Open Space is participant-driven and self-organised.

An Open Space unconference sets up a strong container in which self-organisation can succeed:

  1. An inspiring theme, crafted by the leaders in consultation with the facilitators
  2. An opening circle in which participants are warmly welcomed
  3. An explanation of how the day will work: including the (fairly minimal) rules of Open Space
  4. A call for topics from any participant — volunteers must be prepared to convene a session on their topic
  5. Self-organisation of the agenda: topics are combined, sequenced, and scheduled by the conveners to make an enticing program with multiple, parallel tracks
  6. The sessions take place, with participants self-selecting where they go
  7. A closing ceremony to coalesce the key learnings

My favourite aspects of Open Space are:

  • The Law of Two Feet: The obligation to leave a session where you feel you are neither contributing nor benefiting
  • The Injunction to “Be Prepared to be Surprised”

One notable aspect of Open Space in the context of change is that it sets up the opportunity for change champions to spontaneously emerge during the course of the day. The Law of Two Feet removes the obligation to sit through sessions — instead participants discover what energises them.

By gathering learnings and unlocking energy, the potential for real change is maximised.

Learn more

You can learn more about Open Space here, and may also be interested in Open Space Agility and Inviting Leadership.

I am available to facilitate Open Space internal conferences in and around Melbourne for Agile, transformational change, and other challenges.

Team focus factor

One of the pre-requisites for a group to mesh into a real team is a common purpose or focus. Without clear focus, prioritisation is difficult, and energy and enthusiasm readily dissipate.

And yet … in complex work environments there are inevitably many other bits and pieces that need to be done beyond the #1 thing, and if we neglected these completely our organisations would fall to pieces.

As a coach I work with many teams that struggle with the cognitive overwhelm of low focus, and senior managers who understandably want teams to deliver high-value pieces of work rather than “a bit of this and that”. By measuring focus (or lack thereof) we can have conversations that lead to real improvement, usefully blending qualitative insight and creativity with quantitative simplification. We can get more focussed about getting more focussed!

For organisations moving to Agile teams this quantitative view of focus can be instructive. For example, the popular Scrum approach pretty much demands a high focus factor (70% to 80%) to work properly. Too often it is imposed willy-nilly, leading to reasonable push-back. If the starting focus factor is lower than (say) 60%, please consider starting with Kanban instead, then track and attempt to raise your focus factor over time. As you do, the various Scrum practices will make more sense in your context. Alternatively, you can try raising the focus factor first and go straight to Scrum, but this can be risky: what downsides will you create by trying to jump it up straight away?

Regardless, you will likely want to increase your focus factor (if it’s low) to improve cohesion and reduce scatteredness.

A very high focus factor can be a problem too. For teams at 90% or higher, check whether they’re living in a bubble, disconnected from the rest of the organisation.

Calculating the Focus Factor

Quantifying your focus factor is simple. Just measure (or estimate) how much work you do in whatever units you like and apportion it amongst different activities.

  1. Major focus: How much work does your team devote to the biggest item? Call that B.
  2. Total work: How much work does the team do in total? Call that T.
  3. Focus factor: Take the ratio B / T and express it as a percentage.

Examples

  • 30 units of work / week on a new project; 10 units on supporting past projects: 30 / 40 = 75% focus factor
  • Support work across 5 different systems, with 30% of calls devoted to the most troublesome one: Focus factor = 30%
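
The calculation above takes only a few lines of Python (a sketch; the function name and structure are my own illustration):

```python
def focus_factor(biggest_item_work, total_work):
    """Focus factor: the share of total work devoted to the single
    biggest item, expressed as a percentage (B / T * 100)."""
    if total_work <= 0:
        raise ValueError("total work must be positive")
    return 100 * biggest_item_work / total_work

# Example from above: 30 units/week on a new project,
# 10 units on supporting past projects.
print(focus_factor(30, 30 + 10))  # → 75.0
```

Use whatever units your team already tracks (story points, hours, ticket counts); the ratio is what matters.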

Does Focus Factor scale up and down?

Why, yes it does!

Is your organization or department pursuing lots of disjointed objectives?

An individual example: Coaching four different teams? Focus factor ~ 25%

Focus factor can be measured at any scale, and compared across scales. A group of specialists may contain individuals with high individual focus factors, yet have low “team” focus. I wouldn’t necessarily apply the same target percentages at the individual level: juniors often need more focus while they build their skills, whereas more experienced people need to be across more diverse and complex demands — lower focus at the task level, but hopefully coming together in a good cause!

In team-level Agile we favour T-shaped people (generalising specialists) who will focus mainly on their speciality (main focus), while also being able to pinch-hit and collaborate elsewhere. At scale, a team-of-teams will do better to have a higher level joint focus so that they collaborate more closely with nearby teams (same tribe, ART, etc.) than more distant ones.

So what?

If your team has a higher purpose or charter mapped out, is that where your focus lies?

If your focus factor is too low for comfort, you’re probably feeling scattered and may have motivation issues. Start problem-solving! What can you do to improve your focus? Track and publicise your results.

Taking action to raise the focus factor can help reduce excessive cognitive load, a peril of complex knowledge work.

If you’re running a team-of-teams (e.g. an ART in SAFe or a Tribe) what’s the picture like at scale? Are some of your teams super-focussed while others are scattered? What does this tell you about the system? Can you evolve your structure at scale to not only raise focus factor, but also reduce outliers?

Conclusion

Team focus factor is an easy-to-measure metric that tells you a bit about how your team is going. It can motivate improvement, guide choice of Agile framework (Scrum or Kanban?), and give insights at scale.

More broadly, this is an example of a light-weight measure that is designed to give insight for indirect improvement rather than being directly tied to bottom-line results.

Scrum can be great (or terrible) for teams

I have something of a love-hate relationship with Scrum. Let’s break that down …

Great Scrum

Scrum that helps:

  1. Scrum can act as a container in which a loose work-group can develop into a collaborative, high-performing team.
  2. Scrum can scaffold rapid learning and delivery cycles via the cadence of the daily scrum and the longer cadence of one to four week sprints.
  3. Scrum can help a team to focus on a main goal and prioritise effectively.
  4. Scrum can act as a container for continuous improvement.
  5. Scrum can help a team flip from scope-boxing to time-boxing.
  6. Scrum can act as a thin project management layer that wraps around technical practices.
  7. Scrum can be adopted incrementally and be adapted to fit your context.
  8. Scrum can reduce the number of meetings by folding most of them into the standard events: planning, daily scrum, review, retrospective, and refinement.
  9. Scrum can help a team remain customer and stakeholder focussed through once-a-sprint reviews and re-planning.
  10. Scrum can help a team to keep up a sustainable pace, by learning to estimate and slice up work to fit inside the sprint.
  11. Scrum can help reduce stress and perfectionism by encouraging iteration on work items.
  12. Scrum can help foster self-organisation and collaborative leadership by sharing responsibility and accountability.
  13. Scrum can give wonderful, actionable insights to help teams and organisations when used as a silver mirror, rather than as a silver bullet.

Terrible Scrum

Scrum that hurts:

  1. Scrum can be misused/abused as a whip to beat the team, forcing them to commit to too much work each and every sprint. Or worse, an increasing amount of work, by mandating an ever-increasing velocity.
  2. Unmodified Scrum works poorly when there are many significant streams of work, rather than one main focus.
  3. Unmodified Scrum works poorly when most pieces of work have external dependencies making it difficult to finish a piece within a sprint.
  4. Scrum can fall down when the group is highly specialised, with little interest or motivation in cross-skilling.
  5. Scrum can increase the number of meetings that team-members must attend, when the Scrum events are added to, rather than replace, existing meetings.
  6. Scrum retrospectives can devolve into whinging sessions, when there is a lack of deeper investigation and subsequent adaptation.
  7. Scrum becomes excessively rigid when the Scrum Guide is taken at face value and all components of Scrum are treated as mandatory.
  8. Scrum can be misused as an excuse for lack of discipline when Inspect and Adapt is used as an excuse to do “whatever”.
  9. Scrum can be misused as an excuse for why “Agile doesn’t work here” because it was a poor choice to start with.
  10. Scrum can be rejected prematurely when it fails to magically fix everything immediately.

When working with a group I like to start by assessing their context, needs, and pains — and building some rapport. Depending on what I diagnose I’ll most often recommend starting with either vanilla Scrum or minimal Kanban. Over time we’ll make changes and incorporate other relevant practices to co-create a fit-for-purpose Agile approach.

Great comments and further discussion on LinkedIn.

Intrinsic Motivators: set up your CAMP

Dan Pink popularised Autonomy, Mastery, and Purpose as the big three intrinsic motivators, but we can go one better by adding (or restoring) Connection into the mix:

  • Connection: People need to experience a sense of belonging and attachment to other people. Examples: feeling part of a team, having a friend or buddy at work, sharing successes (and failures).
  • Autonomy: Our desire to be self-directed and make decisions. Example: having a say in how you work and not just doing what the boss says or following a strict process.
  • Mastery: The urge to improve. Example: acquiring new skills and refining existing ones.
  • Purpose: Doing something that has meaning and is important. Examples: making a positive difference beyond getting paid — for customers, for colleagues, or to society. A more nuanced take on Purpose with lots of examples from Simone Maus.

Why Intrinsic Motivators?

Intrinsic motivators are at work whenever we do something because, at least in part, we want to do it: we have an affinity or attraction for the task itself. The motivation comes from within.

Extrinsic motivation, by contrast, comes from the outside: carrots and sticks, rewards and punishments. This can work for basic and mechanical tasks by helping us focus. Fear and competition can do that!

But for more sophisticated tasks, especially those requiring creativity and cooperation, excessive extrinsic motivation is distracting and can be downright destructive.

For knowledge work especially, over-dependence on extrinsic motivators is a bad bet.

Why add Connection?

In an era where teamwork and collaboration are ever-more-important, Connection is the intrinsic motivator that helps us bond and succeed together.

Pink chose to focus on Autonomy, Mastery, and Purpose, drawing on Deci and Ryan’s Self-Determination Theory, which also emphasises three intrinsic factors:

  • Autonomy ✓,
  • Competence (rebranded as Mastery), and
  • Relatedness (Connection).

Do you see the switch? Pink dropped Relatedness/Connection in favour of Purpose! While Purpose is a fine addition, Pink left out the most interpersonal of the motivators.

It’s time to bring Connection back!

P.S. How do I use this and is there more?

Exercise: With a team or work group (e.g. in an Agile retrospective) brainstorm what’s working and what isn’t, and then sort into four categories: Connection, Autonomy, Mastery, and Purpose. See what insights emerge, and come up with a few actions or experiments to build on strengths and address deficiencies.

Extension: Notice that some things don’t fit? Try adding three more motivators — Status, Certainty, Fairness — drawn from David Rock’s SCARF model. If you find this combination works better, consider adopting SCRAMPF as an extended amalgam (credit: Andrew Long suggested this).

To fix Legacy Code you need a cocktail of techniques

Legacy code — code without good automated tests or, equivalently, code that developers are afraid to change — cannot be fixed with a magic wand or one single approach. You need to put good automated tests in and refactor to restore modularity, but it’s a Catch-22: you can’t add good unit tests to spaghetti code without refactoring first, but you can’t refactor messy code first without good automated tests.

Legacy code is a lose-lose proposition, damaging external and internal quality:

  1. From a product point of view Legacy Code slows down development, increases defects, saps morale, and reduces the ability of the organisation to seize opportunities.
  2. From a developer perspective it’s slow and unpleasant to work with and increases stress.

Legacy Code arises because early in product development things aren’t so bad: developers can hold the ideas of a small system in their head(s), and it makes sense not to take the extra steps around testing and modularity because the priority is to determine whether the new product is fit for purpose.

The problem is that few teams and organisations have the discipline to throw away the initial prototype once fit has been proven. In our excitement we start adding more features, and before you know it … there’s a big ball of Legacy Code.

And the genie is increasingly difficult to squeeze back into the bottle.

Cocktail time!

No one technique will get you out of this jam, but I have found that in most situations a suitable combination of approaches — much like how a cocktail of treatments is needed to effectively treat HIV — can work very well indeed.

A technique comparison / combination chart

Here’s a comparison and combination chart of basic and advanced techniques that relate to automated testing, and improving, clarifying and simplifying design through refactoring.

| Technique | Exercises code | Localises defects | Improves design | Corner cases | Legacy code |
|---|---|---|---|---|---|
| Code, then write tests | ✔ | ½ | | ½ | |
| Test Driven Design | ✔ | ✔ | ✔✔ | ½ | |
| Property Based Testing | ✔✔ | ½ | ½ | ✔✔ | ✔ |
| Golden Master | ✔ | ½ | ½ | ✔ | ✔✔ |
| Design by Contract | | ✔✔ | ✔✔ | ✔ | ½ |

Attributes (columns) of various techniques (rows) that relate to automated testing and refactoring of legacy code

Legend:    ½: minor benefit;      ✔: good (with limits);      ✔✔: leading edge

Attributes

  • Exercises code: Drives the system and runs some sort of tests.
  • Localises defects: Indicates where the code needs to be changed
  • Improves design: Aids refactoring, especially in improving modularity
  • Corner cases: Helps find difficult to reproduce bugs
  • Legacy code: Helps untangle legacy code

Techniques

  • Code, then write tests: Test last, often driven by code-coverage requirements. Inferior to TDD, because it doesn’t drive good design. A naive approach to automation.
  • TDD (Test-Driven-Design/Development): Write an automated test, make it pass by writing the code, check that all tests still pass, refactor to clean up the code, repeat. Requires discipline (pairing helps). Good for greenfields projects or adding new features. Can’t fix legacy code, because its refactoring step relies on tests that legacy code lacks.
  • Property Based Testing: Creates and runs thousands of random tests to check invariants of the system. When a test breaks, the PBT library reduces it to a simple version to aid with debugging. An example of an invariant: in a banking system money should neither be created nor destroyed, so any legal sequence of transactions between n accounts should have the same total funds at any stage. Great for tracking down difficult-to-find defects, corner cases, and even intermittent defects in complex systems. [Related to model-based testing.]
  • Golden Master Testing (also known as Characteristic Testing): Treat the existing Legacy System behaviour as correct, throw a large set of random data at it, and record a text file of the output (the Golden Master). After making a small refactor — a true refactor cleans up code, but does not alter external behaviour — replay the test data and check that the new output matches the Golden Master. If there’s a difference we need to roll back and try again. If it matches we can be statistically confident that nothing was broken.
  • Design by Contract: Specify pre-conditions and post-conditions of system functions (and optionally class invariants) by systematically adding assertions to existing code. The pre-condition says what a function expects, and the post-condition expresses what it promises. A pre-condition violation means that the calling code has a defect; a post-condition violation means that the function itself has a bug. For example: a square-root function expects a non-negative real number (the pre-condition), and it returns a result that is non-negative and when squared is equal to the original argument within error (the post-conditions). These conditions can be inferred from the requirements and checked by the computer. Breaking either triggers an exception. Developers who systematically write down pre-conditions and post-conditions before implementing their functions and classes tend to write well-thought-out, modular, maintainable code.
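
The square-root example above can be sketched with plain assertions in Python — a lightweight stand-in for a full Design by Contract library; the function name and tolerance are illustrative:

```python
import math

def checked_sqrt(x, tolerance=1e-9):
    # Pre-condition: a violation here means the *calling* code has a defect.
    assert x >= 0, "pre-condition violated: argument must be non-negative"

    result = math.sqrt(x)

    # Post-conditions: a violation here means *this function* has a defect.
    assert result >= 0, "post-condition violated: result must be non-negative"
    assert abs(result * result - x) <= tolerance * max(1.0, x), \
        "post-condition violated: result squared must equal the argument within error"
    return result

print(checked_sqrt(2.0))  # → 1.4142135623730951
```

A dedicated contracts library adds niceties such as class invariants and the ability to disable checks in production, but plain assertions capture the core discipline.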

Notice that each technique has strengths and weaknesses. Fortunately, we can combine them to good effect, and the comparison chart helps with that.
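
The banking invariant described earlier can be illustrated with a hand-rolled miniature of property-based testing. (Real PBT libraries such as Hypothesis add generation strategies and automatic shrinking of failing cases; all names here are my own illustration.)

```python
import random

def apply_transfer(accounts, src, dst, amount):
    """Transfer funds between accounts: the system under test."""
    if accounts[src] >= amount:
        accounts[src] -= amount
        accounts[dst] += amount

def check_conservation_property(runs=1000, seed=42):
    """Property: no legal sequence of transfers creates or destroys money,
    so the total across all accounts is invariant."""
    rng = random.Random(seed)
    for _ in range(runs):
        accounts = [rng.randint(0, 100) for _ in range(5)]
        expected_total = sum(accounts)
        for _ in range(rng.randint(1, 20)):
            src, dst = rng.randrange(5), rng.randrange(5)
            apply_transfer(accounts, src, dst, rng.randint(0, 50))
        assert sum(accounts) == expected_total, (accounts, expected_total)
    return True

print(check_conservation_property())  # → True
```

Introduce a bug in `apply_transfer` (say, forgetting to debit the source account) and the property fails within a handful of random runs.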

There are other techniques, like mutation testing, model-based testing, and Pact testing, that look promising, but you don’t need all the techniques in your cocktail, just enough to get the job done!

I’ve left out UI testing, because it mainly serves a different purpose: exercising the UI and demo-ing functionality from a simulated user’s point of view. Unfortunately, UI tests tend to be quite brittle, slow to run, and slow to develop, so I recommend using them sparingly.

Delicious Cocktails

Here are some combinations I like:

  • Greenfields Martini: TDD + Property Based Testing. For new projects, TDD gives coverage and modularity. PBT finds corner cases and improves reliability.
  • Golden Goose: For Legacy Code use Golden Master to safely refactor, restore modularity, and then write tests. Use TDD for new features.
  • Gin and Contracts: Golden Master for Legacy Code, and Design by Contract to improve design, modularity, and localise defects.
  • Property Pina Colada: To isolate and reproduce hard-to-find defects in an existing system, use Property Based Testing, and sprinkle with Contracts as you fix defects.
  • Microservice Margarita: Golden Master + TDD + Pact Tests (related to Design by Contract) to gradually peel off microservices from an existing legacy monolith.

Conclusion

Beyond the amusing names, my point is serious. One technique is not enough. As with any craft you need to master multiple techniques, plus the experience and insight to choose the right tools for the job.

We need developers and technical leaders who understand many complementary techniques, can combine them and learn new ones, and can create and sustain technical cultures in which this work gets done, not postponed.

Another dimension is to educate our product and business partners and other non-technical stakeholders, so that we don’t fall collectively into the trap of destroying our future by skimping on quality at the wrong time.

Need help with Legacy Code?

Untangling Legacy Code

Legacy code (code without good automated test coverage) is an insidious burden that slowly strangles software development velocity, kills development team morale, and ultimately destroys Business Agility by reducing speed and quality, increasing risk, and eroding culture.

Legacy code: code without automated tests, or equivalently, code that developers are afraid to change

MICHAEL FEATHERS & J B RAINSBERGER

Where does Legacy Code come from?

When a new piece of software is still small, defects are relatively easy to find and fix. This, combined with time-to-market pressure, encourages shortcuts, such as relying on manual testing and tolerating messy design. If the product doesn’t work out and it gets junked, there’s no harm.

But if the product lives on, the right thing to do is to rewrite the early version, adding automated tests and cleaning up the internal design before it is too late. Few organisations, however, have the maturity and discipline to forgo short-term development speed (the customers or stakeholders are crying out for new features!) in favour of long-term technical health.

And the frog boils slowly. It won’t be immediately evident that our short-term focus is seriously damaging our future prospects.

Why the standard “fixes” fail

The standard approaches to fixing Legacy Code once the codebase has become large perform predictably poorly. The most straightforward attempt to put the genie back in the bottle — refactoring to introduce the automated unit tests that would have been easy to write when the codebase was small — no longer works!

Introducing automated unit tests is no longer feasible because of lack of modularity in legacy codebases. In order to introduce unit tests we need to be able to isolate a unit of code sufficiently to test it. But modularity degrades rapidly in codebases in which automated testing is omitted: introducing tests early helps maintain modularity and aids design; without it our legacy codebase almost certainly degrades into a mess of spaghetti and duplication. Instead of building on a virtuous cycle, we’re stuck with trying to reverse a downward spiral.

Restoring that modularity means that the code needs to be refactored in order to introduce tests. But refactoring without automated tests in large codebases introduces new defects that lead to surprising and expensive breakages. To refactor safely, we need automated tests, but that’s exactly what we don’t have! You see the problem: we want to introduce tests, but we need to refactor safely first, for which we would need the tests that we don’t yet have — it’s a Catch-22.

What’s needed instead of going straight for small, low-level unit tests is to proceed indirectly. We must find a way to introduce a few quick-and-dirty higher level automated tests to provide some degree of safety in the initial refactoring. This creates a coarse safety net that allows for the big investment in remedial work — i.e. the much needed refactoring and introduction of fine-grained unit tests — to proceed.

There are a couple of popular tactics that are frequently attempted at this point. Unfortunately both are fatally flawed and only give partial relief. They survive in our industry because they take a long time and give the illusion of action:

  1. Automated UI tests could help, except that they are slow to write, slow to run, and their brittleness makes them expensive to maintain.
  2. A big rewrite from scratch takes too long and meanwhile two systems need to be maintained in parallel.

Both become feasible in modified forms — a smaller number of UI tests for testing the UI and boosting stakeholder confidence, and incremental re-writing of key components — after undertaking the superior approaches to taming legacy code which I will outline next.

How to really fix Legacy Code

The first option is to re-write early. Throw away the proof-of-concept or prototype and use good practices from eXtreme Programming (XP) like pair-programming, test-driven design/development (TDD) and automated 10 minute builds to never get into a mess of legacy in the first place.

However, since this can only work for small, well-understood codebases, we need a second option: a mashup of advanced techniques that can be situationally adapted. I teach the following three, in addition to the foundational techniques of TDD and pair-programming:

  1. Golden Master testing uses randomness to provide a usable quick-and-dirty test that provides the necessary safety net for refactoring to unit tests.
  2. Design by Contract introduces pre- and post-conditions that can pinpoint defects with even greater precision than unit tests and strengthen the system design.
  3. Finally, Property Based Testing again uses randomness, this time with invariant program properties to find corner cases and intermittent errors.
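
As a minimal sketch of the Golden Master recipe, assuming a trivial stand-in for the legacy behaviour (the legacy function, file name, and seed are all illustrative):

```python
import random

def legacy_format_price(cents):
    # Stand-in for untested legacy behaviour we must preserve exactly.
    return "$%d.%02d" % (cents // 100, cents % 100)

def record_golden_master(path, runs=1000, seed=7):
    """Record current behaviour over many random inputs: the Golden Master."""
    rng = random.Random(seed)
    with open(path, "w") as f:
        for _ in range(runs):
            f.write(legacy_format_price(rng.randint(0, 10**6)) + "\n")

def matches_golden_master(path, runs=1000, seed=7):
    """After a refactor, replay the same inputs and diff against the recording.
    Any difference means the 'refactor' changed external behaviour: roll back."""
    rng = random.Random(seed)
    with open(path) as f:
        recorded = f.read().splitlines()
    replayed = [legacy_format_price(rng.randint(0, 10**6)) for _ in range(runs)]
    return replayed == recorded

record_golden_master("golden_master.txt")
print(matches_golden_master("golden_master.txt"))  # → True
```

The fixed seed makes the random input stream reproducible, which is what lets the recording and the replay be compared line for line.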

None of these techniques is individually a panacea, but contextually appropriate combinations lead to dramatic improvements in code quality and robustness, and greatly reduce stress levels for developers and technical leaders by making it feasible to pay down technical debt by incrementally refactoring Legacy Code.

Learn more: About Taming Legacy Code


Talk to Dan about Taming Legacy Code