Questions About Test Frameworks: Q&A Part #3 with J.B. Rainsberger

This is the third chapter of our three-part Q&A blog series with J. B. Rainsberger. In this chapter he addresses questions about “Questions About Test Frameworks”. The first chapter and the second chapter are in our blog in case you missed them.

On June 3, 2020 J.B. Rainsberger spoke in our remote Intro Talk about managing the various kinds of uncertainty that we routinely encounter on projects that involve legacy code. He presented a handful of ideas for how we might improve our practices related to testing, design, planning, and collaboration. These ideas and practices help us with general software project work, but they help us even more when working with legacy code, since legacy code tends to add significant uncertainty and pressure to every bit of our work. Fortunately, we can build our skill while doing everyday work away from legacy code, then exploit that extra skill when we work with legacy code.

J. B. Rainsberger helps software companies better satisfy their customers and the business that they support.

Our next remote course, Surviving Legacy Code, runs from 14-17 September 2020.

If the code base is too old even for the available test frameworks, how do you handle it?

**Testing does not need frameworks. Testing never needed frameworks.** You can always start by just writing tests and refactoring them. If you do this long enough, you will extract a testing framework. If you’ve never tried it, then I recommend it! Kent Beck’s _Test-Driven Development: By Example_ included this exercise.

Every test framework began life with `if (!condition) { throw Error("Test failure.") }`. If you can write this, then you can build a testing framework; if this suffices, then you don’t need a testing framework. Start there!
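To make that concrete, here is a hedged sketch of the same idea in Python (the names `check` and `test_addition` are invented for illustration); the same few lines work in any language that has functions and exceptions:

```python
def check(condition, message="Test failure."):
    # The entire "framework": signal failure by raising an error.
    if not condition:
        raise AssertionError(message)

def test_addition():
    check(1 + 1 == 2, "1 + 1 should equal 2")

# Run the test by calling it; reaching the last line means it passed.
test_addition()
print("All tests passed.")
```

Everything a mature framework adds (discovery, reporting, fixtures) grows from refactoring repeated code like this.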

If you can execute one part of the system in isolation from the rest, then you can write unit tests. In the early days of web browsers, we could only execute Javascript in the browser, but even so, we could (and did!) write unit tests without frameworks. We merely had to run those tests in a browser window. Eventually, someone decided to run Javascript outside the browser. This made it _easier_ to write microtests for Javascript code, but we were writing tests long before NodeJS existed.

If you can invoke a function (or procedure or division or block of code) and you can signal failure (such as by raising an error), then you can write tests without waiting for someone else to build a framework.

In addition, you don’t need to write your tests in the same language or environment as the running system. Golden Master technique helps us write tests for any system that offers a text-based interface. Any protocol could help us here: for example, think of HTTP as “merely” a special way of formatting requests and responses with text. If you have (or can easily add) this kind of interface or protocol to your system, then you can write tests in any language that might offer a convenient test framework. Use Python to test your COBOL code. Why not?
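As a hedged sketch of that idea, the Python below drives a program through a plain text interface and compares its output against a stored “golden” file. The one-line subprocess is a stand-in for whatever system you actually have (a COBOL batch job, an HTTP endpoint, a CLI tool), and the file name is invented:

```python
import subprocess
import sys
from pathlib import Path

def run_system(input_text):
    # Drive the system under test through its text interface.
    # A Python one-liner stands in here for any program that
    # reads text and writes text, whatever language it is in.
    result = subprocess.run(
        [sys.executable, "-c", "print('HELLO, ' + input())"],
        input=input_text, capture_output=True, text=True,
    )
    return result.stdout

def matches_golden_master(actual, golden_path):
    golden = Path(golden_path)
    if not golden.exists():
        # First run: record the current output as the golden master.
        golden.write_text(actual)
        return True
    # Later runs: any change in the text output fails the check.
    return actual == golden.read_text()

assert matches_golden_master(run_system("WORLD"), "greeting.golden")
```

The test code never needs to share a language with the system under test; it only needs to speak the same text protocol.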

Finally, not all testing must be automated. As I wrote earlier, programmers have a strong habit of forgetting alternatives to techniques that they’ve found helpful. If you don’t know how to automate your tests easily, then don’t automate them yet. Instead, make them repeatable and document them. One day, someone will have a good idea about how to automate them.

You may have to write your own test framework, but that can prove a daunting task.

In addition to what I wrote in the previous answer, I encourage you to follow the general advice about building any software with a Lightweight (Agile, Lean, …) approach: build the first feature that you need, then start using it, then add more features one at a time. You don’t need to build a fully-featured testing framework before you start to benefit from it. Start with `if (!assertion) throw Error()` and then use it! The testing framework SUnit was built incrementally. All the testing frameworks you know began from there. You can do it, too. Merely start, then take one step at a time.
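As a hedged sketch of what such an early step might look like in Python (all names invented): after bare assertions, the first “feature” most people want is a runner that survives a failure and reports a summary:

```python
def run_tests(tests):
    # Run every test, keep going past failures, and report a
    # unified summary: often the first real "feature" that a
    # home-grown framework needs after plain assertions.
    failures = []
    for test in tests:
        try:
            test()
        except AssertionError as error:
            failures.append((test.__name__, error))
    passed = len(tests) - len(failures)
    print(f"{passed} passed, {len(failures)} failed")
    return failures

def test_upper():
    assert "abc".upper() == "ABC"

def test_strip():
    assert "  x ".strip() == "x"

run_tests([test_upper, test_strip])
```

Set-up hooks, test discovery, and nicer reporting can each arrive later, one step at a time, exactly when duplication in your tests demands them.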

There are testing frameworks for COBOL and NATURAL. What could be older?

Indeed, the “framework” portion of testing relates to identifying tests, collecting test results, and reporting them in a unified way, as well as adding standard mechanisms for “set up” and “tear down”. We don’t need those things to start writing tests, although eventually we will probably want to have them. **Simply start writing tests, then remove duplication in any way that your programming language allows.** I don’t know what might be older than COBOL or NATURAL.

➡️ Also read our last two Q&A Blogposts with J.B. Rainsberger, Part #1 “Managing the Uncertainty of Legacy Code” and Part #2 “The Risks Related to Refactoring Without Tests”! Follow us on Twitter or LinkedIn to get new posts.

The Risks Related to Refactoring Without Tests: Q&A Part #2 with J.B. Rainsberger

This is the second chapter of our three-part Q&A blog series with J. B. Rainsberger. In this chapter he addresses questions about “The Risks Related to Refactoring Without Tests”. The first chapter is in our blog in case you missed it.

What should we say to project planners who are afraid to let us refactor without tests because some folks on our team are not very good at refactoring and make mistakes? How can we convince them that it can work for good programmers?

First, I recognize that if I were the project planner, then I would worry about this, too! I probably don’t know how to judge the refactoring skill of the programmers in the group, so I wouldn’t know whom to trust to refactor without tests. Moreover, I probably can’t calculate the risk associated with refactoring without tests, so I wouldn’t know when to trust _anyone_ to refactor without tests, even if I feel confident in their skill. Once I have thought about these things, it becomes easier to formulate a strategy, because I can ask myself what would make _me_ feel better in this situation. I encourage you to ask yourself this question and write down a few ways that you believe you could increase your confidence from the point of view of the project planner. I can provide a few general ideas here.

I encourage you to build trust by telling the project planner that you are aware of the risks, that you care about protecting the profit stream of the code base, and that you are prepared to discuss the details with them. It often helps a lot simply to show them that you and they are working together to solve this problem and not that you are doing what helps you while creating problems for them.

I would ask the project planners what specifically they are worried about, then match my strategies to their worries. For example, microcommitting provides one way to manage the risk of refactoring without tests, because it reduces the cost of recovering from a mistake. At the same time, if the project planner worries about different risks than the ones I have thought about, then my strategies might not make them feel any more secure! If I know more about which risks affect them more or concern them more, then I can focus my risk-management work on those points, which also helps to build trust.

I would emphasize that we do not intend to do this as a primary strategy forever. We don’t feel comfortable doing it, either! Even so, we _must_ make progress _somehow_. We refactor without tests because it would be even more expensive to add “enough” tests than to recover from our mistakes. Of course, we have to be willing to explain our judgment here and we have to be prepared to be wrong in that judgment! I am always prepared to take suggestions from anyone who has better ideas, but outside of that, they hired me to do good work and make sound decisions, so if they don’t trust me, then I must try to earn their trust or they should give my job to someone that they trust more. I don’t mean this last part as a threat, but merely as a reminder that if they hire me to do the job, but they never trust me, then they _should_ hire someone else!

How about pair-refactoring?

I love it! Refactoring legacy code is often difficult and tiring work, so pair-refactoring fits well even in places where “ordinary” pair programming might not be needed. Refactoring legacy code often alternates periods of difficulty understanding what to do next with long periods of tedious work. Working in pairs significantly increases the profit from both of those kinds of tasks.

You also need this refactoring-without-tests skill, to effectively refactor your tests!

Maybe! I don’t say you _need_ it, but it would probably help you. Your production code helps you to refactor your tests: if you change your tests and they now expect the wrong behavior, then your production code will fail that test for “the right reasons”. It doesn’t provide perfect coverage, but it helps more than you might expect. In that way, the production code helps to test the tests.

Moreover, tests tend to have simpler design than the production code. This means that we might never need to refactor tests in certain ways that feel common when we refactor production code. I almost always write tests with a cyclomatic complexity of 1 (no branching), so the risk when refactoring tests tends to be much lower than when refactoring legacy code. This makes refactoring tests generally safer.
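To illustrate (with invented names; `discounted_total` stands in for production code), a cyclomatic-complexity-1 test is just a straight line of statements:

```python
def discounted_total(amount):
    # Hypothetical production code: 5 off for orders above 30.
    return amount - 5 if amount > 30 else amount

def test_discount_applies_above_threshold():
    # Straight-line test body: no branching, no loops, so the
    # test itself has cyclomatic complexity 1.
    assert discounted_total(40) == 35

def test_no_discount_at_threshold():
    assert discounted_total(30) == 30

test_discount_applies_above_threshold()
test_no_discount_at_threshold()
```

Each branch in the production code gets its own flat test, rather than one test that branches; there is simply less in each test that can go wrong while refactoring it.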

➡️ Also read our two Q&A Blogposts with J.B. Rainsberger, Part #1 “Managing the Uncertainty of Legacy Code” and Part #3 “Questions About Test Frameworks”! Follow us on Twitter or LinkedIn to get new posts.

Managing the Uncertainty of Legacy Code: Q&A Part #1 with J.B. Rainsberger

In this first chapter of our three-part Q&A blog series, J. B. Rainsberger addresses questions that came up during his session.

On June 3, 2020 J.B. Rainsberger spoke in our remote Intro Talk about managing the various kinds of uncertainty that we routinely encounter on projects that involve legacy code. He presented a handful of ideas for how we might improve our practices related to testing, design, planning, and collaboration. These ideas and practices help us with general software project work, but they help us even more when working with legacy code, since legacy code tends to add significant uncertainty and pressure to every bit of our work. Fortunately, we can build our skill while doing everyday work away from legacy code, then exploit that extra skill when we work with legacy code.

Here are some questions that came up during this session and some answers to those questions.

One of the issues is that the legacy code base consists of useful code and dead code and it’s hard to know which is which.

Indeed so. Working with legacy code tends to increase the likelihood of wasting time working with dead code before we feel confident to delete it. I don’t know how to avoid this risk, so I combine monitoring, testing, and microcommitting to mitigate the risk.

Microcommits make it easier to remove code safely because we can recover it more safely. Committing frequently helps, but also committing surgically (the smallest portion of code that we know is dead) and cohesively (portions of code that seem logically related to each other) helps. If our commits are more independent, then it’s easier to move them backward and forward in time, which makes it easier to recover some code that we mistakenly deleted earlier while disturbing the live code less. We will probably never do this perfectly, but smaller and more-cohesive commits make it more likely to succeed. This seems like a special case of the general principle that as I trust my ability to recover from mistakes more, I feel less worried about making mistakes, so I change things more aggressively. When I learned test-driven development in the early years of my career, I noticed that I became much more confident to change things, because I could change them back more safely. Practising test-driven development in general and microcommitting when working with legacy code combine to help the programmer feel more confident to delete code—not only code that seems dead.

Even with all this, you might still feel afraid to delete that code. In that case, you could add “Someone executed this code” logging statements, then monitor the system for those logging statements. You could track the length of time since you last saw each of these “heartbeat” logging messages, then make a guess when it becomes safe to delete that code. You might decide that if nothing has executed that code in 6 months, then you judge it as dead and plan to remove it. This could never give us perfect confidence, but at least it goes beyond guessing to gathering some amount of evidence to support our guesses.
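As a hedged sketch of that heartbeat idea in Python (the marker name and the six-month threshold are illustrative, and a real system would persist the timestamps rather than keep them in memory):

```python
import logging
import time

log = logging.getLogger("heartbeat")

SIX_MONTHS = 182 * 24 * 60 * 60  # seconds; pick whatever threshold you trust
_last_seen = {}

def heartbeat(marker, now=None):
    # Call this from the suspected-dead code path.
    _last_seen[marker] = time.time() if now is None else now
    log.info("Someone executed this code: %s", marker)

def looks_dead(marker, now=None, quiet_period=SIX_MONTHS):
    # Judge the code path as dead once its heartbeat has been
    # quiet for longer than the chosen period.
    now = time.time() if now is None else now
    last = _last_seen.get(marker)
    return last is None or (now - last) > quiet_period

heartbeat("billing.legacy_discount", now=0)
assert not looks_dead("billing.legacy_discount", now=1)
assert looks_dead("billing.legacy_discount", now=SIX_MONTHS + 1)
```

A marker that stays quiet past the threshold becomes a deletion candidate, with microcommits and monitoring as the recovery strategy if the judgment turns out wrong.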

More testing, especially microtesting, puts more positive pressure on the design to become simpler: less duplication, better names, healthier dependencies, more referential transparency. I have noticed a pattern: as I simplify the design, I find it easier to notice parts that look irrelevant and I find it clearer that those parts are indeed dead code. Moreover, sometimes obviously dead code simply appears before my eyes without trying! This makes it safer to delete that code, using the microcommitting and monitoring as a recovery strategy in case I get it wrong.

So not all legacy code adds value to the business… but it is hard to know which part does.

Indeed so. We have to spend time, energy, and money to figure this out. I accept responsibility as a programmer to give the business more options to decide when to keep the more-profitable parts running and to retire the less-profitable parts. As I improve the design of the system, I create more options by making it less expensive to separate and isolate parts of the system from each other, which reduces the cost of replacing or removing various parts. Remember: we refactor in order to reduce volatility in the marginal cost of features, but more-generally in the marginal cost of any changes, which might include strangling a troublesome subsystem or a less-profitable feature area.

The Strangler approach describes incrementally replacing something in place: adding the new thing alongside the old thing, then gradually sending traffic to the new thing until the old thing becomes dead. Refactoring the system to improve the health of the dependencies makes this strangling strategy more effective, which gives the business more options to replace parts of the legacy system as they determine that a replacement would likely generate more profit. As we improve the dependencies within the system, we give the business more options by reducing the size of the smallest part that we’d need to replace. If we make every part of the system easier to replace, then we increase the chances of investing less to replace less-profitable code with more-profitable code.

This illustrates a general principle of risk management: if we don’t know how to reduce the probability of failure, then we try reducing the cost of failure. If we can’t clearly see which parts of the legacy code generate more profit and which ones generate less, then we could instead work to reduce the cost of replacing anything, so that we waste less money trying to replace things. This uses the strategy outlined in _The Black Swan_ of accepting small losses more often in order to create the possibility of unplanned large wins.

What do you think about exploratory refactoring? Do you use this technique sometimes?

Yes, I absolutely do! I believe that programmers can benefit from both exploratory refactoring and feature-oriented refactoring, but they need to remain aware of which they are doing at any time, because they might need to work differently with each strategy to achieve those benefits.

When I’m refactoring in order to add a feature or change a specific part of the code, I remind myself to focus on that part of the code and to treat any other issues I find as distractions. I write down other design problems or testing tasks in my Inbox as I work. I relentlessly resist the urge to do those things “while I’m in this part of the code”. I don’t even follow the Two-Minute Rule here: I insist on refactoring only the code that right now stands between me and finishing the task. Once I have added my feature, I release the changes, then spend perhaps 30 minutes cleaning up before moving on, which might include finishing a few of those Two-Minute tasks.

The rest of the time, I’m exploring. I’m removing duplication, improving names, trying to add microtests, and hoping that those activities lead somewhere helpful. This reminds me of the part of _The Goal_ when the manufacturing floor workers engineered a sale by creating an efficiency that nobody in the sales department had previously thought possible. When I do this, I take great care to timebox the activity. I use timers to monitor how much time I’m investing and I stop when my time runs out. I take frequent breaks—I use programming episodes of about 40 minutes—in order to give my mind a chance to rise out of the details and notice higher-level patterns. I don’t worry about making progress, because I don’t yet know what progress would look like—instead I know it when I see it. By putting all these safeguards in place, I feel confident in letting myself focus deeply on exploring by refactoring. I avoid distracting feelings of guilt or pressure while I do this work. I also feel comfortable throwing it all away in case it leads nowhere good or somewhere bad. This combination of enabling focus and limiting investment leads me over time to increasingly better results. As I learn more about the code, exploratory refactoring turns into feature-oriented refactoring, which provides more slack for more exploratory refactoring, creating a virtuous cycle.

What is your experience with Approval Tests, in cases where writing conventional unit tests might be too expensive?

I like the Golden Master technique (and particularly using the Approval Tests library), especially when text is already a convenient format for describing the output of the system. I use it freely and teach it as part of my Surviving Legacy Code course. It provides a way to create tests from whatever text output the system might already produce.

I get nervous when programmers go out of their way to add a text-based interface to code that doesn’t otherwise need it, only for the purpose of writing Golden Master tests. In this case, checking objects in memory with equals() tends to work well enough and costs less. I often notice that programmers discover a helpful technique, then try to use it everywhere, then run into difficulties, then invest more in overcoming those difficulties than they would invest in merely doing things another way. Golden Master/Approval Tests represents merely another situation in which this risk comes to the surface.

I get nervous when programmers start choosing to write integrated tests for code where microtests would work equally well. When programmers think about adding Golden Master tests, they tend to think of these as end-to-end tests, because they often judge that as the wisest place to start. Just as in the previous paragraph, they sometimes fall into the trap of believing that “since it has helped so far, we must always do it this way”. No law prevents you from writing unit tests using Golden Master/Approval Tests! Indeed, some of the participants of my Surviving Legacy Code training independently discover this idea and use it to great effect. Imagine a single function that tangles together complicated calculations and JSON integration: it might help a lot to use Approval Tests to write Golden Master tests for this function while you slowly isolate the calculations from the JSON parsing and formatting. The Golden Master tests work very well with multiline text, such as values expressed in JSON format, but probably make the calculation tests awkward, compared with merely checking numeric values in memory using assertEquals().
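As a hedged Python illustration of that split (the function names are invented, and the `approved` string plays the role of a stored Golden Master file):

```python
import json

def total_price(count, unit_price):
    # The isolated calculation: a plain in-memory assertion fits best.
    return count * unit_price

def price_report(count, unit_price):
    # The JSON-formatting wrapper: multiline text, a natural fit
    # for a Golden Master comparison.
    return json.dumps(
        {"count": count, "unit_price": unit_price,
         "total": total_price(count, unit_price)},
        indent=2,
    )

# Microtest for the calculation, assertEquals-style:
assert total_price(6, 10) == 60

# Golden-Master-style check for the formatted output:
approved = """\
{
  "count": 6,
  "unit_price": 10,
  "total": 60
}"""
assert price_report(6, 10) == approved
```

While the calculation remains tangled inside the formatting, the Golden Master check protects the whole; once the calculation is isolated, the plain assertion takes over for the numeric part.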

When programmers use Golden Master/Approval Tests, they need to treat it as just one tool in their toolbox. This is the same as with any technique! I tend to treat Golden Master as a temporary and complementary technique. I use it when I focus on writing tests as a testing technique, even though I tend to prefer to write tests for design feedback. Not everyone does this! If you find yourself in the stage where you’re drowning in defects and need to focus on fixing them, then Golden Master can be a great tool to get many tests running early. Once you’ve stopped drowning, it becomes easier to look at replacing Golden Master with simpler and more-powerful unit tests—eventually microtests.

➡️ Also read our two Q&A Blogposts with J.B. Rainsberger, Part #2 “The Risks Related to Refactoring Without Tests” and Part #3 “Questions About Test Frameworks”! Follow us on Twitter or LinkedIn to get new posts.

Classic or agile contracts: How to find the right contract type

Traditional projects and their contracts have a decisive weakness: even though they offer a large measure of supposed budget security, they can hardly keep up with the speed of the fast-moving business world.

Thus, instead of the certainty of knowing when a product will be ready and what functions it will have, you run the danger of receiving an outdated product that no longer meets current requirements.

Especially with complex software problems, classic contracts quickly reach their limits. This is because modern software development is becoming increasingly agile, which means that neither the product to be delivered nor the delivery date is clearly defined at the beginning of the joint work. Contractually, these patterns can only be covered by contracts that make agile collaboration possible.

In this article, you will learn what exactly makes contracts ‘agile’ and whether they are also suitable for your project.

Five crucial differences between classic and agile contracts

First of all, let’s look at the differences between classic and agile contracts.

It can be stated that the two types of contracts differ in five basic points in particular:

  1. Project scope: In a classic contract for work, the project scope is usually fixed for a long period of time. This means that all requirements are collected in a specification document and processed afterward. This is not the case with an agile contract. The changes here can be made after each sprint. This allows the development team to take feedback into account promptly. However, with agile contracts, the basis is a backlog, with the effort being estimated at the beginning.
  2. Project period: In classic contracts, the time of the releases and the milestones is fixed. With agile contracts, the following applies: the project is finished as soon as the product is ready. Of course, you can also define a time period or a fixed number of sprints. You can then go into production with whatever is ready by that time.
  3. Release cycles: The release cycles for classic projects can last from several months to years.  When it comes to agile contracts, there is ideally a prototype after each cycle (sprint), which can be tested.
  4. Budget: With both contract models, the budget can be billed according to T&M (Time & Materials), or it can be fixed. In the agile case, this means budgeting a certain “Run-Rate” for the team.
  5. Control: Output vs. outcome applies here. For example, in the case of classic contracts, it is measured whether the development team has reached the corresponding milestone at time X. With agile contracts, it is checked whether the product meets the customer’s requirements.

In summary, it can be said that in classic contracts, budget security is clearly in focus. Agile contracts give the development team much more freedom to react to the short-term changes and to gather feedback regularly in order to ultimately develop the best product for customers.

How to find the right contract type

The choice between an agile or classic contract depends on two basic factors: the complexity of the project and the way your company works.

Complex projects need agile contracts

An agile contract is not the best choice for every project. It is crucial to think about the tasks that arise in the project and how well you can define them in advance. The following questions can help you with this:

  • Are you sure that the general situation will not change during the project?
  • Are you sure that the value of your company can be achieved exactly the way you have defined it?
  • Do the tasks consist exclusively of recurring activities?
  • Are the risks of the technical implementation low, and can the requirements be clearly formulated?
  • Is the project rather small and short-term, so that you don’t need a team for the support and further development of the product?
  • Are you buying a standardized product that does not require integration into an existing product?

If you can give a positive answer to all these questions, then a classic contract is probably enough. Your project and requirements can be precisely defined in advance. The situation is different if you can only answer “yes” to some of the questions. In this case, it is worth taking a closer look at the agile contracts.

No agile contracts without agile working methods

The complexity of your project is not the only decisive factor. If you are not able to implement agile projects in your company, an agile contract will not get you anywhere either.

In order to develop the best possible product in an agile way, there should be a vendor-customer relationship that allows close collaboration and some adaptability. The basis for this is the set of principles in the Agile Manifesto. The manifesto originally comes from software development; nevertheless, it can also be applied to companies as a whole.

Consider whether you have a basis for agile projects in your company and can guarantee the following points:

  • Feedback Cycles: You are able to implement fast feedback cycles (ideally every 2 to 4 weeks) and provide continuous feedback to the development team.
  • Transparency: You can ensure complete transparency during the project. This means that the development team has access to the backlog, to the progress of the implementation of different features and to the results of each feedback cycle.
  • Variable project scope: You agree that the feedback after each cycle is incorporated into further product development, and that project scope and tasks can be adjusted accordingly. This option must also be specified contractually, for example via a “Changes for free” clause, which allows changes to the backlog as long as they do not involve any additional work.
  • Effective collaboration: You can ensure close cooperation between yourself, the development team, and the end customer. Ideally, the team works in one place to make direct and informal communication possible. It is an advantage if the vendor provides an on-site person who takes an active role in the project (not a management role that only serves as a link to the development team).

If you can implement these points and also have complex projects, an agile contract is probably the right choice. If you have not yet established an agile mindset in your company but still want to run agile projects, you can gradually approach this goal with the help of workshops. Feel free to contact us if you need further information.

When is an agile approach useful?

No two projects are the same. And no two companies are alike. That’s why the same rigid framework conditions cannot always fully apply to the cooperation with your customers and partners.

Consider in advance what framework conditions you need for the respective project and whether your company lives an agile way of working. If you want to implement complex projects, an agile way of working or agile contracts offer a particularly solid basis.

Do you have further questions about the implementation of projects with agile contracts?

Or do you want to develop an environment in your company that enables working with agile methods?

Please contact me via email or on LinkedIn with your questions.

Approval Testing: What It Is and How It Helps You To Manage Legacy Code

Emily Bache is a Technical Agile Coach who helps software development teams get better at the technical practices needed to be agile, including Test-Driven Development, Refactoring, and Incremental Design. Emily is known as the author of the book “The Coding Dojo Handbook”. For the second time, we are organizing a training course with Emily on Approval Testing. In this email interview we asked Emily what counts as legacy code, how to get into approval testing, and what her upcoming book will be about.

What is the optimal way of learning Approval Testing? What is the role of Gilded Rose Kata and other exercises in this process? 

Approval Testing is a style and approach to writing automated tests that changes the way you verify behaviour. Basically, the ‘assert’ part of the test. As with any new tool or approach, it helps to have actual code examples to play with when you’re learning it. Once you start to see it in action then you’re bound to have lots of questions so it’s good to have people around you to discuss it with.

The Gilded Rose Kata is a fun little exercise that I maintain. It actually comes with approval tests, as well as being translated into about 40 programming languages. Whatever your coding background and language preferences, you can try it out and see how it works for yourself. When you’ve done that, you should easily be able to find other people to discuss it with, since it’s quite a popular exercise. For example Ron Jeffries recently wrote 13(!) blog posts about his experience with it.

You talk about refactoring and handling legacy code. What actually is legacy code? How would you define it?

Many developers struggle with code they inherited which has poor design and lacks automated tests. On their own, any one of those difficulties could probably be overcome, but in combination they give developers a kind of paralyzing fear of changing the code. That’s how I would define legacy code: code that you need to change but are afraid to in case you break it.

The antidote to that fear, I find, is feedback. High-quality feedback telling the developer when they are making safe changes. The feedback that gives them the confidence to improve the design and get in control. Approval testing is one way to get that feedback – you create regression tests that give you good information when behaviour changes.

What are the main things one should know before starting working with Approval Testing? 

Since it’s a style of automated testing, it helps to have experience with unit testing already, perhaps with JUnit or similar. Approval Testing is often used in larger-granularity tests too, so experience with tools like Selenium or Cucumber would give you a good perspective, although it works a bit differently. This way of testing also fits extremely well into Agile methods, BDD, and Specification by Example. If you are working in a more traditional process, you may find adding these kinds of tests will help you to increase your agility.

For which situations is Approval Testing the best solution? When shouldn’t it be used? 

If you’re facing legacy code, this can be a great addition to your strategy for getting control. I wouldn’t discount it for new development though, particularly if your system will produce some kind of detailed artifact where the user really cares about how it looks. For example I’ve seen this approach used to verify things like invoices, airline crew schedules, 3D molecular visualizations, and hospital blood test results.

Of course there are situations where I wouldn’t use Approval Testing, for example where the output is a single number – the result of a complex calculation. If you can calculate the expected result before you write the code, testing it with an ordinary assertion is a sensible approach.
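To illustrate that contrast, here is a small sketch of the single-number case (the function and figures are made up for illustration). When the expected value can be computed before the code exists, a plain assertion says everything an approved file would, with less ceremony:

```python
def compound_interest(principal: float, rate: float, years: int) -> float:
    # The expected result can be calculated up front,
    # so an ordinary assertion is the right tool here.
    return principal * (1 + rate) ** years

# 100 at 10% for 2 years: 100 * 1.1 * 1.1 = 121.0
assert abs(compound_interest(100.0, 0.10, 2) - 121.0) < 1e-9
```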

Can Behaviour Driven Development be considered the future of the industry, and Approval Testing an essential part of it? Why is that?

The main priority of BDD is to improve collaboration and communication so we build the right software. In my experience, Approval Testing promotes good conversations. I’m giving a keynote speech at Cukenfest (a conference about BDD) soon, and I’m going to be talking about exactly this topic. For the test automation part of BDD, most teams use the Gherkin syntax with Cucumber or SpecFlow. I think you can use Approval Testing in a similar way.

You have been working on this topic for a while – what excites you about it?

There is so much potential for this technique! I see a lot of legacy code out there, and I see a lot of test cases that are unnecessarily difficult to maintain. If I can spread these testing techniques to even a small proportion of all those places it will make a huge positive difference to the quality of the software in the world.

You wrote a book about Coding Dojo, what can we expect from your follow-up book? 

The motivation for my upcoming book “Technical Agile Coaching” is largely the same as for the previous one – I write for people who want to make a difference and improve the way software is built. In 2011 I published “The Coding Dojo Handbook” which is full of advice and experiences setting up a forum for practicing coding skills. You can see my new book as an expansion of those ideas, seasoned with ten years of additional experience.

The focus of the coaching method I describe in the book is specifically on technical practices and how people write code. There are two main elements to the coaching: first, teaching techniques via interactive exercises and code katas; second, coaching a whole team to work effectively together as they do mob programming.

Online Planning and Collaboration in Multiple Teams (Free 90-minute Remote Workshop)

Ole Jepsen
Enterprise Agile Coach | Scaled Planning Advisor

To accelerate the development of products and services and stay competitive, many teams rely on agile methods.

Until a few weeks ago, it was common to meet face-to-face in large groups in meeting rooms for PI Planning to work through the hurdles of planning and coordination.

Then came Covid-19, and with it the question of how to run these large planning sessions now. Postpone and lose momentum, or run them online?

In this remote workshop, Ole Jepsen will share his experience with online planning, both from a pre-Corona perspective (teams distributed across different countries) and a post-Corona perspective (everyone at home on a laptop).

“Set up the collaboration board exactly as you would set up the physical conference room” – Tip by Ole Jepsen

The good news is that PI Planning can indeed be run well online. Doing so requires good preparation, the right tools, and a few tips and tricks.

Who is this online workshop for?

Ideally, you have experience working with agile methods, Scaled Agile, SAFe, or LeSS, or have taken part in PI planning sessions in the past.

Join this interactive remote workshop and learn how to run your online planning sessions successfully.

You can find more remote workshops on our training page.

Questions & Contact

Milena Krnjic

Note: The workshop will be held in English. The workshop uses the video conferencing solution Zoom as well as the tool Metro Retro. You do not need to install any software to take part. If you do not yet have a Metro Retro account, please create one in advance. A Zoom account is not necessary. For Zoom, the Chrome browser works best.

Agile Teams: A Method to Enable Autonomy by Clarity of Roles

How can you enable teams to take initiative and autonomy by knowing their decision boundaries? In this blog post, I am sharing the concept for a workshop format that you can adapt to achieve that goal.

I used this method to address the following situation: within a classical organization, new cross-functional teams are put together and are now supposed to work in an agile way, but they get stuck at the beginning because they do not know what they are allowed to decide. In addition, the concept of shared responsibility is new to these teams. A second effect occurs when teams are not stuck and are willing to take initiative and decide certain things (an architecture decision, for example), but then their manager is not happy with the decision and overrules it, which also leads to a stuck team.

The problem is that there are unspoken assumptions about what autonomy means between managers and the teams or other stakeholders around the team. We want to reveal those assumptions!

This blog post is based on the talks at the agile tour 2019 and at the ASQF agile night.

Important preconditions in the mindset

A basic precondition is that the organization and all the stakeholders understand the concept of small autonomous teams, or the “law of the small team” as Denning describes it (Denning, 2018). Those teams act aligned with the product and corporate vision and are able to self-direct and self-manage to achieve their goals.

Leaders and managers understand that they should enable those teams by following the principle “push authority to information”, one of the major principles of intent-based leadership, because the teams are the experts in their domain and can make the best decisions.

Source: Intent-Based Leadership Keynote by Jenni Jepsen @ Agile Tour Vienna

To do that, we need to give control to the teams. This is a gradual shift, not a one-off “now you do it all”, because we need to check whether the competence is there in the team.

Source: Intent-Based Leadership Keynote by Jenni Jepsen @ Agile Tour Vienna

Third, it is clear that we can only manage the environment and not the people.

Workshop Format

The goal of the workshop format is to reveal who ultimately decides and how much of these decisions can be delegated to the team. This question can arise between the team and, for example, the former line manager, the department lead, a software architect, or other stakeholders.

Step 1: Key Decision Areas

First of all, we need to collect the most important decision areas where we have faced problems or need clarity. It is important that you do not list all decision areas, as this would probably end up in a huge Excel sheet that nobody uses later.

Key decision areas can be for example:

  • Who is responsible for deciding on vacations?
  • Who can decide whether working from home is a good thing or not?
  • Who ultimately decides about an architectural proposal?
  • Who decides how much a solution proposal can cost?
  • Can we, as a team, invest in experimenting with solutions for a given problem?
  • Can we decide on hiring external consultants?
  • Who is responsible for staffing the team?

Step 2: The RACI matrix

You can skip this step if you only need to clarify the delegation between one role and the team. If you have multiple stakeholders, the RACI matrix can help.

In the columns, you list who is Responsible (R), who is Accountable (A), who needs to be Consulted (C) before the decision is taken, and who needs to be Informed (I) of the decision.

In the rows, you list the key decision areas. It is important that you do not break decisions down further into individual team roles: the team does it as a team, and you do not delegate certain decisions to a particular team member.

| Key decision area           | R                      | A               | C         | I               |
|-----------------------------|------------------------|-----------------|-----------|-----------------|
| Deciding on vacations       | Individual Team Member | Line Manager    | Team      | Team            |
| Home office                 | Team?                  | Line Manager    | Team      | Line Manager    |
| Hiring external consultants | Team                   | Department Lead | Team Lead | Department Lead |

Also, you should try to move the accountability as far as possible to the team as well, not just the responsibility.

Now you have clarity about who is accountable and who should do the work (in most cases, hopefully, the team). Between those two, you can go further and clarify how far the delegation should go with delegation poker.


Step 3: Delegation Poker

Delegation is not an all-or-nothing exercise. It is important to clarify how far a delegation should go. For example, if we hire external consultants, the department lead can expect the team to come up with a potential solution, but he keeps the budget authority and needs to sign it off. That would be delegation level 3.

To clarify this, every team member and the manager (or whichever role is accountable for the decision being discussed) gets a deck of cards. Now everyone decides how far they would expect the accountable person to delegate that decision to the team.

As with planning poker, this usually leads to good discussions and clarifications.

Step 4: Inspect & Adapt

I would suggest that you create an information radiator: a delegation board on the wall where you can see at all times what your delegation rules are. If you need to change it, do so, and if you need to add further key decision areas, do it on the job, for example during team retrospectives.

Hints and Tips

I would not necessarily start with this exercise before you set up the teams. Instead, explain to the teams that you will start collecting key decision areas on the job whenever you see that there is a problem. This avoids endless discussions before there actually is a problem.

If you face the situation that there is no real wish to give autonomy to the team, stop the exercise and work on the reasons first.

The reason I introduced the RACI matrix alongside delegation poker is that delegation poker only allows for two roles, such as a manager and a team playing it, while the RACI matrix can show multiple stakeholders at once.

The goal is not to draw the lines but to reveal hidden assumptions and misconceptions.

Thanks to Jenni Jepsen, who held an inspiring keynote at Agile Tour Vienna 2019 that motivated me to write this blog post.

Check out the upcoming training Intent-Based Leadership with Jenni Jepsen.


Appelo, J. (2016). Managing for Happiness: Games, Tools, and Practices to Motivate Any Team. John Wiley & Sons, Inc.

Denning, S. (2018). The Age of Agile: How Smart Companies Are Transforming the Way Work Gets Done.

6 Questions about Intent-Based Leadership with Jenni Jepsen

Have you never heard about Intent-Based Leadership? Then this post is for you. Jenni Jepsen consults, writes, and speaks worldwide about leadership, teams, and how to make transformations work. She was the keynote speaker at Agile Tour Vienna in 2019 and gives a two-day remote course on “Essential Intent-Based Leadership” this September.

We reached out to Jenni and asked her six questions about Intent-Based Leadership. If you are a manager, director, or leader who wants to create environments where people succeed, then read on!

If someone has never heard about Intent-Based Leadership before, how would you describe it in 150 words?

Intent-Based Leadership™ is fundamentally the language leaders and teams use to communicate at work – the words we use with each other and how we ask questions – in order to give control to people, so the people who are closest to the information are the ones making the decisions. With this leadership paradigm, team members come to the leader describing what they see, what they think, and what they intend to do. With Intent-Based Leadership, the culture of the organization shifts from one of permission and waiting to one of intent and action. Not only does effectiveness increase, people also feel motivated and are happier at work.

As work becomes more cognitive and less physical, Intent-Based Leadership offers a how-to for organizations to redefine what leadership means in a way that creates a workplace where the passion, motivation, engagement, creativity and intellect of each member is maximized.

Are you, as a manager or head of an agile organization, tired of always having to have all the answers? Check out the two-day remote training course Essential Intent-Based Leadership, September 2020 in Vienna.

How, when, and by whom was the concept and methodology of Intent-Based Leadership developed?

The concept of Intent-Based Leadership is the direct result of how David Marquet, a former U.S. Navy submarine captain, turned his ship, the USS Santa Fe, from worst to first in the U.S. Navy. David wrote an amazing book on how it all came to be: Turn the Ship Around!. It’s a great story, even if you skip the leadership tips! When David took over command of the USS Santa Fe, it was at the last minute. He only had three weeks to learn everything about the ship – an impossible task. When he took command, he quickly found out that if he followed the old ways of working, with him giving commands in an environment where he didn’t know everything there was to know about the ship and people following those commands blindly, people might get killed. This was when he decided to keep quiet and ask others to come to him with what they intended to do.

People implementing Intent-Based Leadership don’t have to have all the answers. When we stop “getting people to do things” and instead give control while increasing competence and clarity, we gain more engaged people who have the competency to make decisions, feel ownership and take responsibility.

Practical outcomes of Intent-Based Leadership

How is Intent-Based Leadership related to Agile? Is the methodology based on Agile, and can it only be applied in an agile organization?

When I first read Turn the Ship Around! in 2012 after the book was published, my partner and I (in goAgile) thought “This is it! This is a way of leading that supports Agile ways of working.” Because so much of Agile is about team members taking responsibility, about being self-organizing, about being self-directed and having clarity about where we’re headed and why, in order to make better decisions at every level in the organization. David actually did not know about the Agile community when we first contacted him. Since then, things have, obviously, taken off for David and for Intent-Based Leadership. We’re not the only ones who can see the advantages IBL brings in giving control and increasing organizational clarity and technical competence. In our experience, organizations that combine Agile transformation with Intent-Based Leadership reach their goals faster. It’s because IBL offers real tools to nudge people into new behaviors, and that is the key to lasting change.

Attend our two-day remote training course and learn how to move in an Agile way to a culture where people take initiative and ownership. September 2020 in Vienna.

Can you give an example of how language increases the feeling of empowerment?

There is a lot of talk in organizations about how to empower people. What we know from neuroscience research, is that the only thing we can do is create an environment where people feel empowered. Empowering others is a contradictory statement. It says that I have the power to empower others. That is NOT what we are going for. We want people to have influence and control. And this happens when leaders create an environment where people feel empowered. 

Now, with that said… “I intend to” are the three most amazing, empowering words we can use to increase the feeling of empowerment. Rather than asking permission, just saying “I intend to…” works on both sides. For the person saying it, it is simply informing others about what the person will do. For others, it provides information ahead of time, so there is an opportunity to give more information before the action occurs. Of course, there are lots of other examples of language increasing empowerment; “I intend to” is my favorite.

What is an example of a leadership tool that can be used to create an environment to adopt Intent-Based Leadership?

So one of the great tools from Intent-Based Leadership is called the Ladder of Leadership. It provides some simple questions leaders can ask based on how their people talk with them. For example, if someone says “Please just tell me what to do.” That person is at the lowest level on the Ladder. The leader wants to move them up the Ladder so that they will be more comfortable taking control. The question the leader asks is: “What do you see?” This is the next step on the Ladder. This allows the person to answer in a psychologically safe environment. The leader is asking for observations. Rather than jumping to “What do you intend to do?”, the leader needs to help people up the Ladder gradually. In that way, people become safe with taking more control, and over what is usually a very short time, you can move people up to the level where they come to you with what they intend to do.

Ladder of Leadership

Reading tips: if I’m thinking about attending the training, what should I read or watch to be best prepared? (blog posts, YouTube videos, etc.)

Of course, reading David’s book, Turn the Ship Around! is a great idea.

Here are a couple of other links to watch and read:

Attend our two-day remote training course and learn how to move in an Agile way to a culture where people take initiative and ownership. September 2020 in Vienna.

OnBrand Conference

Finally, a great conference about branding! OnBrand has been around for a couple of years, but this year they really managed to provide a solid experience. It was amazing: from the location and inspiring world-class speakers to the overall tribal feeling outside by the food trucks.

OnBrand ’18 again took place at the Sugar Factory in Amsterdam. This location has a nightclub flair that actually works! For the first few minutes, I had to remind myself that I was not entering my favourite disco in Lisbon, but that it was 8 o’clock in the morning and I was attending the best branding conference in Europe. But once you immerse yourself in this environment, you feel open and eager for new information.

As a brand manager and marketer, I could not help but view the event with an analytic eye: how do they manage the check-in? How do they introduce the event sponsors to you? What about the schedule? And – of course – where can I get coffee!? Everything was there, and done with amazing style and care.

With three main stages and more than 30 speakers overall, it was impossible to be everywhere at once, so I had to carefully choose where I wanted to be. And I want to share my insights from my favourite two talks with you, as these still resonate with me today.

The first one was by Emanuele Madeddu, the brand strategist from National Geographic. This well-known brand is 130 years old, and the #1 brand on social media and Instagram. There is a reason for this position of strength, and Emanuele summarized the 5 key aspects needed to achieve a strong and relevant brand today:

  1. Authenticity:

It is important to stay true to who you are. People can spot bullshit immediately. People today are also interested in brands that help them with their personal growth. So, to be authentic, National Geographic chose to allow their photographers to post directly on Instagram, without the need for any curating. This works because their photographers know the brand so well that there is trust and space for creativity.

  2. Communities:

When posts are made on Instagram, people immediately ask questions and interact. There is a live community returning regularly, engaging with one another and forming smaller communities. This year, National Geographic launched the Facebook campaign Women of Impact, portraying female explorers and scientists at National Geographic. They also initiated the Your Shot campaign, which lets amateur photographers share their pictures in an online pool; one of these gets chosen and printed on the last page of the magazine.

  3. Impact:

Brands that want to stand out must deliver an impact. You need to have a voice that is clear, loud, and has an opinion. People want to relate to brands that have a point of view and stand for something. They want to feel part of the solution, to participate, and have an impact. To deliver this impact, National Geographic started the Planet or Plastic? initiative, because we all can do better to reduce the amount of single-use plastic. For this initiative, National Geographic partnered with influencers; but – top tip – pick your influencers very carefully. People also readily share their email addresses with the company, as they are actually interested in the subject and in driving change.

  4. Good Stories:

A cycle is created: the more impact, the more stories to tell, the more people are interested and invest time and money in National Geographic. This, in turn, allows the company to deliver a bigger impact. But one other thing is also very important: telling good stories. The story does not need to be about something good, but it needs to be a good story, something memorable. One of National Geographic’s best stories is their new documentary film Free Solo, about free solo climber Alex Honnold.

  5. Partnerships:

National Geographic entered into a partnership with Nike to create the documentary Breaking 2, about breaking the 2-hour record in the Marathon.

In summary: trust your path and stick to it.

The second talk that I did not want to miss was the one by Lisa Hogg, TOMS Marketing Director. TOMS is the company that donates a new pair of TOMS shoes to a child in need for each pair of shoes bought. Lisa Hogg talked to us about the TOMS mission, which is simply to do Business for Good. It all started with the revolutionary concept of being a One for One company. Today they are no longer unique in doing this, so they have had to stop and re-evaluate themselves as a brand.

They knew who they were, they just had to find out how they wanted to develop. They decided they wanted to continue with their mission, but now seeing themselves as a bridge. A bridge from the consumer to the realization of projects that fit their shared values. TOMS wants to be a platform because they truly believe that citizens can change the world. Besides donating shoes, TOMS is now helping to give sight, improve access to water, provide safe births and prevent bullying. Lisa Hogg underlines the importance of being able to prove the impact you are having with data. She also shared with us their campaign Hairdresser to the Homeless featuring Joshua Coombes, the founder of the movement #DoSomethingForNothing.

This conference made me feel proud of the work that we have been doing to improve the TechTalk brand awareness.

Early this year we launched our #WEPARTY campaign, inviting developers from all over the world to join us on the rooftop of our building to connect. Then our 25 Years of TechTalk party followed in the summer, where we could celebrate with colleagues and business partners and simply say “Thank you”.

And just recently we launched the recruiting campaign Help us and we help together that clearly shows who we are, this time in a partnership with Caritas Wien. As part of this campaign, for each person that recommends our company to a friend who ends up joining us, TechTalk makes two donations: one to the person that recommended us, and another to a non-profit organization.

Well, enough said. I’ll close as the conference started: “Let’s Brand on!”


Conference Speakers Book Tips:

Let My People Go Surfing, by Yvon Chouinard, the founder and owner of Patagonia

Homo Deus: A Brief History of Tomorrow, by Yuval Noah Harari

Conference Links:

OnBrand 18 Website | OnBrand 18 Agenda | The social media wrap-up of #OnBrand18 | OnBrand LinkedIn

SpecFlow 3 now supports .NET Core

If you don’t already know, SpecFlow bridges the communication gap between domain experts and developers. The results are specifications that are easy to read, document the behaviour of the system, and underpin the implementation.
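For readers who have not seen such a specification, a SpecFlow specification is written in Gherkin. The small scenario below is a hypothetical example sketched for illustration (the feature, steps, and numbers are invented, not from a real project):

```gherkin
Feature: Invoice discounting
  Regular customers should be rewarded for larger orders.

  Scenario: Regular customers get a volume discount
    Given a regular customer
    When the customer orders 10 items
    Then a 5% discount is applied to the invoice
```

Domain experts can read and review scenarios like this directly, while each step is bound to automation code that exercises the system.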

SpecFlow is an Open Source project, and much of the development is driven by developers at TechTalk. SpecFlow+ is a series of components for SpecFlow that offer additional benefits. The first public preview of SpecFlow 3 finally allows testing of .NET Core projects with SpecFlow:

Support for .NET Core is now available.

In order to use SpecFlow 3, a preview version of the SpecFlow extension for Visual Studio is required. As the SpecFlow Visual Studio extension normally updates automatically whenever a new version is released, the extension will be installed by default once SpecFlow 3 is officially released – even for users who have not yet upgraded to SpecFlow 3. This new version will not be compatible with all previous versions of SpecFlow. If you are using an older version of SpecFlow, please read the information here for details on how to prevent the extension from updating once the new version is officially released.

The biggest change in this release is support for .NET Core! If you want to try out the new version, please refer to this article for details on the steps you need to perform to install the preview version.