How does pricing work for agile contracts?

When it comes to pricing for software contracts, two opposing interests collide: the contractor wants to achieve the highest possible hourly rate for the project, while the client wants to keep project costs as low as possible and get the maximum benefit from their budget. Of course, this is only the initial, contractual view, because reality does not always look like this.

Nevertheless, we want to find a contractual model that takes these interests into account and weighs them equally on paper. We show you possible pitfalls as well as an alternative approach from which both parties in an agile IT project benefit.

Risk sharing: A key aspect of software contracts

When working together in agile projects, risk is inevitable for both parties. Even if a detailed briefing takes place and requirements appear to be clear, changes can and should be allowed to occur during implementation. An agile process model is used to limit this risk.

A substantial risk is that implementing a user story may take significantly longer than planned. It is, therefore, important to consider the possibility of risk sharing before signing a software contract. After all, the contractor and the client bear a different share of the risk, depending on their contract.

We will show how the risk distribution works, using two common pricing models (T&M or price per team hour and price per story point):

Price per team hour

A widely used model is billing based on team hours. This is a classic T&M procedure where the entire risk lies on the client's side. They pay the contractor for hours worked, regardless of the result of the work performed.

Price per Story Point

In this model, the contractor gets paid upon completion of a story point. This should motivate the contractor's team to work efficiently. The risk with this model clearly lies with the contractor: if no completed story points are delivered, no payment is made. The risk that clearly remains with the client is that they don't have working software, meaning the "time to market" suffers as a result.

As you can easily see, these two models have a major disadvantage: they distribute the financial risk of the collaboration very unevenly. The risk lies mainly with one party.

But there is another way. In the video made for this blog post, TechTalk's Agile Coach Richard Brenner explains the difference in detail and presents an alternative approach.

Combining the client's and the contractor's interests

TechTalk has been working on solving this problem for many years. In order to spread the risk equally among both parties, we have developed a model that combines the two methods: price per team hour and price per story point.

Our model, “Pay per Story Point and Hour,” splits the project risks between contractor and client by combining the following components:

  1. Price per Story Point: fixed price share per delivered functional unit, according to the solution complexity assumed at the beginning
  2. Reduced Price per Story Point: reduced fixed price per delivered functional unit, for example, if an unforeseen story point is added
  3. Price per Team Hour: variable price share per team hour actually worked by the contractor

Let’s look at a concrete example to understand this model and its effects in different scenarios. 

Calculation example for the combined TechTalk model

Let’s make the following assumptions for this example:

  • Experience shows that the effort for a story point for a team in a project is 8 hours. 
  • The price for a team hour is 100 EUR. 

Thus, the calculated sales price for a story point is 800 EUR (8 * 100 EUR per hour). 

This is divided into:

  • A fixed share of 400 EUR per Story Point
  • A variable share of 50 EUR/hour ((800 EUR – 400 EUR) / 8 hours per story point)

In addition, a reduced price for an unforeseen increase in complexity is set at 100 EUR per story point.

In our example, we want to split the fixed and variable shares equally. The figure below shows the effects in comparison to billing based exclusively on story points or hours. When invoicing exclusively by story points or by hours, the full risk always lies with one of the contract parties. This is not the case with the combined model. In this model, the risk is shared.
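
To make the mechanics concrete, here is a small sketch of how billing under the combined model could be computed. It is written in TypeScript with the example rates from above; the type and function names are illustrative, not part of any contract template:

```typescript
// Billing sketch for the combined "Pay per Story Point and Hour" model.
// The rates are the example assumptions from this post, not fixed contract terms.

interface Rates {
  fixedPerStoryPoint: number;    // fixed share per planned story point (400 EUR)
  reducedPerStoryPoint: number;  // reduced rate per unforeseen story point (100 EUR)
  variablePerHour: number;       // variable share per team hour (50 EUR)
}

interface Delivery {
  plannedStoryPoints: number;  // story points covered by the original estimate
  extraStoryPoints: number;    // unforeseen story points, billed at the reduced rate
  teamHours: number;           // team hours actually worked
}

function totalCost(rates: Rates, d: Delivery): number {
  return (
    d.plannedStoryPoints * rates.fixedPerStoryPoint +
    d.extraStoryPoints * rates.reducedPerStoryPoint +
    d.teamHours * rates.variablePerHour
  );
}

const rates: Rates = {
  fixedPerStoryPoint: 400,
  reducedPerStoryPoint: 100,
  variablePerHour: 50,
};

// Scenario 1 below: exact adherence to the plan.
const planned: Delivery = { plannedStoryPoints: 1000, extraStoryPoints: 0, teamHours: 8000 };
const cost = totalCost(rates, planned);       // 800,000 EUR
const hourlyRate = cost / planned.teamHours;  // 100 EUR effective rate per team hour
console.log(cost, hourlyRate);
```

The three scenarios in the next section can be reproduced by changing the `Delivery` values accordingly.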

How can the risk be divided?

In the next step, let’s take a look at how this splitting affects three different scenarios. Let’s assume that the total number of Story Points is 1,000.

1st scenario: Exact adherence to the plan

The following services were rendered: 

  • 1,000 Story Points
  • 8,000 hours

These were accounted for as follows:

  • 1,000 Story Points * 400 EUR = 400,000 EUR
  • 8,000 hours * 50 EUR = 400,000 EUR

This results in total costs of 800,000 EUR and an average selling price per team hour of 100 EUR. Both the initially estimated costs and the target price per team hour are thus met for both sides.

2nd scenario: 5% less complexity, 10% less effort

The following services were rendered: 

  • 950 Story Points (5 % reduction)
  • 6,840 hours (7.2 hours per Story Point after a 10% reduction * 950 Story Points delivered)

These were accounted for as follows:

  • 950 Story Points * 400 EUR = 380,000 EUR
  • 6,840 hours * 50 EUR = 342,000 EUR

This results in total costs of 722,000 EUR. The project costs decrease for the client. The average selling price of a team hour is around 106 EUR, so the effective hourly rate increases for the contractor.

3rd scenario: 30% more complexity, 50% more effort

The following services were rendered: 

  • 1,300 story points (30% more complexity)
  • 15,600 hours (12 hours per Story Point, i.e. 50% more effort per Story Point, * 1,300 Story Points delivered)

These were accounted for as follows:

  • 1,000 Story Points * 400 EUR = 400,000 EUR
  • 300 Story Points * 100 EUR = 30,000 EUR (reduced price for unforeseen complexity)
  • 15,600 hours * 50 EUR = 780,000 EUR

This results in total costs of 1,210,000 EUR. The total costs increase for the client. The average selling price of a team hour is around 78 EUR and thus decreases for the contractor.

Put simply, this results in the following consequences for contractor and client, depending on the scenario:

Risk sharing for contractor and client

The importance of checkpoints

An important component of the combined model is the checkpoint. At a checkpoint, you verify whether the assumptions made are still correct. For example, the checkpoint can be set after six sprints. Questions like these are to be clarified here:

  • Is the assumed efficiency of the implementation correct? 
  • Does the complexity increase significantly in the course of detailing? 
  • Were we able to check the initial assumptions and mitigate technical risks?

Experience and trust are crucial

It is important that you know the effort per story point and the speed of the development team (velocity). Therefore, the project should start with at least an initial phase in this setup, so that a realistic estimation of story points is possible. The combined model is thus a model based on experience. And on trust.

If experience and trust are given, this model is well suited to distribute the risk equally between the two parties. The combined model creates two contracting parties that can communicate and work with each other on equal terms.


Do you have any questions?

You want to know more about pricing for software contracts?

Do you want to develop an environment in your company that enables working with agile methods?

Please contact Richard Brenner via email or on LinkedIn with your questions.

Stop giving feedback, ask for it instead. Watch Jenni Jepsen’s webinar.

For many of us, being a leader means having every little thing under control, always knowing what is going on, and telling everyone what to do. However, this model of leadership does not equal success.

Many psychologists and managers agree that a new kind of leader should be able to hand over tasks and even give others control. In this scenario, not only does the leader make his or her life more comfortable, but the team also feels more motivated, responsible, and eager to act. In other words, it is crucial for a leader to set up the right environment for others to excel and act to the full extent of their creativity and intellect.

However, this is easier said than done, because over-controlling behavior is hardwired into our brains.

Neuroscience shows that “the language we use affects how our brains wire.”

That’s why we need to relearn and train our brains to behave differently: to trust, to ask for feedback, to learn that it’s okay not to have all the answers. 

This kind of leadership is called "Intent-Based Leadership". Jenni Jepsen will explain how and why it works from the perspective of neuroscience during the workshop on September 28-29th.

The crucial part of this leadership methodology is feedback. Feedback is useful and helpful. However, we need to stop giving it. Neuroscience shows that feedback works when we understand and believe that it will lead to good things for us.

We need to learn to ask for feedback because, in this case, it is our choice to take it in and use it for growth and improvement. Then we are thankful for the feedback. It makes us better – that's the point of feedback. Creating an ask-for-feedback mindset is key.

This way, people will feel free to share their thoughts and ideas. In order to do that, team members should have access to information. This will lead to a higher motivation level inside a working group.

Learn how to create an ask-for-feedback mindset, and why it can help to achieve excellence in your organization in this webinar by Jenni Jepsen. 

As a primer for the upcoming training course on Intent-Based Leadership, you can rewatch the online meetup we held with Jenni Jepsen in May 2020.

Hand-picked related content:

Create an Ask-for-Feedback Mindset Workshop with Jenni Jepsen from TechTalk Software AG on Vimeo.

Questions About Test Frameworks: Q&A Part #3 with J.B. Rainsberger

This is the third chapter of our three-part Q&A blog series with J. B. Rainsberger. In this chapter he addresses questions about test frameworks. The first chapter and the second chapter are in our blog in case you missed them.

On June 3, 2020 J.B. Rainsberger spoke in our remote Intro Talk about managing the various kinds of uncertainty that we routinely encounter on projects that involve legacy code. He presented a handful of ideas for how we might improve our practices related to testing, design, planning, and collaboration. These ideas and practices help us with general software project work, but they help us even more when working with legacy code, since legacy code tends to add significant uncertainty and pressure to every bit of our work. Fortunately, we can build our skill while doing everyday work away from legacy code, then exploit that extra skill when we work with legacy code.


J. B. Rainsberger helps software companies better satisfy their customers and the business that they support.

Our next remote course, Surviving Legacy Code, runs from 14-17 September 2020.


If the code base is too old even for any available test frameworks, how do you handle it?

**Testing does not need frameworks. Testing never needed frameworks.** You can always start by just writing tests and refactoring them. If you do this long enough, you will extract a testing framework. If you’ve never tried it, then I recommend it! Kent Beck’s _Test-Driven Development: By Example_ included this exercise.

Every test framework began life with `if (!condition) { throw Error("Test failure.") }`. If you can write this, then you can build a testing framework; if this suffices, then you don't need a testing framework. Start there!
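
To show how little is needed, here is a minimal sketch in TypeScript of that seed growing into a tiny "framework"; the function names are illustrative:

```typescript
// A minimal test "framework": an assertion that throws, and a runner that
// reports results. Everything else a framework adds grows from this seed.

function assertTrue(condition: boolean, message: string): void {
  if (!condition) {
    throw new Error(`Test failure: ${message}`);
  }
}

function runTest(name: string, test: () => void): void {
  try {
    test();
    console.log(`PASS ${name}`);
  } catch (error) {
    console.log(`FAIL ${name}: ${(error as Error).message}`);
  }
}

// Usage: plain functions, no framework required.
runTest("addition works", () => {
  assertTrue(1 + 1 === 2, "expected 1 + 1 to equal 2");
});
```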

If you can execute one part of the system in isolation from the rest, then you can write unit tests. In the early days of web browsers, we could only execute JavaScript in the browser, but even so, we could (and did!) write unit tests without frameworks. We merely had to run those tests in a browser window. Eventually, someone decided to run JavaScript outside the browser, which made it easier to write microtests for JavaScript code. This made it _easier_ to write tests, but we were writing tests long before NodeJS existed.

If you can invoke a function (or procedure or division or block of code) and you can signal failure (such as by raising an error), then you can write tests without waiting for someone else to build a framework.

In addition, you don’t need to write your tests in the same language or environment as the running system. Golden Master technique helps us write tests for any system that offers a text-based interface. Any protocol could help us here: for example, think of HTTP as “merely” a special way of formatting requests and responses with text. If you have (or can easily add) this kind of interface or protocol to your system, then you can write tests in any language that might offer a convenient test framework. Use Python to test your COBOL code. Why not?

Finally, not all testing must be automated. As I wrote earlier, programmers have a strong habit of forgetting alternatives to techniques that they’ve found helpful. If you don’t know how to automate your tests easily, then don’t automate them yet. Instead, make them repeatable and document them. One day, someone will have a good idea about how to automate them.

You may have to write your own test framework, but that can prove a daunting task.

In addition to what I wrote in the previous answer, I encourage you to follow the general advice about building any software with a Lightweight (Agile, Lean, …) approach: build the first feature that you need, then start using it, then add more features one at a time. You don’t need to build a fully-featured testing framework before you start to benefit from it. Start with `if (!assertion) throw Error()` and then use it! The testing framework SUnit was built incrementally. All the testing frameworks you know began from there. You can do it, too. Merely start, then take one step at a time.

There are testing frameworks for COBOL and NATURAL. What could be older?

Indeed, the "framework" portion of testing relates to identifying tests, collecting test results, and reporting them in a unified way, as well as adding standard mechanisms for "set up" and "tear down". We don't need those things to start writing tests, although eventually we will probably want to have them. **Simply start writing tests, then remove duplication in any way that your programming language allows.** I don't know what might be older than COBOL or NATURAL.


➡️ Also read our other two Q&A blog posts with J.B. Rainsberger: Part #1 "Managing the Uncertainty of Legacy Code" and Part #2 "The Risks Related to Refactoring Without Tests"! Follow us on Twitter or LinkedIn to get new posts.


The Risks Related to Refactoring Without Tests: Q&A Part #2 with J.B. Rainsberger

This is the second chapter of our three-part Q&A blog series with J. B. Rainsberger. In this chapter he addresses questions about the risks related to refactoring without tests. The first chapter and the third chapter are in our blog in case you missed them.

On June 3, 2020 J.B. Rainsberger spoke in our remote Intro Talk about managing the various kinds of uncertainty that we routinely encounter on projects that involve legacy code. He presented a handful of ideas for how we might improve our practices related to testing, design, planning, and collaboration. These ideas and practices help us with general software project work, but they help us even more when working with legacy code, since legacy code tends to add significant uncertainty and pressure to every bit of our work. Fortunately, we can build our skill while doing everyday work away from legacy code, then exploit that extra skill when we work with legacy code.


J. B. Rainsberger helps software companies better satisfy their customers and the business that they support.

Our next remote course, Surviving Legacy Code, runs from 14-17 September 2020.


What should we say to project planners who are afraid to let us refactor without tests because some folks on our team are not very good at refactoring and make mistakes? How can we convince them it can work for some good programmers?

First, I recognize that if I were the project planner, then I would worry about this, too! I probably don’t know how to judge the refactoring skill of the programmers in the group, so I wouldn’t know whom to trust to refactor without tests. Moreover, I probably can’t calculate the risk associated with refactoring without tests, so I wouldn’t know when to trust _anyone_ to refactor without tests, even if I feel confident in their skill. Once I have thought about these things, it becomes easier to formulate a strategy, because I can ask myself what would make _me_ feel better in this situation? I encourage you to ask yourself this question and write down a few ways that you believe you could increase your confidence from the point of view of the project planner. I can provide a few general ideas here.

I encourage you to build trust by telling the project planner that you are aware of the risks, that you care about protecting the profit stream of the code base, and that you are prepared to discuss the details with them. It often helps a lot simply to show them that you and they are working together to solve this problem and not that you are doing what helps you while creating problems for them.

I would ask the project planners what specifically they are worried about, and then match my strategies to their worries. For example, microcommitting provides one way to manage the risk of refactoring without tests, because it reduces the cost of recovering from a mistake. At the same time, if the project planner worries about different risks than the ones I have thought about, then my strategies might not make them feel any more secure! If I know more about which risks affect them more or concern them more, then I can focus my risk-management work on those points, which also helps to build trust.

I would emphasize that we do not intend to do this as a primary strategy forever. We don’t feel comfortable doing it, either! Even so, we _must_ make progress _somehow_. We refactor without tests because it would be even more expensive to add “enough” tests than to recover from our mistakes. Of course, we have to be willing to explain our judgment here and we have to be prepared that we are wrong in that judgment! I am always prepared to take suggestions from anyone who has better ideas, but outside of that, they hired me to do good work and make sound decisions, so if they don’t trust me, then I must try to earn their trust or they should give my job to someone that they trust more. I don’t mean this last part as a threat, but merely as a reminder that if they hire me to do the job, but they never trust me, then they _should_ hire someone else!

How about pair-refactoring?

I love it! Refactoring legacy code is often difficult and tiring work, so pair-refactoring fits well even in places where "ordinary" pair programming might not be needed. Refactoring legacy code often alternates periods of difficulty understanding what to do next with long periods of tedious work. Working in pairs significantly increases the profit from both of those kinds of tasks.

You also need this refactoring-without-tests skill, to effectively refactor your tests!

Maybe! I don’t say you _need_ it, but it would probably help you. Your production code helps you to refactor your tests: if you change your tests and they now expect the wrong behavior, then your production code will fail that test for “the right reasons”. It doesn’t provide perfect coverage, but it helps more than you might expect. In that way, the production code helps to test the tests.

Moreover, tests tend to have simpler design than the production code. This means that we might never need to refactor tests in certain ways that feel common when we refactor production code. I almost always write tests with a cyclomatic complexity of 1 (no branching), so the risk when refactoring tests tends to be much lower than when refactoring legacy code. This makes refactoring tests generally safer.


➡️ Also read our other two Q&A blog posts with J.B. Rainsberger: Part #1 "Managing the Uncertainty of Legacy Code" and Part #3 "Questions About Test Frameworks"! Follow us on Twitter or LinkedIn to get new posts.


How Do I Find the Right Agile Software Development Company for My Project?

Price is often the crucial point when it comes to selecting an agile software development partner. Different offers can be easily compared and thus you can make what seems to be a safe choice. At least at first glance. 

But if you do not take the provider's ability into account, or underestimate it, the initial savings can quickly be reversed. For example, the provider may not meet the project's requirements, and the project team may suffer. If you cut costs at the wrong end, the additional cost and time will be significantly higher than the savings at the beginning.

We will show you which quality criteria are important for an agile approach and how you can select the right agile software development vendor for your agile project with the help of a structured process.

Criteria for selection of an agile software development company 

It is crucial to deal with the topic of quality criteria before starting the selection process. Only this way will you know what to look for during the selection interviews. In our opinion, you should definitely consider the following criteria:

  • Experience in Agility: The more experience an agile software development company has with agile methods, the better. Agile project development can only be successful if an "Agile Culture" is genuinely lived.
  • Well-established Team: An established team is preferable to a newly assembled one, because well-established teams can often be much more productive. For longer projects, you should also take turnover in the team into account. We are familiar with this issue especially from offshore teams, where individual developers can be replaced at any time.
  • Direct Communication: This factor is particularly important for agile projects. It must be ensured that your experts can work with the provider's experts directly – ideally in the same Scrum team. From our own experience, problems often arise when there are too many handovers between individual team members. Decisions have to be made quickly, without long decision-making processes.
  • Experience in the Domain: Experience in the domain also plays an important role, especially at the level of the team members. International references are often mentioned, but the team actually assigned does not always have this experience.
  • Culture Fit: It is important to understand the mindset and development processes of the provider. You should check in advance to what extent this fits with your own company culture and enables close cooperation.
  • Degree of Dependency: It makes sense to consider how to keep the extent of dependence on your provider low in the further course of the project. One possibility is to rely on open standards. This makes it easier to change the supplier at a later point or to train your own developers.
  • System Architecture: The architecture of a new solution must fit into the existing system landscape. A completely new system with new technologies increases complexity and makes maintenance more difficult later on if the necessary skills are not available within the organization. This also increases the dependency on the agile software vendor.
  • Maintenance: Once the software has been developed, the maintenance phase begins. It is well known that this phase lasts much longer than the development phase. Therefore, you should review the agile software supplier's strategies for this phase and test how quickly they react to unforeseen errors such as production incidents.

In addition to the quality criteria, as a customer you should know your most important NFRs (non-functional requirements), such as security, scalability, and testability. Only then can you examine during the selection process whether providers can fulfill them, and communicate them directly. Otherwise, the provider may implement security requirements at significantly higher costs or, in the worst case, not at all. It is well known that NFRs influence the system architecture more than functional requirements do.

Find the right agile software development company with this four-step process

These preliminary considerations provide the basis for a structured process that allows you to select providers based on the most important criteria in four steps. 

1. Make a preliminary selection

When you start a tender for an agile project, you will receive a number of offers. The first step is to preselect the offers. 

The purchasing department usually takes over the pre-selection. Bidders are chosen based on the price and the qualification criteria described above. 

2. Conduct intensive discussions

After the pre-selection has been made, intensive discussions between your experts and the provider's experts take place. These discussions serve to validate the first impression.

Also, requirements such as NFRs can be addressed at this stage to give the provider a detailed idea of what is expected of them during project implementation.

3. Conduct the prototype phase

The prototype phase is crucial, but it is often not done. Especially if you have not worked with the provider yet, you should not skip this phase under any circumstances. 

Ideally, the prototype phase should be done with several selected providers. The goal of this phase is to assess the collaboration based on the executable software. This will give you a better understanding of whether cooperation with the software development company works and whether all quality criteria can be met. 

Important: During the prototype phase, you should make sure that it is executed with the final team. The employees who will later be responsible for the product development should already work on the prototype in the final constellation during this phase.

4. Start product development

The software created during the prototype stage, and the experts' feedback on the cooperation with the provider, serve as the basis for decisions that further narrow the selection. At the end of this phase, you should select a company with whom you will carry out product development.

However, before entering the development phase, contract negotiations must be conducted. With agile projects, you should be aware of several pitfalls when drafting the contract. We have summarized essential tips for you to create a solid contractual basis for your agile projects.

Price is not the decisive criterion

Price is usually not the best selection criterion, even if it seems so at first glance. It is important to be aware of the most important quality criteria in advance and validate them in a structured process for potential providers.

The following list of questions will help you to assess the quality of the provider:

  • How much experience with agile methods does the company have?
  • Is it guaranteed that I get a well-established, experienced team? Do I have a direct influence on the people who work in my team?
  • Is direct communication between the customer's and the provider's team members ensured?
  • Does the provider and, especially, the team implementing my project have experience in my domain?
  • For longer-running initiatives: how high is the turnover of people in the team?
  • Do you know your non-functional requirements?
  • How directly can you communicate with the implementation team?
  • Is the implementation team perhaps a part of your own team?
  • How good is the agile provider in the maintenance phase?

Do you have further questions about the selection of agile providers?

Or do you want to develop an environment in your company that enables working with agile methods?

Please contact me via email or on LinkedIn with your questions.


Managing the Uncertainty of Legacy Code: Q&A Part #1 with J.B. Rainsberger

In this first chapter of our three-part Q&A blog series, J.B. Rainsberger addresses questions that came up during his session.

On June 3, 2020 J.B. Rainsberger spoke in our remote Intro Talk about managing the various kinds of uncertainty that we routinely encounter on projects that involve legacy code. He presented a handful of ideas for how we might improve our practices related to testing, design, planning, and collaboration. These ideas and practices help us with general software project work, but they help us even more when working with legacy code, since legacy code tends to add significant uncertainty and pressure to every bit of our work. Fortunately, we can build our skill while doing everyday work away from legacy code, then exploit that extra skill when we work with legacy code.

Our next remote course, Surviving Legacy Code, runs from 14-17 September 2020.

J. B. Rainsberger helps software companies better satisfy their customers and the business that they support.

Here are some questions that came up during this session and some answers to those questions.

One of the issues is that the legacy code base consists of useful code and dead code and it’s hard to know which is which.

Indeed so. Working with legacy code tends to increase the likelihood of wasting time working with dead code before we feel confident to delete it. I don’t know how to avoid this risk, so I combine monitoring, testing, and microcommitting to mitigate the risk.

Microcommits make it easier to remove code safely because we can recover it more safely. Committing frequently helps, but also committing surgically (the smallest portion of code that we know is dead) and cohesively (portions of code that seem logically related to each other) helps. If our commits are more independent, then it’s easier to move them backward and forward in time, which makes it easier to recover some code that we mistakenly deleted earlier while disturbing the live code less. We will probably never do this perfectly, but smaller and more-cohesive commits make it more likely to succeed. This seems like a special case of the general principle that as I trust my ability to recover from mistakes more, I feel less worried about making mistakes, so I change things more aggressively. When I learned test-driven development in the early years of my career, I noticed that I become much more confident to change things, because I could change them back more safely. Practising test-driven development in general and microcommitting when working with legacy code combine to help the programmer feel more confident to delete code—not only code that seems dead.

Even with all this, you might still feel afraid to delete that code. In that case, you could add "Someone executed this code" logging statements, then monitor the system for those logging statements. You could track the length of time since you last saw each of these "heartbeat" logging messages, then make a guess about when it becomes safe to delete that code. You might decide that if nothing has executed that code in 6 months, then you judge it as dead and plan to remove it. This could never give us perfect confidence, but at least it goes beyond guessing to gathering some amount of evidence to support our guesses.
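
As a sketch of that idea (in TypeScript; the function and marker names are hypothetical, and a real system would write to its existing logging infrastructure rather than the console):

```typescript
// "Heartbeat" logging for suspected dead code: leave a distinctive log line
// in the suspect path, then watch your logs for it over the following months.

function heartbeat(marker: string): void {
  // In production, route this through your real logging/monitoring stack.
  console.log(`heartbeat ${new Date().toISOString()} someone executed: ${marker}`);
}

// A function we suspect is dead: if the marker never shows up in the logs
// for, say, 6 months, we gain evidence that it is safe to delete.
function legacyDiscountCalculation(price: number): number {
  heartbeat("legacyDiscountCalculation");
  return price * 0.95;
}
```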

More testing, especially microtesting, puts more positive pressure on the design to become simpler: less duplication, better names, healthier dependencies, more referential transparency. I have noticed a pattern: as I simplify the design, I find it easier to notice parts that look irrelevant and I find it clearer that those parts are indeed dead code. Moreover, sometimes obviously dead code simply appears before my eyes without trying! This makes it safer to delete that code, using the microcommitting and monitoring as a recovery strategy in case I get it wrong.

So not all legacy code adds value to the business… but it is hard to know which part does.

Indeed so. We have to spend time, energy, and money to figure this out. I accept responsibility as a programmer to give the business more options to decide when to keep the more-profitable parts running and to retire the less-profitable parts. As I improve the design of the system, I create more options by making it less expensive to separate and isolate parts of the system from each other, which reduces the cost of replacing or removing various parts. Remember: we refactor in order to reduce volatility in the marginal cost of features, but more-generally in the marginal cost of any changes, which might include strangling a troublesome subsystem or a less-profitable feature area.

The Strangler approach describes incrementally replacing something in place: adding the new thing alongside the old thing, then gradually sending traffic to the new thing until the old thing becomes dead. Refactoring the system to improve the health of the dependencies makes this strangling strategy more effective, which gives the business more options to replace parts of the legacy system as they determine that a replacement would likely generate more profit. As we improve the dependencies within the system, we give the business more options by reducing the size of the smallest part that we’d need to replace. If we make every part of the system easier to replace, then we increase the chances of investing less to replace less-profitable code with more-profitable code.

This illustrates a general principle of risk management: if we don't know how to reduce the probability of failure, then we try reducing the cost of failure. If we can't clearly see which parts of the legacy code generate more profit and which ones generate less, then we could instead work to reduce the cost of replacing anything, so that we waste less money trying to replace things. This uses the strategy outlined in The Black Swan of accepting small losses more often in order to create the possibility of unplanned large wins.

What do you think about exploratory refactoring? Do you use this technique sometimes?

Yes, I absolutely do! I believe that programmers can benefit from both exploratory refactoring and feature-oriented refactoring, but they need to remain aware of which they are doing at any time, because they might need to work differently with each strategy to achieve those benefits.

When I’m refactoring in order to add a feature or change a specific part of the code, I remind myself to focus on that part of the code and to treat any other issues I find as distractions. I write down other design problems or testing tasks in my Inbox as I work. I relentlessly resist the urge to do those things “while I’m in this part of the code”. I don’t even follow the Two-Minute Rule here: I insist on refactoring only the code that right now stands between me and finishing the task. Once I have added my feature, I release the changes, then spend perhaps 30 minutes cleaning up before moving on, which might include finishing a few of those Two-Minute tasks.

The rest of the time, I'm exploring. I'm removing duplication, improving names, trying to add microtests, and hoping that those activities lead somewhere helpful. This reminds me of the part of The Goal, when the manufacturing floor workers engineered a sale by creating an efficiency that nobody in the sales department had previously thought possible. When I do this, I take great care to timebox the activity. I use timers to monitor how much time I'm investing and I stop when my time runs out. I take frequent breaks (I use programming episodes of about 40 minutes) in order to give my mind a chance to rise out of the details and notice higher-level patterns. I don't worry about making progress, because I don't yet know what progress would look like; instead, I know it when I see it. By putting all these safeguards in place, I feel confident in letting myself focus deeply on exploring by refactoring. I avoid distracting feelings of guilt or pressure while I do this work. I also feel comfortable throwing it all away in case it leads nowhere good or somewhere bad. This combination of enabling focus and limiting investment leads me over time to increasingly better results. As I learn more about the code, exploratory refactoring turns into feature-oriented refactoring, which provides more slack for more exploratory refactoring, creating a virtuous cycle.

What is your experience with Approval Tests, in cases where writing conventional unit tests might be too expensive?

I like the Golden Master technique (and particularly using the Approval Tests library), especially when text is already a convenient format for describing the output of the system. I use it freely and teach it as part of my Surviving Legacy Code course. It provides a way to create tests from whatever text output the system might already produce.
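
For illustration, here is a minimal sketch of the Golden Master flow in TypeScript, written without any library (the Approval Tests library automates exactly this approve/receive cycle; the file naming here is illustrative):

```typescript
// Minimal Golden Master check: compare the system's text output against a
// previously approved file; on mismatch, write a "received" file to inspect.

import * as fs from "fs";

function verifyAgainstGoldenMaster(testName: string, actualOutput: string): void {
  const approvedPath = `${testName}.approved.txt`;
  const receivedPath = `${testName}.received.txt`;

  if (!fs.existsSync(approvedPath)) {
    // First run: record the output; a human inspects and approves it
    // (for example, by renaming received -> approved).
    fs.writeFileSync(receivedPath, actualOutput);
    throw new Error(`No approved output yet; inspect ${receivedPath}.`);
  }

  const approved = fs.readFileSync(approvedPath, "utf8");
  if (approved !== actualOutput) {
    fs.writeFileSync(receivedPath, actualOutput);
    throw new Error(`Output changed; diff ${receivedPath} against ${approvedPath}.`);
  }
}

// Usage: feed it whatever text your system already produces.
verifyAgainstGoldenMaster("invoice-report", "total: 42 EUR\n");
```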

I get nervous when programmers start going out of their way to add a text-based interface to code that doesn't otherwise need it, only for the purpose of writing Golden Master tests. In this case, checking objects in memory with equals() tends to work well enough and costs less. I often notice that programmers discover a helpful technique, then try to use it everywhere, then run into difficulties, then invest more in overcoming those difficulties than they would invest in merely doing things another way. Golden Master/Approval Tests represents merely another situation in which this risk comes to the surface.

I get nervous when programmers start choosing to write integrated tests for code where microtests would work equally well. When programmers think about adding Golden Master tests, they tend to think of these as end-to-end tests, because they often judge that as the wisest place to start. Just as in the previous paragraph, they sometimes fall into the trap of believing that “since it has helped so far, we must always do it this way”. No law prevents you from writing unit tests using Golden Master/Approval Tests! Indeed, some of the participants of my Surviving Legacy Code training independently discover this idea and use it to great effect. Imagine a single function that tangles together complicated calculations and JSON integration: it might help a lot to use Approval Tests to write Golden Master tests for this function while you slowly isolate the calculations from the JSON parsing and formatting. The Golden Master tests work very well with multiline text, such as values expressed in JSON format, but probably make the calculation tests awkward, compared with merely checking numeric values in memory using assertEquals().

When programmers use Golden Master/Approval Tests, they need to treat it as just one tool in their toolbox. This is the same as with any technique! I tend to treat Golden Master as a temporary and complementary technique. I use it when I focus on writing tests as a testing technique, even though I tend to prefer to write tests for design feedback. Not everyone does this! If you find yourself in the stage where you’re drowning in defects and need to focus on fixing them, then Golden Master can be a great tool to get many tests running early. Once you’ve stopped drowning, it becomes easier to look at replacing Golden Master with simpler and more-powerful unit tests—eventually microtests.


➡️ Also read our other two Q&A blog posts with J.B. Rainsberger: Part #2 "The Risks Related to Refactoring Without Tests" and Part #3 "Questions About Test Frameworks"! Follow us on Twitter or LinkedIn to get new posts.


Approval Testing: What It Is and How It Helps You To Manage Legacy Code

Emily Bache is a Technical Agile Coach who helps software development teams get better at the technical practices needed to be agile, including Test-Driven Development, Refactoring, and Incremental Design. Emily is known as the author of the book "The Coding Dojo Handbook". For the second time, we are organizing a training course with Emily on Approval Testing. In this email interview we asked Emily what counts as legacy code, how to get into approval testing, and what her upcoming book will be about.

What is the optimal way of learning Approval Testing? What is the role of the Gilded Rose Kata and other exercises in this process?

Approval Testing is a style and approach to writing automated tests that changes the way you verify behaviour. Basically, the ‘assert’ part of the test. As with any new tool or approach, it helps to have actual code examples to play with when you’re learning it. Once you start to see it in action then you’re bound to have lots of questions so it’s good to have people around you to discuss it with.

The Gilded Rose Kata is a fun little exercise that I maintain. It actually comes with approval tests, as well as being translated into about 40 programming languages. Whatever your coding background and language preferences, you can try it out and see how it works for yourself. When you’ve done that, you should easily be able to find other people to discuss it with, since it’s quite a popular exercise. For example Ron Jeffries recently wrote 13(!) blog posts about his experience with it.

You talk about refactoring and handling legacy code. What actually is legacy code? How would you define it?

Many developers struggle with code they inherited which has poor design and lacks automated tests. On their own, any one of those difficulties could probably be overcome, but in combination developers get a kind of paralyzing fear of changing the code. That’s how I would define legacy code. Code that you need to change but you’re afraid to in case you break it.

The antidote to that fear, I find, is feedback: high-quality feedback that tells developers when they are making safe changes, and that gives them the confidence to improve the design and get in control. Approval testing is one way to get that feedback – you create regression tests that give you good information when behaviour changes.

What are the main things one should know before starting working with Approval Testing? 

Since it’s a style of automated testing, it helps to have experience with unit testing already, perhaps with JUnit or similar. Approval Testing is often used in larger-granularity tests too, so experience with tools like Selenium or Cucumber would give you a good perspective, although it works a bit differently. This way of testing also fits extremely well into Agile methods, BDD, and Specification by Example. If you are working in a more traditional process, you may find adding these kinds of tests will help you to increase your agility.

For which situations is Approval Testing the best solution? When shouldn’t it be used? 

If you’re facing legacy code, this can be a great addition to your strategy for getting control. I wouldn’t discount it for new development though, particularly if your system will produce some kind of detailed artifact where the user really cares about how it looks. For example I’ve seen this approach used to verify things like invoices, airline crew schedules, 3D molecular visualizations, and hospital blood test results.

Of course there are situations where I wouldn’t use Approval Testing, for example where the output is a single number – the result of a complex calculation. If you can calculate the expected result before you write the code, testing it with an ordinary assertion is a sensible approach.

Can Behaviour Driven Development be considered as the future of the industry and Approval Testing as an essential part of it? Why is it so?  

The main priority of BDD is to improve collaboration and communication so we build the right software. In my experience, Approval Testing promotes good conversations. I'm giving a keynote speech at CukenFest soon (a conference about BDD), and I'm going to be talking about exactly this topic. For the test automation part of BDD, most teams use the Gherkin syntax with Cucumber or SpecFlow. I think you can use Approval Testing in a similar way.

You have been working on this topic for a while – what excites you about it?

There is so much potential for this technique! I see a lot of legacy code out there, and I see a lot of test cases that are unnecessarily difficult to maintain. If I can spread these testing techniques to even a small proportion of all those places it will make a huge positive difference to the quality of the software in the world.

You wrote a book about Coding Dojo, what can we expect from your follow-up book? 

The motivation for my upcoming book “Technical Agile Coaching” is largely the same as for the previous one – I write for people who want to make a difference and improve the way software is built. In 2011 I published “The Coding Dojo Handbook” which is full of advice and experiences setting up a forum for practicing coding skills. You can see my new book as an expansion of those ideas, seasoned with ten years of additional experience.

The focus of the coaching method I describe in the book is specifically on technical practices and how people write code. There are two main elements to the coaching: firstly, teaching techniques via interactive exercises and code katas; secondly, coaching a whole team to work effectively together as they do mob programming.


Online Planning and Collaboration in Multiple Teams (Free 90-minute remote workshop)

Ole Jepsen
Enterprise Agile Coach | Scaled Planning Advisor

To accelerate the development of products and services and to remain competitive, many teams rely on agile methods.

Until a few weeks ago, it was common to meet face-to-face in large groups in meeting rooms for PI Planning, to work through the hurdles of planning and coordination.

Then came Covid-19, and with it the question of how to run these large planning sessions now. Postpone them and lose momentum, or run them online?

In this remote workshop, Ole Jepsen will share his experience with online planning, both from a pre-Corona perspective (teams distributed across different countries) and a post-Corona perspective (everyone at home at their laptop).

“Set up the collaboration board exactly as you would set up the physical conference room” – Tip by Ole Jepsen

The good news is that PI Planning can indeed be run well online. Doing so requires good preparation, the right tools, and a few tips and tricks.

Who is this online workshop for?

Ideally, you have experience working with agile methods, Scaled Agile, SAFe, or LeSS, or have already taken part in PI Planning sessions in the past.

Join this interactive remote workshop and learn how to run your online planning sessions successfully.

You can find more remote workshops on our training page.


Questions & Contact

Milena Krnjic

milena.krnjic@techtalk.at
LinkedIn

Note: The workshop will be held in English. The video conferencing solution Zoom will be used to run the workshop, together with the tool Metro Retro. You do not need to install any software to participate. If you do not yet have a Metro Retro account, please create one in advance. A Zoom account is not necessary. For Zoom, it is best to use the Chrome browser.


Agile Teams: A Method to Enable Autonomy by Clarity of Roles

How can you enable teams to take initiative and act autonomously by knowing their decision boundaries? In this blog post, I am sharing the concept for a workshop format that you can adapt to achieve that goal.

I used this method to address the following situation: within a classical organization, new cross-functional teams are put together and are now supposed to work in an agile way, but they get stuck at the beginning because they do not know what they are allowed to decide. In addition, the concept of shared responsibility is new to these teams. A second effect is that some teams are not stuck at all: they are willing to take initiative and decide certain things (for example, an architecture decision), but then their manager is not happy with the decision and overrules it. This, too, leads to a stuck team.

The problem is that there are unspoken assumptions about what autonomy means between managers and the teams or other stakeholders around the team. We want to reveal those assumptions!

This blog post is based on my talks at Agile Tour 2019 and at the ASQF Agile Night.

Important preconditions in the mindset

A basic precondition is that the organization and all the stakeholders understand the concept of small autonomous teams, or the "law of the small team" as Denning describes it (Denning, 2018). Those teams act aligned with the product and corporate vision and are able to self-direct and self-manage to achieve their goals.

Leaders and managers understand that they should enable those teams by following the concept "push authority to information", one of the major principles of intent-based leadership, because the people closest to the work are the experts in their domain and can make the best decisions.

Source: Intent-Based Leadership Keynote by Jenni Jepsen @ Agile Tour Vienna

In order to do that, we need to give control to the teams. This is a gradual shift, not a one-off "now you do it all", because we need to check whether the competence in the team is there.

Source: Intent-Based Leadership Keynote by Jenni Jepsen @ Agile Tour Vienna

Third, it is clear that we can only manage the environment and not the people.

Workshop Format

The goal of the workshop format is to reveal who ultimately decides and how much of these decisions can be delegated to the team. This question can arise between the team and, for example, the former line manager, the department lead, a software architect, or other stakeholders.

Step 1: Key Decision Areas

First of all, we need to collect the most important decision areas where we have faced problems or need clarity. It is important that you do not list all decision areas, as this would end up in a huge Excel sheet that probably nobody ever uses.

Key decision areas can be for example:

  • Who is responsible for deciding on vacations?
  • Who can decide whether working from home is okay?
  • Who ultimately decides about an architectural proposal?
  • Who decides how much a solution proposal may cost?
  • Can we, as a team, invest in experimenting with solutions for a given problem?
  • Can we decide on hiring external consultants?
  • Who is responsible for staffing the team?

Step 2: The RACI matrix

You can skip this step if you only need to clarify the delegation between one role and the team. If you have multiple stakeholders, the RACI matrix can help.

In the columns you list who is Responsible (R), who is Accountable (A), who needs to be consulted (C) before taking the decision and who needs to be informed (I) of the decision.

In the rows, you list the key decision areas. It is important that you do not go further down into individual team roles: the team does it as a team; you do not delegate certain decisions to a particular team member.

| Key decision area | Responsible (R) | Accountable (A) | Consulted (C) | Informed (I) |
|---|---|---|---|---|
| Deciding on vacations | Individual Team Member | Line Manager | Team | Team |
| Home office | Team? | Line Manager | Team | Line Manager |
| Hiring external consultants | Team | Department Lead | Team Lead | Department Lead |

Also, you should try to move not just responsibility but also accountability as far as possible to the team.

Now you have created clarity about who is accountable and who should do the work (in most cases, hopefully, the team). Between those two, you can now go further and clarify how far the delegation should go, using delegation poker.

Step 3: Delegation Poker

Delegation is not a zero-or-one exercise. It is important to clarify how far a delegation should go. For example, if we hire external consultants, the department lead can expect the team to come up with a potential solution, but they keep budget authority and need to sign it off. That would be delegation level 3.

To clarify this, every team member and the manager (or whichever role is accountable for the decision being discussed) gets a deck of cards. Now everyone decides how far they would expect the accountable person to delegate that decision to the team.

As with planning poker, this usually leads to good discussions and clarifications.
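
For reference, delegation poker distinguishes seven levels of delegation (after Appelo's "Managing for Happiness", listed in the references below). Here is a sketch of them as an enumeration, with the level-3 example from above in mind:

```typescript
// The seven delegation levels used in delegation poker (Appelo, 2016).
enum DelegationLevel {
  Tell = 1,     // the manager decides and announces the decision
  Sell = 2,     // the manager decides, then convinces the team
  Consult = 3,  // the manager asks the team for input, then decides
  Agree = 4,    // manager and team decide together
  Advise = 5,   // the team decides after hearing the manager's advice
  Inquire = 6,  // the team decides; the manager asks about it afterwards
  Delegate = 7, // the team decides fully on its own
}
```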

Step 4: Inspect & Adapt

I would suggest that you create an information radiator: a delegation board on the wall where you can see at all times what your delegation rules are. If you need to change them, do it; and if you need to add further key decision areas, do it on the job, for example during team retrospectives.

Hints and Tips

I would not necessarily start with this exercise before you set up the teams; instead, explain to the teams that you will collect key decision areas on the job whenever you see that there is a problem. This avoids endless discussions before there actually is a problem.

If you face a situation where there is no real wish to give autonomy to the team, stop the exercise and work on the reasons for that first.

The reason why I introduced the RACI matrix next to delegation poker is that delegation poker only allows for two parties, such as a manager and the team, while the RACI matrix can show multiple stakeholders at once.

The goal is not to draw the lines but to reveal hidden assumptions and misconceptions.

Thanks to Jenni Jepsen, who held an inspiring keynote at Agile Tour Vienna 2019 that motivated me to write this blog post.

Check out the upcoming training Intent-Based Leadership with Jenni Jepsen.

References

Appelo, J. (2016). Managing for Happiness: Games, Tools, and Practices to Motivate Any Team. John Wiley & Sons, Inc.

Denning, S. (2018). The Age of Agile: How Smart Companies Are Transforming the Way Work Gets Done. Retrieved from http://sbiproxy.uqac.ca/login?url=http://international.scholarvox.com/book/88852686

6 Questions about Intent-Based Leadership with Jenni Jepsen

Never heard of Intent-Based Leadership? Then this post is for you. Jenni Jepsen consults, writes, and speaks worldwide about leadership, teams, and how to make transformations work. She was the keynote speaker at Agile Tour Vienna in 2019 and gives a two-day remote course on "Essential Intent-Based Leadership" this September.

We reached out to Jenni and asked her six questions about Intent Based Leadership. If you are a manager, director, leader who wants to create environments where people succeed, then read on!

If someone has never heard of Intent-Based Leadership before, how would you describe it in 150 words?

Intent-Based Leadership™ is fundamentally the language leaders and teams use to communicate at work – the words we use with each other and how we ask questions – in order to give control to people, so that those who are closest to the information are the ones making the decisions. With this leadership paradigm, team members come to the leader describing what they see, what they think, and what they intend to do. With Intent-Based Leadership, the culture of the organization shifts from one of permission and waiting to one of intent and action. Not only does effectiveness increase, people also feel motivated and are happier at work.

As work becomes more cognitive and less physical, Intent-Based Leadership offers a how-to for organizations to redefine what leadership means in a way that creates a workplace where the passion, motivation, engagement, creativity and intellect of each member is maximized.

Are you, as a manager or head of an agile organization, tired of always having to have all the answers? Check out the two-day remote training course Essential Intent-Based Leadership, September 2020 in Vienna.

How, when, and by whom was the concept of Intent-Based Leadership developed?

The concept of Intent-Based Leadership is the direct result of how David Marquet, a former U.S. Navy submarine captain, turned his ship, the USS Santa Fe, from worst to first in the U.S. Navy. David wrote an amazing book on how it all came to be: Turn the Ship Around!. It's a great story, even if you skip the leadership tips! When David took over command of the USS Santa Fe, it was at the last minute. He only had three weeks to learn everything about the ship – an impossible task. When he took command, he quickly found out that if he followed the old way of working, with him giving commands in an environment where he didn't know everything there was to know about the ship and people following those commands blindly, people might get killed. This was when he decided to keep quiet and ask others to come to him with what they intended to do.

People implementing Intent-Based Leadership don’t have to have all the answers. When we stop “getting people to do things” and instead give control while increasing competence and clarity, we gain more engaged people who have the competency to make decisions, feel ownership and take responsibility.

Practical outcomes of Intent-Based Leadership

How is Intent-Based Leadership related to Agile? Is the methodology based on Agile, and can it be applied only in an agile organization?

When I first read Turn the Ship Around! in 2012 after the book was published, my partner and I (in goAgile) thought “This is it! This is a way of leading that supports Agile ways of working.” Because so much of Agile is about team members taking responsibility, about being self-organizing, about being self-directed and having clarity about where we’re headed and why, in order to make better decisions at every level in the organization. David actually did not know about the Agile community when we first contacted him. Since then, things have, obviously, taken off for David and for Intent-Based Leadership. We’re not the only ones who can see the advantages IBL brings around how to give control, and increase organizational clarity and technical competencies. In our experience, organizations that combine Agile transformation with Intent-Based Leadership reach their goals faster. It’s because IBL offers real tools to nudge people into new behaviors, and that is the key to lasting change. 

Attend our two-day remote training course and learn how to move in an Agile way to a culture where people take initiative and ownership. September 2020 in Vienna.

Can you give an example of how language increases the feeling of empowerment?

There is a lot of talk in organizations about how to empower people. What we know from neuroscience research, is that the only thing we can do is create an environment where people feel empowered. Empowering others is a contradictory statement. It says that I have the power to empower others. That is NOT what we are going for. We want people to have influence and control. And this happens when leaders create an environment where people feel empowered. 

Now, with that said… "I intend to" are the three most amazing, empowering words we can use to increase the feeling of empowerment. Rather than asking permission, just saying "I intend to…" works on both sides. For the person saying it, it simply informs others about what they will do. For others, it provides information ahead of time, so there is an opportunity to give more information before the action occurs. Of course, there are lots of other examples of language increasing empowerment; "I intend to" is my favorite.

What is an example of a leadership tool that can be used to create an environment to adopt Intent-Based Leadership?

So one of the great tools from Intent-Based Leadership is called the Ladder of Leadership. It provides some simple questions leaders can ask based on how their people talk with them. For example, if someone says “Please just tell me what to do.” That person is at the lowest level on the Ladder. The leader wants to move them up the Ladder so that they will be more comfortable taking control. The question the leader asks is: “What do you see?” This is the next step on the Ladder. This allows the person to answer in a psychologically safe environment. The leader is asking for observations. Rather than jumping to “What do you intend to do?”, the leader needs to help people up the Ladder gradually. In that way, people become safe with taking more control, and over what is usually a very short time, you can move people up to the level where they come to you with what they intend to do.

Ladder of Leadership

Reading tips: if I am thinking about attending the training, what should I read or watch to be best prepared? (Blog posts, YouTube videos, etc.)

Of course, reading David’s book, Turn the Ship Around! is a great idea.

Here are a couple of other links to watch and read:

Attend our two-day remote training course and learn how to move in an Agile way to a culture where people take initiative and ownership. September 2020 in Vienna.