Maintaining Legacy Applications at TechTalk: Interview with Software Architect Thomas Korosa

In our interview series “Get to know TechTalk” we regularly introduce TechTalk employees. This time, software architect Thomas Korosa tells us how he deals with the challenges of maintaining legacy applications and what makes his way of working special when taking over and supporting legacy applications.

What is your role and field of activity at TechTalk?

Besides programming, my tasks include supporting and developing the other developers on the team on the one hand, and supporting the internal Product Owners and our customers on the other. I support the Product Owner in planning and in analyzing requirements, especially when they do not fit seamlessly into the existing application.

Typical topics in the maintenance of legacy applications for us are:

  • evaluating and retroactively documenting a domain
  • finding quick wins for refactoring that improve maintainability and can be implemented through careful, gradual restructuring
  • dealing with applications that often contain a high proportion of legacy code
  • increasing test coverage after the fact.

So the field of activity is very diverse.

What do you particularly value in the development of and collaboration within your developer team?

When it comes to developing developers, I rely on mentoring. The support is holistic and covers technologies and programming patterns as well as building an understanding of what coding standards and their consistent application bring, or how knowledge sharing affects a team's efficiency and effectiveness – and, in the end, its motivation. Ultimately, my goal is to establish sustainability in projects, both technically and organizationally. This also means that team members can solve many tasks independently from the start – an important prerequisite for maintaining legacy applications efficiently as a team.

What do you focus on when taking over legacy applications?

The focus is on carefully modernizing the existing architecture rather than on building new architectures. Using refactoring to modernize the code step by step in the course of further development, while keeping the application running stably, requires specific knowledge and follows different criteria for handling code than a greenfield development does. The same applies to modernizing infrastructure or components.

Isn't supporting legacy applications quite costly?

It is true that supporting legacy applications brings a few challenges with it, for example frequently low test coverage and insufficient documentation. Often there are no domain or technical contacts on the customer's side who could support us during the takeover. All of this increases the effort required for changes to the software. However, we have a lot of experience and routine with exactly these challenges and can therefore support our customers particularly well.

What do you look at first when a new customer comes to you with a legacy application?

In the initial analysis, our goal is to identify the minimum necessary restructuring work and to find quick wins that are easy to realize, make maintenance easier, and bring improvements for the users. We pay particular attention to business value and to incremental change. This means we can relatively quickly give a realistic recommendation on how the software should continue to be operated, taking the domain, the technologies, and the business value into account.

How does a new customer benefit from managing their legacy applications with TechTalk?

I believe the biggest advantage for a new customer is that we know what we need in order to take over their software. We integrate it into our existing processes, so that the operation of the application continues to be guaranteed in the usual quality and in a cost-efficient way.

Organizationally, our customers benefit from the fact that we keep the code in order with state-of-the-art tools (DevOps) and keep the organizational overview with our agile approach, which is tailored to maintenance. We work with a Kanban board and synchronize regularly with the Product Owners in a two-week sprint rhythm.

The advantage for customers is that uniform standards are implemented and lived across all projects, ensuring that we can work together transparently and efficiently as a team. This transparency also helps the Product Owner to optimally support the effectiveness of our work, so that we implement the right things and can regularly adjust the priorities in the backlog together with the customer whenever they wish. This is particularly relevant in maintenance, because priorities tend to shift faster for applications that are already in production. For the customer, this improves cost efficiency and plannability. It also makes it easier to bring additional team members on board, which in turn reduces the problem of contacts who are no longer available.

And what are the advantages for the customer from a technical perspective?

We offer our customers advantages in technical terms as well. Because we keep an eye on current technologies, we can suggest suitable components to our customers, such as Kibana for system monitoring. Another example is that we notice when a component is deprecated, that is, outdated, or when security vulnerabilities become known. The customer does not have to take care of such issues themselves.

In what way is the collaboration with customers in maintenance different from new development?

Customer contact differs from new development mainly in that we support up to five different customers in every sprint and often work closely together on technical matters as well. The topics range from analyzing a problem in the infrastructure, such as the correct invocation of an interface, to advising on what the optimal solution for the user is from a domain and usability perspective.

What motivates you most in your work with legacy applications?

Three aspects of my work motivate me the most.

First, when I succeed in building a good relationship with my customers and working with them in a true partnership. When that works, it motivates me especially, because in maintenance, problems often have to be solved quickly, and a good relationship with the customer is particularly important then.

Second, in my role I can help junior colleagues develop, let them grow through the variety of projects – and I always learn something myself in the process.

Third, I like analyzing: I try to understand the domain and the implementation of applications, and I am pleased when I can close gaps that arose because the knowledgeable people on the customer's side – developers or product managers – are no longer available.

How do you stay on the ball?

I make sure to have a constant exchange with colleagues – including juniors – and I like to let them challenge me.

It is also important to me to keep the focus on state-of-the-art quality standards, even in legacy projects. Last but not least, the variety of the solutions found in legacy applications teaches you to be flexible and to come up with efficient and effective solutions.

Which soft skills have you developed further in the course of your work?

The most important things I have learned are organizing several parallel tasks efficiently, dealing with many different points of view, and mediating between opinions. Listening and reflecting lead to a shared, holistic view and make the best solution possible.

Would you like to learn more about TechTalk's methods for maintaining legacy applications? Thomas Korosa is available for further questions via Xing or email.

Are you running legacy software that needs to be modernized?

Take advantage of our free analysis as part of the TechTalk Relax Application Management Service.

The Journey to Agile Tour Vienna 2020 and beyond!

The successful 10th anniversary edition of Agile Tour Vienna is now over. While we are already excited about planning the next one, we’d like to share insights into the long and bumpy road towards this year’s conference.

After the lockdown in March and the world standing still for two months, the fate of the Agile Tour Vienna 2020 was hanging by a thread as planning an in-person conference like in previous years no longer seemed reasonable.

In May, we’d reached the point where a crucial decision had to be made. How realistic was it that the worldwide COVID-19 pandemic would be under control by the time the conference was scheduled to take place? It quickly became clear that this was highly unlikely. Yet cancelling the conference was at no time an option for us!

Planning an online event

The only logical conclusion was to switch to a remote event and start planning from ground zero again. We set ourselves the following goals:

  • The line-up should be something special – for this we’ve contacted the keynote speakers from past years.
  • The program has to contain something new as well – that’s why we’ve integrated the hybrid panel discussion format for the first time.
  • As this was the 10th anniversary, we wanted to thank our community for their loyalty – so we made the tickets free of charge and let attendees give a voluntary contribution if they wanted to support the time and effort of our speakers.
  • A major goal was to come as close as possible to a real-life conference experience – after intensive research we chose Hopin to help us provide the conference spirit.

What might seem simple at first turned out to be quite complex indeed. After throwing several agenda drafts overboard, a small circle of the organisation team was finally able to fix a schedule a month before the conference! Coordinating speakers living all across the globe was one big task in itself. We executed dry runs with every speaker prior to the conference to make sure they were familiar with the platform’s technical requirements and that the event day would run smoothly.

Who will be responsible for the moderation? Without hesitation, Robert Finan and Richard Brenner, two of TechTalk’s Agile Coaches and essential members of the organization committee, volunteered to take on this task. This required being familiar with our speakers’ work and making the panel discussions interesting and dynamic.

The technical set-up was tested in numerous dry runs at TechTalk’s DC Spaces event location, which is perfectly equipped for remote and hybrid meeting formats. We were able to apply the know-how acquired over the past months through a number of online trainings and events that we organized. Nevertheless, this was still a completely new experience and an experiment for us to organize a remote conference of this scale. After all, we wanted the speakers as well as the viewers to have a pleasant experience.

The conference day is finally here

Finally, the conference day was here and we were happy that several panelists joined the organization team at the conference site. After Robert’s opening speech, in which he described all the organizational struggles pictured in this blog post, the long-awaited talks and panel discussions could start.

Below you will find the list in chronological order. With a click, you can watch the video recordings, too!

Looking back, we are very happy about how successful the 10th anniversary of the Agile Tour Vienna turned out. We got 558 registrations and a show rate of 68% with participants from all over the world.

The speakers did an outstanding job, the participants had a great exchange in the chat, the technical set-up worked perfectly, and the collected feedback was very positive and helpful!

That is what participants said:

“You all did a great job, thanks for not cancelling it and getting through all complicated stuff to organise it in a different way. Congrats to the organisation team! You did a great job!”

“Thanks for organising this, and keeping it up even in 2020 conditions. Got me excited about this awesome community once again!”

“Moderation was excellent – the guests brought interesting insights, I especially enjoyed the TRAFO-Talk”

“Great way for online-networking – randomly chosen couples are really fine to talk to unknown people”

Check out what attendees tweeted about this year’s Agile Tour Vienna. #ATVIE20

With Hopin, we’ve also chosen a tool that enables an entertaining way of networking. During the breaks, participants could be paired randomly for five-minute chats. It was important for us to use this feature to encourage the kind of sharing and conversations that you might normally have at an onsite conference.

The format in which we will be able to hold the conference next year is yet to be decided. Whether it will be onsite, online, or hybrid, thanks to our community and speakers we are sure to create another great event.

Until then, you can watch the entire 2020 conference on YouTube. Click here for the playlist.

Cheers, see you next year :-)

How does pricing work for agile contracts?

When it comes to pricing for software contracts, two opposing interests collide: the contractor wants to achieve the highest possible hourly rate for the project, while the client wants to keep project costs as low as possible and get the maximum benefit from their budget. Of course, this is only a first, contractual-level view, because reality does not always look like this.

Nevertheless, we want to find a contractual model that takes these interests into account and weighs them equally on paper. We show you possible pitfalls as well as an alternative solution approach from which both parties in an agile IT project benefit.

Risk sharing: A key aspect of software contracts

When working together in agile projects, risk is inevitable for both parties: even if a detailed briefing takes place and requirements appear to be clear, changes can and should be allowed to occur during implementation. An agile process model is used to limit this risk.

A substantial risk that can occur is that implementing a user story may take significantly longer than planned. It is, therefore, important to consider the possibility of risk-sharing before signing a software contract. After all, the contractor and the client bear a different share of the risk, depending on their contract.

We will show how the risk distribution works, using two common pricing models (T&M or price per team hour and price per story point):

Price per team hour

A widely used model is billing based on team hours. This is a classic T&M procedure where the entire risk lies on the client’s side. The client pays the contractor for the hours worked, regardless of the result of the work performed.

Price per Story Point

In this model, the contractor gets paid upon completion of a story point. This should motivate the contractor’s team to work efficiently. The risk with this model clearly lies with the contractor: if no completed story points are delivered, no payment is made. The risk that clearly remains with the client is that they don’t have working software, meaning the “time to market” suffers as a result.

As you can easily see, these two models have a major disadvantage: they distribute the financial risk of the collaboration very unevenly. The risk lies mainly with one party.

But there is another way. In the video made for this blog post, TechTalk’s Agile Coach Richard Brenner explains the differences in detail and presents an alternative approach.

Combining the client’s and the contractor’s interests

TechTalk has been working on solving this problem for many years. In order to spread the risk equally among both parties, we have developed a model that combines the two methods: price per team hour and price per story point.

Our model, “Pay per Story Point and Hour,” splits the project risks between contractor and client by combining the following components:

  1. Price per Story Point: fixed-price share per delivered functional unit, according to the solution complexity assumed at the beginning
  2. Reduced Price per Story Point: reduced fixed price per delivered functional unit, for example, if an unforeseen story point is added
  3. Price per Team Hour: variable price share per team hour actually worked by the contractor.

Let’s look at a concrete example to understand this model and its effects in different scenarios. 

Calculation example for the combined TechTalk model

Let’s make the following assumptions for this example:

  • Experience shows that the effort for a story point for a team in a project is 8 hours. 
  • The price for a team hour is 100 EUR. 

Thus, the calculated sales price for a story point is 800 EUR (8 * 100 EUR per hour). 

This is divided into:

  • A fixed share of 400 EUR per Story Point
  • A variable share of 50 EUR/hour ((800 EUR – 400 EUR) / 8 hours per story point)

In addition, a reduced price for an unforeseen increase in complexity is set at 100 EUR per story point.

In our example, we want to split the share equally. The figure below shows the effects in comparison to billing based exclusively on story points or hours. When invoicing exclusively by story points or hours, the full risk always lies with one of the contracting parties. This is not the case with the combined model. In this model, the risk is shared.

How can the risk be divided?

In the next step, let’s take a look at how this splitting affects three different scenarios. Let’s assume that the total number of Story Points is 1,000.

1st scenario: Exact adherence to the plan

The following services were rendered: 

  • 1,000 Story Points
  • 8,000 hours

These were accounted for as follows:

  • 1,000 Story Points * 400 EUR = 400,000 EUR
  • 8,000 hours * 50 EUR = 400,000 EUR

This results in total costs of 800,000 EUR and an average selling price of a team hour of 100 EUR. The initially estimated costs and the sales price per team hour are thus met for both sides.

2nd scenario: 5% less complexity, 10% less effort

The following services were rendered: 

  • 950 Story Points (5% reduction)
  • 6,840 hours (10% reduction of hours per Story Point * number of Story Points delivered)

These were accounted for as follows:

  • 950 Story Points * 400 EUR = 380,000 EUR
  • 6,840 hours * 50 EUR = 342,000 EUR

This results in total costs of 722,000 EUR. The project costs decrease for the client. The average selling price of a team hour is around 106 EUR, so the sales price per team hour increases for the contractor.

3rd scenario: 30% more complexity, 15% more effort

The following services were rendered: 

  • 1,300 story points (30% more complexity)
  • 15,600 hours (15% more effort per Story Point * number of Story Points delivered)

These were accounted for as follows:

  • 1,000 Story Points * 400 EUR = 400,000 EUR
  • 300 Story Points * 100 EUR = 100,000 EUR (reduced price for unforeseen complexity)
  • 15,600 hours * 50 EUR = 780,000 EUR

This results in total costs of 1,280,000 EUR. The total costs increase for the client. The average selling price of a team hour is around 82 EUR and thus decreases for the contractor.
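
To make the arithmetic of the three scenarios easy to re-check, here is a minimal calculation sketch of the combined model. The parameter values come from the example above; the type and function names are ours and purely illustrative.

```typescript
// Combined "Pay per Story Point and Hour" model, using the example parameters:
// 400 EUR fixed share per planned story point, 100 EUR per unforeseen story point,
// 50 EUR variable share per team hour actually worked.
interface Scenario {
  name: string;
  plannedStoryPoints: number;    // story points billed at the full fixed share
  unforeseenStoryPoints: number; // extra story points billed at the reduced share
  hours: number;                 // team hours actually worked
}

const FIXED_PER_STORY_POINT = 400;
const REDUCED_PER_STORY_POINT = 100;
const VARIABLE_PER_HOUR = 50;

function invoice(s: Scenario): void {
  const total =
    s.plannedStoryPoints * FIXED_PER_STORY_POINT +
    s.unforeseenStoryPoints * REDUCED_PER_STORY_POINT +
    s.hours * VARIABLE_PER_HOUR;
  const averageHourlyRate = total / s.hours;
  console.log(`${s.name}: ${total} EUR total, ~${averageHourlyRate.toFixed(0)} EUR per team hour`);
}

const scenarios: Scenario[] = [
  { name: "1st scenario (plan met)", plannedStoryPoints: 1000, unforeseenStoryPoints: 0, hours: 8000 },
  { name: "2nd scenario (-5% SP, -10% effort)", plannedStoryPoints: 950, unforeseenStoryPoints: 0, hours: 6840 },
  { name: "3rd scenario (+30% SP, +15% effort)", plannedStoryPoints: 1000, unforeseenStoryPoints: 300, hours: 15600 },
];
scenarios.forEach(invoice);
// Expected totals: 800000 EUR (100 EUR/h), 722000 EUR (~106 EUR/h), 1280000 EUR (~82 EUR/h).
```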

Putting it simply, this results in the following consequences for contractor and client, depending on the respective scenario:

Risk sharing for contractor and client

The importance of checkpoints

An important component of the combined model is the checkpoint. At this checkpoint, you check whether the assumptions made are still correct. For example, the checkpoint can be set after six sprints. Questions like these are to be clarified here:

  • Is the assumed efficiency of the implementation correct? 
  • Does the complexity increase significantly in the course of detailing? 
  • Were we able to check the initial assumptions and mitigate technical risks?

Experience and trust are crucial

It is important that you know the effort per story point and the speed of the development team (velocity). Therefore, this particular project setup should start with at least an initial phase, so that a realistic assessment of story points becomes possible. The combined model is thus a model based on experience. And on trust.

If experience and trust are given, this model is well suited to distribute the risk equally between the two parties. The combined model creates two contracting parties that can communicate and work with each other on equal terms.


Do you have any questions?

Do you want to know more about pricing for software contracts?

Do you want to develop an environment in your company that enables working with agile methods?

Please contact Richard Brenner via email or on LinkedIn with your questions.

Stop giving feedback, ask for it instead. Watch Jenni Jepsen’s webinar.

For many of us, being a leader means having every little thing under control, always knowing what is going on, and telling everyone what to do. However, this image of the role does not equal successful leadership.

Many psychologists and managers agree that a new kind of leader should be able to shift tasks and even give others control over something. In this scenario, not only does the leader make his or her life more comfortable, but the team also feels more motivated, responsible, and eager to act. In other words, it is crucial for a leader to set the right environment for others to excel and act to the maximum extent of their creativity and intellect.

However, this is easier said than done, because the over-controlling behavior is hardwired in our brains.

Neuroscience shows that “the language we use affects how our brains wire.”

That’s why we need to relearn and train our brains to behave differently: to trust, to ask for feedback, to learn that it’s okay not to have all the answers. 

This kind of leadership is called “Intent-Based Leadership”. Jenni Jepsen will explain how and why it works from the perspective of neuroscience during the workshop on September 28-29th.

The crucial part of this leadership methodology is feedback. Feedback is useful and helpful. However, we need to stop giving it. Neuroscience shows that feedback works when we understand and believe that it will lead to good things for us.

We need to learn to ask for feedback because, in this case, it’s our choice to take in and use it for growth and improvement. We are thankful for the feedback then. It makes us better – that’s the point of feedback. Creating an ask-for-feedback mindset is key to it.

This way, people will feel free to share their thoughts and ideas. In order to do that, team members should have access to information. This will lead to a higher motivation level inside a working group.

Learn how to create an ask-for-feedback mindset, and why it can help to achieve excellence in your organization in this webinar by Jenni Jepsen. 

As a primer for the upcoming training course on Intent-Based Leadership, you can rewatch the online meetup we held with Jenni Jepsen in May 2020.

Hand-picked related content:

Create an Ask-for-Feedback Mindset Workshop with Jenni Jepsen from TechTalk Software AG on Vimeo.

Questions About Test Frameworks: Q&A Part #3 with J.B. Rainsberger

This is the third chapter of our three-part Q&A blog series with J. B. Rainsberger. In this chapter he addresses questions about test frameworks. The first chapter and the second chapter are on our blog in case you missed them.

On June 3, 2020 J.B. Rainsberger spoke in our remote Intro Talk about managing the various kinds of uncertainty that we routinely encounter on projects that involve legacy code. He presented a handful of ideas for how we might improve our practices related to testing, design, planning, and collaboration. These ideas and practices help us with general software project work, but they help us even more when working with legacy code, since legacy code tends to add significant uncertainty and pressure to every bit of our work. Fortunately, we can build our skill while doing everyday work away from legacy code, then exploit that extra skill when we work with legacy code.


J. B. Rainsberger helps software companies better satisfy their customers and the business that they support.

Our next remote course, Surviving Legacy Code, runs from 12 – 15 April 2021.


If the code base is too old even for any available test frameworks, how do you handle it?

**Testing does not need frameworks. Testing never needed frameworks.** You can always start by just writing tests and refactoring them. If you do this long enough, you will extract a testing framework. If you’ve never tried it, then I recommend it! Kent Beck’s _Test-Driven Development: By Example_ included this exercise.

Every test framework began life with `if (!condition) { throw Error("Test failure.") }`. If you can write this, then you can build a testing framework; if this suffices, then you don’t need a testing framework. Start there!
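
As a minimal sketch of that starting point (the `check` helper, the `sum` function, and the test names are ours, purely for illustration):

```typescript
// A test "framework" reduced to its essence: an assertion that throws on failure,
// plus plain functions that we call and report on.
function check(condition: boolean, message: string): void {
  if (!condition) {
    throw new Error(`Test failure: ${message}`);
  }
}

// Production code we want to protect (illustrative only).
function sum(values: number[]): number {
  return values.reduce((total, value) => total + value, 0);
}

function testSumOfEmptyListIsZero(): void {
  check(sum([]) === 0, "sum of an empty list should be 0");
}

function testSumAddsAllValues(): void {
  check(sum([1, 2, 3]) === 6, "1 + 2 + 3 should be 6");
}

// The "test runner" is just a list of functions and a loop.
for (const test of [testSumOfEmptyListIsZero, testSumAddsAllValues]) {
  test();
  console.log(`PASS ${test.name}`);
}
```

Remove duplication in tests like these for long enough and a small framework falls out, which is exactly the exercise described above.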

If you can execute one part of the system in isolation from the rest, then you can write unit tests. In the early days of web browsers, we could only execute Javascript in the browser, but even so, we could (and did!) write unit tests without frameworks. We merely had to run those tests in a browser window. Eventually, someone decided to run Javascript outside the browser, which made it easier to write microtests for Javascript code. This made it _easier_ to write tests, but we were writing tests long before NodeJS existed.

If you can invoke a function (or procedure or division or block of code) and you can signal failure (such as by raising an error), then you can write tests without waiting for someone else to build a framework.

In addition, you don’t need to write your tests in the same language or environment as the running system. Golden Master technique helps us write tests for any system that offers a text-based interface. Any protocol could help us here: for example, think of HTTP as “merely” a special way of formatting requests and responses with text. If you have (or can easily add) this kind of interface or protocol to your system, then you can write tests in any language that might offer a convenient test framework. Use Python to test your COBOL code. Why not?
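
As a rough sketch of that idea, here written in TypeScript for a Node.js environment (the shell script name, its arguments, and the file path are placeholders, not parts of any real system):

```typescript
// Golden Master over a text interface: run the legacy system, capture its text
// output, and compare it against a previously approved "golden" file.
import { execFileSync } from "node:child_process";
import { existsSync, readFileSync, writeFileSync } from "node:fs";

const GOLDEN_FILE = "golden-masters/monthly-report.txt";

// The system under test can be written in any language, as long as it emits text.
const actual = execFileSync("./run-legacy-report.sh", ["--month", "2020-06"], {
  encoding: "utf8",
});

if (!existsSync(GOLDEN_FILE)) {
  writeFileSync(GOLDEN_FILE, actual); // first run: record the current behavior
  console.log("Golden master recorded; review it by hand before trusting it.");
} else if (readFileSync(GOLDEN_FILE, "utf8") !== actual) {
  throw new Error("Output differs from the golden master: behavior has changed.");
} else {
  console.log("PASS: output matches the golden master.");
}
```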

Finally, not all testing must be automated. As I wrote earlier, programmers have a strong habit of forgetting alternatives to techniques that they’ve found helpful. If you don’t know how to automate your tests easily, then don’t automate them yet. Instead, make them repeatable and document them. One day, someone will have a good idea about how to automate them.

You may have to write your own test framework, but it can prove a daunting task.

In addition to what I wrote in the previous answer, I encourage you to follow the general advice about building any software with a Lightweight (Agile, Lean, …) approach: build the first feature that you need, then start using it, then add more features one at a time. You don’t need to build a fully-featured testing framework before you start to benefit from it. Start with `if (!assertion) throw Error()` and then use it! The testing framework SUnit was built incrementally. All the testing frameworks you know began from there. You can do it, too. Merely start, then take one step at a time.

You also need this refactoring-without-tests skill to effectively refactor your tests!

Maybe! I don’t say you _need_ it, but it would probably help you. Your production code helps you to refactor your tests: if you change your tests and they now expect the wrong behavior, then your production code will fail that test for “the right reasons”. It doesn’t provide perfect coverage, but it helps more than you might expect. In that way, the production code helps to test the tests.

There are testing frameworks for COBOL and NATURAL. What could be older?

Indeed, the “framework” portion of testing relates to identifying tests, collecting test results, and reporting them in a unified way, as well as adding standard mechanisms for “set up” and “tear down”. We don’t need those things to start writing tests, although eventually we will probably want to have them. **Simply start writing tests, then remove duplication in any way that your programming language allows.** I don’t know what might be older than COBOL or NATURAL.


➡️ Also read our other two Q&A blog posts with J.B. Rainsberger: Part #1 “Managing the Uncertainty of Legacy Code” and Part #2 “The Risks Related to Refactoring Without Tests”! Follow us on Twitter or LinkedIn to get new posts.


The Risks Related to Refactoring Without Tests: Q&A Part #2 with J.B. Rainsberger

This is the second chapter of our three-part Q&A blog series with J. B. Rainsberger. In this chapter he addresses questions about the risks related to refactoring without tests. The first chapter and the third chapter are on our blog in case you missed them.

On June 3, 2020 J.B. Rainsberger spoke in our remote Intro Talk about managing the various kinds of uncertainty that we routinely encounter on projects that involve legacy code. He presented a handful of ideas for how we might improve our practices related to testing, design, planning, and collaboration. These ideas and practices help us with general software project work, but they help us even more when working with legacy code, since legacy code tends to add significant uncertainty and pressure to every bit of our work. Fortunately, we can build our skill while doing everyday work away from legacy code, then exploit that extra skill when we work with legacy code.


J. B. Rainsberger helps software companies better satisfy their customers and the business that they support.

Our next remote course, Surviving Legacy Code, runs from 12 – 15 April 2021.


What should we say to project planners who are afraid to let us do refactoring without tests because some folks on our team are not very good at refactoring and make mistakes? How can we convince them that it can work for good programmers?

First, I recognize that if I were the project planner, then I would worry about this, too! I probably don’t know how to judge the refactoring skill of the programmers in the group, so I wouldn’t know whom to trust to refactor without tests. Moreover, I probably can’t calculate the risk associated with refactoring without tests, so I wouldn’t know when to trust _anyone_ to refactor without tests, even if I feel confident in their skill. Once I have thought about these things, it becomes easier to formulate a strategy, because I can ask myself what would make _me_ feel better in this situation? I encourage you to ask yourself this question and write down a few ways that you believe you could increase your confidence from the point of view of the project planner. I can provide a few general ideas here.

I encourage you to build trust by telling the project planner that you are aware of the risks, that you care about protecting the profit stream of the code base, and that you are prepared to discuss the details with them. It often helps a lot simply to show them that you and they are working together to solve this problem and not that you are doing what helps you while creating problems for them.

I would ask the project planners what specifically they are worried about, then match my strategies to their worries. For example, microcommitting provides one way to manage the risk of refactoring without tests, because it reduces the cost of recovering from a mistake. At the same time, if the project planner worries about different risks than the ones I have thought about, then my strategies might not make them feel any more secure! If I know more about which risks affect them more or concern them more, then I can focus my risk-management work on those points, which also helps to build trust.

I would emphasize that we do not intend to do this as a primary strategy forever. We don’t feel comfortable doing it, either! Even so, we _must_ make progress _somehow_. We refactor without tests because it would be even more expensive to add “enough” tests than to recover from our mistakes. Of course, we have to be willing to explain our judgment here and we have to be prepared that we are wrong in that judgment! I am always prepared to take suggestions from anyone who has better ideas, but outside of that, they hired me to do good work and make sound decisions, so if they don’t trust me, then I must try to earn their trust or they should give my job to someone that they trust more. I don’t mean this last part as a threat, but merely as a reminder that if they hire me to do the job, but they never trust me, then they _should_ hire someone else!

How about pair-refactoring?

I love it! Refactoring legacy code is often difficult and tiring work, so pair-refactoring fits well even in places where “ordinary” pair programming might not be needed. Refactoring legacy code often alternates periods of difficulty understanding what to do next with long periods of tedious work. Working in pairs significantly increases the profit from both of those kinds of tasks.

You also need this refactoring-without-tests skill to effectively refactor your tests!

Maybe! I don’t say you _need_ it, but it would probably help you. Your production code helps you to refactor your tests: if you change your tests and they now expect the wrong behavior, then your production code will fail that test for “the right reasons”. It doesn’t provide perfect coverage, but it helps more than you might expect. In that way, the production code helps to test the tests.

Moreover, tests tend to have simpler design than the production code. This means that we might never need to refactor tests in certain ways that feel common when we refactor production code. I almost always write tests with a cyclomatic complexity of 1 (no branching), so the risk when refactoring tests tends to be much lower than when refactoring legacy code. This makes refactoring tests generally safer.


➡️ Also read our other two Q&A blog posts with J.B. Rainsberger: Part #1 “Managing the Uncertainty of Legacy Code” and Part #3 “Questions About Test Frameworks”! Follow us on Twitter or LinkedIn to get new posts.


How Do I Find the Right Agile Software Development Company for My Project?

Price is often the crucial point when it comes to selecting an agile software development partner. Different offers can be easily compared and thus you can make what seems to be a safe choice. At least at first glance. 

But if you do not take the provider’s ability into account or underestimate it, the initial savings can quickly be reversed. For example, the provider may not meet the project’s requirements, and the project team may suffer. If you save at the wrong end, the additional cost and time will be significantly higher than the savings at the beginning.

We will show you which quality criteria are important for an agile approach and how you can select the right agile software development vendor for your agile project with the help of a structured process.

Criteria for selection of an agile software development company 

It is crucial to deal with the topic of quality criteria before starting the selection process. Only this way will you know what to look for during the selection interviews. In our opinion, you should definitely consider the following criteria:

  • Experience in Agility: The more experience an agile software development company has with agile methods, the better. Agile project development can only be successful if an “Agile Culture” is lived.
  • Well-established Team: An established team is preferable to a newly assembled one, because well-established teams can often be much more productive than a new team. For longer projects, you should also take turnover in the team into account. We are familiar with this issue, especially in offshore teams, where individual developers can be replaced at any time.
  • Direct Communication: This factor is particularly important for agile projects. It must be ensured that your experts can work with the provider’s experts directly – ideally in the same scrum team. From our own experience, problems often arise when there are too many handovers between individual team members. Decisions have to be made quickly, without long decision-making processes.
  • Experience in the Domain: The experience in the domain also plays an important role, especially at the team members’ level. International references are often mentioned, but the corresponding team does not have this experience. 
  • Culture Fit: It is important to understand the mindset and development processes of the provider. You should check in advance to what extent this fits with your own company culture and enables a close cooperation.
  • Degree of Dependency: It makes sense to consider how to keep the extent of dependence on your provider low in the further course of the project. One possibility is to rely on open standards. This makes it easier to change the supplier at a later point or to train your own developers.
  • System Architecture: The architecture of a new solution must fit into the existing system landscape. A completely new system with new technologies increases complexity and makes maintenance more difficult later on if the skills available are not sufficient within the organization. This also increases the dependency on the agile software vendor.
  • Maintenance: Once the software has been developed, the maintenance phase begins. It is well known that this phase lasts much longer than the development phase. Therefore, you should review the agile software supplier’s strategies for this phase and test how quickly they react to unforeseen errors such as production incidents.

In addition to the quality criteria, as a customer you should know your most important NFRs (non-functional requirements), such as security, scalability, and testability. Only then can you communicate them directly and examine during the selection process whether providers can fulfill them. Otherwise, security requirements may end up being implemented at significantly higher cost or, in the worst case, not at all. It is well known that NFRs influence the system architecture more than functional requirements do; in the worst case, a product may not be able to accommodate these NFRs at all.

Find the right agile software development company with this 4-step process

These preliminary considerations provide the basis for a structured process that allows you to select providers based on the most important criteria in four steps. 

1. Make a preliminary selection

When you start a tender for an agile project, you will receive a number of offers. The first step is to preselect the offers. 

The purchasing department usually takes over the pre-selection. Bidders are chosen based on the price and the qualification criteria described above. 

2. Conduct intensive discussions

After the pre-selection, intensive discussions between your experts and the provider’s experts take place. These discussions serve to validate the first impression.

Also, requirements such as NFRs can be addressed at this stage to give the provider a detailed idea of what is expected of them during project implementation.

3. Conduct the prototype phase

The prototype phase is crucial, but it is often not done. Especially if you have not worked with the provider yet, you should not skip this phase under any circumstances. 

Ideally, the prototype phase should be done with several selected providers. The goal of this phase is to assess the collaboration based on the executable software. This will give you a better understanding of whether cooperation with the software development company works and whether all quality criteria can be met. 

Important: During the prototype phase, you should make sure that it is executed with the final team. The employees who will later be responsible for the product development should already work on the prototype in the final constellation during this phase.

4. Start product development

The software created during the prototype phase and the feedback from the experts regarding the cooperation with the provider serve as the basis for decisions that further narrow down the selection. At the end of this phase, you should select the company with which you will carry out product development.

However, before entering the development phase, contract negotiations must be conducted. With agile projects, you should be aware of several pitfalls when drafting the contract. We have summarized essential tips for you to create a solid contractual basis for your agile projects.

Price is not the decisive criterion

Price is usually not the best selection criterion, even if it seems so at first glance. It is important to be aware of the most important quality criteria in advance and validate them in a structured process for potential providers.

The following list of questions will help you to assess the quality of the provider:

  • How much experience with agile methods does the company have?
  • Is it guaranteed that I get a well-established experienced team? Do I have a direct influence on the people who work in my team?
  • Is direct communication between the customer’s and the provider’s team members ensured?
  • Does the provider and, especially, the team implementing my project, have experience in my domain?
  • For longer running initiatives: How high is the fluctuation of people in the team?
  • Do you know your non-functional requirements?
  • How directly can you communicate with the implementation team?
  • Is the implementation team perhaps a part of your own team?
  • How good is the agile provider in the maintenance phase?

Do you have further questions about the selection of agile providers?

Or do you want to develop an environment in your company that enables working with agile methods?

Please contact me via email or on LinkedIn with your questions.


Managing the Uncertainty of Legacy Code: Q&A Part #1 with J.B. Rainsberger

In this first chapter of our three-part Q&A blog series, J.B. Rainsberger addresses questions that came up during his session.

On June 3, 2020 J.B. Rainsberger spoke in our remote Intro Talk about managing the various kinds of uncertainty that we routinely encounter on projects that involve legacy code. He presented a handful of ideas for how we might improve our practices related to testing, design, planning, and collaboration. These ideas and practices help us with general software project work, but they help us even more when working with legacy code, since legacy code tends to add significant uncertainty and pressure to every bit of our work. Fortunately, we can build our skill while doing everyday work away from legacy code, then exploit that extra skill when we work with legacy code.

Our next remote course, Surviving Legacy Code, runs from 12 – 15 April 2021.

J. B. Rainsberger helps software companies better satisfy their customers and the business that they support.

Here are some questions that came up during this session and some answers to those questions.

One of the issues is that the legacy code base consists of useful code and dead code and it’s hard to know which is which.

Indeed so. Working with legacy code tends to increase the likelihood of wasting time working with dead code before we feel confident to delete it. I don’t know how to avoid this risk, so I combine monitoring, testing, and microcommitting to mitigate the risk.

Microcommits make it easier to remove code safely because we can recover it more safely. Committing frequently helps, but also committing surgically (the smallest portion of code that we know is dead) and cohesively (portions of code that seem logically related to each other) helps. If our commits are more independent, then it’s easier to move them backward and forward in time, which makes it easier to recover some code that we mistakenly deleted earlier while disturbing the live code less. We will probably never do this perfectly, but smaller and more-cohesive commits make it more likely to succeed. This seems like a special case of the general principle that as I trust my ability to recover from mistakes more, I feel less worried about making mistakes, so I change things more aggressively. When I learned test-driven development in the early years of my career, I noticed that I become much more confident to change things, because I could change them back more safely. Practising test-driven development in general and microcommitting when working with legacy code combine to help the programmer feel more confident to delete code—not only code that seems dead.

Even with all this, you might still feel afraid to delete that code. In that case, you could add “Someone executed this code” logging statements, then monitor the system for those logging statements. You could track the length of time since you last saw each of these “heartbeat” logging messages, then make a guess when it becomes safe to delete that code. You might decide that if nothing has executed that code in 6 months, then you judge it as dead and plan to remove it. This could never give us perfect confidence, but at least it goes beyond guessing to gathering some amount of evidence to support our guesses.
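
A minimal sketch of such a heartbeat (the function names and the discount rule are invented; in practice you would use whatever logger and monitoring your system already has):

```typescript
// "Someone executed this code" heartbeat: instrument the suspected dead path and
// watch the logs; if the marker never shows up for months, deleting it gets safer.
function logHeartbeat(marker: string): void {
  // A stable, grep-friendly marker makes the messages easy to monitor and count.
  console.warn(`[HEARTBEAT] suspected-dead code executed: ${marker} at ${new Date().toISOString()}`);
}

// The suspected dead code stays in place for now, with the heartbeat added.
function recalculateLegacyDiscount(orderTotal: number): number {
  logHeartbeat("recalculateLegacyDiscount");
  return orderTotal * 0.97; // whatever the old rule happened to be
}
```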

More testing, especially microtesting, puts more positive pressure on the design to become simpler: less duplication, better names, healthier dependencies, more referential transparency. I have noticed a pattern: as I simplify the design, I find it easier to notice parts that look irrelevant and I find it clearer that those parts are indeed dead code. Moreover, sometimes obviously dead code simply appears before my eyes without trying! This makes it safer to delete that code, using the microcommitting and monitoring as a recovery strategy in case I get it wrong.

So not all legacy code adds value to the business… but it is hard to know which part does.

Indeed so. We have to spend time, energy, and money to figure this out. I accept responsibility as a programmer to give the business more options to decide when to keep the more-profitable parts running and to retire the less-profitable parts. As I improve the design of the system, I create more options by making it less expensive to separate and isolate parts of the system from each other, which reduces the cost of replacing or removing various parts. Remember: we refactor in order to reduce volatility in the marginal cost of features, but more-generally in the marginal cost of any changes, which might include strangling a troublesome subsystem or a less-profitable feature area.

The Strangler approach describes incrementally replacing something in place: adding the new thing alongside the old thing, then gradually sending traffic to the new thing until the old thing becomes dead. Refactoring the system to improve the health of the dependencies makes this strangling strategy more effective, which gives the business more options to replace parts of the legacy system as they determine that a replacement would likely generate more profit. As we improve the dependencies within the system, we give the business more options by reducing the size of the smallest part that we’d need to replace. If we make every part of the system easier to replace, then we increase the chances of investing less to replace less-profitable code with more-profitable code.
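
To picture the “gradually sending traffic to the new thing” step, here is a small routing sketch (the invoice example, the names, and the percentage toggle are our own assumptions, not a prescribed mechanism):

```typescript
// Strangler routing facade: callers only ever see createInvoice(); behind it, a
// growing share of calls goes to the new implementation until the old path is dead.
type Invoice = { id: string; total: number };

// Stubs standing in for the old subsystem and its replacement.
function legacyCreateInvoice(orderId: string): Invoice {
  return { id: `LEGACY-${orderId}`, total: 100 };
}
function newCreateInvoice(orderId: string): Invoice {
  return { id: `NEW-${orderId}`, total: 100 };
}

let newImplementationShare = 0.1; // start small, raise it as confidence grows

function createInvoice(orderId: string): Invoice {
  return Math.random() < newImplementationShare
    ? newCreateInvoice(orderId)
    : legacyCreateInvoice(orderId);
}

console.log(createInvoice("4711"));
```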

This illustrates a general principle of risk management: if we don’t know how to reduce the probability of failure, then we try reducing the cost of failure. If we can’t clearly see which parts of the legacy code generate more profit and which ones generate less, then we could instead work to reduce the cost of replacing anything, so that we waste less money trying to replace things. This uses the strategy outlined in Black Swan of accepting small losses more often in order to create the possibility of unplanned large wins.

What do you think about exploratory refactoring? Do you use this technique sometimes?

Yes, I absolutely do! I believe that programmers can benefit from both exploratory refactoring and feature-oriented refactoring, but they need to remain aware of which they are doing at any time, because they might need to work differently with each strategy to achieve those benefits.

When I’m refactoring in order to add a feature or change a specific part of the code, I remind myself to focus on that part of the code and to treat any other issues I find as distractions. I write down other design problems or testing tasks in my Inbox as I work. I relentlessly resist the urge to do those things “while I’m in this part of the code”. I don’t even follow the Two-Minute Rule here: I insist on refactoring only the code that right now stands between me and finishing the task. Once I have added my feature, I release the changes, then spend perhaps 30 minutes cleaning up before moving on, which might include finishing a few of those Two-Minute tasks.

The rest of the time, I’m exploring. I’m removing duplication, improving names, trying to add microtests, and hoping that those activities lead somewhere helpful. This reminds me of the part of The Goal, when the manufacturing floor workers engineered a sale by creating an efficiency that nobody in the sales department had previously thought possible. When I do this, I take great care to timebox the activity. I use timers to monitor how much time I’m investing and I stop when my time runs out. I take frequent breaks—I use programming episodes of about 40 minutes—in order to give my mind a chance to rise out of the details and notice higher-level patterns. I don’t worry about making progress, because I don’t yet know what progress would look like—instead I know it when I see it. By putting all these safeguards in place, I feel confident in letting myself focus deeply on exploring by refactoring. I avoid distracting feelings of guilt or pressure while I do this work. I also feel comfortable throwing it all away in case it leads nowhere good or somewhere bad. This combination of enabling focus and limiting investment leads me over time to increasingly better results. As I learn more about the code, exploratory refactoring turns into feature-oriented refactoring, which provides more slack for more exploratory refactoring, creating a virtuous cycle.

What is your experience with Approval Tests, in cases where writing conventional unit tests might be too expensive?

I like the Golden Master technique (and particularly using the Approval Tests library), especially when text is already a convenient format for describing the output of the system. I use it freely and teach it as part of my Surviving Legacy Code course. It provides a way to create tests from whatever text output the system might already produce.

I get nervous when programmers start going out of their way to add a text-based interface to code that doesn’t otherwise need it, only for the purpose of writing Golden Master tests. In this case, checking objects in memory with equals() tends to work well enough and costs less. I notice it often that programmers discover a helpful technique, then try to use it everywhere, then run into difficulties, then invest more in overcoming those difficulties than they would invest in merely doing things another way. Golden Master/Approval Tests represents merely another situation in which this risk comes to the surface.

I get nervous when programmers start choosing to write integrated tests for code where microtests would work equally well. When programmers think about adding Golden Master tests, they tend to think of these as end-to-end tests, because they often judge that as the wisest place to start. Just as in the previous paragraph, they sometimes fall into the trap of believing that “since it has helped so far, we must always do it this way”. No law prevents you from writing unit tests using Golden Master/Approval Tests! Indeed, some of the participants of my Surviving Legacy Code training independently discover this idea and use it to great effect. Imagine a single function that tangles together complicated calculations and JSON integration: it might help a lot to use Approval Tests to write Golden Master tests for this function while you slowly isolate the calculations from the JSON parsing and formatting. The Golden Master tests work very well with multiline text, such as values expressed in JSON format, but probably make the calculation tests awkward, compared with merely checking numeric values in memory using assertEquals().
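
As a rough illustration of that last point, here is a hand-rolled golden-master check at the unit level (not the actual Approval Tests API; the function under test and all names are invented):

```typescript
// A golden-master-style unit test for a legacy function that still tangles
// calculation and JSON formatting together.
import { existsSync, mkdirSync, readFileSync, writeFileSync } from "node:fs";

// Legacy function under test: computes totals AND formats them as JSON.
function quarterlyReportJson(amounts: number[]): string {
  const total = amounts.reduce((sum, value) => sum + value, 0);
  const average = amounts.length === 0 ? 0 : total / amounts.length;
  return JSON.stringify({ total, average, count: amounts.length }, null, 2);
}

function verifyAgainstGoldenMaster(name: string, actual: string): void {
  mkdirSync("approved", { recursive: true });
  const approvedFile = `approved/${name}.approved.json`;
  if (!existsSync(approvedFile)) {
    writeFileSync(approvedFile, actual); // first run: record, then review by hand
    console.log(`Recorded a new golden master for ${name}; please review it.`);
    return;
  }
  if (readFileSync(approvedFile, "utf8") !== actual) {
    throw new Error(`${name}: output no longer matches the approved golden master.`);
  }
  console.log(`PASS ${name}`);
}

// The multi-line JSON output suits a golden master; once the calculation is
// isolated from the formatting, plain equality checks on numbers become simpler.
verifyAgainstGoldenMaster("quarterly-report", quarterlyReportJson([100, 200, 300]));
```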

When programmers use Golden Master/Approval Tests, they need to treat it as just one tool in their toolbox. This is the same as with any technique! I tend to treat Golden Master as a temporary and complementary technique. I use it when I focus on writing tests as a testing technique, even though I tend to prefer to write tests for design feedback. Not everyone does this! If you find yourself in the stage where you’re drowning in defects and need to focus on fixing them, then Golden Master can be a great tool to get many tests running early. Once you’ve stopped drowning, it becomes easier to look at replacing Golden Master with simpler and more-powerful unit tests—eventually microtests.


➡️ Also read our other two Q&A blog posts with J.B. Rainsberger: Part #2 “The Risks Related to Refactoring Without Tests” and Part #3 “Questions About Test Frameworks”! Follow us on Twitter or LinkedIn to get new posts.


Classic or agile contracts: How to find the right contract type

Traditional projects and their contracts have a decisive weakness. Even though they offer a large measure of (supposed) budget security, they can hardly keep up with the speed of the fast-moving business world.

Thus, instead of the certainty of knowing when a product will be ready and what functions it will have, you run the risk of receiving an outdated product that no longer meets current requirements.

Especially with complex software problems, classic contracts quickly reach their limits. This is because modern software development is becoming increasingly agile. This means: neither the product that is to be delivered nor the date when it will be done is clearly defined at the beginning of the joint work. Contractually, these patterns can only be covered by contracts that make agile collaboration possible.

In this article, you will learn what exactly makes contracts ‘agile’ and whether they are also suitable for your project.

Five crucial differences between classic and agile contracts

First of all, let’s look at the difference between classic and agile contracts. I have summarized the most important of them in a short video:

It can be stated that the two types of contracts differ in five basic points in particular:

  1. Project scope: In a classic contract for work, the project scope is usually fixed for a long period of time. This means that all requirements are collected in a specification document and processed afterward. This is not the case with an agile contract. The changes here can be made after each sprint. This allows the development team to take feedback into account promptly. However, with agile contracts, the basis is a backlog, with the effort being estimated at the beginning.
  2. Project period: In classic contracts, the time of the releases and the milestones is fixed. With agile contracts, the following applies: the project is finished as soon as the product is ready. Of course, you can also define a time period or a fixed number of sprints. You can then go into production with whatever is ready by that time.
  3. Release cycles: The release cycles for classic projects can last from several months to years. When it comes to agile contracts, there is ideally a prototype after each cycle (sprint), which can be tested.
  4. Budget: With both contract models, the budget can be billed according to T&M (Time & Materials), or it can be fixed. In the case of agile contracts, this means that a certain “Run-Rate” of the team has to be budgeted.
  5. Control: Output vs. outcome applies here. For example, in the case of classic contracts, it is measured whether the development team has reached the corresponding milestone at time X. With agile contracts, it is checked whether the product meets the customer’s requirements.

In summary, it can be said that in classic contracts, budget security is clearly in focus. Agile contracts give the development team much more freedom to react to the short-term changes and to gather feedback regularly in order to ultimately develop the best product for customers.

How to find the right contract type

The choice between an agile or classic contract depends on two basic factors: the complexity of the project and the way your company works.

Complex projects need agile contracts

An agile contract is not the best choice for every project. It is crucial to think about the tasks that arise in the project and how well you can define them in advance. The following questions can help you with this:

  • Are you sure that the general situation will not change during the project?
  • Are you sure that the value of your company can be achieved exactly the way you have defined it?
  • Do the tasks consist exclusively of recurring activities?
  • Are the risks of the technical implementation low, and can the requirements be clearly formulated?
  • Is the project rather small and short-term, so that you don’t need a team for the support and further development of the product?
  • Are you buying a standardized product that does not require integration into an existing product?

If you can give a positive answer to all these questions, then a classic contract is probably enough. Your project and requirements can be precisely defined in advance. The situation is different if you can only answer “yes” to some of the questions. In this case, it is worth taking a closer look at the agile contracts.

No agile contracts without agile working methods

The complexity of your project is not the only decisive factor. If you are not able to implement agile projects in your company, an agile contract will not get you anywhere either.

In order to develop the best possible product in an agile way, there should be a vendor-customer relationship that allows close collaboration and some adaptability. The basis for this is formed by the principles of the agile manifesto. The manifesto originally comes from software development. Nevertheless, it can also be applied to companies.

Consider whether you have a basis for agile projects in your company and can guarantee the following points:

  • Feedback Cycles: You are able to implement fast feedback cycles (ideally every 2 to 4 weeks) and provide continuous feedback to the development team.
  • Transparency: You can ensure complete transparency during the project. This means that the development team has access to the backlog, to the progress of the implementation of different features and to the results of each feedback cycle.
  • Variable project scope: You agree that the feedback after each cycle is incorporated into further product development. Project scope and tasks can be adjusted accordingly. This option must also be specified contractually, for example via a “Changes for free” clause. It allows changes to be made in the backlog as long as they do not involve any additional work.
  • Effective collaboration: You can ensure close cooperation between yourself, the development team, and the end customer. Ideally, the team works in one place to make direct and informal communication possible. It is an advantage if the vendor provides an on-site person who takes an active role in the project (not a management role that only serves as a link to the development team).

If you can implement the points above and also have complex projects, an agile contract is probably the right choice. If you have not established an agile mindset in your company yet but still want to work with agile projects, you can gradually approach this goal with the help of workshops. Feel free to contact us if you need further information.

When is an agile approach useful?

No two projects are the same. And no two companies are alike. That’s why the same rigid framework conditions cannot always fully apply to the cooperation with your customers and partners.

Consider in advance what framework conditions you need for the respective project and whether your company lives an agile way of working. If you want to implement complex projects, an agile way of working or agile contracts offer a particularly solid basis.


Do you have further questions about the implementation of projects with agile contracts?

Or do you want to develop an environment in your company that enables working with agile methods?

Please contact me via email or on LinkedIn with your questions.