Why your software project will slowly die without continuous updating

A software project is like a house.

To keep your house in good condition you have to clean it every week. From time to time things will break and you have to fix or replace them. But most of the time a quick paint job on the doors and windows is enough.

If you take good care of your house, people will like to spend time in it.

But now imagine you leave your house to itself. In the beginning the house will still be in good shape and everything will work as it should. But once no one is there to sweep the floor or take out the trash, it becomes dusty and dirty. Then after a while things start to break. At first only small, unimportant things, but one day a big storm hits and breaks a lot at once.

If nobody is there to fix the broken stuff, the dilapidation continues.

After a while the house is in such bad shape that nobody wants to go there anymore. People will tell you it’s cheaper to build a new house than to fix the old one.

The same is true for any software project.

Assume you are working on a software project with a team of 7 people. Every week they add new features and bug fixes. As the project grows they refactor to keep the architecture clean and free of workarounds. They run migrations. They update their software libraries from time to time to take advantage of bug fixes, security fixes and new features.

But then suddenly development stops. Maybe somebody at the top decided not to spend more money on the project. A decision maker says: “The product is working. We will keep it running, but we will not spend more money on new features.”

And so the team is assigned to another task and the software project runs unattended. The project is live and used every day in production by real people.

After a couple of years the payment process suddenly stops working. What happened? Nobody has touched the code base in years. How can it suddenly break now? How is that possible?

While nobody was watching, a storm was brewing…

You have to understand: a software project is not a closed universe anymore. Almost all software projects interact with the outside world. Software depends on an operating system, on hardware, usually on a database or some kind of backend, and nowadays often on external APIs. These are all moving parts, and they keep changing. So your project has to change too, to keep working properly.

In our example, the European Union had passed a new law to unify the different money transfer formats in Europe. That’s why the old payment API stopped working.

That doesn’t look like a big deal. Our decision maker decides to hire a Ruby developer to fix the problem. It’s a Ruby on Rails + MySQL project. Nothing special. A Ruby developer is found quickly and his first estimate is 1 or 2 days of work.

The Ruby developer checks out the project and realizes it’s an old project running Rails 2.x. Not Rails 3.x, let alone 4.x. No! It’s two major versions behind. And he has never worked with such an old Rails version.

The next surprise is the MySQL version: 4.0. Not 5.0 or 5.5. No! A full major release behind. The mysql driver gem in the project has a native extension which can’t be compiled on his dev machine because his gcc compiler is too new. So he has to downgrade his gcc compiler just to install the old database driver.
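
To make the lock-in concrete, here is roughly what the developer found (a sketch: the gem version is an assumption based on the story, and Rails 2.x predates Bundler, so dependencies were declared in config/environment.rb rather than a Gemfile):

    # config/environment.rb of the legacy Rails 2.x app (illustrative sketch)
    Rails::Initializer.run do |config|
      # The mysql gem ships a native C extension. Old releases of that
      # extension fail to compile against a modern gcc, which is why the
      # developer first had to downgrade his compiler.
      config.gem 'mysql', :version => '2.8.1'
    end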

I’ll stop right here. But you get my point.

Because the project was not updated for many years, even small changes have become very painful. A minor bug fix that shouldn’t take longer than a day now takes more than a week. And the Ruby developer is not enjoying his work at all. Other Ruby developers get to play with fun new features like “Turbolinks” in Rails 4.x while he has to deal with this old crap. He will recommend throwing the project away and rebuilding it from scratch with current technologies.

The lesson is clear. If you leave your house for a couple of years, you should pay a cleaning service once a week to keep it in good shape. In the world of software development that means investing a bit of time every week to check and update your dependencies.
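
In a Ruby project, that weekly check can be as small as a Rake task that fails loudly when anything has fallen behind (a minimal sketch, assuming Bundler is installed; the task name and CI wiring are made up):

    # Rakefile -- minimal weekly dependency check (task name is hypothetical)
    desc 'List gems that have newer releases available'
    task :check_updates do
      # `bundle outdated` exits non-zero when newer gem versions exist,
      # so a scheduled CI job running this task turns red and alerts the team.
      sh 'bundle outdated'
    end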

Sometimes there are no updates and nothing in the outside world has changed. In those cases the job is done in five minutes.

Most of the time there are new patch and minor versions available for the software libraries you use. In that case you update and check whether the tests still pass; for patch and minor updates they usually do. That’s typically a 20-minute job and totally worth it, because these updates bring bug fixes, security fixes and new features, sometimes even memory and speed optimizations.
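
As a sketch, the update-and-verify loop might look like this (assuming Bundler 1.14+ for the update-level flags; the test command is project-specific):

    # Rakefile -- apply patch/minor updates, then verify (illustrative sketch)
    desc 'Update gems to the latest patch/minor versions and run the tests'
    task :update_gems do
      sh 'bundle update --minor --strict' # patch/minor bumps only (Bundler >= 1.14)
      sh 'bundle exec rake test'          # commit the new Gemfile.lock only if green
    end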

Every couple of months new major versions become available for dependencies like databases and APIs. Adopting them takes a couple of hours, sometimes even days. But it’s worth it: these updates keep your project fresh, and a fresh project is the best way to attract talented developers who want to work on current technology rather than legacy code. Sometimes the updates aren’t even optional: if you don’t update, your application will simply break.
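
Version constraints can encode exactly this policy: patch and minor updates flow routinely, while a major bump stays a deliberate, reviewed change. In a Gemfile the pessimistic operator does that (the version numbers below are only illustrative):

    # Gemfile -- constraints that allow routine updates but gate major versions
    gem 'rails',  '~> 4.0'    # any 4.x release, never 5.0
    gem 'mysql2', '~> 0.3.14' # 0.3.x patch releases only, starting at 0.3.14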

Continuous Updating is a method for keeping projects alive, healthy and fresh over the years. It ensures that even complicated changes to the business logic can be implemented at any time, within a reasonable time frame and a reasonable budget. Continuous Updating is like health insurance for your project.

If you are interested in Continuous Updating, you might be interested in design erosion as well. I can recommend the paper “Design Erosion: Problems & Causes” by Jilles van Gurp & Jan Bosch.

How do you deal with “old” or “dead” projects? Let me know what you think in the comments or join the discussion on Reddit.

On Twitter use the hashtag #ContinuousUpdating.

By the way: I’m working on VersionEye, a notification system for software libraries 😉

19 thoughts on “Why your software project will slowly die without continuous updating”

  1. Hi Robert!

    Great article & analogies.

    I agree there needs to be a process to keep up to date. If it’s not done regularly, it accumulates, and at one point, will compromise the project.

    However, this brings a regular flow of updates, and something can go wrong at any time. And not all problems are caught by tests or caught soon after the library update, because they are edge cases. In Tiki, we have a lot of features, code and libraries to deal with (and VersionEye is really helpful). FYI, Tiki is:
    https://tiki.org/FOSS+Web+Application+with+the+most+built-in+features
    https://tiki.org/FOSS+Web+Application+with+the+fastest+release+cycle

    To deal with the duality of needs for stability and innovation, many projects have Long Term Support (LTS) versions. For Tiki, we have put a lot of thought and work into this, and we now have a pretty awesome system, which offers a rapid release cycle as well as a 5 year LTS version, as described here: http://info.tiki.org/Version+Lifecycle

    LTS versions go through a phase of development, then bug fixes only, and then security-only fixes. The update of component libraries should take this into account. Semantic Versioning is very helpful here.

    Of course, it’s tricky to offer LTS support when some of the included code doesn’t. It would be nice if VersionEye could track:
    * which libraries are still supported and until when. Most projects don’t publish this data, but they should.
    * which library updates are security-related

    Thanks!

    M 😉

    Marc Laporte

    http://MarcLaporte.com
    http://Tiki.org/MarcLaporte
    http://AvanTech.net

    1. It really depends on the application. For some kinds of applications I can understand that you ship only bug fixes and security fixes. But what would you do in the example from the blog post? An external API is changing. To this kind of issue you have to react, right?

      1. Yes, it depends on the use case. There are API changes to external services (ex.: Google Maps API v2 to v3), and sometimes, legal changes (ex.: EU cookie directive). This being said, it’s usually not a big impact so we can do it in a minor release, and we usually have months or years of warning. So the new code can be developed in trunk, stabilized and once it’s good enough, backported to the relevant supported branches (LTS or current stable) in one clean commit. The code could need a little adjusting but it’s usually simple.

        And if ever there was a change which was too big or risky, the onus would be on the external service. Our answer would be: “Sorry, we can’t fix this in the LTS in this phase of the lifecycle, and thus, if you want this feature, you need to upgrade”.

        The capacity to keep things very stable in LTS branches is one thing. However, in trunk, we should always have recent versions of the libraries.

        Best regards,

        M 😉

  2. Hey Marc,

    Thanks for the comment. It’s true that you cannot test everything. Even 100% test coverage is no guarantee of bug-free software. But it’s important to do it anyway, because it usually covers your ass. 😉

    A 5-year LTS is pretty good. Most open source libraries don’t care about LTS, and there is no standardized way of defining an LTS in a repository. That’s why it will be very difficult to track in VersionEye.
    Security-related updates are the most important ones. We will integrate this kind of information into VersionEye this year 🙂

    Robert

    1. I look forward to the security-related notifications. I presume this will require a new flag somewhere: “This is a security release”. Ideally, there would be a way to indicate which versions are vulnerable. Ex.: “All versions before 1.8.5 and 1.9.1 are vulnerable”. But sometimes, it could be: “1.9.1 resolves this vulnerability in 1.9.0, but 1.8.4 is unaffected”.
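
      Vulnerable ranges like these map naturally onto RubyGems’ requirement syntax. Here is a sketch of how an advisory could be checked against an installed version (the advisory format is hypothetical; Gem::Version and Gem::Requirement are standard RubyGems classes):

          require 'rubygems' # provides Gem::Version and Gem::Requirement

          # Hypothetical advisory: "all versions before 1.8.5 and 1.9.1 are vulnerable"
          vulnerable_ranges = [
            Gem::Requirement.new('< 1.8.5'),
            Gem::Requirement.new('>= 1.9.0', '< 1.9.1')
          ]

          def affected?(version_string, ranges)
            version = Gem::Version.new(version_string)
            ranges.any? { |range| range.satisfied_by?(version) }
          end

          puts affected?('1.8.4', vulnerable_ranges) # => true  (before 1.8.5)
          puts affected?('1.9.1', vulnerable_ranges) # => false (the fixed release)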

      Dreaming a bit: if this new security info makes a spec evolve, maybe while we are at it we can add some fields for End of Life (EoL). Even though very few projects will use it at first, it’s conceivable to have proper data for the most active projects. It would be nice to be able to indicate a date or a condition (see the sketch after this list). Ex.:
      * 1.7.x is end of life when 1.9.0 is released.
      * 1.7.x is end of life 2 years after release of 1.7.0
      * etc.
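
      A sketch of what such EoL fields could look like in a gemspec (purely hypothetical; neither key exists in any spec today):

          # Hypothetical end-of-life metadata in a gemspec. The 'eol_*' keys
          # do not exist anywhere today; they only illustrate the proposal.
          Gem::Specification.new do |s|
            s.name     = 'example'
            s.version  = '1.7.0'
            s.summary  = 'Example gem with EoL metadata'
            s.metadata = {
              'eol_condition' => 'when 1.9.0 is released',
              'eol_date'      => '2016-01-31'
            }
          end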

      Best regards,

      M 😉

      1. The whole security story is not easy. I will post on HN this week and ask for ideas and advice. There are some databases out there with known security vulnerabilities. The problem is matching them to the right software packages. I think that can be done as a crowdsourcing project.

  3. You’re actually saying that your software project will solidify without continuous updating.

    Suddenly working on a solidified codebase can be tricky, but it’s relatively straightforward. Boot up a virtual machine using the operating system from whenever the software was written and you obviate any issues getting modern systems to set up the old software. Triple the standard multiplier for project work, since you have to allow time for the software to soften (i.e. time for the programmer to load up the old domain contexts).

    As long as you don’t expect to pick up right where the software was left and at the same speed, you’re all set.

    1. Good point. Having the whole development environment in a virtual machine is a viable approach. But most dev teams I know just don’t work that way. The case study from my blog post actually has a real background; it’s not a fantasy story 😉 And in that particular project there was simply no virtual machine, just a very old code base on GitHub. It was very painful to get it up and running.

      Currently I don’t develop in a virtual machine, simply because it’s too memory-intensive for me. But I know one guy who built a big project for a very big enterprise, and they knew right from the beginning that the software had to run for at least 10 years. They checked everything into Subversion: all Maven dependencies, the Java compiler and even the Eclipse IDE, so that they could reproduce everything even years later.
      But that is the only project I know of so far that worked that way.

  4. A colleague of mine has a good term for this, calling it ‘technical inflation’. It’s somewhat related to the idea of adding ‘technical debt’ to your code base.

  5. Re the “10 year project”: I hope they have a budget and plan in place to test the full install and configuration every 6 to 12 months!

    Currently, we are trying to work with a set of tools, including Docker, Ansible and Fabric, to get the installation and configuration of host and software fully automated. Good docs help, but doing this type of thing manually will go wrong at some point…

    1. It is cheaper to do it regularly than to do a big refactoring every couple of years.

      Docker and Ansible are awesome tools; we use both at VersionEye. Especially Docker is a big help for testing whole environments.

