Continuous Integration is Dead

A few days ago, my article “Why Continuous Integration Doesn’t Work” was published on DevOps.com. Almost the same day, I received a few strongly negative critiques on Twitter.

Here is my response to the unasked question:

Why the hell shouldn’t continuous integration work, being such a brilliant and popular idea?

Even though I have some experience in this area, I won’t use it as an argument. I’ll try to rely only on logic instead.

BTW, my experience includes five years of using Apache Continuum, Hudson, CruiseControl, and Jenkins in over 50 open source and commercial projects. Besides that, a few years ago I created a hosted continuous integration service called fazend.com, renamed to rultor.com in 2013. Currently, I’m also an active user of Travis.

How Continuous Integration Should Work

The idea is simple and obvious. Every time you make a new commit to the master branch (or /trunk in Subversion), a continuous integration server (or service) attempts to build the entire product. “Build” means compile, unit test, integration test, quality analysis, etc.
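That build cycle can be sketched as a small shell script. The stage commands below are placeholders, not part of the original article; a real project would invoke its own compiler, test runner, and analyzers (the `mvn` goals in the comments are just one plausible example):

```shell
#!/bin/sh
# Minimal sketch of what a CI server does on every commit to master.
# Each stage runs in order; the first failure marks the build "broken".

run_stage() {
  name=$1; shift
  if "$@"; then
    echo "stage '$name': ok"
  else
    echo "stage '$name': FAILED -- the build is broken"
    exit 1
  fi
}

run_stage compile      true   # placeholder, e.g. mvn compile
run_stage unit-tests   true   # placeholder, e.g. mvn test
run_stage integration  true   # placeholder, e.g. mvn verify
run_stage quality      true   # placeholder, e.g. mvn checkstyle:check

echo "the build is clean"
```

The point is only the shape of the loop: every commit triggers the same fixed sequence, and the result collapses to a single clean/broken status.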

The result is either “success” or “failure”. If it is a success, we say that “the build is clean”. If it is a failure, we say that “the build is broken”. The build usually gets broken because someone commits new code that turns previously passing unit tests into failing ones.

This is the technical side of the problem. It always works. Well, it may have its problems, like hard-coded dependencies, lack of isolation between environments or parallel build collisions, but this article is not about those. If the application is well written and its unit tests are stable, continuous integration is easy. Technically.

Let’s see the organizational side.

Continuous integration is not only a server that builds, but also a management/organizational process that should “work”. Being a process that works means exactly what Jez Humble said in Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation, on page 55:

Crucially, if the build fails, the development team stops whatever they are doing and fixes the problem immediately

This is what doesn’t work and can’t work.

Who Needs This?

As we see, continuous integration is about setting the entire development team on pause and fixing the broken build. Let me reiterate. Once the build is broken, everybody should focus on fixing it and making a commit that returns the build to the stable state.

Now, my question is — who, in an actively working team, may need this?

A product owner, who is interested in launching new features to the market as soon as possible? Or maybe a project manager, who is responsible for the deadlines? Or maybe programmers, who hate to fix someone else’s bugs, especially under pressure?

Who likes this continuous integration and who needs it?

Nobody.

What Happens In Reality?

I can tell you. I’ve seen it multiple times. The scenario is always the same. We just start to ignore the continuous integration build status. Whether the build is clean or broken, we continue to do what we were doing before.

We don’t stop and fix it, as Jez Humble recommends.

Instead, we ignore the information that’s coming from the continuous integration server.

Eventually, maybe tomorrow or on Monday, we’ll find some spare time and try to fix the build. Only because we don’t like that red button on the dashboard and want to turn it into a green one.

What About Discipline?

Yes, there is another side of this coin. We can try to enforce discipline in the team. We can make it a strict rule that our build is always clean, and whoever breaks it gets some sort of punishment.

Try doing this and you will get fear-driven development. Programmers will be afraid of committing anything to the repository, because they will know that if they cause a build failure, they will have to apologize, at least.

Strict discipline (which I’m a big fan of) in this case only makes the situation worse. The entire development process slows down, and programmers keep their code to themselves for as long as possible, to avoid possibly broken builds. When it’s time to commit, their changes are so massive that merging becomes very difficult and sometimes impossible.

As a result, you get a lot of throw-away code, written by someone but never committed to master, because of that fear factor.

OK, What Is The Solution?

I wrote about it before; it is called “read-only master branch”.

It is simple: prohibit anyone from merging anything into master, and create a script that anyone can call. The script will merge, test, and commit. The script makes no exceptions. If a branch fails even one unit test, the entire branch is rejected.
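A minimal sketch of such a gate script is below. The git commands are shown as comments so the decision logic itself stays runnable, and the build command passed to `gate` stands in for your project’s real full build (`make test`, `mvn verify`, etc.); all of these names are assumptions, not part of the article:

```shell
#!/bin/sh
# "Read-only master" gate: nobody merges by hand; this script is
# the only path into master. gate BRANCH BUILD_CMD... merges the
# branch only if the full build passes, otherwise rejects it.

gate() {
  branch=$1; shift
  # git checkout master && git pull origin master
  # git merge --no-ff --no-commit "origin/$branch"
  if "$@"; then
    # git commit -m "Merge $branch (build is clean)" && git push origin master
    echo "merged: $branch"
  else
    # git merge --abort   # even one failing test rejects the whole branch
    echo "rejected: $branch"
    return 1
  fi
}

gate feature-x true           # a branch whose build passes gets merged
gate feature-y false || true  # a branch with a failing test is rejected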

In other words: raise the red flag before the code gets into master.

This solves all problems.

First, the build is always clean. We simply can’t break it, because nobody can commit unless their code keeps the build clean.

Second, there is no fear of breaking anything. Simply because you technically can’t do it. All you can do is get a negative response from the merging script. Then you fix your errors and tell the script to try again. Nobody sees these attempts, and you don’t need to apologize. The fear factor is gone.

BTW, try to use rultor.com to enforce this “read-only master branch” principle in your project.


Reference: Continuous Integration is Dead from our JCG partner Yegor Bugayenko at the About Programming blog.

Yegor Bugayenko

Yegor Bugayenko is an Oracle certified Java architect, CEO of Zerocracy, author of the Elegant Objects book series about object-oriented programming, lead architect and founder of Cactoos, Takes, Rultor and Jcabi, and a big fan of test automation.
Comments
Mike Minicki
9 years ago

“Instead, we ignore the information that’s coming from the continuous integration server”

Which company is that? I would like to get a name so I can avoid it :)

The rule is simple. Don’t push if you know it doesn’t work; if you pushed anyway, try to fix it fast or roll it back. Before going home. A good rule of thumb: don’t push 10 minutes before your shift ends; do it the next morning.

Quaiks
9 years ago

CI is dead, DI containers are code polluters… Polemic and more polemic. What’s next? TDD?

Yannick Majoros
9 years ago
Reply to  Quaiks

+1

OP is jumping to wrong conclusions.

A developer shouldn’t be able to break a build.

The OP even suggests a solution (test before merging into master), but insists on saying that CI is dead. Why? Use this solution in your CI system: commit, push to a branch, let the CI build that specific branch and mark it as pass/fail. Nothing is broken but the branch, the impact is on the original committer only, and work simply goes on.

Sergiy Bezzub
9 years ago

+1

Bogdan Marian
9 years ago

TeamCity has a very nice feature called “pre-tested commit” – see it here: https://www.jetbrains.com/teamcity/features/delayed_commit.html.
This little thing ensures that a build is run before the changes are committed to the SCM repository, basically ensuring that the repo is always in a stable form (at least from the build point of view).

Dan Gros
8 years ago
Reply to  Bogdan Marian

TFS also has a Gated Check-in functionality, which rejects a developer’s check-in if it fails any tests.

Yannick Majoros
9 years ago

Is CI as dead as Java was 5 years ago?

Tom
9 years ago

Interesting point; pre-commit triggers have been around for a long time. This is a variation on that. Something to think about.

One remark. Continuous Integration is all about frequently committing changes to the master branch several times per day (http://en.wikipedia.org/wiki/Continuous_integration). This approach will be a challenge if each commit first needs to run through all tests. I think your suggestion is more in line with continuous delivery without the integration aspect.

Yannick Majoros
9 years ago
Reply to  Tom

Only to the master branch? Why? If you build other branches, you just avoid this problem, why should you refrain from doing it?

Any authoritative source for that? (Martin Fowler has a lot to say but isn’t one, and neither is Wikipedia.)

Yannick Majoros
9 years ago

(my above reply was meant to be to Sergiy Bezzub below)

Ali Arda Orhan
9 years ago

Why do they let you write articles on this website?

Yegor Bugayenko
9 years ago
Reply to  Ali Arda Orhan

I don’t write them here! They shamelessly steal my articles from my blog, which is here: http://www.yegor256.com/

Sergiy Bezzub
9 years ago

The solution described in the article is just a modification/adaptation of CI, nothing more. There are a lot of ways to make CI better or worse; it depends on the skills of the people who use the technology, imho.

John I. Moore, Jr.
9 years ago

While the title of your article is controversial, the solution you propose is exactly what developers need. Great article. And the title probably did its job since it attracted me to read the article.

Yannick Majoros
9 years ago

Or the solution to this could just be… Continuous Integration. The workflow the author describes in http://www.yegor256.com/2014/07/24/rultor-automated-merging.html is just the one I’ve been using in various jenkins projects. Seems like reinventing the wheel. I agree on not letting anyone merge into master. You can use CI to build another branch, why wouldn’t you? Your CI tool can then be integrated with a code review tool, which can be used to merge back into master when everything has passed. This workflow leads to unbreakable builds. It’s been around for years, a quick google search leading to stuff since 2004, and things…

paul
8 years ago

Hi Yannick. This is a diagram of a workflow that I first started using in the late 80s as Gatekeeper for HP-UX 1.1. The tools of course are different, but the workflow is similar. We also had a primitive DVCS in the late 80s at Sun; I was Gatekeeper for SunOS/SparcStation 2. I tailor this workflow based on team culture. The basic idea is to have relocatable processes. Processes that run downstream should be runnable upstream and vice versa. For many teams it requires rethinking their source code repo structures, and the design/implementation of automation, including build and test. The more granular the better. Avoid…

Yegor Bugayenko
9 years ago

Thanks for reading it! It was originally published on my blog: http://www.yegor256.com/2014/10/08/continuous-integration-is-dead.html (you can follow me there; there are more articles coming soon)

Matt
9 years ago

Just because your team lacks discipline doesn’t mean it’s the same for everyone.
I’ve worked in plenty of teams with great coders where you build before you commit and rarely ever break the build. If you do, you are embarrassed and fix it asap.

Eyal Edri
9 years ago

The script will merge, test, and commit. The script will not make any exceptions. If any branch breaks at even one unit test, the entire branch will be rejected.

The “script” already exists and it’s called Zuul. (developed by openstack ci)

http://ci.openstack.org/zuul/
http://status.openstack.org/zuul/

Yannick Majoros
9 years ago

> The script will merge, test, and commit. The script will not make any exceptions. If any branch breaks at even one unit test, the entire branch will be rejected.
> The “script” already exists and it’s called Zuul. (developed by openstack ci)

Or just “Jenkins” or any other… well, continuous integration solution. :-)

Korporal
8 years ago

I’m not sure exactly why you’re having the problem you outlined, Yegor, but what is wrong with simply having the developers run the build/tests locally before they commit? Then, when they do commit, have them commit to a branch on their personal fork and rely on pull requests to move the commits from there to the public master branch in the origin repo. If you let developers push commits directly to the origin repo, then of course you’re taking a risk. This approach won’t eliminate all risk of a breaking build, but it should catch the majority of breakages arising from…

DesiderousErasmusRot
8 years ago

Yes, on all counts! a) We need to be able to pull clean base code AT ANY TIME. When you have too many hands “contributing” to a release day after day, nobody is working with the actual base code. And you can spend hours EVERY DAY cleaning up yesterday’s bugs on your local even though you have no intention of moving your hacks to the build. b) FDD – Fear Driven Development leads to gamesmanship instead of quality code. You check-in early so you won’t be the last one to touch a component, or you check-in last, so you can…

Benjamin Mark
7 years ago

Pushing directly to master and letting Jenkins act after that isn’t a good idea. We create pull requests from our branches to master, which are collected and handled by Jenkins. After the build succeeds, the pull request is marked as “clean” and can be merged to master.
Works great, and master won’t be polluted by dirty code.
