The Importance of Continuous Integration for Software Development

In my opinion, any organization that produces software, whether that software is to be sold or used internally, should be using Continuous Integration (CI) as a best practice to help improve the quality of its software, increase productivity and reduce risk.

This post does not aim to re-define CI, as there are plenty of other sources online that can give you the run-down, but it does aim to share my thoughts as to why it is so important.  Let me summarize CI at a high-level, and then break down the positive impact as I see it.

What Is Continuous Integration?

Good question, and an excellent place to start.  Continuous Integration, or “CI” for short, is in its simplest form a process which fully automates the compilation of your project’s latest source code from source control.  There are some very important points in that sentence, so let me highlight them:

  • fully automated
  • compilation of latest source code
  • from source control

These points are very important, but one or another often seems to be missed.  If developers are running build scripts manually (either from their machines, or on a server somewhere), that doesn’t qualify as CI.  If they are still doing builds manually (not even running scripts, ouch) from their machines, then keep reading…

The “fully automated” and “from source control” parts actually cover a few important concepts that I don’t want to skip over.  Developers should all be checking in their code as frequently as is possible and practical for the project.  It is important that their code compiles against the latest source, but the entire feature or problem set they are working on need not be fully baked.  The act of checking in the code will trigger a CI build.  This is where the “fully automated” part comes into play.  A proper CI server or tool will automatically monitor source control for any new changes.  When it detects that somebody has checked in a change, it will get the latest code and force a build (compile) of all the latest source.  If the compile works, that’s great!  If the compile breaks for whatever reason, it should notify the developer who broke the build with their change, and optionally, other team members as well.

That’s the bare-bones CI process in a nutshell.  However, even that basic process can save a team from some major headaches down the line, especially for a geographically dispersed or outsourced team.
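The check-in, detect, build, notify cycle described above can be sketched in a few lines.  This is only an illustration of the logic that a real CI server implements for you; every function passed in here is a hypothetical stand-in for the real tooling.

```python
def ci_cycle(last_seen, latest_revision, checkout, build, notify):
    """One polling pass of a bare-bones CI server.

    latest_revision() asks source control for the newest revision id;
    checkout(rev) gets that code; build() compiles it and returns
    True/False; notify(rev) alerts the team of a broken build.
    All four are hypothetical stand-ins for real tooling.
    """
    current = latest_revision()
    if current == last_seen:
        return last_seen              # nothing new checked in
    checkout(current)                 # get the latest source
    if not build():                   # force a compile of the latest source
        notify(current)               # tell the team who broke the build
    return current
```

A real server runs this cycle continuously (or reacts to commit hooks) and records which revision triggered each build, so a failure can be traced straight back to the offending check-in.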

Improve Quality

A bare-bones CI process that simply compiles the latest source code may not seem to directly impact the quality of your software, but it can have some nice benefits that will indirectly improve your quality.  Since CI is fully automated, it will lay the foundation for a repeatable and consistent build process which will immediately notify the team where there are build issues, and what those issues are.  Having this type of stability in your build process will help improve the quality of your software indirectly.

In terms of having a direct impact on your software quality, a good CI process will lay the foundation for rolling in additional automated tasks into a daily or nightly build, such as running unit tests, static code analysis, security testing, building installers, deployment tasks and more.  Basically, any task that can be automated, should.  This will give your entire build process a much needed overhaul in terms of automation and consistency.  Having testing and analysis built into the build process is a really good way to start improving on the software quality.  These tools will not improve the quality themselves, but they will increase the transparency of potential issues to the project team, and help them locate and focus in on potential issues more proactively.
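As a sketch of how those extra tasks bolt onto a daily or nightly build: unlike a CI build, a nightly build typically keeps running every task even after one fails, so the team gets a complete quality report in the morning.  The task names below are illustrative, not prescriptive.

```python
def run_nightly(tasks):
    """Run each named build task in order and collect pass/fail results.

    tasks is a list of (name, callable) pairs; a task signals failure
    by raising.  Unlike a CI build, we keep going after a failure so
    the report covers every check, not just the first broken one.
    """
    results = {}
    for name, task in tasks:
        try:
            task()
            results[name] = "passed"
        except Exception as exc:
            results[name] = f"failed: {exc}"
    return results
```

The (name, callable) structure is the point: adding static analysis, security testing or installer generation to the nightly build becomes a one-line change, which is what makes “any task that can be automated, should” practical.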

Improve Productivity

Using an automated build process should have a rather obvious direct correlation to team productivity.  If builds are fully automated, the team spends less time running them.  Correct!

Let me also highlight some other potential issues that can have dramatic impacts on team productivity.  Let me start with some questions.  Do you have a geographically dispersed team?  How often do your developers check in their code?  Do they get the latest code and ensure their proposed changes will compile before they check those changes in?  Do they double-check that any new dependencies they are adding are also properly checked-in and working?

Any time a developer checks in their code without properly checking if their changes will negatively impact others on the team (i.e. they “broke” the build), there is the potential for some major downtime or troubleshooting while team members figure out why their build broke and how to best fix the problem.

When teams are spread out across the globe, or at least different timezones, this can become a bigger problem.  I have seen cases where a team member in the US checked in their latest code on a Friday night, shortly before leaving work.  On Sunday, team members in India needed the latest code to continue working, so they checked out the code and it didn’t compile!  The team member who “broke” the build had forgotten to ensure that all their files, including new dependencies, were checked-in and working before leaving the office.  Had this team had a proper CI build in place at that time, the developer (and team in our case) would have been notified of the broken build within minutes of having checked-in the code.  With any luck, the notification would have arrived before the team member left the office.

Now keep in mind that CI does not promote blindly checking in your code without performing other best practices (like ensuring the proposed code changes compile against the latest code, unit tests pass, etc).  However, a good CI process will help cover the gap for when these types of issues, or mistakes, do happen.
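Those pre-check-in best practices are easy to script locally.  A minimal sketch, assuming your project’s update, compile and fast-test steps can each be expressed as a command line (the actual commands are placeholders):

```python
import subprocess

def safe_to_check_in(commands):
    """Run each verification command in order (e.g. update to latest,
    compile, run fast unit tests) and report whether a check-in looks
    safe.  Stops at the first failing command.  The commands themselves
    are placeholders -- substitute your project's real build steps."""
    for cmd in commands:
        if subprocess.run(cmd).returncode != 0:
            return False              # this step failed; don't check in
    return True
```

Wiring a script like this into a pre-commit routine makes “did it compile against the latest source?” a habit rather than a discipline, and the CI build then covers the cases where the habit slips.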

In the cases I mention above, a proper CI process will help improve productivity through a few means.  Since CI is automated, the build only needs to be scripted once (and probably tweaked over time), which reduces the time team members spend performing low-value tasks, such as building, running unit tests and analyzing the code.  CI can also save the team in the, hopefully rare, cases where a broken build would otherwise have cost hours of troubleshooting and fixing just to get back up and running.

Reduce Risk

Hopefully a few trends have become apparent above.  An automated and easily reproducible build process can help reduce risk simply because it helps eliminate potential human errors during the build.  Having the team notified immediately as soon as a broken build is detected can also reduce delivery risk by helping to keep the team productive and the project on track.  Building additional steps into a nightly build, like automated unit testing, security testing, code analysis, documentation generation, installer generation, even deployment tasks (the list can go on), can further reduce the risk of undetected issues or human errors in each of those steps.


The more time the team can spend focused on producing good, quality code, and the less time it spends manually running low-value tasks such as the above, the more value the team can deliver to the business.  And that is very important.

Best Practices

Here are some of the best practices that I encourage others to follow based on my experience:

  • Use source control and ensure the team is proactively checking-in code as frequently as possible
  • Find a build server product that you like, and most importantly, one that integrates with your source control product.  It should be easy to use and to configure/maintain.  There are quite a few CI servers out there, many of which are open-source and free – but ensure you will be able to understand and easily configure the tool, as it will quickly become your best friend or worst enemy depending on how easy it is to use and the features it supports.  I do not claim to have broad exposure to many CI servers, but I will post a future review of some build server tools I have used in the hopes that it will help somebody in their evaluation.
  • Keep different build configurations for each project:
    • CI Build Configuration (mandatory) – Keep it lean and mean.  Configure the build server to get the latest code and compile the full project on every check-in to source control.  Possibly run critical unit tests, as long as they are fast.  Remember, the goal with this configuration is to notify the team of a broken build as soon as possible.  If possible, have the build server clean out the local source directory and dependencies before getting the latest code from source control.  This should help ensure a clean build every time.
    • Nightly Build Configuration (recommended) – Have this build on a regular schedule for the project.  This should perform in much the same manner as the CI Build Configuration (clean build), but you have the luxury of time now.  The primary objective is not to notify the team of a broken build as soon as possible, but to perform a full build and additional quality checks along the way.  Even deploying to a development or test environment may make sense depending on the project and the team’s needs.  Build servers will often let a team member manually trigger a build by clicking on a button or a link.  This means even if this build configuration is scheduled to occur in the middle of the night, the team can still use it to do a full build, run tests, scan the code and deploy, during the middle of the day if needed.
    • Weekly Build Configuration (optional) – This configuration was based on our needs around full static code analysis.  Full static code analysis can take a really long time, depending on the product used, what it is scanning for, and the volume of source code it needs to analyze.  It can also put a heavy load on the build server, effectively locking it up for a lengthy period of time.  For us, it simply did not make sense to scan the code every day, as it didn’t offer that much value to the team at that point.  Once a week gave us the results we wanted, and did not tax the build servers during the busy weekdays (when the build servers should be focused more on CI builds).
  • Try to keep all project dependencies within a relative path under the project in source control.  In other words, try to refrain from referencing dependencies in the GAC (.NET) or on the local machine that the build server may not be able to easily retrieve from source control.  Build servers will typically support build agents, and may run the build on any agent that is available.  Having dependencies that must be pre-installed or pre-configured on each build agent is a real pain to maintain, and also complicates project deployment.  It is my opinion that storing all dependencies in source control with the project is the best way to go.  This way the build server doesn’t require any special setup or steps to compile.  It simply gets the latest code (and dependencies) from source control and triggers the build.  This also allows the build agent to potentially purge the build directory and all dependencies before it gets the latest code – which can help ensure a clean build each and every time.  It also prevents one project’s dependencies from impacting another project when they share the same build agent (common in our organization).
  • Look for ways to further improve your build and deployment process.  Automate everything over time.
  • Suggestion:  If cost is an issue, and virtual machines (VMs) are available in your organization, then use hosted VMs for the build server and build agents.  I find that in almost all cases, a properly configured VM will have no difficulty running CI builds quickly.  Full static code analysis can take much longer if the VM does not have sufficient resources.  VMs can be easier to set up and/or clone, and can reduce cost across a medium or large deployment of build agents.  If you are using VMs, ensure you have enough disk space allocated for frequently building your selected projects.  In my case, a dedicated logical drive of at least 20-30 GB was sufficient for a number of projects (again, it depends on the size of the projects, the logs and artifacts generated for each build, how often the CI builds get triggered, and how long you want to keep the build logs).  In our case, static code analysis consumes much more RAM, CPU and disk on the VMs than a regular CI build.  This is part of the reason we run it on the weekends during idle time.
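The “clean build every time” idea running through the configuration and dependency points above can be sketched as follows.  The checkout and build commands are hypothetical placeholders for your source control and compiler invocations.

```python
import shutil
import subprocess
from pathlib import Path

def clean_ci_build(workdir, checkout_cmd, build_cmd):
    """Purge the working directory, check out fresh source (with its
    dependencies stored alongside it in source control), then compile.
    Returns True on a successful build.  Commands are placeholders."""
    workdir = Path(workdir)
    if workdir.exists():
        shutil.rmtree(workdir)        # wipe stale source and dependencies
    workdir.mkdir(parents=True)
    # Get the latest code (and dependencies) from source control...
    subprocess.run(checkout_cmd, cwd=workdir, check=True)
    # ...then force a full compile of what was just checked out.
    return subprocess.run(build_cmd, cwd=workdir).returncode == 0
```

Because everything the build needs comes out of source control, any build agent can run this without pre-installed dependencies, which is exactly what makes federating builds across shared agents painless.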
