At AI, we are strong proponents of paying down technical debt to enable teams to move faster and with less technical overhead. At the core of such practices within the development community is automated build and deployment. Unfortunately, we find that a majority of our clients massively underinvest in this area, which inevitably results in a multitude of problems and increased time to market for new initiatives. That experience led us to share our findings and approach to the issue, described thoroughly in the following post and supplemented with guidelines and examples throughout.
Continuous Integration (CI) builds are the easiest to set up correctly and should be tackled first, before attempting other build setups. This build type raises awareness throughout the development and QA teams, surfaces problems with coding standards earlier in the process, and helps ensure that defects are not introduced by code changes. CI builds vary widely throughout the industry and can involve a complex list of steps. To simplify this setup, AI typically recommends the following steps be considered when defining a standardized CI baseline within an organization, to ensure the success of the teams involved. The main purpose of continuous integration is to detect events or criteria indicating that the build does not meet some minimum level of acceptability.
Examples of this may include, but are not limited to, the following:
- Failure to compile
- Missing dependencies upon check-in
- Overly complex method bodies
- Failing unit tests
- Failure to comply with corporate coding standards
- Lack of code coverage (code introduced that has no associated unit tests)
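As a sketch, a minimal CruiseControl.NET project definition covering several of the criteria above might look like the following. All server names, paths, and addresses here are illustrative placeholders, and the exact configuration schema varies by CruiseControl.NET version:

```xml
<!-- Illustrative ccnet.config fragment; paths, URLs, and names are placeholders -->
<project name="MyApp-CI">
  <sourcecontrol type="svn">
    <trunkUrl>http://svn.example.com/myapp/trunk</trunkUrl>
  </sourcecontrol>
  <triggers>
    <!-- Poll source control every 60 seconds and build on change -->
    <intervalTrigger seconds="60" />
  </triggers>
  <tasks>
    <!-- Failure to compile breaks the build -->
    <msbuild>
      <executable>C:\Windows\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe</executable>
      <projectFile>MyApp.sln</projectFile>
      <buildArgs>/p:Configuration=Release</buildArgs>
    </msbuild>
    <!-- Failing unit tests break the build via a non-zero exit code -->
    <exec>
      <executable>tools\nunit\nunit-console.exe</executable>
      <buildArgs>MyApp.Tests.dll</buildArgs>
    </exec>
  </tasks>
  <publishers>
    <xmllogger />
    <!-- Notify the team when the build fails -->
    <email from="ci@example.com" mailhost="smtp.example.com">
      <users>
        <user name="dev-team" group="devs" address="devs@example.com" />
      </users>
      <groups>
        <group name="devs" notification="failed" />
      </groups>
    </email>
  </publishers>
</project>
```

Additional analysis steps (coverage, complexity, static analysis) slot in as further tasks in the same pipeline.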
The team is automatically notified if the build fails due to one of the above conditions, and the person responsible for the offending check-in should immediately rectify the build. This ensures that potential issues are corrected before the developer moves on to another task, and long before code is escalated to QA, avoiding much of the cost of surfacing such defects and issues later or in downstream environments.
Unit testing is an important part of CI: it provides the framework that keeps code in working order over time and greatly reduces the time development resources spend on defect fixing and regression testing. A variety of unit testing frameworks are in use today, the most common being NUnit, xUnit, and MSTest. All provide some form of Visual Studio support, and all integrate easily into automated builds and reporting. Since these frameworks are syntactically very similar, the choice among them is often trivial, so long as the chosen standard is followed consistently by development teams throughout the organization.
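For example, a minimal NUnit test fixture looks like the sketch below; the class under test and its members are hypothetical, and the xUnit and MSTest equivalents differ mainly in attribute names:

```csharp
using NUnit.Framework;

[TestFixture]
public class OrderCalculatorTests
{
    // OrderCalculator is a hypothetical system under test;
    // each test verifies one observable behavior
    [Test]
    public void Total_SumsLineItemPrices()
    {
        var calculator = new OrderCalculator();
        calculator.AddLineItem("Widget", 2.50m);
        calculator.AddLineItem("Gadget", 7.50m);

        Assert.AreEqual(10.00m, calculator.Total());
    }
}
```

A console runner such as nunit-console.exe executes fixtures like this during the CI build and fails the build when any assertion fails.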
As a corollary to unit testing, it is necessary to ensure that code has the proper amount of associated tests, or is ‘covered’. Uncovered code is exercised by no unit test, so defects in it will never be detected by running the unit test suite. Code coverage is most typically monitored in two ways: peer code reviews and automated builds.
In automated builds, this is accomplished by integrating a coverage analysis tool such as OpenCover or NCover. It is recommended and standard practice to set a minimum coverage level for each source code base (e.g. 75% coverage); any code change that causes coverage to fall below this benchmark results in a broken build.
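As an illustration, OpenCover wraps the test runner and records which lines each test executes. A build step along these lines could produce the coverage report; the tool paths and the threshold-enforcement step are assumptions to adapt to your environment:

```xml
<!-- Illustrative MSBuild target; tool paths are placeholders -->
<Target Name="Coverage">
  <!-- OpenCover runs the NUnit console runner and records executed lines -->
  <Exec Command="tools\OpenCover\OpenCover.Console.exe -register:user -target:tools\nunit\nunit-console.exe -targetargs:MyApp.Tests.dll -output:coverage.xml" />
  <!-- A follow-up task or script parses coverage.xml and fails the
       build if overall coverage falls below the agreed benchmark (e.g. 75%) -->
</Target>
```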
Cyclomatic Complexity Checking
Cyclomatic complexity is a widely adopted indicator of code complexity, and higher complexity ratings have been found to correspond directly to higher defect rates. The most important metrics to monitor and report are as follows:
- Depth (both max and average)
- Complexity (both max and average)
The depth metric reveals how deep nested blocks of code are within a given method. Deeply nested code can be hard to read and maintain, and the management of this metric can keep a solution from becoming unsupportable.
The complexity metric is even more meaningful, and shows how many possible paths of conditional logic a given method contains. This metric alone can quickly reveal methods for which proper code coverage is troublesome, and team members can react by refactoring to keep code simpler where possible, thereby avoiding the cost of complex and hard to track defects that might slip into production. There are several tools available in the marketplace that will accomplish these types of checks, but by far the most versatile and popular is Source Monitor, which integrates quite easily with most automated build platforms.
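As a small illustration (the method and types are hypothetical), deeply nested conditional logic can often be flattened with guard clauses, reducing both the depth and complexity metrics without changing behavior:

```csharp
// Before: nesting depth of 3 and several conditional paths
public decimal Discount(Customer c)
{
    if (c != null)
    {
        if (c.IsActive)
        {
            if (c.Orders > 10)
            {
                return 0.15m;
            }
        }
    }
    return 0m;
}

// After: guard clauses keep nesting depth at 1 and make the
// paths easy to cover with unit tests
public decimal Discount(Customer c)
{
    if (c == null || !c.IsActive) return 0m;
    return c.Orders > 10 ? 0.15m : 0m;
}
```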
Design Guideline and Coding Practices Conformity
Code reviews are an important component of any project. They help ensure a proper level of quality in the code, as well as conformity to coding standards and guidelines. In many cases, this can help to keep out code ‘smells’ and will help to ensure that developers can more easily learn code within the system that was generated by someone else. Additionally, this will help to detect key logical flaws, such as potential security vulnerabilities and performance bottlenecks.
Visual Studio comes equipped with FxCop, a static code analysis tool prepopulated with the most common rules targeting the .NET Framework. The tool can easily be customized with rules targeted at particular organizational standards, such as abiding by common naming conventions and avoiding usage of particular types within the framework. For example, many organizations forbid developers from using native database access classes, and instead enforce the use of repositories and other abstractions that are decoupled from the underlying database platform.
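FxCop also ships with a command-line runner, so the same rules can be enforced in the CI build. A sketch, with placeholder paths, might be:

```xml
<!-- Illustrative: run FxCop as a CI build step; paths are placeholders -->
<Target Name="StaticAnalysis">
  <Exec Command="tools\FxCop\FxCopCmd.exe /file:bin\Release\MyApp.dll /out:fxcop-results.xml /summary" />
  <!-- The results file can then be published in the CI dashboard,
       or parsed to fail the build on rule violations -->
</Target>
```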
Automated Build Number Incrementing
A well-thought-out continuous integration plan includes auto-incrementing the build number and archiving every build. This eases the testing process, as defects can be logged against an accurate build number that can always be traced to a particular check-in. This can be accomplished using standard MSBuild or NAnt scripts, and is a relatively easy step to implement to ensure that build numbers are always up to date and accurate.
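As a sketch, CruiseControl.NET passes its build label to MSBuild as the CCNetLabel property, which can be stamped into a generated version file. The file path and version scheme below are illustrative:

```xml
<!-- Illustrative MSBuild target: stamp the CI build number into an assembly attribute -->
<Target Name="Version">
  <PropertyGroup>
    <!-- Fall back to 0 for local builds outside the CI server -->
    <BuildNumber Condition="'$(CCNetLabel)' == ''">0</BuildNumber>
    <BuildNumber Condition="'$(CCNetLabel)' != ''">$(CCNetLabel)</BuildNumber>
  </PropertyGroup>
  <WriteLinesToFile File="Properties\VersionInfo.cs"
                    Lines="[assembly: System.Reflection.AssemblyVersion(&quot;1.0.0.$(BuildNumber)&quot;)]"
                    Overwrite="true" />
</Target>
```

Compiling the generated file into the application ties every binary back to the CI build that produced it.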
Database Object Versioning
Database scripts, including both DDL and application system data, are an important part of the typical corporate application. In the majority of cases, however, these scripts are not kept under source control or versioned along with the rest of the application source code. This adds risk to every deployment, as the database version cannot be accurately matched to the version of the corresponding application(s). In recent years, the marketplace has offered up a wide array of tools to address this need, including RoundhousE, dbDeploy, and a suite of tools within TFS and Visual Studio. Automatically applying newly checked-in database scripts in lower environments helps ensure that errors within scripts are found quickly and accurately, and enforces that all changes go through source control and are associated with a particular version number. This enables proper rollbacks and restores of previous database versions, keeps the database correctly versioned with the application, and eliminates time wasted on manual errors.
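As an illustration of the RoundhousE approach, scripts live in a conventional folder layout under source control and are applied by a command-line run during the build. The layout and flags below are typical but should be verified against the RoundhousE wiki; names are placeholders:

```
db\
  up\       -- one-time, versioned change scripts: 0001_CreateOrders.sql, ...
  sprocs\   -- re-runnable stored procedure scripts
  views\    -- re-runnable view scripts

rh.exe /s=(local) /d=MyAppDb /f=.\db
```

RoundhousE records each applied script in tracking tables, so re-running the command against any environment brings that database up to the checked-in version.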
Alerting and Reporting
The most important aspect of any CI implementation is the alerting and reporting associated with the solution. Since the objective of CI is to raise awareness of coding issues and shorten the reaction time to them, proper notifications and reporting are key to ensuring that all team members are aware at all times of breaks and other metrics within the build. All modern CI platforms support a variety of customizable reports and email notifications. The most common out-of-the-box reports include broken build history, build duration statistics, code coverage reports, and FxCop violation reports. Unfortunately, most CI platforms do not come with standard reports for complexity graphs, but these have been developed and are readily available as customized addons from a variety of sources.
The final component to a great CI solution is the implementation of information radiators. Every team has members that are not actively developing full time, such as development managers, BAs, coaches, and product managers. The placement of information radiators throughout team areas helps to provide further visibility into project metrics and will raise awareness into the quality of the product.
There are many types of information radiators, but those centered around CI builds and test builds typically include:
- Ambient Orbs – lights that turn red on a broken build
- Build Monitors – monitors that show the status of one or more builds, and the name of the last developer to check in code in the event the build is broken
- Build Break Reports – graphical representation of the build history indicating the percentage of time the build is broken
- Complexity Reports – a list of the top 10 most complex methods in a code base
Test deployments typically borrow from several of the steps utilized in the CI build setup, starting with the compilation of code and deployment of database objects. However, they do not include the various code analysis functions described under the CI build, as these have presumably already been run on code that is checked in and ready to deploy to the test environment.
Instead, test deployments in their simplest form consist of three simple steps:
- Compile code
- Deploy/Copy application artifacts
- Deploy database updates
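The three steps above can be sketched as a single MSBuild target. Server names, paths, and the RoundhousE invocation are illustrative assumptions:

```xml
<!-- Illustrative MSBuild deployment target; server names and paths are placeholders -->
<Target Name="DeployToQA">
  <!-- 1. Compile the solution -->
  <MSBuild Projects="MyApp.sln" Properties="Configuration=Release" />

  <!-- 2. Copy application artifacts to the QA server -->
  <ItemGroup>
    <Artifacts Include="MyApp.Web\bin\Release\**\*.*" />
  </ItemGroup>
  <Copy SourceFiles="@(Artifacts)"
        DestinationFolder="\\qa-server\MyApp\%(RecursiveDir)" />

  <!-- 3. Apply pending database updates -->
  <Exec Command="tools\rh\rh.exe /s=qa-sql /d=MyAppDb /f=.\db" />
</Target>
```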
In more advanced scenarios, these builds may contain a variety of other environmental updates, such as MSMQ setups, IIS configurations, updating of multiple machines for distributed computing scenarios, etc.
These scripts are obviously very dependent upon the application and environment in question, and can easily grow into complex processes quite quickly. Many times, test deployments are supplemented with environment database refreshes. However, due to their complexity and potentially long-running processes, these are not typically kicked off as part of the test deployment itself.
Dependencies differ from environment to environment, and applications are typically written so that these dependencies can be updated or referenced in the application's configuration files. Updating these configuration files by hand, however, is a cumbersome and error-prone task. With delta configuration files, this can be automated as a simple build step: specifying the target environment produces the correct version of the configuration files.
Other approaches to this problem include maintaining a separate configuration file for each environment. Unfortunately, this method can be equally time-consuming and cumbersome: adding new configuration values to every copy is frequently overlooked, introducing defects into downstream environments.
Standard delta configuration files specify only the values that differ from the common baseline configuration for each environment, and therefore contain fewer values to maintain and ensure are accurate. Delta configuration approaches were introduced to .NET by the Microsoft PnP group with the Enterprise Library initiative, and have been widely accepted as standard practice across the industry as a common step in the deployment build process.
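The same delta idea is implemented in .NET web projects by XDT config transforms: an environment-specific file lists only the values that differ from the baseline Web.config. The environment name, connection details, and setting names below are illustrative:

```xml
<!-- Web.QA.config: only the deltas from the baseline Web.config -->
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <connectionStrings>
    <!-- Point the QA environment at its own database server -->
    <add name="MainDb"
         connectionString="Server=qa-sql;Database=MyAppDb;Integrated Security=true"
         xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
  </connectionStrings>
  <system.web>
    <!-- Strip the debug attribute outside development -->
    <compilation xdt:Transform="RemoveAttributes(debug)" />
  </system.web>
</configuration>
```

At deployment time, the build applies the transform for the target environment to the baseline file, producing the final configuration without any hand editing.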
Publishing the build is the most critical step in the deployment build process. This step can be incredibly simple or extremely complex. While this automation always has benefits, the more complex the build, the more value automating it brings to the team in terms of time savings and quality assurance.
On Demand Test Deployments
One of the most important aspects of test deployments is that, unlike the CI build, they should not be triggered automatically by a check-in or code change. Test deployments push changes to the QA environment, including database updates and application updates; more complex processes may destroy and restore event queues, restart IIS, or even truncate database tables and logs. Clearly, if such a process were to start silently while a tester was verifying a particular fix, it could terminate their session with the application and would likely result in lost time and/or work. A key characteristic of a deployment build, then, is that while the deployment itself is automated, it is triggered only by the manual click of a button. In this way, QA can ‘pull’ new versions into the test environment as they are ready to test the changes, without their work being lost or terminated unknowingly.
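In CruiseControl.NET, for example, this on-demand behavior can be achieved by defining the deployment project with no automatic triggers, so it runs only when someone clicks Force Build on the dashboard. Project and file names below are placeholders:

```xml
<!-- Illustrative: a deployment project with no automatic triggers;
     it runs only via the dashboard's Force Build button -->
<project name="MyApp-DeployQA">
  <triggers />
  <tasks>
    <msbuild>
      <executable>C:\Windows\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe</executable>
      <projectFile>deploy.proj</projectFile>
      <buildArgs>/t:DeployToQA</buildArgs>
    </msbuild>
  </tasks>
</project>
```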
References
- CruiseControl.NET – http://ccnet.sourceforge.net/CCNET/Documentation.html
- RoundhousE – https://github.com/chucknorris/roundhouse/wiki/GettingStarted
- dbDeploy – http://dbdeploy.com/
- Enterprise Library – http://entlib.codeplex.com/
- NAnt – http://nant.sourceforge.net/
- MSBuild Reference – http://msdn.microsoft.com/en-us/library/0k6kkbsd.aspx
- Source Monitor – http://www.campwoodsw.com/sourcemonitor.html
- OpenCover – https://github.com/sawilde/opencover
- NCover – http://www.ncover.com/
- FxCop – http://msdn.microsoft.com/en-us/library/bb429476(v=vs.80).aspx