Continuous Integration with Jenkins
My team started playing with Jenkins for automated testing. We moved from that to full continuous integration over a series of sprints dedicated to improving our deploy process. This post goes into the details of our configuration and catalogues some of the challenges we faced along the way.
We have software, and unit tests, and environments, but nothing to tie them all together. Enter Jenkins, probably the most widely used continuous integration tool. I wasn’t a huge fan of Jenkins, but it has earned my admiration. The more I use it, the more I realize it was my own bias and not Jenkins that was the problem.
This was our first foray into CI. We initially set up Jenkins so we'd have automated unit tests: whenever a developer pushes code to master, we fire off a curl call to Jenkins, which triggers a build of our primary project. Before this, unit tests were run by each developer before committing, and sometimes rarely at that.
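The trigger itself boils down to one request against Jenkins' remote-trigger URL. A minimal sketch of how the hook builds it (the host, job name, and token below are placeholders, not our real values):

```shell
# Compose the remote-trigger URL for a Jenkins job.
# $1 = Jenkins base URL, $2 = job name, $3 = build token
build_trigger_url() {
  echo "$1/job/$2/build?token=$3"
}

# The post-push hook then fires something like this; "Trigger builds
# remotely" must be enabled in the job's configuration for the token to work:
# curl -X POST "$(build_trigger_url https://jenkins.example.com primary-project build-token)"
```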
Deploy to $ENVIRONMENT
We have four environments:

- DEVELOPMENT: a virtual machine each developer has, where code changes are made and tested by the individuals doing the work.
- INTEGRATION: where code from multiple developers is checked out and tested together to look for regressions.
- STAGING: where everything for the next release is prepared and tested in an environment as close to production as possible.
- PRODUCTION: where the live code lives.

We configured the "deploy pipeline" plugin and set each step of the Jenkins build to push code to the next environment. Now, as soon as we have a passing build, Jenkins automatically pushes it to INTEGRATION, where we can do our internal smoke testing. Currently, a manual step (clicking a button) is required to push that build out to STAGING, and a second click is required to push *that* code out to PRODUCTION. We're not quite comfortable enough with our unit test and functional test coverage to let Jenkins just push this for us, but maybe one day.
After each step of the deploy, Jenkins can run additional scripts or actions to do things like clear memcached or restart Apache to ensure a clean APC cache. We use it to do both of these things, as well as to notify New Relic that we just did a production deploy.
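A sketch of what such a post-deploy script can look like. Every hostname, service name, app name, and key here is a placeholder, and `DRY_RUN` defaults to on so the commands only print; a real Jenkins job would run it with `DRY_RUN=0`:

```shell
#!/bin/sh
# Post-deploy housekeeping run by Jenkins after pushing a build.
# All hosts, names, and keys below are placeholders.
DRY_RUN="${DRY_RUN:-1}"

run() {
  # Print the command in dry-run mode, otherwise execute it.
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $1"
  else
    eval "$1"
  fi
}

# Flush memcached so stale objects don't survive the deploy.
run "echo flush_all | nc -q 1 memcached.internal 11211"

# Graceful Apache restart for a clean APC cache.
run "sudo service apache2 graceful"

# Record the deploy with New Relic's deployments endpoint.
run "curl -s -X POST -H \"x-api-key: \$NEW_RELIC_API_KEY\" -d 'deployment[app_name]=our-app' https://api.newrelic.com/deployments.xml"
```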
I haven’t seen a way to use rsync over ssh to copy build artifacts. As far as I know you can only use scp, and you can forget about symlinks: they aren’t preserved, their destinations are copied instead. This is actually a limitation of scp, not of Jenkins.
I haven’t found a good way to push artifacts while maintaining symbolic links. Luckily, only one symlink is critical to our project, and that’s easy enough to recreate using a script that Jenkins calls after a successful push to the new environment.
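That cleanup script is essentially "remove the flattened copy, put the link back". A minimal sketch, with made-up paths (ours differ):

```shell
#!/bin/sh
# Restore a symlink that scp flattened into a real directory.
# Usage: relink <release_dir> <link_name> <target>
relink() {
  release_dir="$1"; link_name="$2"; target="$3"
  # scp copied the link's destination as a plain directory; remove it...
  rm -rf "${release_dir:?}/${link_name:?}"
  # ...and point the link back at the shared location.
  ln -s "$target" "$release_dir/$link_name"
}

# e.g. after a successful push (paths are hypothetical):
# relink /var/www/our-app uploads /var/shared/uploads
```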
I’d love to have Jenkins automatically run our functional tests in INTEGRATION for us before allowing us to push to staging. Since we use Behat internally for our functional tests, and Behat can format its output in JUnit style, this shouldn’t be hard to do. I just need to find the time.
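The build step would likely be as simple as pointing Behat's JUnit formatter at a directory Jenkins already watches. A guess at how we'd wire it up (paths and binary location are ours to confirm, not a tested config):

```shell
# Hypothetical Jenkins "Execute shell" build step.
# Behat's junit formatter writes XML reports into the --out directory,
# which the "Publish JUnit test result report" post-build action can read.
bin/behat --format junit --out build/behat-reports
```

From there, Jenkins would mark the build unstable on failures and block promotion to staging.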
Want to Help?
Want to help us move to continuous deployment?
We’re hiring: seedbox.com/careers