Continuous Integration

by Marcela-Nicoleta Craciunescu, 31 March 2014


Transcript of Continuous Integration

What is CI?
Continuous integration (CI) is the practice, in software engineering, of merging all developer workspaces with a shared mainline several times a day. It was first named and proposed as part of extreme programming (XP). Its main aim is to prevent integration problems, referred to as "integration hell" in early descriptions of XP. CI can be seen as an intensification of practices of periodic integration advocated by earlier published methods of incremental and iterative software development, such as the Booch method. CI isn't universally accepted as an improvement over frequent integration, so it is important to distinguish between the two, as there is disagreement about the virtues of each.
CI was originally intended to be used in combination with automated unit tests written through the practices of test-driven development. Initially this was conceived of as running all unit tests and verifying they all passed before committing to the mainline. Later elaborations of the concept introduced build servers, which automatically run the unit tests periodically or even after every commit and report the results to the developers. The use of build servers (not necessarily running unit tests) had already been practised by some teams outside the XP community. Now, many organisations have adopted CI without adopting all of XP.
In addition to automated unit tests, organisations using CI typically use a build server to implement continuous processes of applying quality control in general — small pieces of effort, applied frequently. In addition to running the unit and integration tests, such processes run additional static and dynamic tests, measure and profile performance, extract and format documentation from the source code and facilitate manual QA processes. This continuous application of quality control aims to improve the quality of software, and to reduce the time taken to deliver it, by replacing the traditional practice of applying quality control after completing all development. This is very similar to the original idea of integrating more frequently to make integration easier, only applied to QA processes.
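As a rough illustration of "small pieces of effort, applied frequently", the sketch below shows the shape of such a per-commit quality gate in Python. The individual commands (make test, make lint, make docs) are placeholders for whatever build, test and analysis tools a real project would use; nothing here is prescribed by the presentation.

    # Minimal sketch of a per-commit quality gate: run several small checks,
    # report each result, and fail the build if any of them fails.
    # The make targets are placeholders, not real project commands.
    import subprocess
    import sys

    CHECKS = [
        ("unit tests", ["make", "test"]),
        ("static analysis", ["make", "lint"]),
        ("docs build", ["make", "docs"]),
    ]

    def run_quality_gate():
        failures = []
        for name, cmd in CHECKS:
            result = subprocess.run(cmd, capture_output=True, text=True)
            print(f"[{'OK' if result.returncode == 0 else 'FAILED'}] {name}")
            if result.returncode != 0:
                failures.append((name, result.stdout + result.stderr))
        return failures

    if __name__ == "__main__":
        failed = run_quality_gate()
        for name, output in failed:
            print(f"--- output of failed check: {name} ---")
            print(output)
        sys.exit(1 if failed else 0)   # a non-zero exit marks the build as broken

A real build server adds history, trend reports and notifications on top of exactly this kind of loop.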
Advantages
Continuous integration has many advantages:

When unit tests fail or a bug emerges, developers can revert the codebase to a bug-free state without wasting time debugging
Developers detect and fix integration problems continuously, avoiding last-minute chaos at release dates (when everyone tries to check in their slightly incompatible versions)
Early warning of broken/incompatible code
Early warning of conflicting changes
Immediate unit testing of all changes
Constant availability of a "current" build for testing, demo, or release purposes
Immediate feedback to developers on the quality, functionality, or system-wide impact of code they are writing
Frequent code check-in pushes developers to create modular, less complex code
Metrics generated from automated testing and CI (such as metrics for code coverage, code complexity, and features complete) focus developers on developing functional, quality code, and help develop momentum in a team
Disadvantages

Initial setup time required
Well-developed test-suite required to achieve automated testing advantages

Many teams using CI report that the advantages of CI well outweigh the disadvantages. The effect of finding and fixing integration bugs early in the development process saves both time and money over the lifespan of a project.
Continuous Integration Cycle
Maintain a code repository
This practice advocates the use of a revision control system for the project's source code. All artifacts required to build the project should be placed in the repository. In this practice, and in the revision control community, the convention is that the system should be buildable from a fresh checkout and not require additional dependencies.
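One way to keep this practice honest is to verify, from time to time, that the project really does build from a fresh checkout. The sketch below clones into an empty temporary directory and runs a single build command there; the repository URL and build command are hypothetical placeholders, not taken from the presentation.

    # Sketch: confirm the system is buildable from a fresh checkout, using only
    # what is in the repository. REPO_URL and BUILD_CMD are placeholders.
    import subprocess
    import tempfile

    REPO_URL = "https://example.com/acme/project.git"   # hypothetical repository
    BUILD_CMD = ["make", "all"]                          # hypothetical single build command

    def build_from_fresh_checkout():
        with tempfile.TemporaryDirectory() as workdir:
            subprocess.run(["git", "clone", REPO_URL, workdir], check=True)
            subprocess.run(BUILD_CMD, cwd=workdir, check=True)   # must need no extra dependencies

    if __name__ == "__main__":
        build_from_fresh_checkout()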
Automate the build
A single command should have the capability of building the system. Many build-tools, such as make, have existed for many years. Other more recent tools are frequently used in continuous integration environments. Automation of the build should include automating the integration, which often includes deployment into a production-like environment. In many cases, the build script not only compiles binaries, but also generates documentation, website pages, statistics and distribution media (such as Debian DEB, Red Hat RPM or Windows MSI files).
Automate deployment
Most CI systems allow the running of scripts after a build finishes. In most situations, it is possible to write a script to deploy the application to a live test server that everyone can look at. A further advance in this way of thinking is continuous deployment, which calls for the software to be deployed directly into production, often with additional automation to prevent defects or regressions.
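As a sketch of such a post-build script, assuming the build produces a single artifact and that a test host is reachable over SSH, a deployment step can be as small as the following; the host, user, paths and use of scp are all assumptions for illustration.

    # Sketch of a post-build deployment hook: copy the freshly built artifact to
    # a shared test server that everyone can look at. All names are placeholders.
    import subprocess
    import sys

    ARTIFACT = "build/app.war"                       # placeholder artifact from the build
    TEST_SERVER = "deploy@test.example.com"          # placeholder test host
    REMOTE_PATH = "/srv/test/app.war"                # placeholder target path

    def deploy_to_test_server() -> int:
        # scp is only one possible transport; rsync or a container push would do as well.
        return subprocess.run(["scp", ARTIFACT, f"{TEST_SERVER}:{REMOTE_PATH}"]).returncode

    if __name__ == "__main__":
        sys.exit(deploy_to_test_server())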
Make it easy to get the latest deliverables
Making builds readily available to stakeholders and testers can reduce the amount of rework necessary when rebuilding a feature that doesn't meet requirements. Additionally, early testing reduces the chances that defects survive until deployment. Finding errors earlier also, in some cases, reduces the amount of work necessary to resolve them.
Everyone can see the results of the latest build
It should be easy to find out whether the build breaks and, if so, who made the relevant change.
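A low-tech way to achieve this, sketched below under the assumption that the build runs inside a local git checkout, is to record the outcome and the author of the most recent commit in a status file that anyone can open.

    # Sketch: after each build, record whether it passed and who made the latest
    # change, so "is the build broken, and who broke it?" is a one-line lookup.
    import subprocess
    from datetime import datetime, timezone

    def last_commit_author() -> str:
        # Ask git for the author of the most recent commit in the current checkout.
        out = subprocess.run(["git", "log", "-1", "--format=%an <%ae>"],
                             capture_output=True, text=True, check=True)
        return out.stdout.strip()

    def write_status(build_ok: bool, path: str = "latest-build-status.txt") -> None:
        stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
        status = "SUCCESS" if build_ok else "BROKEN"
        with open(path, "w") as f:
            f.write(f"{stamp} build {status}, last change by {last_commit_author()}\n")

    if __name__ == "__main__":
        write_status(build_ok=True)   # in a real job this flag comes from the build result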
Test in a clone of the production environment
Having a separate test environment can lead to failures in tested systems when they are deployed to the production environment, because the production environment may differ from the test environment in significant ways. However, building an exact replica of the production environment is often cost-prohibitive. Instead, the pre-production environment should be built as a scalable version of the production environment, to reduce costs while maintaining the same technology-stack composition and nuances.
Keep the build fast
The build needs to complete rapidly, so that if there is a problem with integration, it is quickly identified.
Make the build self-testing
Once the code is built, all tests should run to confirm that it behaves as the developers expect it to behave.
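In other words, the build fails unless every test passes. A minimal sketch using Python's standard unittest discovery is shown below; the make compile step and the tests/ directory are assumptions about project layout.

    # Sketch of a self-testing build: compile, then run the whole test suite and
    # let the exit code decide whether the build succeeded.
    import subprocess
    import sys
    import unittest

    def run_all_tests(start_dir: str = "tests") -> bool:
        suite = unittest.defaultTestLoader.discover(start_dir)
        result = unittest.TextTestRunner(verbosity=1).run(suite)
        return result.wasSuccessful()

    if __name__ == "__main__":
        subprocess.run(["make", "compile"], check=True)   # placeholder compile step
        sys.exit(0 if run_all_tests() else 1)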
Everyone commits to the baseline every day
By committing regularly, every committer can reduce the number of conflicting changes. Checking in a week's worth of work runs the risk of conflicting with other features and can be very difficult to resolve. Early, small conflicts in an area of the system cause team members to communicate about the changes they are making. Committing all changes at least once a day (once per feature built) is generally considered part of the definition of continuous integration. In addition, performing a nightly build is generally recommended.
Every commit (to baseline) should be built
The system should build commits to the current working version in order to verify that they integrate correctly. A common practice is to use Automated Continuous Integration, although this may be done manually. For many, continuous integration is synonymous with using Automated Continuous Integration where a continuous integration server or daemon monitors the version control system for changes, then automatically runs the build process.
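At its simplest, such a continuous integration daemon is just a loop that watches the repository and triggers the build for every new commit. The sketch below polls a git remote; the URL, branch and build command are placeholders, and this is only an illustration of the idea, not how Jenkins itself is implemented.

    # Sketch of a minimal CI daemon: poll the version control system and build
    # every new commit it sees. All names are placeholders.
    import subprocess
    import time

    REPO_URL = "https://example.com/acme/project.git"   # placeholder
    BRANCH = "master"
    BUILD_CMD = ["make", "test"]                         # placeholder build-and-test command
    POLL_SECONDS = 60

    def latest_remote_commit() -> str:
        out = subprocess.run(["git", "ls-remote", REPO_URL, BRANCH],
                             capture_output=True, text=True, check=True)
        return out.stdout.split()[0] if out.stdout else ""

    def watch_and_build() -> None:
        last_built = None
        while True:
            head = latest_remote_commit()
            if head and head != last_built:
                print(f"new commit {head[:8]}, starting build")
                ok = subprocess.run(BUILD_CMD).returncode == 0
                print(f"build of {head[:8]} {'succeeded' if ok else 'FAILED'}")
                last_built = head
            time.sleep(POLL_SECONDS)

    if __name__ == "__main__":
        watch_and_build()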
What is Jenkins?
Jenkins is an award-winning application that monitors executions of repeated jobs, such as building a software project or jobs run by cron. Among other things, Jenkins currently focuses on the following two jobs:
1. Building/testing software projects continuously, just like CruiseControl or DamageControl. In a nutshell, Jenkins provides an easy-to-use continuous integration system, making it easier for developers to integrate changes to the project and for users to obtain a fresh build. The automated, continuous build increases productivity.
2. Monitoring executions of externally-run jobs, such as cron jobs and procmail jobs, even those that are run on a remote machine. With cron, for example, all you receive is regular e-mails that capture the output, and it is up to you to look at them diligently and notice when something broke. Jenkins keeps those outputs and makes it easy for you to notice when something is wrong.
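The gain over plain cron is simply that output and exit status end up somewhere you will actually look. The wrapper below is a generic sketch of that idea (the log path is a placeholder); it does not use Jenkins' own external-job interface.

    # Sketch: wrap an externally-run job (e.g. one started by cron) so that its
    # output and exit status are recorded instead of disappearing into e-mail.
    import subprocess
    import sys
    from datetime import datetime, timezone

    LOG_FILE = "/var/log/nightly-job.log"   # placeholder log location

    def run_and_record(cmd) -> int:
        started = datetime.now(timezone.utc).isoformat(timespec="seconds")
        result = subprocess.run(cmd, capture_output=True, text=True)
        with open(LOG_FILE, "a") as log:
            log.write(f"=== {started} exit={result.returncode} cmd={' '.join(cmd)}\n")
            log.write(result.stdout)
            log.write(result.stderr)
        return result.returncode

    if __name__ == "__main__":
        if len(sys.argv) < 2:
            sys.exit("usage: run_and_record.py <command> [args...]")
        # In the crontab, this wrapper is invoked with the real job as its arguments.
        sys.exit(run_and_record(sys.argv[1:]))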
Cron is the time-based job scheduler in Unix-like computer operating systems. Cron enables users to schedule jobs (commands or shell scripts) to run periodically at certain times or dates. It is commonly used to automate system maintenance or administration, though its general-purpose nature means that it can be used for other purposes, such as connecting to the Internet and downloading email. The name cron comes from the Greek word χρόνος [chronos] for time.
procmail is a mail delivery agent (MDA) capable of sorting incoming mail into various directories and filtering out spam messages. Procmail is widely used on Unix-based systems and stable, but no longer maintained; users who wish to use a maintained program are advised to use an alternative MDA, such as maildrop.
Features
Easy installation:
Just java -jar jenkins.war, or deploy it in a servlet container. No additional install, no database.
Easy configuration:
Jenkins can be configured entirely from its friendly web GUI with extensive on-the-fly error checks and inline help. There's no need to tweak XML manually anymore, although you can still do so if you'd like.
Change set support:
Jenkins can generate the list of changes that went into a build from Subversion/CVS. This is done in a fairly efficient fashion, to reduce the load on the repository.
Permanent links:
Jenkins gives you clean readable URLs for most of its pages, including some permalinks like "latest build"/"latest successful build", so that they can be easily linked from elsewhere.
RSS/E-mail/IM integration:
Monitor build results by RSS or e-mail to get real-time notifications on failures.
After-the-fact tagging:
Builds can be tagged long after they are completed.
JUnit/TestNG test reporting:
JUnit test reports can be tabulated, summarized, and displayed with history information, such as when a test started breaking. The history trend is plotted as a graph.
Distributed builds:
Jenkins can distribute build/test loads to multiple computers. This lets you get the most out of those idle workstations sitting beneath developers' desks.
File fingerprinting:
Jenkins can keep track of which build produced which jars, and which build is using which version of a jar, and so on. This works even for jars that are produced outside Jenkins, and is ideal for tracking dependencies across projects (a sketch of the underlying idea follows this feature list).
Plugin support:
Jenkins can be extended via third-party plugins. You can write plugins to make Jenkins support the tools and processes that your team uses.
formerly known as "Hudson Labs"
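File fingerprinting (mentioned in the feature list above) boils down to recording a checksum for every artifact a build produces, so a jar found later can be traced back to the build that made it. The sketch below shows that underlying idea with MD5 checksums and a throwaway JSON file; it illustrates the concept only and is not Jenkins' implementation.

    # Sketch of the idea behind file fingerprinting: checksum each artifact and
    # remember which build produced it. The JSON "database" is a placeholder.
    import hashlib
    import json

    def fingerprint(path: str) -> str:
        h = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def record_build_artifacts(build_id: str, artifacts, db_path: str = "fingerprints.json") -> None:
        try:
            with open(db_path) as f:
                db = json.load(f)
        except FileNotFoundError:
            db = {}
        for artifact in artifacts:
            db[fingerprint(artifact)] = {"build": build_id, "file": artifact}
        with open(db_path, "w") as f:
            json.dump(db, f, indent=2)

    if __name__ == "__main__":
        record_build_artifacts("build-42", ["dist/app.jar"])   # placeholder artifact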
Jenkins Best Practices
Use "file fingerprinting" to manage dependencies.
The most reliable builds will be clean builds, which are built fully from Source Code Control.
Always configure your job to generate trend reports and run automated tests when running a Java build
Always secure Jenkins
Backup Jenkins Home regularly
Integrate tightly with a repository browsing tool like FishEye if you are using Subversion as your source code management tool
Integrate tightly with your issue tracking system, such as JIRA or Bugzilla, to reduce the need for maintaining a change log
Set up a different job/project for each maintenance or development branch you create
Allocate a different port for parallel project builds, and avoid scheduling all jobs to start at the same time
Write jobs for your maintenance tasks, such as cleanup operations, to avoid full-disk problems (a sketch of such a cleanup job follows this list)
Set up Jenkins on the partition that has the most free disk-space
Archive unused jobs before removing them.
Take steps to ensure failures are reported as soon as possible.
Set up email notifications mapping to ALL developers in the project, so that everyone on the team keeps a finger on the pulse of the project's current status.
Configure the Jenkins bootstrapper to update your working copy prior to running the build goal/target
Tag, label, or baseline the codebase after the successful build.
In larger systems, make sure all jobs run on slaves. This ensures that the Jenkins master can scale to support many more jobs than if it also had to process build jobs directly.
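As an example of the maintenance jobs recommended above, the sketch below removes archived build directories older than a retention window; the directory and the 30-day limit are assumptions, not values suggested by the presentation.

    # Sketch of a cleanup job: delete old build directories so the Jenkins
    # machine does not run out of disk space. Path and retention are placeholders.
    import os
    import shutil
    import time

    BUILDS_ROOT = "/var/lib/jenkins/old-build-archives"   # placeholder directory
    MAX_AGE_DAYS = 30                                      # placeholder retention window

    def clean_old_builds() -> None:
        cutoff = time.time() - MAX_AGE_DAYS * 24 * 3600
        for name in os.listdir(BUILDS_ROOT):
            path = os.path.join(BUILDS_ROOT, name)
            if os.path.isdir(path) and os.path.getmtime(path) < cutoff:
                print(f"removing {path}")
                shutil.rmtree(path)

    if __name__ == "__main__":
        clean_old_builds()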
Jenkins supports a "master/slave" mode, where the workload of building projects is delegated to multiple "slave" nodes, allowing a single Jenkins installation to host a large number of projects or to provide the different environments needed for builds and tests. This section describes that mode and how to use it.
How does it work?

A "master" is an installation of Jenkins. When you weren't using the master/slave support, a master was all you had. Even in the master/slave mode, the role of a master remains the same. It will serve all HTTP requests, and it can still build projects on its own.
Slaves are computers that are set up to build projects for a master. Jenkins runs a separate program called "slave agent" on slaves. In other words, there is no need to install the full Jenkins (package or compiled binaries) on a slave node. There are various ways to start slave agents, but in the end a slave agent and Jenkins master needs to establish a bi-directional byte stream (for example a TCP/IP socket.)
When slaves are registered to a master, a master starts distributing loads to slaves. The exact delegation behavior depends on configuration of each project. Some projects may choose to "stick" to a particular machine for a build, while others may choose to roam freely between slaves. For people accessing Jenkins website, things works mostly transparently. You can still browse javadoc, see test results, download build results from a master, without ever noticing that builds were done by slaves. In other words, the master becomes a sort of "portal" to the entire build farm.
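The essential shape of this arrangement is an agent that connects to the master over a byte stream and runs whatever build commands it is handed. The toy sketch below illustrates that shape with a plain TCP socket; the host and port are placeholders, and this is emphatically not Jenkins' real slave agent or remoting protocol.

    # Toy illustration of a slave agent: connect to the master over TCP, read one
    # build command per line, run it, and send the output back.
    import socket
    import subprocess

    MASTER_HOST = "master.example.com"   # placeholder master address
    MASTER_PORT = 9999                   # placeholder agent port

    def run_slave_agent() -> None:
        with socket.create_connection((MASTER_HOST, MASTER_PORT)) as conn:
            for line in conn.makefile("r"):          # one command per line from the master
                cmd = line.strip()
                if not cmd:
                    continue
                result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
                reply = f"exit={result.returncode}\n{result.stdout}{result.stderr}"
                conn.sendall(reply.encode())

    if __name__ == "__main__":
        run_slave_agent()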
https://wiki.jenkins-ci.org/display/JENKINS/Step+by+step+guide+to+set+up+master+and+slave+machines
https://ci.jenkins-ci.org/view/All/
Example
https://ci.jenkins-ci.org/job/core_selenium-test/
Thank you for your attention!