Devops in Practice - Danilo Sato - E-Book

Devops in Practice E-Book

Danilo Sato


Page count: 269

Year of publication: 2014




Table of Contents

Dedication

Preface

About the book

Acknowledgements

About the author

Chapter 1: Introduction

1.1 Traditional approach

1.2 An alternative approach: DevOps and Continuous Delivery

1.3 About the book

Chapter 2: Everything starts in production

2.1 Our example application: an online store

2.2 Installing the production environment

2.3 Configuring the production servers

2.4 Application build and deploy

Chapter 3: Monitoring

3.1 Installing the monitoring server

3.2 Monitoring other hosts

3.3 Exploring Nagios service checks

3.4 Adding more specific checks

3.5 Receiving alerts

3.6 A problem hits production, now what?

Chapter 4: Infrastructure as code

4.1 Provision, configure or deploy?

4.2 Configuration management tools

4.3 Provisioning the database server

4.4 Provisioning the web server

Chapter 5: Puppet beyond the basics

5.1 Classes and defined types

5.2 Using modules for packaging and distribution

5.3 Refactoring the web server Puppet code

5.4 Separation of concerns: infrastructure vs. application

5.5 Puppet forge: reusing community modules

5.6 Conclusion

Chapter 6: Continuous integration

6.1 Agile engineering practices

6.2 Starting with the basics: version control

6.3 Automating the build process

6.4 Automated testing: reducing risk and increasing confidence

6.5 What is continuous integration?

6.6 Provisioning a continuous integration server

6.7 Configuring the online store build

6.8 Infrastructure as code for the continuous integration server

Chapter 7: Deployment pipeline

7.1 Infrastructure affinity: using native packages

7.2 Continuous integration for the infrastructure code

7.3 Deployment pipeline

7.4 Next steps

Chapter 8: Advanced topics

8.1 Deploying in the cloud

8.2 DevOps beyond tools

8.3 Advanced monitoring systems

8.4 Complex deployment pipelines

8.5 Managing database changes

8.6 Deployment orchestration

8.7 Managing environment configuration

8.8 Architecture evolution

8.9 Security

8.10 Conclusion

References

Go to Code Crushing and see our other e-books - www.codecrushing.com.

Dedication

"To my dad, who introduced me to the world of computers and is my life-long role model."

Preface

Jez Humble

Shortly after I graduated from university in 1999, I got a job at a start-up in London. My boss, Jonny LeRoy, taught me the practice of continuous deployment: When we were finished with a new feature, we would do a quick manual smoke test on our workstation and then ftp the relevant ASP scripts directly onto the production server – not a practice I would recommend today, but it did have the advantage of enabling us to get new ideas to our users quickly.

In 2004 I joined ThoughtWorks where my job was to help enterprises deliver software, and I was appalled to discover that lead times of months or even years were common. Fortunately, I was lucky enough to work with a number of smart people in our industry who were exploring how to improve these outcomes while also increasing quality and improving our ability to serve our users. The practices we came up with also made life better for the people we were working with (for example, no more deployments outside of business hours) – an important indication that we were doing something right. In 2010, Dave Farley and I published "Continuous Delivery," in which we describe the principles and practices that make it possible to deliver small, incremental changes quickly, cheaply, and at low risk.

However, our book omits the nuts and bolts of how one actually gets started: creating a deployment pipeline, putting monitoring and infrastructure as code in place, and taking the other important, practical steps needed to implement continuous delivery. Thus I am delighted that Danilo has written the book that you have in front of you, which I think is an important and valuable contribution to the field. Danilo has been deeply involved in helping organizations implement the practices of continuous delivery for several years and has deep experience, and I am sure you will find his book practical and informative.

I wish you all the best with your journey.

About the book

Delivering software to production is a process that has become increasingly difficult in the IT departments of various companies. Long testing cycles and a division between development and operations teams are some of the factors that contribute to this problem. Even Agile teams that produce releasable software at the end of each iteration are unable to get to production when they encounter these barriers.

DevOps is a cultural and professional movement that is trying to break down those barriers. Focusing on automation, collaboration, tools and knowledge sharing, DevOps is showing that developers and system engineers have much to learn from each other.

In this book, we show how to implement DevOps and Continuous Delivery practices to increase the deployment frequency in your company, while also increasing the production system's stability and reliability. You will learn how to automate the build and deployment process for a web application, how to automate infrastructure configuration, how to monitor the production system, as well as how to evolve the architecture and migrate it to the cloud, in addition to learning several tools that you can apply at work.

Acknowledgements

To my father, Marcos, for always being an example to follow and for going beyond by trying to follow the code examples without any knowledge of the subject. To my mother, Solange, and my sister, Carolina, for encouraging me and correcting several typos and grammar mistakes on preliminary versions of the book.

To my partner and best friend, Jenny, for her care and support during the many hours I spent working on the book.

To my editor, Paulo Silveira, for giving me the chance, and knowing how to encourage me at the right time in order for the book to become a reality. To my reviewer and friend, Vivian Matsui, for correcting all my grammar mistakes.

To my technical reviewers: Hugo Corbucci, Daniel Cordeiro and Carlos Vilella. Thanks for helping me find better ways to explain difficult concepts, for reviewing terminology, for questioning my technical decisions and helping me improve the overall contents of the book.

To my colleagues Prasanna Pendse, Emily Rosengren, Eldon Almeida and other members of the "Blogger's Bloc" group at ThoughtWorks, for encouraging me to write more, as well as for providing feedback on the early chapters.

To my many other colleagues at ThoughtWorks, especially Rolf Russell, Brandon Byars and Jez Humble, who heard my thoughts on the book and helped me choose the best way to approach each subject, chapter by chapter.

Finally, to everyone who contributed directly or indirectly in writing this book.

Thank you so much!

About the author

Danilo Sato started to program as a child at a time when many people still did not have home computers. In 2000, he entered the bachelor's program in Computer Science at the University of São Paulo [USP], beginning his career as a Linux Network Administrator at IME-USP for 2 years. While at university he began working as a Java / J2EE developer and had his first contact with Agile in an Extreme Programming (XP) class.

He started his Masters at USP soon after graduation, and supervised by Professor Alfredo Goldman, he presented his dissertation in August 2007 about "Effective Use of Metrics in Agile Software Development" [null].

During his career, Danilo has been a consultant, developer, systems administrator, analyst, systems engineer, teacher, architect and coach, becoming a Lead Consultant at ThoughtWorks in 2008, where he has worked on Ruby, Python and Java projects in Brazil, the USA and the United Kingdom. Currently, Danilo has been helping customers adopt DevOps and Continuous Delivery practices to reduce the time between having an idea, implementing it, and running it in production.

Danilo also has experience as a speaker at international conferences, presenting talks and workshops at: XP 2007/2009/2010, Agile 2008/2009, Ágiles 2008, Java Connection 2007, Falando em Agile 2008, Rio On Rails 2007, PyCon Brazil 2007, SP RejectConf 2007, Rails Summit Latin America 2008, Agile Brazil 2011/2012/2013, QCon SP 2011/2013, QCon Rio 2014, and RubyConf Brazil 2012/2013. He was also the founder of the Coding Dojo @ São Paulo and an organizer of Agile Brazil 2010, 2011 and 2012.

Chapter 1: Introduction

With the advancement of technology, software has become an essential part of everyday life for most companies. When planning a family vacation — scheduling hotel rooms, buying airplane tickets, shopping, sending an SMS or sharing photos of a trip — people interact with a variety of software systems. When these systems are down, it creates a problem not only for the company that is losing business, but also for the users who fail to accomplish their goals. For this reason, it is important to invest in quality software and stability from the moment that the first line of code is written until the moment it starts running.

1.1 Traditional approach

Software development methodologies have evolved, but the process of transforming ideas into code still involves several activities, such as requirements gathering, design, architecture, implementation and testing. Agile software development methods emerged in the late 90s, proposing a new approach to organizing these activities. Rather than performing them in distinct phases - the process known as waterfall - they happen at the same time, in short iterations. At the end of each iteration, the software becomes more and more useful, with new features and fewer bugs, and the team decides with the customer what the next slice to be developed should be.

As soon as the customer decides that the software is ready to go live and the code is released to production, the real users can start using the system. At this time, several other concerns become relevant: support, monitoring, security, availability, performance, usability, among others. When the software is in production, the priority is to keep it running stably. In cases of failure or disaster, the team needs to be prepared to react quickly to solve the problem.

Due to the nature of these activities, many IT departments have a clear separation of responsibilities between the development team and the operations team. The development team is responsible for creating new products and applications, adding features or fixing bugs, while the operations team is responsible for taking care of these products and applications in production. The development team is encouraged to introduce changes, while the operations team is responsible for keeping things stable.

At first glance, this division of responsibilities seems to make sense. Each team has different goals and different ways of working. While the development team works in iterations, the operations team needs to react instantly when something goes wrong. Furthermore, the tools and knowledge necessary to work in these teams are different. The development team evolves the system by introducing changes. On the other hand, the operations team avoids changes because they bring a certain risk to the stability of the system. This creates a conflict of interest between these two teams.

Once the conflict exists, the most common way to manage this relationship is by creating processes that define the method of working as well as the responsibilities of each team. From time to time, the development team packages the software that needs to go to production, writes some documentation explaining how to configure the system and how to install it in production, and then transfers the responsibility to the operations team. It is common to use ticket tracking systems to manage the communication between the teams and to define service-level agreements (SLAs) to ensure that tickets are processed and closed in a timely fashion. This hand-off often creates a bottleneck in the process of taking code from development and testing to production. It is common to call this process a deployment to production, or simply a production deploy.

Over time, the process tends to become more and more bureaucratic, decreasing the frequency of deploys. With that, the number of changes introduced in each deploy tends to accumulate, also increasing the risk of each deploy and creating the vicious cycle shown in figure 1.1.

Fig. 1.1: Vicious cycle between development and operations

This vicious cycle not only decreases the company's ability to respond quickly to changes in the business, but also impacts earlier stages of the development process. The separation between development and operations teams, the hand-off of code between them, and the ceremony involved in the deployment process end up creating a problem known as "the Last Mile"[null].

The last mile refers to the final stage of the development process, which takes place after the software meets all its functional requirements but before it is deployed into production. It involves several activities to verify whether the software that will be delivered is stable, such as: integration testing, system testing, performance testing, security testing, user acceptance testing (UAT), usability testing, smoke testing, data migration, etc.

It is easy to ignore the last mile when the team is producing and showcasing new features every one or two weeks. However, few teams are actually deploying to production at the end of each iteration. From the business point of view, the company will only have a return on investment when the software is actually running in production. The last mile problem is only visible when taking a holistic view of the process. To solve it, we must look past the barriers between the different teams involved (the business team, the development team and the operations team).

1.2 An alternative approach: DevOps and Continuous Delivery

Many successful internet businesses — such as Google, Amazon, Netflix, Flickr, Facebook and GitHub — realized that technology can be used in their favor and that delaying a production deploy means delaying their ability to compete and adapt to changes in the market. It is common for them to perform dozens or even hundreds of deploys per day!

The line of thinking that attempts to decrease the time between the creation of an idea and its implementation in production is also known as "Continuous Delivery"[null], and is revolutionizing the process of developing and delivering software.

When the deployment process ceases to be a ceremony and becomes commonplace, the vicious cycle of figure 1.1 is completely reversed. Increasing deployment frequency causes the amount of change in each deploy to decrease, also reducing the risk associated with that deploy. This benefit is not intuitive, but when something goes wrong it is much easier to find out what happened, because the set of changes that may have caused the problem is smaller.

However, reducing the risk does not imply a complete removal of the processes between development and operation teams. The key factor that allows the reversal of the cycle is process automation, as shown in figure 1.2. Automating the deployment process allows it to run reliably at any time, removing the risk of problems caused by human error.

Fig. 1.2: DevOps practices help to break the vicious cycle through process automation

Investing in automation is not a new idea; many teams already write automated tests as part of their software development process. Practices such as test-driven development (TDD) [null] or continuous integration — which will be discussed in more detail in chapter 6 — are common and widely accepted in the development community. This focus on test automation, along with the creation of multidisciplinary teams, helped to break the barrier between developers, testers and business analysts, creating a culture of collaboration between people with complementary skills working as part of the same team.

Inspired by the success of Agile methods, a new movement emerged to take the same line of reasoning to the next level: the DevOps movement. Its goal is to create a culture of collaboration between development and operation teams that can increase the flow of completed work — higher frequency of deploys — while increasing the stability and reliability of the production environment.

Besides being a cultural change, the DevOps movement focuses a lot more on practical automation of the various activities necessary to tackle the last mile and deliver quality code to production, such as: code compilation, automated testing, packaging, creating environments for testing or production, infrastructure configuration, data migration, monitoring, log and metrics aggregation, auditing, security, performance, deployment, among others.

Companies that have implemented these DevOps practices successfully no longer see the IT department as a bottleneck but as an enabler to the business. They can adapt to market changes quickly and perform several deploys per day safely. Some of them even make a new developer conduct a deploy to production on their first day of work!

This book will present, through actual examples, the main practices of DevOps and Continuous Delivery to allow you to replicate the same success in your company. The main objective of the book is to bring together the development and operations communities. Developers will learn about the concerns and practices involved in operating and maintaining stable systems in production, while system engineers and administrators will learn how to introduce changes in a safe and incremental way by leveraging automation.

1.3 About the book

The main objective of the book is to show how to apply DevOps and Continuous Delivery concepts and techniques in practice. For this reason, we had to choose which technologies and tools to use. Our preference was to use open source languages and tools, and to prioritize those that are used widely in industry.

You will need to use the chosen tools to follow the code examples. However, whenever a new tool is introduced, we will briefly discuss other alternatives, so that you can find the option that makes the most sense in your context.

You do not need any specific prior knowledge to follow the examples. If you have experience in the Java or Ruby ecosystems, that will be a bonus. Our production environment will run on UNIX (Linux, to be more specific), so a little experience using the command line can help but is not mandatory [null]. Either way, you will be able to run all the examples on your own machine, regardless of whether you are running Linux, Mac or Windows.

Target audience

This book is written for developers, system engineers, system administrators, architects, project managers and anyone with technical knowledge who has an interest in learning more about DevOps and Continuous Delivery practices.

Chapter structure

The book was written to be read from beginning to end sequentially. Depending on your experience with the topic of each chapter, you may prefer to skip a chapter or follow a different order.

Chapter 2 presents the sample application that will be used through the rest of the book and its technology stack. As the book's focus is not on the development of the application itself, we use a nontrivial application written in Java built on top of common libraries in the Java ecosystem. At the end of the chapter, the application will be running in production.

With the production environment running, in chapter 3 we wear the operations team's hat and configure a monitoring server to detect failures and send notifications whenever a problem is encountered. At the end of the chapter, we will be notified that one of the servers crashed.

In chapter 4 we will rebuild the problematic server, this time using automation and treating infrastructure as code. Chapter 5 is a continuation of the subject, covering more advanced topics and refactoring the code to make it more modular, readable and extensible.

After wearing the operations team's hat, we will turn our attention to the software development side. Chapter 6 discusses the Agile engineering practices that help writing quality code. You will learn about the various types of automated tests and launch a new server dedicated to perform continuous integration of our application code.

Chapter 7 introduces the concept of a deployment pipeline. We set up continuous integration for the infrastructure code and implement an automated process to deploy the newest version of the application to production with the click of a button.

In chapter 8, we migrate the production environment to the cloud. Finally, we discuss more advanced topics, including resources for you to research and learn more about the topics that were left out of the scope of this book.

Code conventions

In the code examples, we will use ellipsis ... to omit the unimportant parts. When making changes to already existing code, we will repeat the lines around the area that needs to be changed to give more context about the change.

When a line is too long and does not fit on the page, we will use a backslash \ to indicate that the next line in the book is a continuation of the previous one. You can simply ignore the backslash and continue typing on the same line.

In the command line examples, in addition to the backslash, we use a greater-than sign > at the start of the next line to indicate that it is still part of the same command. This distinguishes the command that is being entered from the output produced when the command executes. When you run those commands, you can simply type everything on one line, ignoring the \ and the >.
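As an illustration of this convention (using a generic echo command rather than one from the book), a long command split with a trailing backslash is joined by the shell into a single command before it runs:

```shell
# A long command split across two lines with a trailing backslash;
# the shell joins both lines into one command before executing it:
echo "this is a very long command line" \
  "that continues on the next line"
```

In the book's listings, the second line would additionally be prefixed with a >, which you should not type.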

More resources

We have created a discussion forum on Google Groups where you can send questions, suggestions, or feedback directly to the author. To subscribe, visit the URL:

https://groups.google.com/d/forum/devops-in-practice-book

All code examples from the book are also available in the author's GitHub projects, at these URLs:

https://github.com/dtsato/loja-virtual-devops

https://github.com/dtsato/loja-virtual-devops-puppet

Chapter 2: Everything starts in production

Contrary to what many software development processes suggest, the software life cycle should only begin when real users start using it. The last mile problem presented in chapter 1 should really be the first mile, because no software delivers value before going into production. Therefore, the objective of this chapter is to launch a complete production environment, starting from scratch, and install a relatively complex Java application that will be used as a starting point for the DevOps concepts and practices presented in the rest of the book.

By the end of this chapter, we will have a complete e-commerce application running – backed by a database – which will allow users to register and make purchases, in addition to providing administrative tools to manage the product catalog, promotions and the static content available at the online store.

What is the point of adding new features, improving performance, fixing bugs, or creating beautiful screens for the website if it is not in production? Let's start by tackling the most difficult part, the last mile, and put the software into production.

2.1 Our example application: an online store

As the book's focus is not on the development process itself, but rather on the DevOps practices that help to build, deploy and operate an application in production, we will use a sample application based on an open source project. The online store is a web application written in Java, using the Broadleaf Commerce platform (http://www.broadleafcommerce.org/). The code used in this chapter and throughout the book was written based on the demo site created by Broadleaf and can be accessed at the following GitHub repository:

https://github.com/dtsato/loja-virtual-devops/

Broadleaf Commerce is a flexible platform that provides configuration and customization points to extend its functionality. However, it already implements many of the standard features of an online shopping website such as Amazon, including:

Product catalog browsing and search;
Product pages with name, description, price, photos, and related products;
Shopping cart;
Customizable checkout process including promotions, payment and shipping information;
User registration;
Order history;
Administration tools for managing: the product catalog, promotions, prices, shipping rates and pages with customized content.
Fig. 2.1: A preview of the online store's homepage

Moreover, the Broadleaf Commerce platform is built using several well established frameworks in the Java community, making it an interesting example from a DevOps perspective, because it is a good representation of the complexity involved in the build and deploy process of a Java application in the real world. The implementation details are beyond the scope of this book, but it is important to have an overview of the technologies and frameworks used in this application:

Java: The application is written in Java (http://java.oracle.com/), compatible with Java SE 6 and above.

Spring: Spring (http://www.springframework.org/) is a popular framework for Java enterprise applications that offers several components, such as: dependency injection, transaction management, security, an MVC framework, among others.

JPA and Hibernate: JPA is the Java Persistence API and Hibernate (http://www.hibernate.org/) is the most popular JPA implementation in the Java community, allowing developers to perform object-relational mapping (ORM) between Java objects and database tables.

Google Web Toolkit: GWT (http://developers.google.com/web-toolkit/) is a framework written by Google to facilitate the creation of rich interfaces that run in the browser. It allows the developer to write Java code that is then compiled to Javascript. The online store uses GWT to implement the administration UI.

Apache Solr: Solr (http://lucene.apache.org/solr/) is a search server that allows indexing of the online store's product catalog and offers a powerful and flexible API for full-text searches throughout the catalog.

Tomcat: Tomcat (http://tomcat.apache.org/) is a server that implements the web components of Java EE – Java Servlet and JavaServer Pages (JSP). Although Broadleaf Commerce runs on alternative application servers – such as Jetty, GlassFish or JBoss – we will use Tomcat because it is a common choice for many enterprises running Java web applications.

MySQL: MySQL (http://www.mysql.com/) is a relational database server. JPA and Hibernate allow the application to run on several other database servers – such as Oracle, PostgreSQL or SQL Server – but we will use MySQL because it is open source and also a popular choice.

This is not a small application. Better yet, it uses libraries that are often found in the majority of real world Java applications. As we have mentioned, we will use it as our example, but you can also follow the process with your own application, or even choose another software, regardless of language, in order to learn the DevOps techniques that will be presented and discussed in the book.

Our next objective is to make the online store live. Before making the first deploy we must have a production environment ready with servers where the code can run. The production environment in our example will initially be composed of two servers, as shown in figure 2.2.

Fig. 2.2: Production environment for the online store

Users will access the online store through a web server, which will run an instance of Tomcat. The web server runs all libraries and Java frameworks used by the online store, including an embedded Solr instance. Finally, the web application will use MySQL, running on a separate database server. This is a two-tier architecture commonly used by several real world applications. In chapter 8 we will discuss in more detail the factors that influence the choice of your application's physical architecture, but for now, let's follow a common pattern to simplify the example and make the store live as soon as possible.

2.2 Installing the production environment

Buying servers and hardware for the online store would cost too much money, so we will initially use virtual machines (VMs). This allows us to run the entire production environment on our own machine. There are several tools available for running virtual machines – such as VMware or Parallels – however some are paid and most use a GUI for configuration, which would turn this chapter into a collection of screenshots that are hard to follow.

To solve these problems, we will use two tools that simplify running and configuring virtual machines: Vagrant (http://www.vagrantup.com) and VirtualBox (http://www.virtualbox.org). VirtualBox is Oracle's VM hypervisor that lets you configure and run virtual machines on all major platforms: Windows, Linux, Mac OS X and Solaris. To further simplify our task, we will also use Vagrant, which provides a Ruby DSL to define, manage and configure virtual environments. Vagrant also has a simple command line interface for interacting with these virtual environments.

If you already have Vagrant and VirtualBox installed and running, you can skip to subsection "Declaring and booting the servers". Otherwise, the installation process for these tools is simple.

Installing VirtualBox

To install VirtualBox, visit the downloads page: http://www.virtualbox.org/wiki/Downloads and select the latest version. At the time of writing, the newest version is VirtualBox 4.3.8 and the examples will use it throughout this book. Select the installation package according to your platform: on Windows, the installer is an executable .exe file; on Mac OS X, the installer is a .dmg package; on Linux, VirtualBox offers both .deb and .rpm packages depending on your distribution.

Once you have downloaded the package, install it: on Windows and Mac OS X, just double-click on the installation file (.exe on Windows and .dmg on Mac OS X) and follow the installer's instructions. On Linux, if you have chosen the .deb package, install it by running the dpkg -i {file.deb} command, replacing {file.deb} with the name of the file you have downloaded, e.g. virtualbox-4.3_4.3.8-92456~Ubuntu~raring_i386.deb. If you have chosen the .rpm package, install it by executing the rpm -i {file.rpm} command, replacing {file.rpm} with the name of the file you have downloaded, e.g. VirtualBox-4.3-4.3.8_92456_el6-1.i686.rpm.

Note: On Linux, if your user is not root, you will need to execute the previous commands with the sudo command in front, for example: sudo dpkg -i {file.deb} or sudo rpm -i {file.rpm}.

To test that VirtualBox is installed properly, go to the console and run the VBoxManage -v command. If you are running on Windows, to open the console you can use Win+R and type cmd. If everything is correct, the command will run and print an output like "4.3.8r92456", depending on the installed version.

Installing Vagrant

Once VirtualBox is installed, we can continue with Vagrant's installation process, which is very similar. Visit Vagrant's downloads page http://www.vagrantup.com/downloads.html and select the latest version. At the time of writing, the newest version is Vagrant 1.5.1 and the examples will use it throughout the book. Select the installation package according to your platform. On Windows, the installer is an .msi file; on Mac OS X, the installer is a .dmg package; on Linux, Vagrant offers both .deb or .rpm packages depending on your distribution.

Once you have downloaded the package, install it. On Windows and Mac OS X, just double-click the installation file (.msi on Windows and .dmg on Mac OS X) and follow the installer's instructions. On Linux, just follow the same steps as the VirtualBox installation using the chosen package (vagrant_1.5.1_i686.deb or vagrant_1.5.1_i686.rpm). On Mac OS X and Windows, the vagrant command is already added to the PATH after installation. On Linux, you will need to add /opt/vagrant/bin to your PATH manually.
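For example, the manual PATH adjustment on Linux could be done like this (a sketch assuming Vagrant's default install location of /opt/vagrant/bin; adjust the path if your installation differs):

```shell
# Append Vagrant's bin directory to the PATH of the current shell session
# (assumption: the default Linux install location is /opt/vagrant/bin):
export PATH="$PATH:/opt/vagrant/bin"

# Confirm the directory is now part of the PATH:
case ":$PATH:" in
  *":/opt/vagrant/bin:"*) echo "vagrant directory is on the PATH" ;;
esac
```

To make the change permanent, add the export line to your shell's startup file, such as ~/.bashrc.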

To test that Vagrant is installed properly, go to the console and run the vagrant -v command. If everything is correct, the command will run and print an output like "Vagrant 1.5.1", depending on the installed version.

The last thing you need to do is to set up an initial image that serves as a template for initializing new VMs. These images, also known as boxes, serve as a starting point, containing the base operating system. In our case, we will use a box offered by Vagrant, which contains the image of a 32-bit Ubuntu Linux 12.04 LTS. To download and configure this box, run the command:

$ vagrant box add hashicorp/precise32
==> box: Loading metadata for box 'hashicorp/precise32'
    box: URL: https://vagrantcloud.com/hashicorp/precise32
...

This command will download a large VM image file (299MB), so it will require a good internet connection and will take some time to finish. There are many other images made available by the Vagrant community that can be found at http://www.vagrantbox.es/, and the setup process is similar to the previous command. The company behind Vagrant, HashiCorp, recently introduced Vagrant Cloud (https://vagrantcloud.com/) as a way to simplify the process of finding and sharing "boxes" with the community.

Declaring and booting the servers

Once Vagrant and VirtualBox are installed and working, it is simple to declare and manage a virtual environment. Our VM configuration is declared in a file called Vagrantfile. The vagrant init command creates an initial Vagrantfile, with comments explaining all the available configuration options.

In our case, we will start simple and only set up what is required for our two production servers. The content we need in our Vagrantfile is:

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "hashicorp/precise32"

  config.vm.define :db do |db_config|
    db_config.vm.hostname = "db"
    db_config.vm.network :private_network,
      :ip => "192.168.33.10"
  end

  config.vm.define :web do |web_config|
    web_config.vm.hostname = "web"
    web_config.vm.network :private_network,
      :ip => "192.168.33.12"
  end
end

This will configure two VMs, named db and web. Both use the hashicorp/precise32 box that we installed earlier. Each has a configuration block defining its hostname and an IP address for network connectivity. The IP addresses 192.168.33.10 and 192.168.33.12