Boost your organization's growth by incorporating networking into the DevOps culture
The book is aimed at network engineers, developers, IT operations staff, and system admins who plan to incorporate networking into a DevOps culture but have no prior knowledge of how to do so.
Frustrated that your company's network changes are still a manual set of activities that slow developers down? It doesn't need to be that way any longer, as this book will help your company and network teams embrace DevOps and continuous delivery approaches, enabling them to automate all network functions.
This book aims to show readers the network automation processes they could implement in their organizations. It will teach you the fundamentals of DevOps in networking and how to improve DevOps processes and workflows by bringing automation to your network. You will also learn to recognize the networking strategies that are stopping your organization from scaling new projects quickly.
You will see how SDN and APIs are influencing DevOps transformations, which will in turn help you improve the scalability and efficiency of your organization's network operations. You will also find out how to leverage various configuration management tools, such as Ansible, to automate your network.
The book will also look at containers and the impact they are having on networking, as well as at how automation affects network security in a software-defined network.
This is a comprehensive learning guide that teaches readers how networking can be leveraged to improve the DevOps culture of any organization.
Page count: 446
Publication year: 2016
Copyright © 2016 Packt Publishing
All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.
Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing and its dealers and distributors, will be held liable for any damages caused or alleged to be caused directly or indirectly by this book.
Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.
First published: October 2016
Production reference: 1261016
Published by Packt Publishing Ltd.
Livery Place
35 Livery Street
Birmingham B3 2PB, UK.
ISBN 978-1-78646-485-9
www.packtpub.com
Author
Steven Armstrong
Reviewer
Daniel Jonathan Valik
Commissioning Editor
Pratik Shah
Acquisition Editor
Namrata Patil
Content Development Editor
Abhishek Jadhav
Technical Editor
Mohd Riyan Khan
Copy Editor
Dipti Mankame
Project Coordinator
Judie Jose
Proofreader
Safis Editing
Indexer
Pratik Shirodkar
Graphics
Kirk D'Penha
Production Coordinator
Shantanu N. Zagade
Cover Work
Shantanu N. Zagade
Steven Armstrong is a DevOps solution architect, a process automation specialist, and an honors graduate in Computer and Electronic Systems (BEng) from Strathclyde University in Glasgow.
He has a proven track record of streamlining companies' development architectures and processes so that they can deliver software at pace. Specializing in agile, continuous integration, infrastructure as code, networking as code, and Continuous Delivery and deployment, he has worked for 10 years in the IT sector to date, for leading consulting, financial services, benefits, and gambling companies.
After graduating, Steven started his career at Accenture Technology Solutions as part of the Development Control Services graduate scheme, where he worked for 4 years, latterly as a configuration management architect helping Accenture's clients automate their build and deployment processes for Siebel, SAP, WebSphere, WebLogic, and Oracle B2B applications.
During his time at Accenture, he worked within the Development Control Services group for clients such as the Norwegian Government, EDF Energy, Bord Gais, and SABMiller. The EDF Energy implementation led by Steven won awards for “best project industrialization” and “best use of Accenture shared services”.
After leaving Accenture, Steven moved on to the financial services company, Cofunds, where he spent 2 years creating continuous integration and Continuous Delivery processes for .Net applications and Microsoft SQL databases to help deploy the financial services platform.
After leaving Cofunds, Steven moved on to Thomsons Online Benefits, where he helped create a new DevOps function for the company. Steven also played an integral part in architecting a new private cloud solution to support Thomsons Online Benefits production applications and set up a Continuous Delivery process that allowed the Darwin benefits software to be deployed to the new private cloud platform within minutes.
Steven currently works as the technical lead for Paddy Power Betfair's i2 project, where he has led a team to create a new greenfield private cloud platform for Paddy Power Betfair. The implementation is based on OpenStack and Nuage VSP for software-defined networking, and the platform was set up to support Continuous Delivery of all Paddy Power Betfair applications. The i2 project implementation was a finalist for the OpenStack Super User Award and won a RedHat Innovation Award for Modernization.
Steven is an avid speaker at public events and has spoken at technology events across the world, such as DevSecCon London, OpenStack Meetup in Cluj, the OpenStack Summit in Austin, HP Discover London, and most recently gave a keynote at OpenStack Days Bristol.
I would most importantly like to thank my girlfriend Georgina Mason. I know I haven't been able to leave the house much at weekends for 3 months as I have been writing this book, so I know it couldn't have been much fun. But thank you for your patience and support, as well as all the tea and coffee you made for me to keep me awake during the late nights. Thank you for being an awesome girlfriend.
I would like to thank my parents, June and Martin, for always being there and keeping me on track when I was younger. I would probably have never got through university never mind written a book if it wasn't for your constant encouragement, so hopefully, you both know how much I appreciate everything you have done for me over the years.
I would like to thank Paddy Power Betfair for giving me the opportunity to write this book, and our CTO, Paul Cutter, for allowing our team to create the i2 project solution and talk to the technology community about what we have achieved.
I would also like to thank Richard Haigh, my manager, for encouraging me to take on the book and all his support in my career since we started working together at Thomsons Online Benefits.
I would like to thank my team, the delivery enablement team at Paddy Power Betfair, for continually pushing the boundaries of what is possible with our solutions. You are the people who make the company a great, innovative place to work.
I would like to thank all the great people I worked with throughout my career at Paddy Power Betfair, Thomsons Online Benefits, Cofunds, and Accenture, as without the opportunities I was given, I wouldn't have been able to pull in information from all those experiences to write this book.
I would also like to thank Nuage Networks for permitting me to write about their software-defined networking solution in this book.
Daniel Jonathan Valik is an industry expert in cloud services, cloud native technologies, IoT, DevOps, infrastructure automation, containerization, virtualization, microservices, unified communications, collaboration technologies, hosted PBX, telecommunications, WebRTC, unified messaging, Communications Enabled Business Process (CEBP) design, and contact center technologies.
He has worked in several roles, including product management, product marketing, program management, evangelist, and strategic adviser, for almost two decades in the industry.
He has lived and worked in Europe, South East Asia, and now the US. Daniel is also the author of several books about cloud services and universal communications and collaboration technologies, covering Microsoft, Cisco, Google, Avaya, and others.
He holds dual master’s degrees: Master of Business Administration (MBA) and Master of Advanced Studies (MAS) in general business. He also has a number of technical certifications, including Microsoft Certified Trainer (MCT). For more information about Daniel, refer to his blogs, videos, and profile on LinkedIn (https://www.linkedin.com/in/danielvalik).
Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.PacktPub.com and as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at <[email protected]> for more details.
At www.PacktPub.com, you can also read a collection of free technical articles, sign up for a range of free newsletters and receive exclusive discounts and offers on Packt books and eBooks.
https://www.packtpub.com/mapt
Do you need instant solutions to your IT questions? PacktLib is Packt's online digital book library. Here, you can search, access, and read Packt's entire library of books.
The title of this book is "DevOps for Networking". DevOps, as you are probably well aware, is an amalgamation of "Development" and "Operations", so why does it have any significance to networking? It is true that there is no "Net" in the DevOps name, though it is fair to say that the remit of DevOps has extended well beyond its initial goal.
The initial DevOps movement sought to remove the "chucking it over the fence" and reactive mentality that existed between development and operations teams, but DevOps can be used effectively to promote collaboration between all teams in IT, not just development and operations staff.
DevOps, as a concept, initially aimed to solve the scenario where developers would develop code, make significant architectural changes, and not consider the Operations team that needed to deploy the code to production. So when the time came for the operations team to deploy the developers' code changes to production, this would result in a broken deployment, meaning crucial software fixes or new products would not reach customers as planned, and the deployment process would typically take days or weeks to fix.
This led to frustration for all the teams involved: developers would have to stop coding new features and instead help operations staff fix the deployment process. Operations teams would also be frustrated, as often they would not have been told that infrastructure changes were required to deploy the new release to production. As a result, the operations team had not been able to adequately prepare the production environment to support the architectural changes.
This common IT scenario highlights the broken process and operational model that would happen continually and cause friction between development and operations teams.
"DevOps" was an initiative set up to foster an environment of collaboration and communication between these previously conflicting teams. It encourages teams to speak daily, making each other aware of changes and consequently preventing avoidable situations from happening. It just so happens that development and operations staff were the first set of silos that DevOps aimed to break down; consequently, the movement was branded DevOps as a way to unify these teams into one consolidated, fluid function, but it could easily have been called something else.
DevOps aims to create better working relationships between teams and a happier working environment, as frankly nobody enjoys conflict or firefighting preventable issues on a daily basis. It also aims to share knowledge between teams, to prevent development teams being viewed as "ignorant of infrastructure" and operations teams as "blockers to change" who are "slowing down devs". These are the common misconceptions that teams working in silos have of one another when they don't take the time to understand each other's goals.
DevOps strives to build an office environment where teams appreciate other teams and their aims, and are respectful of their common goals. DevOps is undoubtedly one of the most talked-about topics in the IT industry today. It is no coincidence that its popularity has risen with the emergence of agile software development as an alternative to the more traditional waterfall approach.
Waterfall development and the "V-Model" encompass the separate phases of analysis, design, implementation (coding), and testing. These phases are split up traditionally into different isolated teams, with formalized project hand-off dates that are set in stone.
Agile was born out of the realization that, in the fast-paced software industry, long-running projects were suboptimal and not the best way of delivering real value to customers. As a result, agile has moved projects to shorter iteration cycles, incorporating analysis, design, implementation, and testing into two-week cycles (sprints) and using a prototyping approach instead.
The prototyping approach uses the notion of incremental development, which has allowed companies to gather feedback on products earlier in the release cycle, rather than utilizing a big bang approach that delivered the full solution in one big chunk at the end of the implementation phase.
Delivering projects in a waterfall fashion ran the risk that, at the end of the implementation stage, the products delivered were ones customers did not want, or ones where developers had misinterpreted requirements. These issues were typically discovered only in the final test phase, when the project was ready to be taken to market. This often resulted in projects being deemed a failure, or in huge delays while costly rework and change requests were actioned.
Agile software development has, for the most part, fostered the need to collapse the team silos typically associated with waterfall software development, and this strengthened the need for daily collaboration.
The introduction of agile software development has also changed the way software testing is carried out, with the same DevOps principles being applied to testing functions.
Quality assurance test teams can no longer afford to be reactive either, much like operations teams before them. So, this promoted the need for test teams to work more efficiently and not delay products reaching market. This, however, could not be done at the expense of the product, so they needed to find a way to make sure applications are tested adequately and pass all quality assurance checks while working in a smarter way.
It was readily accepted that quality assurance test teams can no longer operate in silos separate from development teams; instead, agile software development has promoted test cases being written in parallel to the software development, so they are not a separate activity. This is in stark contrast to code being deployed into a test environment and left to a team of testers to execute manual tests or run a set of test packs where they deal with issues reactively.
Agile has promoted developers and quality assurance testers to instead work together in scrum teams on a daily basis to test software before it is packaged for deployment, with those same tests then being maintained and kept up to date and used to seed the regression test packs.
This has been used to mitigate the friction caused by developers checking in code changes that break the quality assurance test team's regression packs. With siloed test teams, a common scenario that would often cause friction was that a graphical user interface (GUI) would suddenly be changed by a developer, breaking a portion of the regression tests. This change would be made without notifying the test team. The tests would fail because they were written for the old GUI and were suddenly outdated, rather than because developers had actually introduced a software failure or a bug.
This reactive approach to testing did not build confidence in the validity of the test failures reported by automated test packs, as failures were not always conclusively down to a software defect, and this introduced unnecessary delays due to a suboptimal IT structure.
If communication between development and test teams had instead been better, using the principles promoted by DevOps, these delays and suboptimal ways of working could have been avoided.
More recently, we have seen the emergence of DevSecOps, which has looked at integrating security and compliance into the software delivery process, as opposed to bolting them on as manual actions and separate reactive initiatives. DevSecOps has used DevOps and agile philosophies and embraced the idea of embedding security engineers in scrum teams to make sure that security requirements are met at the point of inception.
This means that security and compliance can be integrated as automated phases in Continuous Delivery pipelines, to run security compliance checks on every software release, and not slow down the software development lifecycle for developers and generate the necessary feedback loops.
So networking teams, too, can learn from DevOps methodologies, much like the development, operations, quality assurance, and security teams before them. These teams have utilized agile processes to improve their interaction with the software development process and have benefited from feedback loops.
How many times have network engineers had no choice but to delay a software release because network changes needed to be implemented, and were implemented inefficiently using ticket-based systems not aligned with the processes other departments use? How many times have manually implemented network changes broken production services? This isn't a criticism of network teams or the ability of network engineers; it's the realization that the operational model needs to change, and that it can.
This book will look at how networking changes can be made more efficient so as not to slow down the software development lifecycle. It will help outline strategies network engineers can adopt to automate network operations. We will focus on setting up network teams to succeed in an automation-driven environment, enabling the teams to work in a more collaborative fashion, and improve efficiency.
It will also show that network teams need to build new skills and learn configuration management tools, such as Ansible, to help them achieve this goal. The book will show the advantages these tools bring through the network modules they provide, helping to make automation easy and acting as a self-starter guide.
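As a flavor of what an Ansible-driven network change looks like, here is a minimal, hedged playbook sketch; the inventory group, the module choice (the `ios_config` network module for Cisco IOS devices), and the NTP address are illustrative assumptions rather than examples from this book:

```yaml
---
# Minimal sketch: push one line of configuration to a group of Cisco IOS
# devices. The "routers" group, the connection settings, and the NTP
# server address are assumptions for illustration only.
- name: Ensure baseline NTP configuration on routers
  hosts: routers
  gather_facts: false
  connection: network_cli

  tasks:
    - name: Configure an NTP server
      ios_config:
        lines:
          - ntp server 192.0.2.10
```

Run with something like `ansible-playbook -i inventory ntp.yml`; applying the same playbook from a pipeline, rather than by hand on a device console, is exactly the shift in operating model this book argues for.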
We will focus on some of the cultural challenges that need to be overcome to influence and implement automation processes for network functions, and on convincing network teams that the networking APIs now provided by vendors can be trusted and made the most of.
The book will discuss public and private clouds, such as AWS and OpenStack, and the ways they are used to provide networking to users. It will also discuss the emergence of software-defined networking solutions such as Juniper Contrail, VMware NSX, and Cisco ACI, and focus on the Nokia Nuage VSP solution, all of which aim to make networking functions a self-service commodity.
The book will also highlight how continuous integration and delivery processes and deployment pipelines can be applied to govern network changes. It will also show ways that unit testing can be applied to automated network changes to integrate them with the software delivery lifecycle.
A chapter overview for the book is given below:
Chapter 1, The Impact of Cloud on Networking, will discuss ways in which the emergence of AWS for public cloud and OpenStack for private cloud have changed the way developers want to consume networking. It will look at some of the networking services AWS and OpenStack provide out of the box and look at some of the networking features they provide. It will show examples of how these cloud platforms have made networking a commodity much like infrastructure.
Chapter 2, The Emergence of Software-defined Networking, will discuss how software-defined networking has emerged. It will look at the methodology and focus on some of the scaling benefits and features this provides over and above the out-of-the-box experience from AWS and OpenStack. It will illustrate how one of the market-leading SDN solutions, Nuage, applies these concepts and principles, and will discuss other SDN solutions on the market.
Chapter 3, Bringing DevOps to Network Operations, will detail the pros and cons of top-down and bottom-up DevOps initiatives with regard to networking. It will give readers food for thought on some of the strategies that have been a success and which ones have typically failed. This chapter will help CTOs, senior managers, and engineers who are trying to initiate a DevOps model in their company's network department, and will outline some of the different strategies they could use to achieve the cultural changes they desire.
Chapter 4, Configuring Network Devices Using Ansible, will outline the benefits of using configuration management tools to install and push configuration to network devices and discuss some of the open source network modules available to do this at the moment and how they work. It will give some examples of process flows that could be adopted to maintain device configuration.
Chapter 5, Orchestrating Load Balancers Using Ansible, will describe the benefits of using Ansible to orchestrate load balancers and the approaches to roll new software releases into service without the need for downtime or manual intervention. It will give examples of process flows that could be adopted to allow orchestration of both immutable and static servers, looking at the different load balancer technologies available.
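To make the shape of such orchestration concrete, here is a hedged, self-contained Python sketch of a zero-downtime rolling release. The `LoadBalancer` class is a stand-in for a real vendor API (one you would normally drive via Ansible modules), not code from this book:

```python
# Hedged sketch of a rolling release: drain each server out of the load
# balancer, "deploy" to it, then bring it back into service, so that at
# least one server is always live. The LoadBalancer class is a toy
# stand-in for a real load balancer API.

class LoadBalancer:
    def __init__(self, servers):
        # All servers start enabled (in service).
        self.state = {server: "enabled" for server in servers}

    def disable(self, server):
        self.state[server] = "disabled"   # stop sending traffic

    def enable(self, server):
        self.state[server] = "enabled"    # resume sending traffic

    def live_servers(self):
        return [s for s, st in self.state.items() if st == "enabled"]


def rolling_release(lb, deploy):
    """Deploy to one server at a time, never emptying the pool."""
    for server in list(lb.state):
        lb.disable(server)                # drain the server
        assert lb.live_servers(), "pool must never be empty"
        deploy(server)                    # push the new release
        lb.enable(server)                 # back into service


if __name__ == "__main__":
    lb = LoadBalancer(["app01", "app02", "app03"])
    deployed = []
    rolling_release(lb, deployed.append)
    print(deployed)          # every server received the release
    print(lb.live_servers()) # and all are back in service
```

The key design point the sketch illustrates is the invariant checked by the `assert`: a release is rolled through the pool one server at a time, so the service never goes dark.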
Chapter 6, Orchestrating SDN Controllers Using Ansible, will outline the benefits of using Ansible to orchestrate SDN controllers. It will outline the benefits of software-defined networking and why it is paramount to automate the network functions that an SDN controller exposes. This includes setting ACL rules dynamically, which will allow network engineers to provide a Network as a Service (NaaS) so that developers can self-service their networking needs. It will discuss deployment strategies such as blue-green networks, as well as exploring some of the process flows that could be used to implement a NaaS approach.
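As a hedged illustration of what "setting ACL rules dynamically" can mean in practice, the sketch below builds the JSON payload a deployment pipeline could POST to an SDN controller's REST API. The field names and the endpoint concept are hypothetical, not any specific vendor's schema:

```python
# Hedged sketch: build the JSON payload for an ACL rule that a pipeline
# could POST to an SDN controller's REST API. All field names here are
# hypothetical placeholders, not a real controller's schema.

import json


def build_acl_rule(priority, protocol, src_cidr, dst_cidr, dst_port,
                   action="ALLOW"):
    """Return an ACL rule as a dict, ready for JSON serialization."""
    assert action in ("ALLOW", "DENY"), "unknown ACL action"
    return {
        "priority": priority,
        "protocol": protocol,
        "sourceNetwork": src_cidr,
        "destinationNetwork": dst_cidr,
        "destinationPort": dst_port,
        "action": action,
    }


if __name__ == "__main__":
    # Example: let the app tier reach the database tier on port 5432.
    rule = build_acl_rule(100, "TCP", "10.0.1.0/24", "10.0.2.0/24", 5432)
    print(json.dumps(rule, indent=2))
    # In a real pipeline, this payload would be POSTed to the controller
    # (for example, via Ansible's uri module) rather than applied by hand.
```

Because the rule is plain data built by code, it can be stored in source control, reviewed, and tested before it ever reaches the controller, which is the NaaS workflow the chapter describes.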
Chapter 7, Using Continuous Integration Builds for Network Configuration, will discuss moving to a model where network configuration is stored in source control management systems, so it is easily audited and versioned and changes can be rolled back.
It will look at workflows that can be used to set up network configuration CI builds using tools such as Jenkins and Git.
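To make the Git side of that workflow concrete, here is a hedged shell sketch showing device configuration treated like code: committed, audited via the log, and rolled back with a revert. The file name and config line are illustrative:

```shell
# Hedged sketch: keep network config in Git so every change is
# versioned, auditable, and reversible. Names are illustrative.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "[email protected]"
git config user.name "CI"

# A change to a device's config is committed like any code change...
echo "ntp server 192.0.2.10" > router01.cfg
git add router01.cfg
git commit -q -m "Add NTP server to router01"

# ...so it can be audited, diffed, and rolled back.
git log --oneline
git revert --no-edit HEAD   # roll the change back as a new commit
git log --oneline
```

In a real setup, a CI server such as Jenkins would watch this repository and trigger a build on every commit, validating the configuration before anything is pushed to a device.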
Chapter 8, Testing Network Changes, will outline the importance of using test environments to test network changes before applying them in production. It will explore some of the open source tooling available and walk through some of the test strategies that can be applied to make sure that network changes are thoroughly tested before applying them to production.
Chapter 9, Using Continuous Delivery Pipelines to Deploy Network Changes, will show readers how to use continuous integration and Continuous Delivery pipelines to deliver network changes to production and put them through associated test environments. It will give examples of process flows that could be adopted to deliver network changes to production, and show how they can easily sit alongside infrastructure and code changes in deployment pipelines.
Chapter 10, The Impact of Containers on Networking, will note that dedicated container technologies such as Docker, and container orchestration engines such as Kubernetes and Swarm, are becoming more and more popular with companies that are moving to microservice architectures, and that this has changed networking requirements. The chapter will look at how containers operate and the impact they have had on networking.
Chapter 11, Securing the Network, will look at how a software-defined networking approach makes a security engineer's job of auditing the network easier. It will look at the possible attack vectors in a software-defined network and the ways security checks can be integrated into a DevOps model.
This book assumes a medium level of networking knowledge, a basic level of Linux knowledge, a basic knowledge of cloud computing technologies, and a broad knowledge of IT. It focuses primarily on particular process workflows that can be implemented rather than on base technologies, so the ideas and content can be applied to any organization, no matter what technology is used.
That being said, it could be beneficial for readers to have access to the following technologies when digesting some of the chapters' content:
The target audience for this book is network engineers who want to automate the manual and repetitive parts of their job or developers or system admins who want to automate all network functions.
This book will also provide a good insight to CTOs or managers who want to understand ways in which they can make their network departments more agile and initiate real cultural change within their organizations.
The book will also aid casual readers who want to understand more about DevOps, continuous integration, and Continuous Delivery and how they can be applied to real-world scenarios as well as insights on some of the tooling that is available to facilitate automation.
In this book, you will find a number of text styles that distinguish between different kinds of information. Here are some examples of these styles and an explanation of their meaning.
Code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles are shown as follows: "These services are then bound to the lbvserver entity."
Any command-line input or output is written as follows:
New terms and important words are shown in bold. Words that you see on the screen, for example, in menus or dialog boxes, appear in the text like this: "Click the Search button on Google."
Warnings or important notes appear in a box like this.
Tips and tricks appear like this.
Feedback from our readers is always welcome. Let us know what you think about this book—what you liked or disliked. Reader feedback is important for us as it helps us develop titles that you will really get the most out of.
To send us general feedback, simply e-mail <[email protected]>, and mention the book's title in the subject of your message. If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, see our author guide at www.packtpub.com/authors.
Now that you are the proud owner of a Packt book, we have a number of things to help you get the most from your purchase.
We also provide you with a PDF file that has color images of the screenshots/diagrams used in this book. The color images will help you better understand the changes in the output. You can download this file from http://www.packtpub.com/sites/default/files/downloads/DevOpsforNetworking_ColorImages.pdf.
Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you find a mistake in one of our books—maybe a mistake in the text or the code—we would be grateful if you could report this to us. By doing so, you can save other readers from frustration and help us improve subsequent versions of this book. If you find any errata, please report them by visiting http://www.packtpub.com/submit-errata, selecting your book, clicking on the Errata Submission Form link, and entering the details of your errata. Once your errata are verified, your submission will be accepted and the errata will be uploaded to our website or added to any list of existing errata under the Errata section of that title.
To view the previously submitted errata, go to https://www.packtpub.com/books/content/support and enter the name of the book in the search field. The required information will appear under the Errata section.
Piracy of copyrighted material on the Internet is an ongoing problem across all media. At Packt, we take the protection of our copyright and licenses very seriously. If you come across any illegal copies of our works in any form on the Internet, please provide us with the location address or website name immediately so that we can pursue a remedy.
Please contact us at <[email protected]> with a link to the suspected pirated material.
We appreciate your help in protecting our authors and our ability to bring you valuable content.
If you have a problem with any aspect of this book, you can contact us at <[email protected]>, and we will do our best to address the problem.
This chapter will look at ways that networking has changed in the private data centers and evolved in the past few years. It will focus on the emergence of Amazon Web Services (AWS) for public cloud and OpenStack for private cloud and ways in which this has changed the way developers want to consume networking. It will look at some of the networking services that AWS and OpenStack provide out of the box and look at some of the features they provide. It will show examples of how these cloud platforms have made networking a commodity much like infrastructure.
In this chapter, the following topics will be covered:
The cloud provider market is currently saturated with a multitude of different private, public, and hybrid cloud solutions, so choice is not a problem for companies looking to implement public, private, or hybrid cloud solutions.
Consequently, choosing a cloud solution can sometimes be quite a daunting task, given the array of different options that are available.
The battle between public and private cloud is still in its infancy, with only around 25 percent of the industry using public cloud, despite its perceived popularity, with solutions such as Amazon Web Services, Microsoft Azure, and Google Cloud taking a large majority of that market share. However, this still means that 75 percent of the cloud market share is available to be captured, so the cloud computing market will likely go through many iterations in the coming years.
So why are many companies considering public cloud in the first place and why does it differ from private and hybrid clouds?
Public clouds are essentially a set of data centers and infrastructure that are made publicly available over the Internet to consumers. Despite its name, it is not magical or fluffy in any way. Amazon Web Services launched their public cloud based on the idea that they could rent out their servers to other companies when they were not using them during busy periods of the year.
Public cloud resources can be accessed via a Graphical User Interface (GUI) or programmatically via a set of API endpoints. This allows end users of the public cloud to create infrastructure and networking to host their applications.
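As a flavor of that programmatic access, here is a hedged Python sketch in the style of a cloud SDK. The `CloudClient` class is a self-contained stand-in written for this illustration, not a real provider's SDK; real equivalents would be, for example, boto3 for AWS or the OpenStack SDK:

```python
# Hedged sketch of API-driven self-service: a toy stand-in for a cloud
# SDK client that creates networks programmatically, the way a real SDK
# would by calling the provider's REST API endpoints.

import itertools


class CloudClient:
    """Toy stand-in for a public cloud SDK client."""

    def __init__(self):
        self._ids = itertools.count(1)
        self.networks = {}

    def create_network(self, name, cidr):
        # A real client would issue an authenticated HTTPS request here;
        # this stand-in just records the network and returns an ID.
        network_id = f"net-{next(self._ids):04d}"
        self.networks[network_id] = {"name": name, "cidr": cidr}
        return network_id


if __name__ == "__main__":
    client = CloudClient()
    net_id = client.create_network("app-tier", "10.0.1.0/24")
    print(net_id, client.networks[net_id])
```

The point of the sketch is the shape of the interaction: a developer asks for a network in one API call and gets an identifier back, with no ticket raised and no network engineer in the request path.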
Public clouds are used by businesses for various reasons; for example, the time it takes to configure and start using public cloud resources is relatively low. Once credit card details have been provided on a public cloud portal, end users have the freedom to create their own infrastructure and networking on which to run their applications.
This infrastructure can be elastically scaled up and down as required, all at a cost of course to the credit card.
Public cloud has become very popular as it removes the set of historical impediments that gave rise to shadow IT. Developers are no longer hampered by the restrictions enforced upon them by bureaucratic and slow internal IT processes. Therefore, many businesses see public cloud as a way to skip over these impediments and work in a more agile fashion, allowing them to deliver new products to market at a greater frequency.
When a business moves its operations to a public cloud, they are taking the bold step to stop hosting their own data centers and instead use a public cloud provider, such as Amazon Web Services, Microsoft Azure, IBM BlueMix, Rackspace, or Google Cloud.
The reliance is then put upon the public cloud provider for uptime and Service Level Agreements (SLAs), which can be a huge cultural shift for an established business.
Businesses that have moved to public cloud may find they no longer have a need for a large internal infrastructure team or network team; instead, all infrastructure and networking is provided by the third-party public cloud, so in some quarters it can be viewed as giving up on internal IT.
Public cloud has proved a very successful model for many start-ups, given the agility it provides: start-ups can put out products quickly using software-defined constructs and remain product focused, without having to set up their own data center.
However, the Total Cost of Ownership (TCO) to run all of a business's infrastructure in a public cloud is a hotly debated topic, which can be an expensive model if it isn't managed and maintained correctly. The debate over public versus private cloud TCO rages on as some argue that public cloud is a great short-term fix but growing costs over a long period of time mean that it may not be a viable long-term solution compared with private cloud.
Private cloud is really just an extension of the initial benefits introduced by virtualization solutions, such as VMware, Hyper-V, and Citrix Xen, which were the cornerstone of the virtualization market. The private cloud world has moved on from just providing virtual machines, to providing software-defined networking and storage.
With the launch of public clouds, such as Amazon Web Services, private cloud solutions have sought to provide like-for-like capability by putting a software-defined layer on top of their current infrastructure. This infrastructure can be controlled in the same way as the public cloud via a GUI or programmatically using APIs.
Open source private cloud solutions, such as Apache CloudStack and OpenStack, have been created to bridge the gap between the private cloud and the public cloud.
This has given businesses public cloud-like agility in their own data centers by overlaying software-defined constructs on top of their existing hardware and networks.
However, the major benefit of private cloud is that this can be done within the security of a company's own data centers. Not all businesses can use public cloud for compliance, regulatory, or performance reasons, so private cloud is still required for some businesses for particular workloads.
Hybrid cloud can often be seen as an amalgamation of multiple clouds. This allows a business to seamlessly run workloads across multiple clouds linked together by a network fabric. The business could select the placement of workloads based on cost or performance metrics.
A hybrid cloud can often be made up of private and public clouds. So, as an example, a business may have a set of web applications that it wishes to scale up for particular busy periods and are better suited to run on public cloud so they are placed there. However, the business also needs a highly regulated, PCI-compliant database, which would be better-suited to being deployed in a private on-premises cloud. So a true hybrid cloud gives a business these kinds of options and flexibility.
Hybrid cloud really works on the premise of using different clouds for different use cases, where each horse (application workload) needs to run a particular course (cloud). So, sometimes, a vendor-provided Platform as a Service (PaaS) layer can be used to place workloads across multiple clouds; alternatively, different configuration management tools or container orchestration technologies can be used to orchestrate application workload placement across clouds.
The choice between public, private, or hybrid cloud really depends on the business, so there is no real right or wrong answer. Companies will likely use hybrid cloud models as their culture and processes evolve over the next few years.
If a business is using a public, private, or hybrid cloud, the common theme with all implementations is that they are moving towards a software-defined operational model.
So what does the term software-defined really mean? In simple terms, software-defined means running a software abstraction layer over hardware. This software abstraction layer allows graphical or programmatic control of the hardware. So constructs such as infrastructure, storage, and networking can be software-defined to simplify operations and manageability as infrastructure and networks scale out.
When running private clouds, modifications need to be made to incumbent data centers to make them private cloud ready, so the private data center needs to evolve to meet those needs.
When considering the private cloud, it is worth noting that companies' private data centers have traditionally implemented 3-tier layer 2 networks based on the Spanning Tree Protocol (STP), which doesn't lend itself well to modern software-defined networks. So, we will look at STP in more depth, as well as modern Leaf-Spine network architectures.
The implementation of STP provides a number of options for network architects, but it also adds a layer of complexity to the network. What implementing STP does give network architects is the certainty that it will prevent layer 2 loops from occurring in the network.
A typical representation of a 3-tier layer 2 STP-based network can be shown as follows:
The bottom of the tree is the Access layer; this is where bare metal (physical) or virtual machines connect to the network and are segmented using different VLANs.
The use of layer 2 networking and STP means that VLANs are spread throughout the network at the access layer, which is where virtual machines or bare metal servers are connected. Typically, these VLANs are grouped by application type, and firewalls are used to further isolate and secure them.
Traditional networks are normally segregated into some combination of the following:
Applications communicate with each other by tunneling between these firewalls, with specific Access Control List (ACL) rules that are serviced by network teams and governed by security teams.
When using STP in a layer 2 network, all switches go through an election process to determine the root switch, a role granted to the switch with the lowest bridge ID, where the bridge ID encompasses the bridge priority and the MAC address of the switch.
Once elected, the root switch becomes the base of the spanning tree; all other switches in the spanning tree are deemed non-root and will calculate their shortest path to the root, then block any redundant links so there is one clear path. This calculation process is referred to as network convergence. (For more information refer to the following link: http://etutorials.org/Networking/Lan+switching+fundamentals/Chapter+10.+Implementing+and+Tuning+Spanning+Tree/Spanning-Tree+Convergence/)
Network architects designing the layer 2 spanning tree network need to be careful about the placement of the root switch, as all network traffic will need to flow through it, so it should be selected with care and given an appropriate bridge priority as part of the network reference architecture design. If at any point switches have been given the same bridge priority, the switch with the lowest MAC address wins the election.
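The election logic described above can be sketched in a few lines: compare bridge priorities first, and fall back to the MAC address as a tiebreaker. The switch names, priorities, and MAC addresses below are made-up illustrative values.

```python
# Minimal sketch of STP root bridge election: the switch with the lowest
# bridge ID wins, where bridge ID = (bridge priority, MAC address).
switches = [
    {"name": "access-1", "priority": 32768, "mac": "00:1a:2b:3c:4d:5e"},
    {"name": "core-1",   "priority": 4096,  "mac": "00:1a:2b:3c:4d:01"},
    {"name": "core-2",   "priority": 4096,  "mac": "00:1a:2b:3c:4d:02"},
]

def elect_root(switches):
    # Priority is compared first; the MAC address breaks ties.
    return min(switches, key=lambda s: (s["priority"], s["mac"]))

root = elect_root(switches)
print(root["name"])  # core-1: same priority as core-2, but lower MAC
```

Here core-1 and core-2 share the lowest priority, so the lower MAC address decides the election, exactly the tiebreak behavior described above.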
Network architects should also design the network for redundancy, nominating a backup root switch whose bridge priority is second only to the root's, so that it takes over if the root switch fails. In the scenario where the root switch fails, the election process will begin again and the network will converge, which can take some time.
The use of STP is not without its risks: if it fails due to user configuration error, data center equipment failure, software failure on a switch, or bad design, the consequences for a network can be huge. Loops may form within the bridged network, which can result in a flood of broadcast, multicast, or unknown-unicast storms that can potentially take down the entire network, leading to long network outages. Troubleshooting STP issues is complex for network architects and engineers, so it is paramount that the network design is sound.
In recent years, with the emergence of cloud computing, we have seen data centers move away from STP in favor of a Leaf-Spine networking architecture. The Leaf-Spine architecture is shown in the following diagram:
In a Leaf-Spine architecture:
Leaf-Spine architectures are promoted by companies such as Arista, Juniper, and Cisco. A Leaf-Spine architecture is built on layer 3 routing principles to optimize throughput and reduce latency.
Both Leaf and Spine switches communicate with each other via external Border Gateway Protocol (eBGP), the routing protocol for the IP fabric. eBGP establishes a Transmission Control Protocol (TCP) connection to each of its BGP peers before BGP updates can be exchanged between the switches. Leaf switches in the implementation sit at the top of the rack and can be configured in Multichassis Link Aggregation (MLAG) mode using Network Interface Controller (NIC) bonding.
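To make the peering step concrete, the sketch below packs the fixed-size BGP OPEN message (per RFC 4271) that peers exchange over their TCP session before any routes are advertised. The AS number and router ID are illustrative values, and this is only a model of the wire format, not a working BGP speaker.

```python
import struct

# Sketch of a BGP OPEN message (RFC 4271): 16-byte marker, 2-byte length,
# 1-byte type, then the OPEN body. Values below are illustrative only.
def bgp_open(my_as, hold_time, router_id):
    marker = b"\xff" * 16                  # marker: 16 bytes of all ones
    body = struct.pack("!BHH4sB",
                       4,                   # BGP version 4
                       my_as,               # sender's autonomous system
                       hold_time,           # hold time in seconds
                       router_id,           # BGP identifier (router ID)
                       0)                   # no optional parameters
    length = 16 + 2 + 1 + len(body)         # total message length (29)
    return marker + struct.pack("!HB", length, 1) + body  # type 1 = OPEN

msg = bgp_open(64512, 180, bytes([10, 0, 0, 1]))
print(len(msg))  # 29: the minimum OPEN message size
```

Only after OPEN messages are exchanged and the session reaches the Established state do the switches begin sending the BGP UPDATE messages that populate the fabric's routing tables.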
MLAG was originally used with STP so that two or more switches could be bonded to emulate a single switch, appearing as one switch to STP for redundancy. As the switches are peered, this provided multiple uplinks in the event of a failure and worked around the need to disable redundant paths. Leaf switches can often have internal Border Gateway Protocol (iBGP) configured between the pairs of switches for resiliency.
In a Leaf-Spine architecture, Spine switches do not connect to other Spine switches, and Leaf switches do not connect directly to other Leaf switches unless bonded top of rack using MLAG NIC bonding. All links in a Leaf-Spine architecture are set up to forward with no looping. Leaf-Spine architectures are typically configured to implement Equal Cost Multipathing (ECMP), which allows all routes to be configured on the switches so that they can access any Spine switch in the layer 3 routing fabric.
ECMP means that each Leaf switch's routing table has next hops configured to forward to every Spine switch. In an ECMP setup, each Leaf node has multiple paths of equal distance to each Spine switch, so if a Spine or Leaf switch fails, there is no impact as long as there are other active paths to adjacent Spine switches. ECMP is used to load balance flows and supports the routing of traffic across multiple paths. This is in contrast to STP, which switches off all but one path to the root when the network converges.
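A common way ECMP spreads flows across the equal-cost paths is by hashing the flow's 5-tuple to pick a next hop, so every packet of a given flow takes the same path while different flows spread across all Spines. The sketch below models this; the Spine names and flow values are illustrative, and real switches use their own hardware hash functions.

```python
import hashlib

# Example set of equal-cost next hops from a Leaf switch.
spines = ["spine-1", "spine-2", "spine-3", "spine-4"]

def ecmp_next_hop(src_ip, dst_ip, proto, src_port, dst_port, paths):
    # Hash the flow's 5-tuple and map it onto one of the available paths.
    flow = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    digest = hashlib.md5(flow).digest()
    index = int.from_bytes(digest[:4], "big") % len(paths)
    return paths[index]

# The same flow hashes to the same Spine every time, keeping packets
# of one flow in order while different flows balance across the fabric.
a = ecmp_next_hop("10.0.1.5", "10.0.9.7", "tcp", 49152, 443, spines)
b = ecmp_next_hop("10.0.1.5", "10.0.9.7", "tcp", 49152, 443, spines)
print(a == b)  # True
```

Per-flow (rather than per-packet) hashing is the usual design choice because it avoids reordering packets within a TCP connection.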
Normally, Leaf-Spine architectures designed for high performance use 10G access ports at Leaf switches mapping to 40G Spine ports. When device port capacity becomes an issue, new Leaf switches can be added by connecting them to every Spine on the network and pushing the new configuration to every switch. This means that network teams can easily scale out the network horizontally without managing or disrupting the switching protocols or impacting the network performance.
An illustration of the protocols used in a Leaf-Spine architecture is shown later, with Spine switches connected to Leaf switches using BGP and ECMP, and Leaf switches sitting top of rack and configured for redundancy using MLAG and iBGP:
The benefits of a Leaf-Spine architecture are as follows:
The one drawback of a Leaf-Spine topology is the number of cables it consumes in the data center.
Modern switches have now moved towards open source standards, so they can use the same pluggable framework. The open standard for virtual switches is Open vSwitch, which was born out of the necessity for an open standard that allowed a virtual switch to forward traffic to different virtual machines on the same physical host and physical network. Open vSwitch uses the Open vSwitch database (OVSDB), which has a standard extensible schema.
Open vSwitch was initially deployed at the hypervisor level but is now being used in container technology too, which has Open vSwitch implementations for networking.
The following hypervisors currently implement Open vSwitch as their virtual switching technology:
Hyper-V has recently moved to support Open vSwitch using the implementation created by Cloudbase (https://cloudbase.it/), which is doing some fantastic work in the open source space and is testament to how Microsoft's business model has evolved and embraced open source technologies and standards in recent years. Who would have thought it? Microsoft technologies now run natively on Linux.
Open vSwitch exchanges OpenFlow messages between virtual and physical switches in order to communicate, and it can be programmatically extended to fit the needs of vendors. In the following diagram, you can see the Open vSwitch architecture. Open vSwitch can run on a server using the KVM, Xen, or Hyper-V virtualization layer:
The ovsdb-server contains the OVSDB schema that holds all switching information for the virtual switch. The ovs-vswitchd daemon talks OpenFlow to any Control & Management Cluster, which could be any SDN controller that can communicate using the OpenFlow protocol.
Controllers use OpenFlow to install flow state on the virtual switch, and OpenFlow dictates what actions to take when packets are received by the virtual switch.
When Open vSwitch receives a packet it has never seen before and has no matching flow entries, it sends this packet to the controller. The controller then makes a decision on how to handle this packet based on its flow rules, either blocking or forwarding it. Open vSwitch also allows Quality of Service (QoS) to be configured and statistics to be gathered.
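The table-miss interaction described above can be modeled in a few lines. This is a toy simulation under simplifying assumptions, not the OpenFlow protocol itself: the switch looks each packet up in its flow table, asks the controller on a miss, and caches the controller's decision as an installed flow entry so subsequent packets in the flow never reach the controller.

```python
# Toy model of the switch/controller split in OpenFlow: the controller
# holds policy, the virtual switch caches installed flow entries.
class Controller:
    def __init__(self, blocked_dsts):
        self.blocked = blocked_dsts

    def decide(self, match):
        # Example policy: drop traffic to denied destinations.
        return "drop" if match[1] in self.blocked else "forward"

class VirtualSwitch:
    def __init__(self, controller):
        self.flow_table = {}          # match -> action
        self.controller = controller

    def handle_packet(self, src, dst):
        match = (src, dst)
        if match not in self.flow_table:      # table miss: ask controller
            action = self.controller.decide(match)
            self.flow_table[match] = action   # controller installs flow
        return self.flow_table[match]         # hit: switch acts alone

sw = VirtualSwitch(Controller(blocked_dsts={"10.0.0.99"}))
print(sw.handle_packet("10.0.0.1", "10.0.0.2"))   # forward
print(sw.handle_packet("10.0.0.1", "10.0.0.99"))  # drop
```

The key design point is that only the first packet of a flow pays the round trip to the controller; after the flow entry is installed, forwarding happens at switch speed.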
Open vSwitch is used to configure security rules and provision ACL rules at the switch level on a hypervisor.
A Leaf-Spine architecture allows overlay networks to be easily built, meaning that cloud and tenant environments are easily connected to the layer 3 routing fabric. Hardware VXLAN Tunnel Endpoint (VTEP) IPs are associated with each Leaf switch, or a pair of Leaf switches in MLAG mode, and are connected via Virtual Extensible LAN (VXLAN) to the Open vSwitch instance installed on each physical compute host's hypervisor.
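What VXLAN adds on the wire is a small encapsulation header carrying a 24-bit VXLAN Network Identifier (VNI) that keeps tenant segments apart. The sketch below packs the 8-byte VXLAN header per RFC 7348; the VNI value is an example.

```python
import struct

# Sketch of the 8-byte VXLAN header (RFC 7348) a VTEP prepends to each
# encapsulated layer 2 frame. The 24-bit VNI identifies the tenant segment.
def vxlan_header(vni):
    # First 32-bit word: flags byte 0x08 (VNI-present bit) + 24 reserved bits.
    # Second 32-bit word: 24-bit VNI in the top three bytes + 8 reserved bits.
    return struct.pack("!II", 0x08 << 24, vni << 8)

hdr = vxlan_header(5001)               # example VNI for one tenant network
print(len(hdr))                        # 8 bytes
print(int.from_bytes(hdr[4:7], "big"))  # 5001: the VNI decoded back
```

Because the VNI is 24 bits, a single fabric can carry around 16 million isolated segments, far beyond the 4096-VLAN ceiling of traditional layer 2 networks.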
This allows an SDN controller, provided by vendors such as Cisco, Nokia, and Juniper, to build an overlay network that creates VXLAN tunnels to the physical hypervisors using Open vSwitch. When new compute is scaled out, SDN controllers can automatically create new VXLAN tunnels on the Leaf switch, as they are peered with the Leaf switch's hardware VXLAN Tunnel Endpoint (VTEP).
Modern switch vendors, such as Arista, Cisco, Cumulus, and many others, use OVSDB, and this allows SDN controllers to integrate at the Control & Management Cluster level. As long as an SDN controller uses OVSDB and OpenFlow protocol, they can seamlessly integrate with the switches and are not tied into specific vendors. This gives end users a greater depth of choice when choosing switch vendors and SDN controllers, which can be matched up as they communicate using the same open standard protocol.
It is unquestionable that the emergence of AWS, launched in 2006, changed and shaped the networking landscape forever. AWS has allowed companies to rapidly develop their products on the AWS platform, creating an innovative set of services for end users to manage infrastructure, load balancing, and even databases. These services have led the way in making the DevOps ideology a reality, allowing users to elastically scale infrastructure up and down on demand to develop products, so infrastructure wait times are no longer an inhibitor to development teams. AWS's rich feature set allows users to create infrastructure by clicking in a portal, while more advanced users can programmatically create infrastructure using configuration management tooling, such as Ansible, Chef, Puppet, or Salt, or Platform as a Service (PaaS) solutions.
In 2016, the AWS Virtual Private Cloud (VPC
