GitOps follows the practices of infrastructure as code (IaC), allowing developers to use their day-to-day tools and practices, such as source control and pull requests, to manage apps. With this book, you’ll understand how to apply GitOps to bootstrap clusters in a repeatable manner, build CD pipelines for cloud-native apps running on Kubernetes, and minimize deployment failures.
You’ll start by installing Argo CD in a cluster, setting up user access using single sign-on, performing declarative configuration changes, and enabling observability and disaster recovery. Once you have a production-ready setup of Argo CD, you’ll explore how CD pipelines can be built using the pull method, how that increases security, and how the reconciliation process occurs in multi-cluster scenarios. Next, you’ll go through common troubleshooting scenarios, from installation to day-to-day operations, and learn how performance can be improved. Later, you’ll explore the tools that can be used to parse the YAML you write for deploying apps. You can then check whether it is valid for new versions of Kubernetes, verify whether it has any security or compliance misconfigurations, and confirm that it follows the best practices for cloud-native apps running on Kubernetes.
By the end of this book, you’ll be able to build a real-world CD pipeline using Argo CD.
The GitOps way of managing cloud-native applications
Liviu Costea
Spiros Economakis
BIRMINGHAM—MUMBAI
Copyright © 2022 Packt Publishing
All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.
Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author(s), nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.
Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.
Group Product Manager: Rahul Nair
Publishing Product Manager: Preet Ahuja
Senior Editor: Arun Nadar
Content Development Editor: Sujata Tripathi
Technical Editor: Rajat Sharma
Copy Editor: Safis Editing
Project Coordinator: Ajesh Devavaram
Proofreader: Safis Editing
Indexer: Tejal Daruwale Soni
Production Designer: Shankar Kalbhor
Marketing Coordinator: Nimisha Dua
First published: November 2022
Production reference: 1271022
Published by Packt Publishing Ltd.
Livery Place
35 Livery Street
Birmingham
B3 2PB, UK.
ISBN 978-1-80323-332-1
www.packt.com
To my sons, Tudor and Victor, and my wife, Alina, for giving me the strength and power to overcome all the challenges.
– Liviu Costea
To my sons, Yannis and Vasilis, and my wife, Anastasia, for reminding me every day that life is a continuous learning process in every aspect.
– Spiros Economakis
In their book, Liviu and Spiros provide an excellent introduction to Argo CD that helps you start using it in a matter of minutes. The book provides a great introduction to the base concepts and the basic vocabulary of both GitOps and Argo CD. Besides teaching about Argo CD itself, the book covers a lot of ecosystem tools that are extremely useful and will prepare you for real-life use cases. Even the basic examples come with YAML snippets, which again will be helpful for solving real-life challenges.
The content gets more advanced and more interesting pretty quickly. You will learn lots of interesting details about advanced Argo CD features as well as about the architecture and some internals. This in-depth material will be handy for DevOps engineers who are responsible for running Argo CD for a whole organization and need to deal with scalability and performance challenges. The book provides a description of the best practices and patterns for running and managing Argo CD. I would definitely recommend it to anyone who wants to get into GitOps, as well as to those already familiar with it who are looking to learn about advanced topics.
Alexander Matyushentsev
Co-founder and Chief Architect at Akuity
Liviu Costea started as a developer in the early 2000s and his career path led him to different roles, from developer to coding architect, and from team lead to Chief Technical Officer. In 2012, he transitioned to DevOps when, at a small company, someone had to start working on pipelines and automation because the traditional way wasn’t scalable anymore.
In 2018, he started with the platform team and then became the tech lead in the release team at Mambu, where they designed most of the Continuous Integration/Continuous Deployment (CI/CD) pipelines, adopting GitOps practices. They have been live with Argo CD since 2019. More recently, he joined Juni, a promising start-up, where they are planning GitOps adoption. For his contributions to OSS projects, including Argo CD, he was named a CNCF ambassador in August 2020.
Spiros Economakis started as a software engineer in 2010 and went through a series of jobs and roles, from software engineer and software architect to head of cloud. In 2013, he founded his own start-up, and that was his first encounter with DevOps culture. With a small team, he built a couple of CI/CD pipelines for a microservice architecture and mobile app releases. After this, with most of the companies he has been involved with, he has influenced DevOps culture and automation.
In 2019, he started as an SRE in Lenses (acquired by Celonis) and soon introduced the organization to Kubernetes, GitOps, and the cloud. He transitioned to a position as head of cloud, where he introduced GitOps across the whole company and used Argo CD to bootstrap K8s clusters and continuous delivery practices. Now, he works in an open source company called Mattermost as a senior engineering manager, where he transformed the old GitOps approach (fluxcd) to GitOps 2.0 with Argo CD and built a scalable architecture for multi-tenancy as the single GitOps platform in the company.
Roel Reijerse studied electrical engineering and computer science at Delft University of Technology, with a specialization in computer graphics as part of his MSc. After several years of working as an embedded software engineer, he moved to backend engineering. Currently, he is employed by Celonis, where he works on a real-time streaming data platform managed by Argo CD.
Sai Kothapalle works as the lead site reliability engineer at Enix. His experience includes working on distributed systems and running Kubernetes and Argo CD tools at scale for cloud providers, fintech companies, and clients.
GitOps is not a topic that is hard to understand: you use a Git repository to declaratively define the state of your environments, and by doing so, you gain versioning, with changes made via merge requests, which makes the whole system auditable.
But once you start adopting it and use a tool such as Argo CD, things will start becoming more complex. First, you need to set up Argo CD correctly, keeping in mind things such as observability and high availability. Then, you need to think about the CI/CD pipelines and how the new GitOps repositories will be integrated with them. And there will be organizational challenges: how do you integrate each team into this new setup? Most likely, they had different types of Kubernetes access based on the namespace they were deploying to, so Role-based Access Control (RBAC) took time to be properly configured, and now you need to take into consideration how the existing teams’ access will be transferred to the new GitOps engine.
Of course, there are many resources out there (articles, videos, and courses), but it is not easy to navigate them as they only deal with parts of these topics, and not all of them have a good level of detail.
So, it is not easy to gain an idea of what the overall adoption of Argo CD means.
We wrote this book to give you a guide to the steps you need to take to start using Argo CD, allowing you to see the complete picture: from installation to setting up proper access control, through to the challenges you will face when running it in production, including advanced scenarios and troubleshooting.
We started with GitOps early at our companies and we both were able to see the journey up close. Initially, we even thought about building our own GitOps operator (like, how hard can it be?), but after 2-3 weeks of analyzing what we needed to do, we dropped the idea. We faced many challenges; some we handled well, while others took us a lot of time to get right, but we learned from all of them, and this is what we want to share with you. We know that, by using this book, you will be able to accelerate your Argo CD and GitOps adoption.
If you’re a software developer, DevOps engineer, or SRE who is responsible for building CD pipelines for projects running on Kubernetes and you want to advance in your career, this book is for you. Basic knowledge of Kubernetes, Helm, or Kustomize, and CD pipelines will be useful to get the most out of this book.
Chapter 1, GitOps and Kubernetes, explores how Kubernetes made it possible to introduce the GitOps concept. We will discover its declarative APIs, and see how we can apply resources from files, folders, and, in the end, Git repositories.
Chapter 2, Getting Started with Argo CD, explores the core concepts of Argo CD, gives an architectural overview, and goes through the necessary vocabulary you need to know in order to be able to deep dive into the tool.
Chapter 3, Operating Argo CD, covers installing Argo CD using HA manifests, going through some of the most meaningful configuration options, preparing for disaster recovery, and discovering some relevant metrics being exposed.
Chapter 4, Access Control, covers how to set up user access and the options for connecting via the CLI, web UI, or a CI/CD pipeline. It goes into detail about RBAC and SSO and the different options to configure them.
Chapter 5, Argo CD Bootstrap K8s Cluster, shows how we can create a Kubernetes cluster using infrastructure as code and then set up the required applications with Argo CD, identifying the security challenges you will encounter when deploying the applications.
Chapter 6, Designing Argo CD Delivery Pipelines, continues (based on the infrastructure setup of the previous chapter) to demonstrate real deployment strategies, including dealing with secrets and getting familiar with Argo Rollouts.
Chapter 7, Troubleshooting Argo CD, addresses some of the issues you will most likely encounter during installation and your day-to-day work and also takes a look at ways to improve Argo CD performance.
Chapter 8, YAML and Kubernetes Manifests (Parsing and Verification), looks at the tools we can use to validate the YAML manifests we will write, to verify them against common best practices, check them against Kubernetes schemas, or even perform your own extended validations written in Rego.
Chapter 9, Future and Conclusion, deals with the GitOps engine and kubernetes-sigs/cli-utils, how they were factored out of Argo CD and the Kubernetes community, respectively, and what the teams are trying to achieve with them: having a set of libraries that provide a set of basic GitOps features.
To run the code from all the chapters, you will need access to a Kubernetes cluster, which can be a local one, with the exception of the HA installation, which requires a cluster with multiple nodes. The tools we will use the most are kubectl, Helm, and Kustomize. In the Kubernetes cluster, we will install Argo CD, and the instructions can be found in Chapter 2, Getting Started with Argo CD, for the normal installation, or Chapter 3, Operating Argo CD, for the HA one.
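If you want to try the HA installation on a local cluster, kind can create a multi-node cluster from a configuration file. The following is only a minimal sketch (the number of worker nodes is an example, not a requirement from the book):

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker    # extra workers give the HA components separate nodes to land on
- role: worker
- role: worker

You would then pass it to kind with kind create cluster --config <file>.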
Software/hardware covered in the book: Argo CD v2.1 and v2.2
Operating system requirements: Windows, macOS, or Linux
For some of the chapters, such as Chapter 3, Operating Argo CD, and Chapter 5, Argo CD Bootstrap K8s Cluster, we work with AWS EKS clusters, so you will need an AWS account set up and the AWS CLI installed. In Chapter 3, we also mention the eksctl CLI to ease the creation of the cluster where we will perform the HA installation, while in Chapter 5, Argo CD Bootstrap K8s Cluster, we recommend using Terraform for the cluster creation.
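As an illustration, eksctl can create a cluster from a ClusterConfig manifest passed to eksctl create cluster -f; the following is only a sketch, with made-up names and sizes rather than the book’s exact setup:

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: argocd-book    # illustrative cluster name
  region: us-east-1    # pick the region that suits you
nodeGroups:
  - name: main
    instanceType: m5.large
    desiredCapacity: 3  # multiple nodes, as needed for the HA installation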
If you are using the digital version of this book, we advise you to type the code yourself or access the code from the book’s GitHub repository (a link is available in the next section). Doing so will help you avoid any potential errors related to the copying and pasting of code.
You can download the example code files for this book from GitHub at https://github.com/PacktPublishing/ArgoCD-in-Practice. If there’s an update to the code, it will be updated in the GitHub repository.
We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!
We also provide a PDF file that has color images of the screenshots and diagrams used in this book. You can download it here: https://packt.link/HfXCL.
There are a number of text conventions used throughout this book.
Code in text: Indicates code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles. Here is an example: “Create the new file named argocd-rbac-cm.yaml in the same location where we have argocd-cm.yaml.”
A block of code is set as follows:
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
data:
  accounts.alina: apiKey, login

When we wish to draw your attention to a particular part of a code block, the relevant lines or items are set in bold:
patchesStrategicMerge:
- patches/argocd-cm.yaml
- patches/argocd-rbac-cm.yaml

Any command-line input or output is written as follows:
kubectl get secret argocd-initial-admin-secret -n argocd -o jsonpath='{.data.password}' | base64 -d
Bold: Indicates a new term, an important word, or words that you see onscreen. For instance, words in menus or dialog boxes appear in bold. Here is an example: “You can use the UI by navigating to the User-Info section.”
Tips or Important Notes
Appear like this.
Feedback from our readers is always welcome.
General feedback: If you have questions about any aspect of this book, email us at [email protected] and mention the book title in the subject of your message.
Errata: Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you have found a mistake in this book, we would be grateful if you would report this to us. Please visit www.packtpub.com/support/errata and fill in the form.
Piracy: If you come across any illegal copies of our works in any form on the internet, we would be grateful if you would provide us with the location address or website name. Please contact us at [email protected] with a link to the material.
If you are interested in becoming an author: If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, please visit authors.packtpub.com.
Once you’ve read Argo CD in Practice, we’d love to hear your thoughts! Please click here to go straight to the Amazon review page for this book and share your feedback.
Your review is important to us and the tech community and will help us make sure we’re delivering excellent quality content.
Thanks for purchasing this book!
Do you like to read on the go but are unable to carry your print books everywhere?
Is your eBook purchase not compatible with the device of your choice?
Don’t worry, now with every Packt book you get a DRM-free PDF version of that book at no cost.
Read anywhere, any place, on any device. Search, copy, and paste code from your favorite technical books directly into your application.
The perks don’t stop there; you can get exclusive access to discounts, newsletters, and great free content in your inbox daily.
Follow these simple steps to get the benefits:
Scan the QR code or visit the link below: https://packt.link/free-ebook/9781803233321
Submit your proof of purchase
That’s it! We’ll send your free PDF and other benefits to your email directly.

This part serves as an introduction to GitOps as a practice and will cover the advantages of using it.
This part of the book comprises the following chapters:
Chapter 1, GitOps and Kubernetes
Chapter 2, Getting Started with Argo CD

In this chapter, we’re going to see what GitOps is and how the idea makes a lot of sense in a Kubernetes cluster. We will get introduced to specific components, such as the application programming interface (API) server and controller manager, that make the cluster react to state changes. We will start with imperative APIs and move on to the declarative ones, and we will see how going from applying a file or a folder to applying a whole Git repository was just one step, and, when it was taken, GitOps appeared.
We will cover the following main topics in this chapter:
What is GitOps?
Kubernetes and GitOps
Imperative and declarative APIs
Building a simple GitOps operator
Infrastructure as code (IaC) and GitOps

For this chapter, you will need access to a Kubernetes cluster, and a local one such as minikube (https://minikube.sigs.k8s.io/docs/) or kind (https://kind.sigs.k8s.io) will do. We are going to interact with the cluster and send commands to it, so you also need to have kubectl installed (https://kubernetes.io/docs/tasks/tools/#kubectl).
We are going to write some code, so a code editor will be needed. I am using Visual Studio Code (VS Code) (https://code.visualstudio.com), and we are going to use the Go language, which needs installation too: https://golang.org (the current version of Go is 1.16.7; the code should work with it). The code can be found at https://github.com/PacktPublishing/ArgoCD-in-Practice in the ch01 folder.
The term GitOps was coined back in 2017 by people from Weaveworks, who are also the authors of a GitOps tool called Flux. Since then, I have seen how GitOps turned into a buzzword, up to being named the next important thing after development-operations (DevOps). If you search for definitions and explanations, you will find a lot of them: it has been defined as operations via pull requests (PRs) (https://www.weave.works/blog/gitops-operations-by-pull-request) or taking development practices (version control, collaboration, compliance, continuous integration/continuous deployment (CI/CD)) and applying them to infrastructure automation (https://about.gitlab.com/topics/gitops/).
Still, I think there is one definition that stands out. I am referring to the one created by the GitOps Working Group (https://github.com/gitops-working-group/gitops-working-group), which is part of the Application Delivery Technical Advisory Group (Application Delivery TAG) from the Cloud Native Computing Foundation (CNCF). The Application Delivery TAG is specialized in building, deploying, managing, and operating cloud-native applications (https://github.com/cncf/tag-app-delivery). The workgroup is made up of people from different companies with the purpose of building a vendor-neutral, principle-led definition for GitOps, so I think these are good reasons to take a closer look at their work.
The definition is focused on the principles of GitOps, and five are identified so far (this is still a draft), as follows:
Declarative configuration
Version-controlled immutable storage
Automated delivery
Software agents
Closed loop

It starts with declarative configuration, which means we want to express our intent, an end state, and not specific actions to execute. It is not an imperative style where you say, “Let’s start three more containers,” but instead, you declare that you want to have three containers for this application, and an agent will take care of reaching that number, which might mean it needs to stop two running containers if there are five up right now.
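To make the principle concrete, here is a minimal sketch of a declarative manifest (the name and image are only illustrative, not taken from the book’s examples). It states the intent of three replicas and leaves the how to the cluster:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app        # illustrative name
spec:
  replicas: 3              # the desired end state, not an action to execute
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: app
        image: nginx:1.21  # any container image works for the example

Applying this with kubectl apply leaves it to Kubernetes to figure out whether Pods need to be started or stopped to end up with exactly three.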
Git is being referred to here as version-controlled and immutable storage, which is fair because while it is the most used source control system right now, it is not the only one, and we could implement GitOps with other source control systems.
Automated delivery means that we shouldn’t have any manual actions once the changes reach the version control system (VCS). After the configuration is updated, it comes to software agents to make sure that the necessary actions to reach the new declared configuration are being taken. Because we are expressing the desired state, the actions to reach it need to be calculated. They result from the difference between the actual state of the system and the desired state from the version control—and this is what the closed loop part is trying to say.
While GitOps originated in the Kubernetes world, this definition is trying to take that out of the picture and bring the preceding principles to the whole software world. In our case, it is still interesting to see what made GitOps possible and dive a little bit deeper into what those software agents are in Kubernetes or how the closed loop is working here.
It is hard not to hear about Kubernetes these days—it is probably one of the most well-known open source projects at the moment. It originated somewhere around 2014 when a group of engineers from Google started building a container orchestrator based on the experience they accumulated working with Google’s own internal orchestrator named Borg. The project was open sourced in 2014 and reached its 1.0.0 version in 2015, a milestone that encouraged many companies to take a closer look at it.
Another reason that led to its fast and enthusiastic adoption by the community is the governance of CNCF (https://www.cncf.io). After making the project open source, Google started discussing with the Linux Foundation (https://www.linuxfoundation.org) the creation of a new nonprofit organization that would lead the adoption of open source cloud-native technologies. That’s how CNCF came to be created, with Kubernetes as its seed project and KubeCon as its major developer conference. When I say CNCF governance, I am referring mostly to the fact that every project or organization inside CNCF has a well-established structure of maintainers, with details of how they are nominated and how decisions are taken in these groups, and that no company can have a simple majority. This ensures that no decision will be taken without community involvement and that the overall community has an important role to play in a project’s life cycle.
Kubernetes has become so big and extensible that it is really hard to define it without using abstractions such as a platform for building platforms. This is because it is just a starting point—you get many pieces, but you have to put them together in a way that works for you (and GitOps is one of those pieces). If we say that it is a container orchestration platform, this is not entirely true because you can also run virtual machines (VMs) with it, not just containers (for more details, please check https://ubuntu.com/blog/what-is-kata-containers); still, the orchestration part remains true.
Its components are split into two main parts—first is the control plane, which is made of a REpresentational State Transfer (REST) API server with a database for storage (usually etcd), a controller manager used to run multiple control loops, a scheduler that has the job of assigning a node for our Pods (a Pod is a logical grouping of containers that helps to run them on the same node—find out more at https://kubernetes.io/docs/concepts/workloads/pods/), and a cloud controller manager to handle any cloud-specific work. The second piece is the data plane, and while the control plane is about managing the cluster, this one is about what happens on the nodes running the user workloads. A node that is part of a Kubernetes cluster will have a container runtime (which can be Docker, CRI-O, or containerd, and there are a few others), kubelet, which takes care of the connection between the REST API server and the container runtime of the node, and kube-proxy, responsible for abstracting the network at the node level. See the next diagram for details of how all the components work together and the central role played by the API server.
We are not going to enter into the details of all these components; instead, for us, the REST API server that makes the declarative part possible and the controller manager that makes the system converge to the desired state are important, so we want to dissect them a little bit.
The following diagram shows an overview of a typical Kubernetes architecture:
Figure 1.1 – Kubernetes architecture
Note
When looking at an architecture diagram, you need to know that it is only able to catch a part of the whole picture. For example, here, it seems that the cloud provider with its API is an external system, but actually, all the nodes and the control plane are created in that cloud provider.
Viewing Kubernetes from the perspective of the HyperText Transfer Protocol (HTTP) REST API server makes it look like any classic application with REST endpoints and a database for storing state (in our case, usually etcd), and with multiple replicas of the web server for high availability (HA). What is important to emphasize is that anything we want to do with Kubernetes, we need to do via the API; we can’t connect directly to any other component, and this is true also for the internal ones: they can’t talk directly to each other; they need to go through the API.
From our client machines, we don’t query the API directly (such as by using curl); instead, we use the kubectl client application, which hides some of the complexity, such as authentication headers, preparing the request content, parsing the response body, and so on.
Whenever we run a command such as kubectl get pods, there is an HTTP Secure (HTTPS) call to the API server. Then, the server goes to the database to fetch details about the Pods, and a response is created and pushed back to the client. The kubectl client application receives it, parses it, and is able to display a nice output suited to a human reader. To see exactly what happens, we can use the verbose global flag of kubectl (--v), for which the higher the value we set, the more details we get.
As an exercise, try kubectl get pods --v=6, which just shows that a GET request is performed, and keep increasing --v to 7, 8, 9, and more, and you will see the HTTP request headers, the response headers, part or all of the JavaScript Object Notation (JSON) response, and many other details.
The API server itself is not responsible for actually changing the state of the cluster; it updates the database with the new values, and other things happen based on those updates. The actual state changes are done by controllers and components such as the scheduler or kubelet. We are going to drill down into controllers, as they are important for our understanding of GitOps.
When reading about Kubernetes (or maybe listening to a podcast), you will hear the word controller quite often. The idea behind it comes from industrial automation or robots, and it is about the converging control loop.
Let’s say we have a robotic arm and we give it a simple command to move to a 90-degree position. The first thing it will do is analyze its current state; maybe it is already at 90 degrees and there is nothing to do. If it isn’t in the right position, the next thing is to calculate the actions to take in order to get to that position, and then it will apply those actions to reach it.
We start with the observe phase, where we compare the desired state with the current state, then we have the diff phase, where we calculate the actions to apply, and in the action phase, we perform those actions. And again, after we perform the actions, it starts the observe phase to see if it is in the right position; if not (maybe something blocked it from getting there), actions are calculated, and we get into applying the actions, and so on until it reaches the position or maybe runs out of battery or something. This control loop continues on and on until in the observe phase, the current state matches the desired state, so there will be no actions to calculate and apply. You can see a representation of the process in the following diagram:
Figure 1.2 – Control loop
In Kubernetes, there are many controllers. We have the following:
ReplicaSet: https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/
HorizontalPodAutoscaler (HPA): https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/
A few others can be found here, though this isn’t a complete list: https://kubernetes.io/docs/concepts/workloads/controllers/

The ReplicaSet controller is responsible for running a fixed number of Pods. You create it via kubectl and ask to run three instances, which is the desired state. So, it starts by checking the current state (how many Pods are running right now), calculates the actions to take (how many more Pods to start or terminate in order to have three instances), and then performs those actions. There is also the HPA controller, which, based on some metrics, is able to increase or decrease the number of Pods for a Deployment (a Deployment is a construct built on top of Pods and ReplicaSets that allows us to define ways to update Pods: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/), and a Deployment relies on a ReplicaSet it builds internally in order to update the number of Pods. After the number is modified, it is still the ReplicaSet controller that runs the control loop to reach the desired number of Pods.
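As a sketch of how such a desired state is expressed (the names and thresholds here are illustrative, not from the book), note that an HPA only declares a target and bounds; the controller does the converging:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: example-hpa            # illustrative name
spec:
  scaleTargetRef:              # which Deployment to scale
    apiVersion: apps/v1
    kind: Deployment
    name: example-app          # illustrative target
  minReplicas: 2               # lower bound for the desired state
  maxReplicas: 5               # upper bound for the desired state
  targetCPUUtilizationPercentage: 70  # the metric the control loop converges on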
The controller’s job is to make sure that the actual state matches the desired state, and they never stop trying to reach that final state. And, more than that, they are specialized in types of resources—each takes care of a small piece of the cluster.
In the preceding examples, we talked about internal Kubernetes controllers, but we can also write our own, and that’s what Argo CD really is: a controller whose control loop takes care that the state declared in a Git repository matches the state in the cluster. Well, actually, to be precise, it is not a controller but an operator, the difference being that controllers work with internal Kubernetes objects, while operators deal with two domains: Kubernetes and something else. In our case, the Git repository is the outside part handled by the operator, and it does that using something called custom resources, a way to extend Kubernetes functionality (https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/).
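To give a feel for what such a custom resource might look like, here is a purely hypothetical example (this is not Argo CD’s actual API; its real resource is called an Application and has a different schema) declaring that a Git repository should be kept in sync with the cluster:

apiVersion: example.com/v1alpha1   # hypothetical group/version
kind: GitSync                      # hypothetical kind, not an Argo CD resource
metadata:
  name: my-app
spec:
  repoURL: https://github.com/org/repo.git  # placeholder repository
  path: manifests/                 # folder holding the desired-state manifests
  targetNamespace: default

An operator watching such objects would run the same observe-diff-act loop described earlier, with the Git repository as the source of the desired state.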
So far, we have looked at the Kubernetes architecture with the API server connecting all the components and how the controllers are always working within control loops to get the cluster to the desired state. Next, we will get into details on how we can define the desired state: we will start with the imperative way, continue with the more important declarative way, and show how all these get us one step closer to GitOps.
We have discussed a little the differences between an imperative style, where you clearly specify actions to take (such as start three more Pods), and a declarative one, where you specify your intent (such as there should be three Pods running for the deployment) and the actions need to be calculated (you might increase or decrease the Pods, or do nothing if three are already running). Both imperative and declarative ways are implemented in the kubectl client.
Whenever we create, update, or delete a Kubernetes object, we can do it in an imperative style.
To create a namespace, run the following command:
kubectl create namespace test-imperative
Then, in order to see the created namespace, use the following command:
kubectl get namespace test-imperative
Create a deployment inside that namespace, like so:
kubectl create deployment nginx-imperative --image=nginx -n test-imperative
Then, you can use the following command to see the created deployment:
kubectl get deployment -n test-imperative nginx-imperative
To update any of the resources we created, we can use specific commands, such as kubectl label to modify the resource labels, kubectl scale to modify the number of Pods in a Deployment, ReplicaSet, StatefulSet, or kubectl set
