www.orangeava.com
Copyright © 2025 Orange Education Pvt Ltd, AVA®
All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.
Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Orange Education Pvt Ltd, nor its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.
Orange Education Pvt Ltd has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Orange Education Pvt Ltd cannot guarantee the accuracy of this information. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.
First Published: October 2025
Published by: Orange Education Pvt Ltd, AVA®
Address: 9, Daryaganj, Delhi, 110002, India
275 New North Road, Islington, Suite 1314, London,
N1 7AA, United Kingdom
ISBN (PBK): 978-93-49888-10-4
ISBN (E-BOOK): 978-93-49888-62-3
Scan the QR code to explore our entire catalogue
www.orangeava.com
My Beloved Parents:
Smt Tara Devi and Sri Baskit Sah.
And
My Wife, Geethashri Ananda as well as My Son, Naitik.
- Dhirendra Kumar
How We Learn and Evolve in Life
आचार्यात् पादमादत्ते पादं शिष्यः स्वमेधया
पादं सब्रह्मचारिभ्यः पादं कालक्रमेण च
~
A student gains a quarter (of knowledge) from his teacher, a quarter through his own intelligence,
a quarter from his fellow students, and a quarter in due course of time.
- Ishan Khare
Dhirendra Kumar is a seasoned DevOps engineer with deep expertise in cloud infrastructure and developer experience. He began his career at IBM in 2003, supporting Fortune 500 clients with virtualization and container orchestration using Kubernetes and OpenShift. Now based in Boston, he works at a leading fintech firm, where he scales infrastructure and builds automation solutions leveraging Kubernetes and AWS.
Dhirendra holds multiple industry certifications and is an active contributor to the open-source community. He is also passionate about IoT and cutting-edge technology innovations.
Ishan Khare is a distinguished engineer with a passion for technology and lifelong learning. He began his career as a full-stack developer, specializing in frontend (JavaScript and TypeScript) and backend (Python and Docker) at Wingify and ReBIT. He then focused on backend development with Golang at Goibibo/MakeMyTrip, deepening his expertise in building scalable systems.
Ishan's journey advanced into cloud-native engineering at Gojek Tech, where he worked extensively with Kubernetes, Istio, and Google Cloud in distributed systems and SRE. He further contributed to Kubernetes controllers at Porter.run, and became a founding engineer at Loft Labs. There, he played a critical role in building vCluster and vCluster.pro, leveraging Golang, Helm, and Kubernetes operators.
Recently, Ishan led development of a GPU cloud product at Cloudraft, integrating Golang, Kubernetes, and KubeVirt for a data center client. He now focuses on his own start-up, innovating at the intersection of GPU cloud, Kubernetes, and AI—including training custom models. Ishan also explores drone technology, from hardware assembly to software automation, reflecting his end-to-end engineering mindset.
Sachin Lobo has 17 years of experience in designing and engineering applications and infrastructure solutions across enterprise environments. He has worked across various industries, including telecommunications and banking, contributing to large-scale distributed systems and infrastructure modernization projects. He has had the pleasure of mentoring both junior and senior engineers, fostering technical excellence and knowledge sharing within development organizations. Over the years, he has worked with industry leaders such as Infosys, J.P. Morgan, and Reliance Jio, among others.
At the moment, Sachin works as a Staff Product Engineer at InfraCloud where he focuses on building and enhancing cloud-native solutions that enable organizations to adopt modern infrastructure practices. With deep expertise in technologies such as Python, Golang, Docker and Kubernetes, he specializes in creating scalable and reliable systems for enterprise clients.
He is particularly passionate about cloud-native tools and technologies, especially in the field of container orchestration and infrastructure automation. His work is driven by a commitment to simplifying infrastructure, and enabling platform teams to build robust, production-grade systems.
Sachin holds a Bachelor of Engineering in Information Technology, and is based in Mumbai, India. His comprehensive understanding of both traditional enterprise systems and modern cloud-native architectures uniquely positions him to bridge the gap between legacy infrastructure and contemporary virtualization solutions.
We would like to thank our family for their unwavering love and continuous support throughout the writing of this book. Their encouragement and belief in us served as the greatest motivators, and without them, this project would not have been possible. We are deeply grateful for their patience and understanding, as we dedicated countless hours to bringing this book to life!
In today's rapidly evolving cloud-native landscape, organizations are continuously seeking innovative ways to optimize infrastructure and streamline operations. The convergence of traditional Virtual Machine (VM) workloads with agile, containerized applications presents a significant challenge, yet also a powerful opportunity. This book, Ultimate KubeVirt for OpenShift Virtualization, is crafted to guide you through this transformative journey, bridging the gap between established virtualization paradigms and the dynamic world of Kubernetes.
The book serves as your comprehensive guide to leveraging KubeVirt within OpenShift, empowering you to manage VMs as first-class citizens within your Kubernetes clusters. We delve into the core architectural components of KubeVirt, providing you with a fundamental understanding of how it seamlessly integrates with OpenShift's robust ecosystem. Beyond theory, we adopt a practical approach, offering step-by-step instructions for setting up your environment, managing VM lifecycles, and configuring intricate networking and storage solutions tailored for virtualized workloads.
As you progress, you will explore the advanced topics crucial for modern IT environments, including implementing robust security measures, automating VM management with GitOps, and optimizing performance for demanding workloads. We also examine specialized use cases such as running GPU-accelerated VMs, and compare KubeVirt's capabilities with other virtualization strategies. Our goal is to equip you with the knowledge and practical skills necessary to confidently design, deploy, and operate a unified platform for both your VMs and containers, ensuring agility, scalability, and efficiency in your hybrid cloud strategy. This book is divided into 15 chapters, designed to systematically build your expertise:
Chapter 1: Introduction to KubeVirt for OpenShift Virtualization will introduce KubeVirt, its relevance, and the benefits of combining Kubernetes with VMs.
Chapter 2: Setting Up the Environment provides a detailed guide for installing and configuring OpenShift Virtualization.
Chapter 3: Understanding the KubeVirt Architecture offers a comprehensive overview of KubeVirt's core components, and how it uses Kubernetes CRDs.
Chapter 4: Managing Virtual Machines (VMs) covers defining, deploying, and managing the lifecycle of VMs as Kubernetes-native objects.
Chapter 5: Networking in OpenShift Virtualization delves into KubeVirt’s networking architecture, including Multus and security considerations.
Chapter 6: Storage Integration explains persistent storage for VMs, focusing on DataVolumes and CDI.
Chapter 7: Security and Compliance explores the best practices for securing VMs, using RBAC, SELinux, and OpenShift policies.
Chapter 8: Automating Virtualization with GitOps introduces GitOps principles for managing VM configurations and CI/CD pipelines.
Chapter 9: Monitoring and Performance Optimization focuses on setting up monitoring and optimizing VM performance, using Prometheus and Grafana.
Chapter 10: Programming KubeVirt Functionality guides developers on extending and automating KubeVirt using the Go client library.
Chapter 11: KubeVirt vs. vCluster compares these two approaches to virtualization within Kubernetes environments: running full virtual machines with KubeVirt versus running virtual Kubernetes clusters with vCluster.
Chapter 12: Cloning, Golden VM Images, and the CDI Project covers advanced VM lifecycle management, including cloning and golden images with CDI.
Chapter 13: KubeVirt in Hybrid and Multi-Cloud Environments explores using KubeVirt to manage VMs across diverse cloud and on-premises infrastructures.
Chapter 14: Advanced Topics in KubeVirt delves into advanced use cases such as running GPU-accelerated workloads within VMs.
Chapter 15: Best Practices and Future Trends concludes the book by discussing the best practices and emerging trends in virtualization and cloud-native technologies.
We hope you are enjoying your recently purchased book! Your feedback is incredibly valuable to us, and to all other readers looking for great books.
If you found this book helpful or enjoyable, we would truly appreciate it if you could take a moment to leave a short review with a 5-star rating on Amazon. It helps us grow and lets other readers discover our books.
As a thank you, we would love to send you a free digital copy of this book and a 30% discount code for your next purchase on our official websites:
www.orangeava.com
www.orangeava.in (For Indian Subcontinent)
Here's how:
Leave a review for the book on Amazon.
Take a screenshot of your review, and send an email to [email protected] (it can be just the confirmation screen).
Once we receive your screenshot, we will send you the digital file within 24 hours.
Thank you so much for your support - it means a lot to us!
Please follow the link or scan the QR code to download the Code Bundles and Images of the book:
The code bundles and images of the book are also hosted on https://rebrand.ly/29c0d3
In case there’s an update to the code, it will be updated on the existing GitHub repository.
We take immense pride in our work at Orange Education Pvt Ltd and follow best practices to ensure the accuracy of our content and to provide an engaging reading experience to our subscribers. Our readers are our mirrors, and we use their inputs to reflect on and improve upon any human errors that may have occurred during the publishing process. To help us maintain quality and reach out to any readers who might be having difficulties due to unforeseen errors, please write to us at:
Your support, suggestions, and feedback are highly appreciated.
Did you know that Orange Education Pvt Ltd offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.orangeava.com and as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at: [email protected] for more details.
At www.orangeava.com, you can also read a collection of free technical articles, sign up for a range of free newsletters, and receive exclusive discounts and offers on AVA® Books and eBooks.
If you come across any illegal copies of our works in any form on the internet, we would be grateful if you would provide us with the location address or website name. Please contact us at [email protected] with a link to the material.
If there is a topic that you have expertise in, and you are interested in either writing or contributing to a book, please write to us at [email protected]. We are on a journey to help developers and tech professionals gain insights into the technological advancements and innovations happening across the globe, and to build a community that believes knowledge is best acquired by sharing and learning with others. Please reach out to us to learn what our audience demands and how you can be part of this educational reform. We also welcome ideas from tech experts and help them build learning and development content for their domains.
Please leave a review. Once you have read and used this book, why not leave a review on the site that you purchased it from? Potential readers can then see and use your unbiased opinion to make purchase decisions. We at Orange Education would love to know what you think about our products, and our authors can learn from your feedback. Thank you!
For more information about Orange Education, please visit www.orangeava.com.
1. Introduction to KubeVirt for OpenShift Virtualization
Introduction
Structure
Evolution of Virtualization and Cloud-Native Applications
The Evolution of Virtualization
The Rise of Cloud Computing
The Shift to Cloud-Native Applications
Challenges in Managing Legacy and Cloud-Native Workloads
Introduction to KubeVirt: Bridging the Gap
Benefits of KubeVirt in Cloud-Native Environments
Adaptation
KubeVirt and its Relevance
The Evolution of Virtualization and the Rise of Containers
Challenges in Managing Legacy VM-Based Workloads
Key Components of KubeVirt
Benefits of Combining Kubernetes and Virtual Machines
Overview of OpenShift Virtualization
Evolution of Virtualization and Containerization
Key Components of OpenShift Virtualization
Key Use Cases for KubeVirt in Hybrid Workloads
Bridging Legacy Applications with Cloud-Native Environments
Hybrid Cloud Deployments
Dev/Test Environments for Legacy Applications
Multi-Tenancy and Secure Workload Isolation
Disaster Recovery and High Availability
Major Comparison with Traditional Virtualization Platforms
Architecture and Deployment Model
Performance and Resource Utilization
Management and Automation
Security and Isolation
Conclusion
2. Setting Up the Environment
Introduction
Structure
Prerequisites for KubeVirt Deployment
Platform Compatibility
Hardware Requirements
Networking Considerations
Software Dependencies
Security and Access Control
Installation Readiness Check
Installing OpenShift Virtualization Operator
Prerequisites
Step 1: Accessing the OpenShift Web Console
Step 2: Installing the OpenShift Virtualization Operator
Step 3: Verifying the Installation
Step 4: Enabling OpenShift Virtualization
Step 5: Configuring Storage and Networking
Real-World Example: Deploying a Virtual Machine
Configuring Cluster Resources for Virtualization
Understanding Resource Allocation in OpenShift Virtualization
Configuring Compute Resources
Storage Configuration for Virtualization
Using OpenShift Data Foundation (ODF) for Persistent Storage
Networking for Virtualization
Scaling and Scheduling Policies
Networking Setup and Integration
KubeVirt Networking Overview
Networking Setup in OpenShift with KubeVirt
Multus CNI for Multiple Network Interfaces
SR-IOV for High-Performance Networking
Verifying and Troubleshooting the Installation
Verifying Installation
Troubleshooting Common Installation Issues
Tools for Environment Preparation
Infrastructure Provisioning Tools
Kubernetes Cluster Management Tools
Networking and Load Balancing Tools
Storage Management Tools
Security and Compliance Tools
Conclusion
3. Understanding the KubeVirt Architecture
Introduction
Structure
The Core Components
virt-api: The Central API Server for KubeVirt
virt-controller: The Core Control Plane Component
virt-handler: The Agent on Each Node
Node Health Monitoring
Architecture and Communication Flow
virt-launcher: The per-VMI Process Manager
KubeVirt CRDs: A Prioritized Overview
VirtualMachineInstance (VMI)
VirtualMachine (VM)
DataVolume
VirtualMachineInstanceReplicaSet (VMIRS)
instancetype
Less Common CRDs
The Vital Role of libvirt and QEMU in KubeVirt
libvirt: The Virtualization API
QEMU: The Virtual Machine Emulator and Virtualizer
The Interplay of libvirt, QEMU, and KubeVirt
Conclusion
4. Managing Virtual Machines (VMs)
Introduction
Structure
Defining Virtual Machines (VMs) Manifests
Configuring VM Specifications
Configuring CPU
Number of vCPUs
CPU Topology
CPU Model
CPU Features
Configuring Memory
Memory Allocation
Configuring Storage
PersistentVolumeClaims
A Consolidated Example
Managing VM Templates and Replicas
Benefits of Using VM Templates
Creating VM Templates
Managing VM Templates
VM Replicas in KubeVirt
DataVolumes for VM Storage
Understanding the Containerized Data Importer (CDI)
Alternative Storage Options
Using DataVolumes with KubeVirt
DataVolume Features and Considerations
Uploading Disk Images with virtctl image-upload
Practical Use Cases for DataVolumes
Benefits and Limitations of DataVolumes
Lifecycle Management
Conclusion
5. Networking in OpenShift Virtualization
Introduction
Structure
An Overview of KubeVirt’s Networking Architecture
KubeVirt Networking Fundamentals
OpenShift Virtualization Networking Glossary
Virtual Network Interface Controller (vNIC)
Pod Network
KubeVirt’s Network Binding Mechanism
Common Networking Models
Masquerade Networking (NAT)
Multus CNI for Multi-Network Attachments
SR-IOV for High-Performance Networking
Networking Considerations and Best Practices
Real-World Deployment Scenario
Configuring Multus for Advanced Networking in OpenShift Virtualization
Understanding Multus in OpenShift Virtualization
Key Benefits of Multus
Installing and Configuring Multus in OpenShift
Real-World Use Cases
Troubleshooting Multus Issues
Connecting VMs to Kubernetes Services
Networking Strategies for VM and Kubernetes Service Connectivity
Istio for VM Service Mesh Integration
Hybrid Network Policies
Bridged and NAT Networking Models
Bridged Networking
Use Case in OpenShift Virtualization
NAT Networking
Troubleshooting Network Connectivity Issues in KubeVirt for OpenShift Virtualization
Common Network Connectivity Challenges in KubeVirt
Step-by-Step Troubleshooting Approach
Debugging Pod-to-Pod Connectivity Issues
Troubleshooting Service and Load Balancer Issues
DNS Resolution Debugging
Identifying MTU and Packet Fragmentation Issues
Performance and Packet Loss Troubleshooting
Security Considerations for Network Isolation
Understanding Network Isolation in KubeVirt
Key Network Isolation Models
Security Risks in Network Isolation
Best Practices for Securing Network Isolation
Real-World Use Case: Securing Multi-Tenant OpenShift Virtualization
Conclusion
6. Storage Integration
Introduction
Structure
Persistent Storage for Virtual Machines
Traditional Storage Solutions
Kubernetes Foundational Storage Elements
KubeVirt’s Utilization of these Foundational Elements
Storage Modes
Flexible Storage Provisioning with CDI and Kubernetes Primitives
Containerized Data Importer (CDI) Project
The DataVolume Custom Resource
HTTP/HTTPS URL Imports
Container Disk Imports from Container Registries
Efficient PVC Cloning
Local Disk Image Uploads
Creating Empty KubeVirt Virtual Machine Disks
Importing from oVirt Installations (imageio Source)
Importing from VMware Environments (vddk Source)
Specialized Content Type Handling
ContainerDisks in KubeVirt
CSI Drivers for Enhanced Storage Support
Leveraging CSI Drivers in KubeVirt
Enhanced Flexibility and Choice of Storage Backends
Popular CSI Drivers and KubeVirt Integration
Cloud Provider CSI Drivers
On-Premise CSI Drivers
Managing Storage Performance and Scalability
Latency in Volume Operations
Identifying and Measuring Storage Latency
Host Connection Limits and Connection Pressure
Strategies to Mitigate
Scalability Considerations with Local and Topology-Constrained Storage
Local Storage Scalability Challenges
Topology-Constrained Storage Scalability Considerations
Solutions and Best Practices
Impact of Access Modes on Live Migration
Storage Access Modes Defined
Impact of RWO on Live Migration
Impact of ROX on Live Migration
Impact of RWX on Live Migration
Troubleshooting Common Storage Issues
Volume Mounting Failures
Common Causes
Troubleshooting Steps
Storage Performance Bottlenecks
Common Causes
Diagnosing and Resolving Bottlenecks
Data Corruption Issues
Identifying and Addressing Corruption
Network Connectivity Problems
Impact on Storage Access and Solutions
Best Practices for Storage Integration
Selecting Appropriate Storage Types
Block, File, and Object Storage
Workload-Specific Recommendations
Recommended Configurations for StorageClasses and PersistentVolumes
StorageClass Configuration
PersistentVolume and PersistentVolumeClaim Configuration
Guidelines on Monitoring
Monitoring Tools and Metrics
Proactive Identification
Conclusion
7. Security and Compliance
Introduction
Structure
Role-Based Access Control for Virtual Machines
Understanding RBAC in KubeVirt
Core RBAC Components
RBAC Implementation in KubeVirt
Advanced RBAC Use Cases
Best Practices for RBAC in OpenShift Virtualization
Using SELinux and Seccomp Profiles for Enhanced Security
Introduction to SELinux and Seccomp
SELinux Overview
Seccomp Overview
Real-World Use Cases
Configuring OpenShift Security Policies
OpenShift Cluster Access
Overview of OpenShift Cluster Access
Real-World Use Case
Authentication Mechanisms
CLI and Web Console Access
Role-Based Access Control (RBAC)
Roles and RoleBindings
Security Context Constraints (SCCs)
Network Policies for Securing Virtualized Workloads
Compliance and Auditing
Auditing and Monitoring Virtual Machine Activity
Importance of Auditing and Monitoring
Logging VM Activity in KubeVirt
Capturing VM Logs
Audit Logging for Security and Compliance
Network Traffic Monitoring
Capturing Network Traffic Logs
Security Considerations
Kyverno: How It Can Help KubeVirt and OpenShift
Key Features of Kyverno
Applying Kyverno to KubeVirt
Enhancing OpenShift Security with Kyverno
Enforcing Network Policies
Enforcing OpenShift SCC Policies
Real-World Security Challenges and Solutions in KubeVirt and OpenShift Virtualization
Workload Isolation and Multi-Tenancy Risks
Network Security and East-West Traffic Protection
Storage Security and Data Protection
Supply Chain Security and Image Integrity
Identity and Access Management (IAM)
Best Practices for Securing OpenShift Virtualization
Conclusion
8. Automating Virtualization with GitOps
Introduction
Structure
Introduction to GitOps for Virtualization
Defining GitOps in the Context of KubeVirt and OpenShift
Advantages of GitOps for Virtualization Workloads
GitOps Architecture for KubeVirt on OpenShift
Real-World Use Case: DevSecOps for VMs in a Financial Institution
Integrating Persistent Volumes and DataVolumes into GitOps
Extending GitOps to Multi-Cloud and vCluster Deployments
Managing VM Configurations with GitOps Tools
Declarative VM Management with KubeVirt
GitOps Principles Applied to VM Lifecycle Management
Real-World Example
Integrating VMConfig Custom Resources
GitOps Pipelines for VM Deployments
Secrets and SSH Keys Management
Observability and Auditing
Challenges and Best Practices
Automating Deployment and Updates Using Pipelines
Understanding the Pipeline Philosophy in GitOps-Driven Virtualization
Architectural Design of Virtualization Pipelines
Real-World Example: Automating a Windows VM Rollout
Leveraging Tekton and Argo CD for Full Lifecycle Automation
Version Control and Environment Promotion
Secure Handling of Secrets and VM Configuration
Example: Injecting a Cloud-Init SSH Key
Monitoring, Observability, and Pipeline Resilience
Maintaining Consistency across Clusters
The Challenge of Multi-Cluster Consistency
GitOps as the Foundation for Consistency
Architecture for Multi-Cluster Consistency with KubeVirt
Managing KubeVirt-Specific Resources
Real-World Example: Deploying a Windows VM across Two Clusters
Synchronizing RBAC and Cluster Policies
CI/CD Integration for Virtualized Workloads
Architectural Considerations
Pipeline Design Patterns
Real-World Example: GitOps for Ubuntu-based Development Environments
Toolchain Recommendations
Security and Compliance Considerations
Troubleshooting GitOps Automation Issues
Understanding the GitOps Execution Path
Common Categories of Automation Failures
Diagnostic Tooling and Observability
Best Practices for Resilient GitOps Pipelines
Real-World Troubleshooting Scenario
Conclusion
9. Monitoring and Performance Optimization
Introduction
Structure
Setting Up Monitoring for KubeVirt Workloads
The Need for VM Observability in Kubernetes
Enabling KubeVirt Monitoring on OpenShift
Configuring Prometheus to Scrape KubeVirt Metrics
Configuring RBAC for Metrics Scraping
Visualizing Metrics with Grafana Dashboards
Importing KubeVirt Dashboards
Advanced Observability Techniques
Using Prometheus and Grafana for Visualization
Architecture Overview of Monitoring in OpenShift with KubeVirt
Configuring Prometheus to Monitor KubeVirt Components
Understanding Key KubeVirt Metrics
Integrating and Customizing Grafana Dashboards
Real-World Use Case: SLA Monitoring for Virtualized Workloads
Best Practices for Observability in KubeVirt-Enhanced OpenShift Clusters
Configuring Alerts for Proactive Monitoring
Alerting Architecture in OpenShift-KubeVirt Environments
Defining Effective Alerting Rules
Virtual Machine (VM) Status
Resource Saturation
Disk I/O Bottlenecks
Integrating with Alertmanager for Notification Routing
Tuning Alert Sensitivity and Preventing Alert Fatigue
Best Practices for Alert Lifecycle Management
Analyzing Performance Metrics for Optimization
Observability Architecture for KubeVirt in OpenShift
Key Performance Metrics for Optimization
Optimization Example
Setting Performance Baselines
Detecting Performance Bottlenecks
Alerts and Automation for Performance Degradation
Troubleshooting Performance Bottlenecks
Understanding the Anatomy of a Bottleneck
Symptom-Based Troubleshooting Framework
Tools and Techniques
Bottleneck Scenario 1: CPU Saturation in KubeVirt Nodes
Bottleneck Scenario 2: Memory Pressure and Ballooning
Bottleneck Scenario 3: Disk and I/O Performance
Bottleneck Scenario 4: Network Latency and Throughput
Resource Optimization Techniques for Virtual Machines
Understanding Resource Allocation Models in KubeVirt
Right-Sizing Virtual Machines
Leveraging HugePages for Memory Optimization
CPU Pinning and NUMA-Aware Scheduling
Disk I/O Optimization
Network Optimization for Virtual Machines
Using Live Migration Strategically
Automating Optimization with GitOps and Pipelines
Conclusion
10. Programming KubeVirt Functionality
Introduction
Structure
Setting Up Your Go Environment and KubeVirt Client
The Role of kubeconfig Files
Out-of-Cluster Authentication with client-go
In-Cluster Authentication Using ServiceAccounts
Using k8s.io/client-go/rest.InClusterConfig()
KubeVirt’s Go Client Libraries
Initializing the KubeVirt Clientset
Using Controller-Runtime Client Instead
Interacting with KubeVirt API Objects
Performing CRUD Operations on KubeVirt Resources
Listing VirtualMachineInstances (VMIs) in a Namespace
Getting a Specific VirtualMachineInstance
Creating a VirtualMachine
Updating a VirtualMachine (example, to change its state)
CRUD Operations with Controller-Runtime Based Client
Create VM
Delete VM
Advanced Operations
Programmatically Scaling Resources
Creating Platforms around KubeVirt
Creating the HTTP Server
Initializing the Containing Objects
Defining Routes and Attaching Handlers
Defining Handlers
Launch the Server
Advantages
Conclusion
11. KubeVirt vs. vCluster
Introduction
Structure
A Detailed Architectural Overview
KubeVirt
Explanation of the Diagram Elements and Flow
Core Philosophy and Goals
Key Architectural Components
Explanation of the Diagram Elements and Flow
Provisioning Isolated vCluster Instances
Leveraging the vCluster CLI
Advanced Deployment with Helm and vCluster.yaml
Accessing Your vCluster via Kubeconfig
Verifying Your vCluster Deployment
Secure Decommissioning: Deleting vClusters
Optimizing Resources: Pausing (Sleep Mode) and Resuming vClusters
Comparing Performance and Resource Utilization
Hybrid Workloads vs. Multi-Tenant Environments
Advantages in Hybrid Contexts
Integrating KubeVirt with vCluster
Benefits and Limitations of the Combined Approach
Decision Framework
Choosing KubeVirt
Choosing vCluster
Conclusion
12. Cloning, Golden VM Images, and the CDI Project
Introduction
Structure
Benefits and Use Cases
Advantages of KubeVirt VM Cloning
Accelerating VM Provisioning and Deployment
Ensuring Consistency and Standardization Across Environments
Enabling Efficient Scalability of Virtualized Workloads
Bolstering Disaster Recovery Capabilities
Creating and Managing Golden Images
The “Golden Disk Image” in the KubeVirt Ecosystem
The Central Role of PersistentVolumeClaims
CDI-Based Cloning
Leveraging the VirtualMachineClone API with VM Snapshots
Referencing Golden Images in VirtualMachine Definitions
Best Practices for Golden Image Lifecycle Management
Updating Live Virtual Machines from New Golden Image Versions
CDI and its Role in Image Management
Core Components
Interaction Flow for DataVolume Processing
Key CDI Functionalities for Image Management
Importing VM Images from Various Sources
Cloning the Existing PersistentVolumeClaims
Understanding contentType
KubeVirt (Default Content Type)
Creation, Maintenance, and Consumption Patterns
Creation
Maintenance and Updating
Customizing VM Clones for Unique Workload Requirements
Ensuring Uniqueness: MAC Addresses and SMBIOS
Modifying Virtual Hardware Configuration
Connecting to NetworkAttachmentDefinitions (NADs)
Interface Types (bridge, masquerade, slirp, SR-IOV)
Static IP and MAC Address Considerations
Conclusion
13. KubeVirt in Hybrid and Multi-Cloud Environments
Introduction
Structure
Understanding Hybrid and Multi-Cloud Architectures
Business Drivers behind Hybrid and Multi-Cloud Adoption
Architectural Patterns for Hybrid and Multi-Cloud Deployments
Network and Storage Considerations
KubeVirt’s Role in Hybrid and Multi-Cloud Ecosystems
Challenges and Best Practices
Workload Portability: Migrating VMs across Clouds with KubeVirt
Understanding Workload Portability in the Context of Virtualization
Architectural Foundations for Multi-Cloud VM Migration
Step-by-Step Process for Cross-Cloud VM Migration
Real-World Example: Migrating from On-Prem OpenShift to AWS ROSA
Considerations and Best Practices
Future Outlook: Live Migration and Edge Computing
Networking Considerations for Hybrid and Multi-Cloud Environments
Introduction to Hybrid and Multi-Cloud Networking
Network Design Principles
KubeVirt Networking Architecture
Inter-Cluster Networking
DNS and Service Discovery
Load Balancing and Traffic Management
Security and Policy Management
Observability and Troubleshooting
Multi-Cloud Workload Scaling and Disaster Recovery Strategies
Understanding Multi-Cloud Scalability and Its Challenges
Scaling Virtual Workloads across Multi-Cloud OpenShift Clusters
Horizontal Scaling across Clusters
Vertical Scaling within Nodes
Disaster Recovery (DR) for Virtual Machines in Multi-Cloud Setups
Cold DR with GitOps and Immutable VM Definitions
Warm DR Using Persistent Volumes and Periodic Sync
Hot DR with Continuous Replication
Practical Architecture for Multi-Cloud Scaling and DR
Best Practices and Recommendations
Security Challenges and Best Practices in Hybrid/Multi-Cloud Setups
Security Challenges in Hybrid and Multi-Cloud Architectures
Security Best Practices for KubeVirt in Hybrid/Multi-Cloud
Real-World Use Cases for KubeVirt in Hybrid and Multi-Cloud Scenarios
Modernizing Legacy Applications While Ensuring Continuity
Disaster Recovery and High Availability across Cloud Boundaries
Edge Computing and Telco Network Function Virtualization (NFV)
Cross-Cloud Bursting for Seasonal Demand
Dev/Test Environments for Heterogeneous Application Stacks
Security-Sensitive Workloads in Regulated Environments
Conclusion
14. Advanced Topics in KubeVirt
Introduction
Structure
GPU Workloads
Types of GPU Virtualization
Kubernetes Device Plugins for GPU Support
NVIDIA GPU Support
AMD GPU Support
GPU Passthrough for KubeVirt VMs
Host Preparation
Enable IOMMU (Input/Output Memory Management Unit)
Setting Kernel Parameters
Load vfio-pci Driver and Bind GPU
Configuring KubeVirt Custom Resource (CR) for permittedHostDevices
VirtualMachineInstance (VMI) Specification for GPU Passthrough
Best Practices for Performance Tuning of GPU Workloads
Leveraging NUMA Alignment and CPU Pinning
Utilizing Fractional GPU Resources
Storage Performance
HugePages for VMs
Managing GPU Resources and Monitoring Usage
Monitoring GPU Metrics with Prometheus and Grafana
NVIDIA Data Center GPU Manager (DCGM)
DCGM Exporter
Prometheus Setup
Grafana Dashboards
Key GPU Metrics for Monitoring
Challenges and Troubleshooting GPU Workloads in KubeVirt
vfio-pci Binding Problems
IOMMU Misconfigurations
Incorrect KubeVirt CR permittedHostDevices Configuration
GPU Slicing, GPU Time Sharing, and Multi-Instance GPU (MIG)
GPU Slicing/Time-Sharing (Primarily NVIDIA context)
Multi-Instance GPU (MIG) (NVIDIA Specific)
NVIDIA vGPU (Mediated Devices) in KubeVirt
Conclusion
15. Best Practices and Future Trends
Introduction
Structure
Trends in Hybrid and Multi-Cloud Virtualization
The Unified Control Plane: A New Operational Paradigm
Strategic Modernization
Best Practices for Production-Grade Networking
High-Performance, Resilient Storage in Hybrid/Multi-Cloud
The Role of Edge Computing and Virtualization
Integrating KubeVirt with Lightweight Kubernetes
Resource Management
High Availability (HA)
Security
AI and Automation in Virtual Workload Management
The Operator Pattern and Go Controllers
AIOps in Practice: Predictive Scaling and Anomaly Detection
Automating Day-2 Operations
Towards Self-Healing and Generative Infrastructure
Conclusion
Index
This chapter discusses virtualization, which has transformed IT infrastructure from single-application servers into efficient, resource-optimized systems. With the rise of cloud-native technologies, organizations are increasingly adopting containers for their agility and scalability. However, many enterprises still rely on Virtual Machines (VMs) for legacy applications and compliance needs.
KubeVirt addresses this challenge by extending Kubernetes to manage VM workloads alongside containerized applications. It enables businesses to unify their infrastructure, improve resource utilization, and modernize legacy systems, without disrupting operations. OpenShift Virtualization, powered by KubeVirt, further enhances this capability by providing enterprise-grade tools for VM lifecycle management, security, and hybrid cloud deployments. Together, these technologies bridge the gap between traditional virtualization and modern cloud-native environments, supporting a gradual transition toward fully containerized ecosystems.
In this chapter, we will cover the following topics:
Evolution of virtualization and cloud-native applications
KubeVirt and its relevance
Benefits of combining Kubernetes and Virtual Machines
Overview of OpenShift Virtualization
Key use cases for KubeVirt in hybrid workloads
Comparison with traditional virtualization platforms
Virtualization has come a long way, from the early days of partitioning servers to the full-fledged cloud computing revolution, allowing businesses to get the most out of their resources, do more with less, and move to virtual infrastructures far more easily. But as organizations adopt cloud-native architectures, traditional Virtual Machines (VMs) are left running alongside containerized applications, requiring creative solutions, and KubeVirt provides one.
It all started long ago, when virtualization was introduced to use hardware more efficiently. The traditional approach to building a data center was to run a single application on a single physical server, which resulted in poor resource utilization and additional costs. By using hypervisors, multiple Virtual Machines (VMs) can now operate on one server, providing greater resource efficiency and flexibility.
Mainframe Era (1960s–1970s):
IBM introduced virtualization in mainframes, allowing multiple users to run isolated workloads.
Server Virtualization (1990s–2000s):
VMware popularized x86 server virtualization, enabling multiple OS instances on a single server.
Data Center Consolidation:
Organizations adopted hypervisors such as VMware ESXi, Microsoft Hyper-V, KVM, and IBM LPAR to reduce hardware footprints, and increase agility.
Figure 1.1: The Early Days of Hardware Virtualization
As virtualization matured, cloud computing emerged, shifting the focus from on-premises infrastructure to on-demand services. Infrastructure-as-a-Service (IaaS) platforms such as AWS EC2, Azure, IBM, HPE, and GCP leveraged virtualization to provide scalable computing resources.
Public Cloud (2006-Present):
AWS pioneered cloud services, allowing enterprises to rent virtualized infrastructure, instead of maintaining physical servers.
Private and Hybrid Clouds:
OpenStack, VMware vSphere, and Azure Stack enabled enterprises to create private cloud environments, while integrating with public cloud services.
Limitations:
Despite its benefits, VM-based cloud computing faced challenges in speed, resource efficiency, and orchestration complexity.
Figure 1.2: The Rise of Cloud Computing
The Emergence of Containers: Containers introduced a new paradigm for deploying and managing applications. Unlike VMs, which virtualize hardware, containers virtualize the operating system, making them lightweight and faster to deploy.
Figure 1.3: The Emergence of Containers
Docker Revolution (2013-Present):
Docker standardized container packaging, simplifying application deployment.
Kubernetes (2014-Present):
Google open-sourced Kubernetes as a powerful orchestration tool, enabling large-scale containerized deployments.
Microservices Architecture:
Organizations adopted microservices to break monolithic applications into independently deployable services.
While containers offer agility, many enterprises still rely on legacy VM-based applications. Migrating entirely to containers is not always feasible due to dependencies, licensing, or performance considerations. This creates a need for hybrid environments that support both VMs and containers seamlessly.
Coexistence of VMs and Containers:
Enterprises need solutions that integrate VMs within Kubernetes environments.
Resource Efficiency and Orchestration:
Managing VMs alongside containers requires consistent networking, storage, and compute management.
Security and Compliance:
Legacy applications often require specific security and compliance policies that must be maintained in a containerized infrastructure.
KubeVirt is an open-source project that allows users to run virtual machines inside Kubernetes. It enables enterprises to run VM-based workloads within Kubernetes clusters, leveraging Kubernetes-native capabilities for networking, storage, and automation.
Seamless Integration:
KubeVirt allows VMs to run as Kubernetes objects, simplifying management.
Unified Orchestration:
IT teams can use Kubernetes tools such as kubectl and Helm to manage both VMs and containers.
Hybrid Cloud Ready:
KubeVirt supports multi-cloud and on-premises deployments, aligning with hybrid cloud strategies.
KubeVirt provides several advantages for organizations looking to modernize their infrastructure, without abandoning legacy applications.
Flexibility:
Run VMs alongside containers, without separate orchestration platforms.
Efficiency:
Utilize Kubernetes’ resource scheduling to optimize VMs and container workloads.
Cost Savings:
Reduce reliance on traditional hypervisors, while leveraging Kubernetes’ automation.
Future-Proofing:
Enable gradual migration from VMs to containers, without disrupting operations.
Figure 1.4: Key Evolutionary Stages for KubeVirt
Figure 1.5 shows the adaptation:
Figure 1.5: KubeVirt Adaptation
Traditional Virtualization:
Hypervisors and VM-based workloads before Kubernetes.
Containerization:
Rise of Docker, Kubernetes, and microservices.
Hybrid Workloads:
Initial efforts to run VMs alongside containers in Kubernetes.
KubeVirt Introduction:
Development of KubeVirt to run VMs within Kubernetes.
Adoption and Integration:
Growing enterprise adoption, integration with OpenShift, and multi-cloud environments.
In today's rapidly evolving IT landscape, organizations are increasingly adopting cloud-native technologies to optimize infrastructure management, improve scalability, and accelerate application deployment. However, many enterprises still rely on traditional Virtual Machine (VM) based workloads due to legacy applications, compliance requirements, and operational familiarity. This is where KubeVirt comes into play, bridging the gap between conventional virtualization and containerized environments.
KubeVirt extends Kubernetes, the leading container orchestration platform, to run and manage virtual machines alongside containers seamlessly. It provides a unified platform where organizations can integrate their existing VM workloads into modern Kubernetes-based environments, enabling hybrid cloud adoption, and easing the transition to cloud-native computing.
Traditional Virtualization
Virtualization technology has been a cornerstone of IT infrastructure for decades, allowing multiple virtual machines to run on a single physical server. Popular hypervisors such as VMware ESXi, Microsoft Hyper-V, and KVM (Kernel-based Virtual Machine) have enabled efficient resource utilization, improved scalability, and isolated workloads.
However, as cloud-native computing gained momentum, organizations began shifting toward containerized architectures. Containers offer:
Lightweight runtime environments compared to VMs.
Faster startup times due to the absence of full operating system overhead.
Better scalability and orchestration using Kubernetes.
Figure 1.6: Architecture of Traditional System vs. Virtual System
Despite the benefits of containerization, many enterprises still rely on traditional VMs for:
Running stateful applications, and legacy software.
Compliance and security requirements in regulated industries.
Workloads that require full OS support not available in containers.
These challenges necessitate a solution that allows organizations to run both containers and VMs on a single platform, while leveraging the power of Kubernetes. This is where KubeVirt becomes highly relevant.
What is KubeVirt?
KubeVirt is an open-source virtualization solution designed to run virtual machines within Kubernetes clusters. Developed by the Kubernetes community, and backed by Red Hat, KubeVirt enables organizations to modernize their infrastructure by integrating VMs into their Kubernetes environments, without requiring a separate hypervisor or dedicated virtualization platform.
KubeVirt extends Kubernetes by introducing the following core components:
Virtual Machine (VM) Custom Resource:
Defines and manages virtual machines as native Kubernetes objects.
virt-launcher:
A pod that hosts the virtual machine, ensuring seamless VM execution within Kubernetes.
virt-handler:
A daemon running on Kubernetes nodes responsible for managing VM lifecycles.
virt-controller:
Manages high-level orchestration tasks, such as VM creation and scheduling.
libvirt and QEMU/KVM:
Underlying technologies that provide VM execution within Kubernetes pods.
KubeVirt API:
Extends the Kubernetes API to support virtualization-related workloads.
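Because these components expose VMs as standard Kubernetes API objects, they can be driven by the same client libraries used for Pods and Deployments. The following is a minimal sketch, assuming a cluster that already has KubeVirt installed, a kubeconfig at the default location, and an illustrative "default" namespace; it uses the generic Kubernetes dynamic client in Go to list VirtualMachine custom resources (KubeVirt's dedicated Go client is covered in Chapter 10):

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/runtime/schema"
    "k8s.io/client-go/dynamic"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Build a client configuration from the default kubeconfig (~/.kube/config).
    config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        panic(err)
    }

    // The dynamic client can work with any custom resource, including KubeVirt's.
    client, err := dynamic.NewForConfig(config)
    if err != nil {
        panic(err)
    }

    // VirtualMachine objects are served by the kubevirt.io/v1 API group.
    vmGVR := schema.GroupVersionResource{
        Group:    "kubevirt.io",
        Version:  "v1",
        Resource: "virtualmachines",
    }

    // List VirtualMachines in the "default" namespace, roughly what
    // "kubectl get vms -n default" shows on the command line.
    vms, err := client.Resource(vmGVR).Namespace("default").List(context.TODO(), metav1.ListOptions{})
    if err != nil {
        panic(err)
    }

    for _, vm := range vms.Items {
        fmt.Printf("VirtualMachine: %s\n", vm.GetName())
    }
}

Against a cluster without KubeVirt, the same call would simply fail because the kubevirt.io/v1 API group is not served, which is a quick way to confirm whether the operator is installed.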
Why is KubeVirt Relevant?
Unifying VM and Container Workloads:
KubeVirt eliminates the need for separate infrastructure stacks by running both VMs and containers on Kubernetes, allowing organizations to streamline their DevOps and CI/CD pipelines, without migrating legacy applications immediately.
Enabling Hybrid Cloud and Multi-Cloud Strategies:
KubeVirt supports hybrid cloud deployments, allowing enterprises to run VM workloads on-premises, in public clouds, or across multiple cloud providers, while benefiting from Kubernetes’ orchestration capabilities.
Modernizing Legacy Applications:
Many enterprises operate legacy applications that cannot be easily containerized. KubeVirt provides a path for incremental modernization, allowing organizations to refactor applications at their own pace, while maintaining Kubernetes-native management.
Simplified Management and Automation:
With KubeVirt, IT teams can use Kubernetes-native tools (for example, kubectl, Helm, and GitOps) to manage virtual machines, bringing consistency to Infrastructure as Code (IaC) practices.
Cost Efficiency and Resource Optimization:
KubeVirt helps optimize hardware utilization by allowing VMs and containers to share compute resources, reducing the need for separate virtualization infrastructure.
The integration of KubeVirt into Kubernetes environments offers many advantages:
Unified Management:
Organizations can manage both containerized and VM-based workloads, using a single Kubernetes control plane, simplifying administration, and reducing the need for multiple platforms.
Improved Resource Efficiency:
By running VMs and containers within the same cluster, organizations can optimize resource allocation, reduce hardware waste, and achieve better density and cost-effectiveness.
Hybrid Cloud Compatibility:
KubeVirt enables organizations to seamlessly extend their infrastructure across on-premises data centers and cloud environments, facilitating hybrid cloud adoption and migration strategies.
Enhanced Automation and CI/CD Integration:
Traditional VMs can be included in modern DevOps workflows, enabling automated deployments, scaling, and integration with Kubernetes-native tools such as Helm, GitOps, and CI/CD pipelines.
Security and Isolation:
KubeVirt allows organizations to leverage Kubernetes security policies, while maintaining VM-level isolation, ensuring secure multi-tenancy and workload separation.
Simplified Modernization of Legacy Applications:
Legacy applications that cannot be easily containerized can still be managed within Kubernetes clusters, allowing gradual modernization, without disrupting business operations.
OpenShift Virtualization, an enterprise-grade solution powered by KubeVirt, is designed to integrate traditional virtual machine workloads into modern Kubernetes-based container environments. As organizations embrace cloud-native architectures, they face the challenge of managing legacy VMs, while leveraging the flexibility and scalability of containers. OpenShift Virtualization bridges this gap by enabling users to run and manage VMs within an OpenShift cluster, ensuring a unified infrastructure for both containerized and non-containerized applications.
