VMware Software-Defined Storage

Martin Hosken

Description

The inside guide to the next generation of data storage technology.

VMware Software-Defined Storage: A Guide to the Policy Driven, Software-Defined Storage Era presents the most in-depth look at VMware's next-generation storage technology, helping solutions architects and operational teams maximize quality storage design. Written by a double VMware Certified Design Expert, this book delves into the design factors and capabilities of Virtual SAN and Virtual Volumes to provide a uniquely detailed examination of the software-defined storage model. Storage-as-a-Service (STaaS) is discussed in terms of deployment through VMware technology, with insight into the provisioning of storage resources and operational management, while legacy storage and storage protocol concepts provide context and demonstrate how Virtual SAN and Virtual Volumes meet traditional challenges. The discussion of architecture emphasizes the economics of storage alongside specific design factors for next-generation VMware-based storage solutions, and is followed by an example in which a solution is created from the preferred option identified among a selection of cross-site design options. Storage hardware lifecycle management is an ongoing challenge for IT organizations and service providers. VMware is addressing these challenges through the software-defined storage model and the Virtual SAN and Virtual Volumes technologies; this book provides unprecedented detail and expert guidance on the future of storage.

* Understand the architectural design factors of VMware-based storage
* Learn best practices for Virtual SAN stretched architecture implementation
* Deploy STaaS through vRealize Automation and vRealize Orchestrator
* Meet traditional storage challenges with next-generation storage technology

Virtual SAN and Virtual Volumes are leading the way in efficiency, automation, and simplification, while maintaining enterprise-class features and performance. As organizations around the world look to cut costs without sacrificing performance, availability, or scalability, VMware-based next-generation storage solutions are the ideal platform for tomorrow's virtual infrastructure. VMware Software-Defined Storage provides detailed, practical guidance on the model that is set to transform all aspects of vSphere data center storage.

Page count: 802

Publication year: 2016




Table of Contents

Foreword by Duncan Epping

Introduction

Who Should Read This Book?

What Is Covered in This Book?

Chapter 1: Software-Defined Storage Design

Software-Defined Compute

Software-Defined Networking

Software-Defined Storage

Designing VMware Storage Environments

The Economics of Storage

Implementing a Software-Defined Storage Strategy

Software-Defined Storage Summary

Chapter 2: Classic Storage Models and Constructs

Classic Storage Concepts

vSphere Storage Technologies

Chapter 3: Fabric Connectivity and Storage I/O Architecture

Fibre Channel SAN

iSCSI Storage Transport Protocol

NFS Storage Transport Protocol

Fibre Channel over Ethernet Protocol

Multipathing Module

Direct-Attached Storage

Evaluating Switch Design Characteristics

Fabric Connectivity and Storage I/O Architecture Summary

Chapter 4: Policy-Driven Storage Design with Virtual SAN

Virtual SAN Overview

Virtual SAN Architecture

Virtual SAN Design Requirements

Virtual SAN Network Fabric Design

Virtual SAN Storage Policy Design

Virtual SAN Datastore Design and Sizing

Designing for Availability

Virtual SAN Internal Component Technologies

Virtual SAN Integration and Interoperability

Chapter 5: Virtual SAN Stretched Cluster Design

Stretched Cluster Use Cases

Fault Domain Architecture

Witness Appliance

Network Design Requirements

Stretched Cluster Deployment Scenarios

Default Gateway and Static Routes

Stretched Cluster Storage Policy Design

Preferred and Nonpreferred Site Concepts

Stretched Cluster Read/Write Locality

Distributed Resource Scheduler Configurations

High Availability Configuration

Stretched Cluster WAN Interconnect Design

Deploying Stretched VLANs

Data Center Interconnect Design Considerations Summary

Stretched Cluster Solution Architecture Example

Stretched Cluster Failure Scenarios

Stretched Cluster Interoperability

Chapter 6: Designing for Web-Scale Virtual SAN Platforms

Scale-up Architecture

Scale-out Architecture

Designing vSphere Host Clusters for Web-Scale

Building-Block Clusters and Scale-out Web-Scale Architecture

Scalability and Designing Physical Resources for Web-Scale

Leaf-Spine Web-Scale Architecture

Chapter 7: Virtual SAN Use Case Library

Use Cases Overview

Solution Architecture Example: Building a Cloud Management Platform with Virtual SAN

Chapter 8: Policy-Driven Storage Design with Virtual Volumes

Introduction to Virtual Volumes Technology

Management Plane

Data Plane

Storage Policy–Based Management with Virtual Volumes

Benefits of Designing for Virtual Volumes

Virtual Volumes Key Design Requirements

vSphere Storage Feature Interoperability

VAAI and Virtual Volumes

Virtual Volumes Summary

Chapter 9: Delivering a Storage-as-a-Service Design

STaaS Service Definition

Cloud Platforms Overview

Cloud Management Platform Architectural Overview

The Combined Solution Stack

Workflow Examples

Summary

Chapter 10: Monitoring and Storage Operations Design

Storage Monitoring

Storage Component Monitoring

Storage Monitoring Challenges

Common Storage Management and Monitoring Standards

Virtual SAN Monitoring and Operational Tools

vRealize Operations Manager

vRealize Log Insight

Log Insight Syslog Design

End-to-End Monitoring Solution Summary

Storage Capacity Management and Planning

Summary

End User License Agreement


List of Illustrations

Chapter 1: Software-Defined Storage Design

Figure 1.1 Software-defined data center conceptual model

Figure 1.2 Example of a design sequence methodology

Figure 1.3 Storage architecture business drivers and design factors

Figure 1.4 Hard disk drive cost per gigabyte

Figure 1.5 Hard disk drive capacity improvements

Figure 1.6 Breakdown of total cost of ownership of storage hardware

Figure 1.7 Simplified annual total cost of ownership

Figure 1.8 Storage cost per gigabyte example

Figure 1.9 Information Lifecycle Management key challenges

Figure 1.10 Hybrid Virtual Volumes and Virtual SAN platform

Chapter 2: Classic Storage Models and Constructs

Figure 2.1 Classic storage model

Figure 2.2 Storage LUN provisioning mechanisms

Figure 2.3 Strips and stripes

Figure 2.4 Performance in striping

Figure 2.5 Redundancy through parity

Figure 2.6 Redundancy in disk mirroring

Figure 2.7 RAID 0 striped disk array without fault tolerance

Figure 2.8 RAID 1 disk mirroring and duplexing

Figure 2.9 RAID 1+0 mirroring and striping

Figure 2.10 RAID 3 parallel transfer with dedicated parity disk

Figure 2.11 RAID 5 independent data disks with distributed parity blocks

Figure 2.12 RAID 6 independent data disks with two independent parity schemes

Figure 2.13 Virtual provisioning

Figure 2.14 Traditional provisioning versus virtual provisioning

Figure 2.15 Virtual provisioning layering

Figure 2.16 Tiered storage systems

Figure 2.17 Storage tiering design example

Figure 2.18 Storage-tiering mechanisms

Figure 2.19 Scaling storage in a building-block approach

Figure 2.20 Snapshots and clones

Figure 2.21 vSphere Metro Storage Cluster design

Figure 2.22 Identifying the demarcation line between the vSphere layer and the storage array layer

Figure 2.23 vSphere storage controller stack

Figure 2.24 Example of a multiple storage controller virtual machine design, for splitting workload across storage controllers

Figure 2.25 Volume, datastore, and LUN

Figure 2.26 Types of datastore and storage network

Figure 2.27 VMFS datastores

Figure 2.28 Raw device mapping connection topology

Figure 2.29 Cluster Across Boxes, Windows Server Failover Clustering example

Figure 2.30 Datastore cluster design example

Figure 2.31 Storage DRS affinity rules

Figure 2.32 Storage I/O control mechanism

Figure 2.33 VASA 1.0 vCenter server and storage array integration

Figure 2.34 Classic storage policies

Figure 2.35 Static storage tier presentation model

Figure 2.36 Mixed storage tier presentation model

Figure 2.37 Fully auto-tiered presentation model

Figure 2.38 VMware dedicated disk subsystem

Figure 2.39 VMware shared disk subsystem

Chapter 3: Fabric Connectivity and Storage I/O Architecture

Figure 3.1 Fibre Channel Protocol layers

Figure 3.2 Fibre Channel component topology

Figure 3.3 Physical storage array architecture

Figure 3.4 Fibre Channel address mechanism

Figure 3.5 Fibre Channel port naming

Figure 3.6 WWN device addressing

Figure 3.7 World Wide Name (WWN) device addressing

Figure 3.8 SAN management topology

Figure 3.9 Point-to-point (FC-P2P) topology

Figure 3.10 Arbitrated loop (FC-AL) connectivity

Figure 3.11 Switched fabric (FC-SW) connectivity

Figure 3.12 Single-core, core-edge fabric topology

Figure 3.13 Dual-core, core-edge fabric topology

Figure 3.14 Edge-core-edge, dual-core, fabric topology

Figure 3.15 Full mesh topology

Figure 3.16 Partial mesh topology

Figure 3.17 Fabric zoning

Figure 3.18 Zoning / zone set

Figure 3.19 Virtual Fabric architecture example

Figure 3.20 Virtual Fabric sample use case

Figure 3.21 N_Port Virtualization (NPV) and N_Port ID Virtualization (NPIV)

Figure 3.22 NPV and NPIV use cases

Figure 3.23 Boot from SAN example

Figure 3.24 iSCSI protocol component architecture

Figure 3.25 Jumbo frames data path configuration

Figure 3.26 iSCSI Qualified Name (IQN) structure

Figure 3.27 iSCSI off-load adapter comparison

Figure 3.28 Network I/O Control design example

Figure 3.29 Single virtual switch iSCSI design

Figure 3.30 Multiple virtual switch iSCSI design

Figure 3.31 Aggregated switch IP SAN design example

Figure 3.32 NAS network clients

Figure 3.33 Unified NAS system architecture example

Figure 3.34 Gateway NAS system architecture example

Figure 3.35 NFS export stack

Figure 3.36 Single virtual switch / single network design example

Figure 3.37 Single virtual switch / multiple network design example

Figure 3.38 Fibre Channel over Ethernet converged protocol

Figure 3.39 Fibre Channel over Ethernet frame

Figure 3.40 Converged network adapter (CNA)

Figure 3.41 Fibre Channel over Ethernet switch architecture

Figure 3.42 FCoE infrastructure example (Cisco UCS Blade system)

Figure 3.43 Edge Fibre Channel over Ethernet design

Figure 3.44 End-to-End Fibre Channel over Ethernet design

Figure 3.45 Fibre Channel multipathing example configuration

Figure 3.46 Active/passive disk arrays

Figure 3.47 ALUA-capable array path

Figure 3.48 vSphere Pluggable Storage Architecture

Figure 3.49 Native and third-party multipathing plug-ins

Figure 3.50 iSCSI storage multipathing failover and load balancing

Figure 3.51 NFS version 3 configuration example

Figure 3.52 NFS version 4.1 configuration example

Figure 3.53 Direct-attached storage model at ROBO site

Figure 3.54 Lenovo’s Flex SEN with x240 Blade Series

Figure 3.55 Storage protocol design factors

Chapter 4: Policy-Driven Storage Design with Virtual SAN

Figure 4.1 Software-defined enterprise storage

Figure 4.2 Disk group configuration

Figure 4.3 Virtual SAN hybrid disk group configuration

Figure 4.4 Virtual SAN all-flash disk group configuration

Figure 4.5 Disk group configuration example

Figure 4.6 Anatomy of a hybrid solution read, write, and destaging operation

Figure 4.7 Anatomy of an all-flash solution read, write, and destaging operation

Figure 4.8 Deduplication and compression web client configuration

Figure 4.9 Deduplication mechanism

Figure 4.10 Virtual SAN distributed datastore

Figure 4.11 Multiple virtual SAN datastore design

Figure 4.12 Virtual SAN disk components

Figure 4.13 Witness metadata failure scenario

Figure 4.14 Software checksum web client configuration

Figure 4.15 Virtual SAN configuration with PCIe-based flash devices

Figure 4.16 Geometry of a mechanical disk

Figure 4.17 Tiered workload virtual SAN clusters

Figure 4.18 Virtual SAN logical network design

Figure 4.19 Network I/O Control

Figure 4.20 The core, aggregation, and access network model

Figure 4.21 Leaf-spine network model

Figure 4.22 Virtual SAN optimum rack design

Figure 4.23 Leaf-spine network oversubscription

Figure 4.24 Storage policy–based management framework via the vSphere web client

Figure 4.25 Virtual SAN storage policy object provisioning mechanism

Figure 4.26 Storage profile rule sets

Figure 4.27 Number of failures to tolerate component distribution

Figure 4.28 RAID 5 erasure coding

Figure 4.29 RAID 6 erasure coding

Figure 4.30 Erasure coding web client configuration

Figure 4.31 The Number of Disk Stripes per Object component distribution

Figure 4.32 Object space reservation capability

Figure 4.33 Flash read cache reservation capability

Figure 4.34 Virtual machine compliance status

Figure 4.35 Force provisioning capability

Figure 4.36 Quality of service (QoS) use case

Figure 4.37 Storage policy–based management quality of service rule

Figure 4.38 Storage capabilities and recommended practices

Figure 4.39 I/O blender effect

Figure 4.40 Multiple disk group building-block configuration

Figure 4.41 Virtual SAN total cost of ownership (TCO) and sizing calculator

Figure 4.42 Virtual SAN availability by design

Figure 4.43 Rebalance operations

Figure 4.44 Calculating vSphere HA admission control policy and the number of failures to tolerate capability

Figure 4.45 vSphere high availability network communication

Figure 4.46 Virtual SAN network partition scenario

Figure 4.47 Virtual SAN maintenance mode evacuation options

Figure 4.48 Quorum logic failure scenario

Figure 4.49 Virtual SAN 1 object placement

Figure 4.50 Virtual SAN 6 object placement (fault domain–enabled environment)

Figure 4.51 Fault domain design

Figure 4.52 Fault domain sample architecture

Figure 4.53 Virtual SAN internal component technologies and driver architecture

Figure 4.54 Distributed Object Manager object mirror I/O path

Chapter 5: Virtual SAN Stretched Cluster Design

Figure 5.1 Virtual SAN stretched cluster

Figure 5.2 Stretched cluster fault domain architecture

Figure 5.3 Layer 2 extension

Figure 5.4 Virtual SAN stretched cluster overview

Figure 5.5 Stretched cluster optimal layer 2 and layer 3 configurations

Figure 5.6 Anatomy of stretched cluster local read operation

Figure 5.7 Anatomy of stretched cluster write operation

Figure 5.8 Stretched cluster vSphere DRS affinity rule configuration

Figure 5.9 Configuring a DRS affinity rule set for a Virtual SAN stretched cluster

Figure 5.10 Admission control policy configuration

Figure 5.11 Stretched Cluster host isolation advanced settings

Figure 5.12 Dark fiber interconnect

Figure 5.13 Dense wave division multiplexing (DWDM)

Figure 5.14 SONET or SDH

Figure 5.15 Multiprotocol Label Switching (MPLS)

Figure 5.16 Stretched VLANs

Figure 5.17 Stretched VLANs over dark fiber

Figure 5.18 Stretched VLANs over MPLS

Figure 5.19 Stretched VLANs over L2TP version 3

Figure 5.20 Use case example logical architecture

Figure 5.21 Physical architecture overview

Figure 5.22 Cisco vPC domain

Figure 5.23 OTV deployment over DWDM and dark fiber

Chapter 6: Designing for Web-Scale Virtual SAN Platforms

Figure 6.1 Disk group scale-up strategy (adding capacity disks)

Figure 6.2 Disk group scale-up strategy (adding disk groups)

Figure 6.3 Virtual SAN–enabled vSphere cluster scaled up and out to eight hosts

Figure 6.4 Web-scale pod logical architecture

Figure 6.5 Web-scale pod scale-out data-center strategy

Figure 6.6 Web-scale leaf-spine architecture

Chapter 7: Virtual SAN Use Case Library

Figure 7.1 Virtual SAN use cases overview

Figure 7.2 Virtual SAN island cluster design

Figure 7.3 Disaster-recovery solution architecture example

Figure 7.4 Isolated edge cluster design in an NSX implementation

Figure 7.5 Remote office / branch office fault domain architecture

Figure 7.6 Two-node ROBO solution architecture overview

Figure 7.7 Witness object metadata architecture

Figure 7.8 Virtual SAN and VDI architecture

Figure 7.9 Using Virtual SAN as a generic object storage platform

Figure 7.10 Architectural overview of enterprise cloud management cluster

Figure 7.11 Virtual SAN with Cisco UCS environment physical connectivity details

Figure 7.12 Percentage-based admission control

Figure 7.13 Network I/O Control

Figure 7.14 High-level physical network design

Figure 7.15 Virtual SAN Storage Configuration

Figure 7.16 Virtual SAN hybrid disk group configuration

Figure 7.17 vCenter Server migration option

Figure 7.18 vCenter Server bootstrap option

Chapter 8: Policy-Driven Storage Design with Virtual Volumes

Figure 8.1 Next-generation storage model

Figure 8.2 Comparing the classic storage architecture with Virtual Volumes

Figure 8.3 vSphere Virtual Volumes component architecture

Figure 8.4 VASA control path

Figure 8.5 Storage container architecture

Figure 8.6 Storage container provisioning process

Figure 8.7 Protocol endpoint architecture

Figure 8.8 Protocol endpoint provisioning process

Figure 8.9 Binding operations

Figure 8.10 Common management platform for policy-driven storage

Figure 8.11 Storage policy example

Figure 8.12 Storage policy–driven cloud platform

Chapter 9: Delivering a Storage-as-a-Service Design

Figure 9.1 Manual storage provisioning process

Figure 9.2 Complex storage provisioning process

Figure 9.3 Example of a storage-as-a-service request workflow

Figure 9.4 vRealize Automation storage service catalog example

Figure 9.5 IT optimization computing components, delivered as a service

Figure 9.6 Common cloud computing services

Figure 9.7 Hybrid cloud platform

Figure 9.8 STaaS cloud software stack

Figure 9.9 vRealize Automation services

Figure 9.10 Advanced Service Design capability examples

Figure 9.11 Advanced Service Designer workflow example

Figure 9.12 Example of a workflow's logical configuration

Figure 9.13 STaaS NAS form design

Figure 9.14 STaaS access rights modification

Chapter 10: Monitoring and Storage Operations Design

Figure 10.1 Storage monitoring challenges

Figure 10.2 SMI-S design and specification

Figure 10.3 Target solution for storage and platform monitoring

Figure 10.4 Virtual SAN ESXCLI namespace options

Figure 10.5 Virtual SAN RVC namespace options

Figure 10.6 VSAN Observer user interface

Figure 10.7 Performance Service status and policy configuration

Figure 10.8 Performance Service monitoring and reporting

Figure 10.9 Virtual SAN Health Service feature

Figure 10.10 vRealize Operations Manager logical design

Figure 10.11 Management Pack for Storage Devices dashboard view

Figure 10.12 Overview of vRealize Operations Manager integrated solution

Figure 10.13 Feature comparison—MPSD and storage vendor management packs

Figure 10.14 Syslog message structure

Figure 10.15 Design scenario

Figure 10.16 End-to-end monitoring

Figure 10.17 Capacity and performance management process

Figure 10.18 EMC Symmetrix VMAX layout and expansion

Figure 10.19 Virtual SAN elastic scaling of capacity and performance

List of Tables

Chapter 1: Software-Defined Storage Design

Table 1.1 Requirements gathering

Chapter 2: Classic Storage Models and Constructs

Table 2.1 Typical average I/O per second (per physical disk)

Table 2.2 RAID I/O penalty impact

Table 2.3 RAID 0—striped disk array without fault tolerance

Table 2.4 RAID 1—disk mirroring and duplexing

Table 2.5 RAID 1+0—mirroring and striping

Table 2.6 RAID 3—parallel transfer with dedicated parity disk

Table 2.7 RAID 5—independent data disks with distributed parity blocks

Table 2.8 RAID 6—independent data disks with two independent parity schemes

Table 2.9 Thick-provisioning example

Table 2.10 Virtual provisioning design considerations

Table 2.11 Design factors of virtual provisioning

Table 2.12 Advantages and drawbacks of automated storage tiering

Table 2.13 Capacity scalability of building-block architecture example

Table 2.14 Storage scalability design factors

Table 2.15 Multivendor SAN environment operational challenges

Table 2.16 Multitenanted storage design

Table 2.17 Virtual machine component files

Table 2.18 Advantages and drawbacks of lazy zeroed thick disks

Table 2.19 Advantages and drawbacks of eager zeroed thick disks

Table 2.20 Advantages and drawbacks of thin disks

Table 2.21 Making LUN sizing decisions

Table 2.22 Tiered Storage I/O Control latency values example

Table 2.23 Storage tiering design factors

Chapter 3: Fabric Connectivity and Storage I/O Architecture

Table 3.1 Fibre Channel Protocol layers

Table 3.2 Fabric services

Table 3.3 SAN security options

Table 3.4 iSCSI Qualified Name (IQN) structure

Table 3.5 CHAP security levels

Table 3.6 Sample Network I/O Control policy

Table 3.7 Storage protocol comparison

Table 3.8 NFS advanced host configuration

Table 3.9 Design example vmnic configuration

Table 3.10 Fibre Channel over Ethernet distance limitations

Table 3.11 Data center bridging attributes

Table 3.12 Pluggable Storage Architecture (PSA) third-party plug-in categories

Chapter 4: Policy-Driven Storage Design with Virtual SAN

Table 4.1 Virtual SAN major releases

Table 4.2 Virtual SAN object types

Table 4.3 On-disk file format version history and support configuration

Table 4.4 Virtual SAN logs and descriptions

Table 4.5 Virtual SAN trace file location

Table 4.6 Interfaces supporting solid-state drives

Table 4.7 SSD endurance classes and Virtual SAN tier classes

Table 4.8 Virtual SAN mechanical disk characteristics and rotational speeds

Table 4.9 Virtual SAN 6.2 feature licensing

Table 4.10 Virtual SAN network teaming

Table 4.11 Sample Virtual SAN cluster Network I/O Control policy

Table 4.12 Virtual SAN firewall port requirements

Table 4.13 Example Virtual SAN rule set

Table 4.14 The number of failures to tolerate capability host requirements

Table 4.15 RAID 1 capacity and configuration requirements

Table 4.16 Erasure coding capacity and configuration requirements

Table 4.17 Default storage policy values

Table 4.18 Example application uptime requirements

Table 4.19 Object policy defaults

Table 4.20 Flash capacity sizing example

Table 4.21 Virtual SAN object types

Table 4.22 Sizing factor values

Table 4.23 Design scenario customer requirements

Table 4.24 Design scenario additional storage factors

Table 4.25 Customer compute and storage requirements summary

Table 4.26 vSphere HA operational comparison

Table 4.27 Example Virtual SAN HA and DRS parameters

Table 4.28 Fault domain sample architecture

Table 4.29 Integrated and interoperable vSphere storage features

Table 4.30 Irrelevant, unviable, or unsupported vSphere storage features

Chapter 5: Virtual SAN Stretched Cluster Design

Table 5.1 Witness appliance sizing configuration options

Table 5.2 Virtual SAN stretched cluster layer 2 and layer 3 network requirements

Table 5.3 Network bandwidth and latency requirements

Table 5.4 Distance and estimated link latency

Table 5.5 Sample vSphere HA configuration for a Virtual SAN stretched cluster

Table 5.6 Design factors for extending VLANs across fiber-based data-center interconnects

Table 5.7 Data-center interconnect key design factors

Table 5.8 Data-center interconnect summary

Table 5.9 Virtual SAN stretched cluster failure scenarios

Chapter 6: Designing for Web-Scale Virtual SAN Platforms

Table 6.1 Example of capacity scalability of building-block web-scale architecture

Table 6.2 Other Virtual SAN 6.0, 6.1, or 6.2 maximums

Chapter 7: Virtual SAN Use Case Library

Table 7.1 ESXi host hardware specifications

Table 7.2 Host resources

Table 7.3 vSphere HA example design values

Table 7.4 vSphere DRS example design values

Table 7.5 Anti-affinity rule guidelines for cloud management cluster applications

Table 7.6 vSphere Distributed Switch configuration

Table 7.7 Example CMP Network I/O Control policy

Table 7.8 Cloud management platform virtual machine requirements

Table 7.9 Example design storage policy specification

Table 7.10 Cloud platform virtual machine security baseline

Table 7.11 Cisco C-Series hardening baseline

Table 7.12 Cisco Nexus 5548UP hardening baseline

Chapter 8: Policy-Driven Storage Design with Virtual Volumes

Table 8.1 vSphere operational priorities

Table 8.2 Virtual Volumes object types

Table 8.3 Comparison of storage container and classic Volumes/LUNs

Executive Editor: Jody Lefevere

Development Editor: David Clark

Technical Editor: Ray Heffer

Production Editor: Barath Kumar Rajasekaran

Copy Editor: Sharon Wilkey

Editorial Manager: Mary Beth Wakefield

Production Manager: Kathleen Wisor

Proofreader: Nancy Bell

Indexer: Nancy Guenther

Project Coordinator, Cover: Brent Savage

Cover Designer: Wiley

Cover Image: ©Mikhail hoboton Popov/Shutterstock

Copyright © 2016 by John Wiley & Sons, Inc., Indianapolis, Indiana

Published simultaneously in Canada

ISBN: 978-1-119-29277-7

ISBN: 978-1-119-29279-1 (ebk.)

ISBN: 978-1-119-29278-4 (ebk.)

No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Sections 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 646-8600. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, or online at http://www.wiley.com/go/permissions.

Limit of Liability/Disclaimer of Warranty: The publisher and the author make no representations or warranties with respect to the accuracy or completeness of the contents of this work and specifically disclaim all warranties, including without limitation warranties of fitness for a particular purpose. No warranty may be created or extended by sales or promotional materials. The advice and strategies contained herein may not be suitable for every situation. This work is sold with the understanding that the publisher is not engaged in rendering legal, accounting, or other professional services. If professional assistance is required, the services of a competent professional person should be sought. Neither the publisher nor the author shall be liable for damages arising herefrom. The fact that an organization or website is referred to in this work as a citation and/or a potential source of further information does not mean that the author or the publisher endorses the information the organization or website may provide or recommendations it may make. Further, readers should be aware that Internet websites listed in this work may have changed or disappeared between when this work was written and when it is read.

For general information on our other products and services or to obtain technical support, please contact our Customer Care Department within the U.S. at (877) 762-2974, outside the U.S. at (317) 572-3993 or fax (317) 572-4002.

Wiley publishes in a variety of print and electronic formats and by print-on-demand. Some material included with standard print versions of this book may not be included in e-books or in print-on-demand. If this book refers to media such as a CD or DVD that is not included in the version you purchased, you may download this material at http://booksupport.wiley.com. For more information about Wiley products, visit www.wiley.com.

Library of Congress Control Number: 2016944021

TRADEMARKS: Wiley, the Wiley logo, and the Sybex logo are trademarks or registered trademarks of John Wiley & Sons, Inc. and/or its affiliates, in the United States and other countries, and may not be used without written permission. VMware is a registered trademark of VMware, Inc. All other trademarks are the property of their respective owners. John Wiley & Sons, Inc. is not associated with any product or vendor mentioned in this book.

About the Author

Martin Hosken is employed as a global cloud architect within the VMware Global Cloud Practice, which is part of its Cloud Provider Software Business Unit.

He has extensive experience architecting and consulting with international customers and designing the transition of organizations' legacy infrastructure onto VMware cloud-based platforms. His broad and deep knowledge of physical and virtualized services, platforms, and cloud infrastructure solutions is based on involvement and leadership in the global architecture, design, development, and implementation of large-scale, complex, multitechnology projects for enterprises and cloud service providers. He is a specialist in designing, implementing, and integrating best-of-breed, fully redundant Cisco, EMC, IBM, HP, Dell, and VMware systems into enterprise environments and cloud service providers' infrastructure.

In addition, Martin is a double VMware Certified Design Expert (VCDX #117) in Data Center Virtualization and Cloud Management and Automation. (See the Official VCDX directory available at http://vcdx.vmware.com.) Martin also holds a range of industry certifications from other vendors such as EMC, Cisco, and Microsoft, including MCITP and MCSE in Windows Server and Messaging.

He has been awarded the annual VMware vExpert title for a number of years for his significant contribution to the community of VMware users. (See the VMware Community vExpert Directory available at https://communities.vmware.com/vexpert.jspa.) This title is awarded to individuals for their commitment to the sharing of knowledge and their passion for VMware technology beyond their job requirements. Martin is also a part of the CTO Ambassador Program, and as such is responsible for connecting the R&D team at VMware with customers, partners, and field employees.

Follow Martin on Twitter: @hoskenm.

About the Technical Reviewer

Ray Heffer is employed as a global cloud architect for VMware's Cloud Provider Software Business Unit. He is also a double VCDX #122 (Desktop and Datacenter). In his previous roles with End User Computing (EUC), Technical Marketing, and Professional Services at VMware, he has led many large-scale platform designs for service providers, manufacturing, and government organizations.

Since 1997, Ray has specialized in administering, designing, and implementing solutions spanning Microsoft Exchange, Linux, Citrix, and VMware. He deployed his first VMware environment in 2004 while working at a hosting company in the United Kingdom.

Ray is also a regular presenter at VMworld and VMUG events, covering topics such as Linux desktops and VMware Horizon design best practices.

Foreword by Duncan Epping

I had just completed the final chapter of the Virtual SAN book I was working on when Martin reached out and asked if I wanted to write a foreword for his book. You can imagine I was surprised to find out that there was another person writing a book on software-defined storage, and pleasantly surprised to find out that VSAN is one of the major topics in this book. Not just surprised, but also very pleased. The world is changing rapidly, and administrators and architects need guidance along this journey, the journey toward a software-defined data center.

When talking to customers and partners on the subject of the software-defined data center, a couple of concerns typically arise. Two parts of the data center have historically been challenging and/or problematic—namely, networking and storage. Networking problems and concerns (and those related to security, for that matter) have been largely addressed with VMware NSX, which allows virtualization and networking administrators to work closely together on providing a flexible yet very secure foundation for the workloads they manage. This is done by adding an abstraction layer on top of the physical environment and moving specific services closer to the workloads (for instance, firewalling and routing), where they belong.

Over 30 years ago, RAID was invented, which allowed you to create logical devices formed out of multiple hard disk drives. This allowed for more capacity, higher availability, and of course, depending on the type of RAID used, better performance. It is fair to say, however, that the RAID construct was created as a result of the many constraints at the time. Over time, all of these constraints have been lifted, and the hardware evolution started the (software-defined) storage revolution. SSDs, PCIe-based flash, NVMe, 10GbE, 25GbE (and higher), RDMA, 12 Gbps SAS, and many other technologies allowed storage vendors to innovate again and to make life simpler. No longer do we need to wide-stripe across many disks to meet performance expectations, as that single SSD device can now easily serve 50,000 IOPS. And although some of the abstraction layers, such as traditional RAID or disk groups, may have been removed, most storage systems today are not what I would consider admin/user friendly.

There are different protocols (iSCSI, FCoE, NFS, FC), different storage systems (spindles, hybrid, all flash), and many different data services and capabilities these systems provide. As a result, we cannot simply place an abstraction layer on top as we have done for networking with NSX. We still need to abstract the resources in some shape or form and most definitely present them in a different, simpler manner. Preferably, we leverage a common framework across the different types of solutions, whether that is a hyper-converged software solution like Virtual SAN or a more traditional iSCSI-based storage system with a combination of flash and spindles.

Storage policy–based management is this framework. If there is anything you need to take away from this book, then it is where your journey to software-defined storage should start, and that is the SPBM framework that comes as part of vSphere. SPBM is that abstraction layer that allows you to consume storage resources across many different types of storage (with different protocols) in a simple and uniform way by allowing you to create policies that are passed down to the respective storage system through the VMware APIs for Storage Awareness.

In order to be able to create an infrastructure that caters to the needs of your customers (application owners/users), it is essential that you, the administrator or architect, have a good understanding of all the capabilities of the different storage platforms, the requirements of the application, and how architectural decisions can impact availability, recoverability, and performance of your workloads.

But before you even get there, this book will provide you with a good foundational understanding of storage concepts including thin LUNs, protocols, RAID, and much more. This will be quickly followed by the software-defined storage options available in a VMware-based infrastructure, with a big focus on Virtual Volumes and Virtual SAN.

Many have written on the subject of software-defined storage, but not many are as qualified as Martin. Martin is one of the few folks who have managed to accrue two VCDX certifications, and as a global cloud architect has a wealth of experience in this field. He is going to take you on a journey through the world of software-defined storage in a VMware-based infrastructure and teach you the art of architecture along the way.

I hope you will enjoy reading this book as much as I have.

Duncan Epping
Chief Technologist, Storage and Availability, VMware

Introduction

Storage is typically the most important element of any virtual data center. It is the key component in system performance, availability, scalability, and manageability. It has also traditionally been the most expensive component from a capital and operational cost perspective.

The storage infrastructure must meet not only today's requirements, but also the business needs for years to come, because of the capital expenditure costs historically associated with the hardware. Storage and vSphere architects must therefore make the most informed choices possible, designing solutions that take into account multiple complex and contradictory business requirements, technical goals, forecasted data growth, constraints, and of course, budget.

In order for you to be confident about undertaking a vSphere storage design that can meet the needs of a whole range of business and organization types, you must understand the capabilities of the platform. Designing a solution that can meet the requirements and constraints set out by the customer requires calling on your experience and knowledge, as well as keeping up with advances in the IT industry. A successful design entails collecting information, correlating it into a solid design approach, and understanding the design trade-offs and design decisions.

The primary content of this book addresses various aspects of the VMware vSphere software-defined storage model, which includes separate components. Before you continue reading, you should ensure that you are already well acquainted with the core vSphere products, such as VMware vCenter Server and ESXi, the type 1 hypervisor on which the infrastructure's virtual machines and guest operating systems reside.

It is also assumed that you have a good understanding of shared storage technologies and networking, along with the wider infrastructure required to support the virtual environment, such as physical switches, firewalls, server hardware, array hardware, and the protocols associated with this type of equipment, which include, but are not limited to, Fibre Channel, iSCSI, NFS, Ethernet, and FCoE.

Who Should Read This Book?

This book will be most useful to infrastructure architects and consultants involved in designing new vSphere environments, and administrators charged with maintaining existing vSphere deployments who want to further optimize their infrastructure or gain additional knowledge about storage design. In addition, this book will be helpful for anyone with a VCA, VCP, or a good foundational knowledge who wants an in-depth understanding of the design process for new vSphere storage architectures. Prospective VCAP, VCIX, or VCDX candidates who already have a range of vSphere expertise but are searching for that extra bit of detailed knowledge will also benefit.

What Is Covered in This Book?

VMware-based storage infrastructure has changed a lot in recent years, with new technologies and new storage vendors stepping all over the established industry giants, such as EMC, IBM, and NetApp. However, life-cycle management of the storage platform remains an ongoing challenge for enterprise IT organizations and service providers, with hardware renewals occurring on an ongoing basis across much of VMware's global customer base.

This book aims to help vSphere architects, storage architects, and administrators alike understand and design for this new generation of VMware-focused software-defined storage, and to drive efficiency through simple, less complex technologies that do not require large numbers of highly trained storage administrators to maintain.

In addition, this book aims to help you understand the design factors associated with these new vSphere storage options. You will see how VMware is addressing these data-center challenges through its software-defined storage offerings, Virtual SAN and Virtual Volumes, as well as developing cloud automation approaches to these next-generation storage solutions to further simplify operations.

This book offers you deep knowledge and understanding of these new storage solutions by

Providing unique insight into Virtual SAN and Virtual Volumes storage technologies and design

Providing a detailed knowledge transfer of these technologies and an understanding of the design factors associated with the architecture of this next generation of VMware-based storage platform

Providing guidance on delivering storage as a service (STaaS), enabling enterprise IT organizations and service providers to deploy and maintain storage resources via a fully automated cloud platform

Providing detailed and unique guidance in the design and implementation of a stretched Virtual SAN architecture, including an example solution

Providing a detailed knowledge transfer of legacy storage and protocol concepts, in order to help provide context to the VMware software-defined storage model

Finally, in writing this book, I hope to help you understand all of the design factors associated with these new vSphere storage options, and to provide a complete guide for solution architects and operational teams to maximize quality storage design for this new generation of technologies.

The following provides a brief summary of the content in each of the 10 chapters:

Chapter 1

: Software-Defined Storage Design

This chapter provides an overview of where vSphere storage technology is today, and how we've reached this point. This chapter also introduces software-defined storage, the economics of storage resources, and enabling storage as a service.

Chapter 2

: Classic Storage Models and Constructs

This chapter covers the legacy and classic storage technologies that have been used in the VMware infrastructure for the last decade. This chapter provides the background required for you to understand the focus of this book, VMware vSphere's next-generation storage technology design.

Chapter 3

: Fabric Connectivity and Storage I/O Architecture

This chapter presents storage connectivity and fabric architecture, which is relevant for legacy storage technologies as well as next-generation solutions including Virtual Volumes.

Chapter 4

: Policy-Driven Storage Design with Virtual SAN

This chapter addresses all of the design considerations associated with VMware's Virtual SAN storage technology. The chapter provides detailed coverage of Virtual SAN functionality, design factors, and architectural considerations.

Chapter 5

: Virtual SAN Stretched Cluster Design

This chapter focuses on one type of Virtual SAN solution, stretched cluster design. This type of solution has specific design and implementation considerations that are addressed in depth. This chapter also provides an example Virtual SAN stretched architecture design as a reference.

Chapter 6

: Designing for Web-Scale Virtual SAN Platforms

This chapter addresses specific considerations associated with large-scale deployments of Virtual SAN hyper-converged infrastructure, commonly referred to as web-scale.

Chapter 7

: Virtual SAN Use Case Library

This chapter provides an overview of Virtual SAN use cases. It also provides a detailed solution architecture for a cloud management platform that you can use as a reference.

Chapter 8

: Policy-Driven Storage Design with Virtual Volumes

This chapter provides detailed coverage of VMware's Virtual Volumes technology and its associated policy-driven storage concepts. It also provides a low-level knowledge transfer, as well as addressing in detail the design factors and architectural concepts associated with implementing Virtual Volumes.

Chapter 9

: Delivering a Storage-as-a-Service Design

This chapter explains how IT organizations and service providers can design and deliver storage as a service in a cloud-enabled data center by using VMware's cloud management platform technologies.

Chapter 10

: Monitoring and Storage Operations Design

To ensure that a storage design can deliver an operationally efficient storage platform end to end, this final chapter covers storage monitoring and alerting design in the software-defined storage data center.

Chapter 1Software-Defined Storage Design

VMware is the global leader in providing virtualization solutions. The VMware ESXi software provides a hypervisor platform that abstracts CPU, memory, and storage resources to run multiple virtual machines concurrently on the same physical server.

To successfully design a virtual infrastructure, other products are required in addition to the hypervisor, in order to manage, monitor, automate, and secure the environment. Fortunately, VMware also provides many of the products required to design an end-to-end solution, and to develop an infrastructure that is software driven, as opposed to hardware driven. This is commonly described as the software-defined data center (SDDC), illustrated in Figure 1.1.

Figure 1.1 Software-defined data center conceptual model

The SDDC is not a single product sold by VMware or anyone else. It is an approach whereby management and orchestration tools are configured to manage, monitor, and operationalize the entire infrastructure. This might include products such as vSphere, NSX, vRealize Automation, vRealize Operations Manager, and Virtual SAN from VMware, but it could also include solutions such as VMware Integrated OpenStack, CloudStack, or any custom cloud-management solution that can deliver the required platform management and orchestration capabilities.

The primary aim of the SDDC is to decouple the infrastructure from its underlying hardware, in order to allow software to take advantage of the physical network, server, and storage. This makes the SDDC location-independent, and as such, it may be housed in a single physical data center, span multiple private data centers, or even extend into hybrid and public cloud facilities.

From the end user’s perspective, applications that are delivered from an SDDC are consumed in exactly the same way as they otherwise would be—through mobile, desktop, and virtual desktop interfaces—from anywhere, any time, with any device.

However, with the SDDC infrastructure decoupled from the physical hardware, the operational model of a virtual machine—with on-demand provisioning, isolation, mobility, speed, and agility—can be replicated for the entire data-center environment (including networking and storage), with complete visibility, security, and scale.

The overall aim is that an SDDC can be achieved with the customer’s existing physical infrastructure, and also provide the flexibility for added capacity and new deployments.

Software-Defined Compute

In this book, software-defined compute refers to the compute virtualization of the x86 architecture. What is virtualization? If you don’t know the answer to this question, you’re probably reading the wrong book, but in any case, let’s make sure we’re on the same page.

In the IT industry, the term virtualization can refer to various technologies. However, from a VMware perspective, virtualization is the technique used for abstracting the physical hardware away from the operating system. This technique allows multiple guest operating systems (logical servers or desktops) to run concurrently on a single physical server. This allows these logical servers to become a portable virtual compute resource, called virtual machines. Each virtual machine runs its own guest operating system and applications in an isolated manner.

Compute virtualization is achieved by a hypervisor layer, which exists between the hardware of the physical server and the virtual machines. The hypervisor is used to provide hardware resources, such as CPU, memory, and network to all the virtual machines running on that physical host. A physical server can run numerous virtual machines, depending on the hardware resources available.

Although a virtual machine is a logical entity, to its operating system and end users it seems like a physical host with its own CPU, memory, network controller, and disks. In reality, all virtual machines running on a host share the same underlying physical hardware, with each taking its own share in an isolated manner. From the hypervisor's perspective, each virtual machine is simply a discrete set of files, which include a configuration file, virtual disk files, log files, and so on.
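To make the "discrete set of files" point concrete, the short Python sketch below models the typical on-datastore files of a virtual machine. The base name vm01 is hypothetical; the file extensions and their roles are standard vSphere artifacts.

# Illustrative mapping of common virtual machine files to their roles.
# The base name "vm01" is hypothetical; the extensions are standard.
vm_files = {
    "vm01.vmx": "virtual machine configuration file",
    "vm01.vmdk": "virtual disk descriptor file",
    "vm01-flat.vmdk": "virtual disk data file (the actual blocks)",
    "vm01.nvram": "BIOS/EFI state file",
    "vm01.vmsd": "snapshot metadata file",
    "vmware.log": "virtual machine log file",
}

for file_name, role in vm_files.items():
    print(f"{file_name:<16} {role}")

Because the virtual machine is nothing more than this set of files, it can be copied, snapshotted, and moved between hosts, which is precisely what makes it a portable compute resource.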

It is VMware’s ESXi software that provides the hypervisor platform, which is designed from the ground up to run multiple virtual machines concurrently, on the same physical server hardware.

Software-Defined Networking

Traditional physical network architectures can no longer scale sufficiently to meet the requirements of large enterprises and cloud service providers, not least because the daily operational management of networks is typically the most time-consuming aspect of provisioning new virtual workloads. Software-defined networking helps to overcome this problem by virtualizing the network, allowing network administrators to manage network services through abstracted, higher-level functionality.

As with all of the components that make up the SDDC model, the primary aim is to provide a simplified and more efficient mechanism for operationalizing the virtual data-center platform. Through the use of software-defined networking, the majority of the provisioning and configuration of individual network components can instead be performed programmatically, in a virtualized network environment. This approach allows network administrators to get around the inflexibility of having to pre-provision and configure physical networks, which has proved to be a major constraint on the development of cloud platforms.

In a software-defined networking architecture, the control and data planes are decoupled from one another, and the underlying physical network infrastructure is abstracted from the applications. As a result, enterprises and cloud service providers obtain unprecedented programmability, automation, and network control. This enables them to build highly scalable, flexible networks with cloud agility, which can easily adapt to changing business needs by

Providing centralized management and control of networking devices from multiple vendors.

Improving automation and management agility by employing common application program interfaces (APIs) to abstract the underlying networking from the orchestration and provisioning processes, without the need to configure individual devices.

Increasing network reliability and security as a result of centralized and automated management of network devices, which provides a unified security policy enforcement model and, in turn, reduces configuration errors.

Providing more-granular network control, with the ability to apply a wide range of policies at the session, user, device, or application level.

NSX is VMware’s software-defined networking platform, which enables this approach to be taken through an integrated stack of technologies. These include the NSX Controller, NSX vSwitch, NSX API, vCenter Server, and NSX Manager. By using these components, NSX can create layer 2 logical switches, which are associated with logical routers, both north/south and east/west firewalling, load balancers, security policies, VPNs, and much more.
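As a hedged illustration of what provisioning a network service programmatically can look like, the following Python sketch requests a logical layer 2 switch from a REST-style controller. The endpoint URL, payload fields, and token are hypothetical placeholders, not the actual NSX API; the real calls are defined in the NSX API documentation.

import requests

# Hypothetical controller endpoint and token; not the real NSX API.
CONTROLLER = "https://sdn-controller.example.com/api/v1"
TOKEN = "replace-with-a-real-token"

def create_logical_switch(name, transport_zone_id):
    """Ask the controller to provision a new logical layer 2 switch."""
    response = requests.post(
        f"{CONTROLLER}/logical-switches",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"display_name": name, "transport_zone_id": transport_zone_id},
    )
    response.raise_for_status()  # surface provisioning failures immediately
    return response.json()

# Example: provision an isolated switch for a new tenant workload.
# switch = create_logical_switch("tenant-a-web", "tz-overlay-01")

The point is not the specific payload, but that a network segment becomes a repeatable API call rather than a manual change to physical switches.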

Software-Defined Storage

Where the data lives! That is the description used by the marketing department of a large financial services organization that I worked at several years ago. The marketing team regularly used this term in an endearing way when trying to describe the business-critical storage systems that maintained customer data, its availability, performance level, and compliance status.

Since then, we have seen a monumental shift in the technologies available to vSphere for virtual machine and application storage, with more and more storage vendors trying to catch up, and for some, steam ahead. The way modern data centers operate to store data has been changing, and this is set to continue over the coming years with the continuing shift toward the next-generation data center, and what is commonly described as software-defined storage.

VMware has undoubtedly brought about massive change to enterprise IT organizations and service-provider data centers across the world, and has also significantly improved the operational management and fundamental economics of running IT infrastructure. However, as application workloads have become more demanding, storage devices have failed to keep up with IT organizations’ requirements for far more flexibility from their storage solutions, with greater scalability, performance, and availability. These design challenges have become an everyday conversation for operational teams and IT managers.

The primary challenge is that many of the most common storage systems we see in data centers all over the world are based on outdated technology, are complex to manage, and are highly proprietary. This ties organizations into long-term support deals with hardware vendors.

This approach is not how the biggest cloud providers have become so successful at scaling their storage operations. The likes of Amazon, Microsoft, and Google have scaled their cloud storage platforms by trading their traditional storage systems for low-cost commodity hardware and employing powerful software around it to achieve their goals, such as availability, data protection, operational simplification, and performance. With this approach, and through economies of scale, these large public cloud providers have achieved their supremacy at a significantly lower cost than deploying traditional monolithic centralized storage systems. This methodology, known as web-scale, is addressed further in Chapter 6, "Designing for Web-Scale Virtual SAN Platforms (10,000 VMs+)."

The aim of this book is to help you understand the new vSphere storage options, and how VMware is addressing these data-center challenges through its software-defined storage offerings, Virtual SAN and Virtual Volumes. The primary aim of these two next-generation storage solutions is to drive efficiency through simple, less complex technologies that do not require large numbers of highly trained storage administrators to maintain. It is these software-defined data-center concepts that are going to completely transform all aspects of vSphere data-center storage, allowing these hypervisor-driven concepts to bind together the compute, networking, and software-defined storage layers.

The goal of software-defined storage is to separate the physical storage hardware from the logic that determines where the data lives, and what storage services are applied to the virtual machines and data during read and write operations.
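A minimal sketch can help illustrate this separation. The Python below is not a VMware API; it is an invented model in which datastores advertise capabilities, a per-virtual-machine policy states requirements, and a thin software layer decides where the data lives by matching one against the other.

# Illustrative sketch (not a VMware API): software-defined storage
# separates placement logic from hardware. Datastores advertise
# capabilities; a per-VM policy states requirements; software decides
# where the data lives.
from dataclasses import dataclass, field

@dataclass
class Datastore:
    name: str
    capabilities: dict = field(default_factory=dict)

@dataclass
class StoragePolicy:
    name: str
    requirements: dict = field(default_factory=dict)

def place_vm(policy: StoragePolicy, datastores: list) -> Datastore:
    """Return the first datastore whose advertised capabilities
    satisfy every requirement in the policy (exact match, for
    simplicity)."""
    for ds in datastores:
        if all(ds.capabilities.get(k) == v
               for k, v in policy.requirements.items()):
            return ds
    raise LookupError(f"No datastore satisfies policy '{policy.name}'")

datastores = [
    Datastore("bronze-nfs", {"flash": False, "failures_to_tolerate": 1}),
    Datastore("gold-vsan", {"flash": True, "failures_to_tolerate": 2}),
]
policy = StoragePolicy("business-critical",
                       {"flash": True, "failures_to_tolerate": 2})
print(place_vm(policy, datastores).name)  # prints: gold-vsan

Note that the hardware layer in this model is interchangeable: changing where a workload lives means changing its policy, not reconfiguring an array.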

As a result of VMware’s next-generation storage offerings, a storage layer can be achieved that is more flexible and that can easily be adjusted based on changing application requirements. In addition, the aim is to move away from complex proprietary vendor systems, to a virtual data center made up of a coherent data fabric that provides full visibility of each virtual machine through a single management toolset, the so-called single pane of glass. These features, along with lowered costs, automation, and application-centric services, are the primary drivers for enterprise IT organizations and cloud service providers to begin to rethink their entire storage architectural approach.

The next point to address is what software-defined storage isn’t, as it can sometimes be hard to wade through all the marketing hype typically generated by storage vendors. Just because a hardware vendor sells or bundles management software with its products doesn’t make it a software-defined solution. Likewise, a data center full of different storage systems from a multitude of vendors, managed by a single common software platform, does not equate to a software-defined storage solution. Because each of the underlying storage systems still has its legacy constructs, such as disk pools and LUNs, this is referred to as a federated storage solution, not software-defined storage. These two approaches are sometimes confused by storage vendors, as, understandably, manufacturers always want to use the latest buzzwords in their marketing material.

Despite everything that has been said up to now, software-defined storage isn’t just about software. At some point, you have to consider the underlying disk system that provides the storage capacity and performance. If you go out and purchase a batch of used 5,400 RPM hard drives from eBay, you can’t then expect flash-like performance just because you’ve put a smart layer of software on top of them.

Designing VMware Storage Environments

Gathering requirements and documenting driving factors is a key objective for you, the architect. Understanding the customer’s business objectives, challenges, and requirements should always be the first task you undertake, before any design can be produced. From this activity, you can translate the outcomes into design factors, requirements, constraints, risks, and assumptions, which are all critical to the success of the vSphere storage design.
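As a simple illustration, the hypothetical Python sketch below shows one way an architect might record these outcomes in a structured form. The field names follow the categories just described; the example entries are invented.

# Hypothetical sketch: recording requirements-gathering outcomes as
# structured design factors. Field names mirror the categories above.
from dataclasses import dataclass, field

@dataclass
class DesignFactors:
    requirements: list = field(default_factory=list)  # what the design must achieve
    constraints: list = field(default_factory=list)   # limits imposed on the design
    risks: list = field(default_factory=list)         # threats to meeting requirements
    assumptions: list = field(default_factory=list)   # unverified statements taken as true

factors = DesignFactors(
    requirements=["Sustain 20% annual growth in storage capacity"],
    constraints=["Existing 10GbE network must be reused"],
    risks=["Operational team has no Virtual SAN experience"],
    assumptions=["Workload profile of the legacy cluster is representative"],
)

Capturing the factors in a consistent structure, whatever the tool, makes it far easier to trace every later design decision back to a documented requirement, constraint, risk, or assumption.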

Architects use many approaches and methodologies to provide customers with a meaningful design that meets their current and future needs. Figure 1.2 illustrates one such method, which provides a flexible sequence of activities that can typically fulfill all stages of the design process. However, many organizations have their own approach, which may dictate this process and mandate specific deliverables and project methodologies.

Figure 1.2 Example of a design sequence methodology

Technical Assessment and Requirements Gathering

The first step in any design engagement is discovery: the process of gathering the requirements for the environment in which the vSphere-based storage will be deployed. Many practices are available for gathering requirements, each having value in different customer scenarios. As the architect, you must use the technique best suited to gaining a complete picture from the various stakeholders. This may include one-to-one meetings with IT organizational leaders and sponsors, facilitated sessions or workshops with the team responsible for managing storage operations, and a review of existing documentation. Table 1.1 lists key questions that you need to ask stakeholders and operational teams.

Table 1.1 Requirements gathering

Architect Question                                          | Architectural Objective
What will it be used for?                                   | Focus on applications and systems
Who will be using it?                                       | Users and stakeholders
What is the purpose?                                        | Objectives and goals
What will it do? When? How?                                 | Help create a scenario
What if something goes wrong with it?                       | Availability and recoverability
What quality? How fast? How reliable? How secure? How many? | Scaling, security, and performance

After all design factors and business drivers have been reviewed and analyzed, it is essential to consider how every component integrates into the design, before beginning the qualification effort needed to sort through the available products and determine which solution will meet the customer’s objectives. This integration can be achieved only when factors such as the data architecture, business drivers, application architecture, and candidate technologies are considered together.

The overall aim of all the questions is to quantify the objectives and business goals. For instance, these objectives and goals might include the following:

Performance

Does the organization wish to implement a storage environment capable of handling growth in user numbers and application storage demands, without sacrificing the end-user experience? (A sizing sketch following this list shows one way to quantify such a goal.)

Total Cost of Ownership

Does the organization wish to provide separate business units with a storage environment that provides significant cost relief?

Scalability

Does the organization wish to ensure that the storage infrastructure can scale sustainably to support business continuity and future growth?

Management

Does the organization wish to provide a solution that simplifies the management of storage resources, and therefore requires improved tools to support this new approach?

Business Continuity and Disaster Recovery

Does the organization wish to provide a solution that can facilitate high levels of availability, disaster avoidance, and quick and reliable recovery from incidents?
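As promised in the performance item above, the following sketch shows one way to turn such a goal into numbers. All input values are invented for illustration; in a real engagement, they would come from the current-state analysis and from workload monitoring.

# Hypothetical back-of-the-envelope sizing: turning a performance goal
# ("support user growth without sacrificing experience") into numbers.
current_users = 2_000
projected_growth = 0.30      # 30% user growth over the design horizon
iops_per_user = 5            # measured average front-end IOPS per user
read_ratio = 0.70            # 70% reads / 30% writes
raid_write_penalty = 2       # e.g., mirroring (RAID 1) costs 2 back-end writes

projected_users = int(current_users * (1 + projected_growth))
front_end_iops = projected_users * iops_per_user

# Writes are amplified by the protection scheme, so the back-end
# requirement is higher than the front-end demand.
back_end_iops = (front_end_iops * read_ratio
                 + front_end_iops * (1 - read_ratio) * raid_write_penalty)

print(f"Projected users: {projected_users}")       # 2600
print(f"Front-end IOPS:  {front_end_iops:.0f}")    # 13000
print(f"Back-end IOPS:   {back_end_iops:.0f}")     # 16900, what the disk layer must deliver

Even a rough calculation of this kind converts a vague objective into a measurable target that candidate solutions can later be qualified against.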

In addition to focusing on these goals, you need to collect information relating to the existing infrastructure and any new technical requirements that might exist. These technical requirements will emerge from the business objectives and from the current-state analysis of the environment, and are likely to include the following:

Application classification

Physical and virtual network constraints

Host server options

Virtual machines and workload deployment methodology

Network-attached storage (NAS) systems

Storage area network (SAN) systems

Understanding the customer’s business goals is critical, but what makes it such a challenge is that no two projects are ever the same. Whether it is different hardware, operating systems, maintenance levels, physical or virtual servers, or the number of volumes, the new design must be validated for each component within each customer’s specific infrastructure. In addition, just as every environment is different, no two workloads are the same either. For instance, peak times can vary from site to site and from customer to customer. These individual differentiators must be validated one by one, in order to determine the configuration required to meet the customer’s design objectives.

Establishing Storage Design Factors

Establishing storage design factors is key to any architecture. However, as previously stated, the elements will vary from one engagement to another. Nevertheless, it is important that the design focus on the business drivers and design factors, and not on the product features or latest technology specification from the customer’s preferred storage hardware vendor.