Edge of Intelligence E-Book
Description

The book offers cutting-edge insights and practical applications for Edge AI, making it essential for anyone looking to stay ahead in the rapidly evolving landscape of artificial intelligence and Edge computing.

Edge of Intelligence: Exploring the Frontiers of AI at the Edge examines the transformative potential of edge AI, showcasing how artificial intelligence is being seamlessly integrated with Edge computing to revolutionize various industries. This book offers a comprehensive overview of the latest research, trends, and practical applications of Edge AI, providing readers with valuable insights into how this cutting-edge technology is enhancing efficiency, reducing latency, and enabling real-time decision-making. From optimizing vehicular networks in the era of 6G to the innovative use of AI in crop monitoring and educational technology, this book covers a broad spectrum of topics, making it an essential read for anyone interested in the future of AI and Edge computing.

Featuring contributions from leading experts and researchers, Edge of Intelligence highlights real-world examples and case studies that demonstrate the practical implementation of edge AI in diverse sectors such as smart cities, recruitment, and nano-process optimization. The book also addresses critical issues related to privacy, security, and the fusion of blockchain with edge computing, providing a holistic view of the challenges and opportunities in this rapidly evolving field.

Audience

Engineers, data scientists, IT professionals, researchers, and academics in the fields of artificial intelligence, computer science, and telecommunications, as well as industry professionals in the automotive, agriculture, education, and urban planning sectors.


Page count: 686

Publication year: 2025




Table of Contents

Cover

Table of Contents

Series Page

Title Page

Copyright Page

Preface

1 A Review on Computational Optimization Strategies and Collaborative Techniques of Vehicular Task Offloading in the Era of Internet of Vehicles and 6G

1.1 Introduction

1.2 Computational Optimization Strategies

1.3 Collaborative Techniques

1.4 Security

1.5 Challenges and Future Research Directions

1.6 Conclusion

References

2 A Study on EDGE AI Application in Crop Monitoring

2.1 Introduction

2.2 Crop Monitoring AI Basics

2.3 AI Applications in Crop Monitoring

2.4 Challenges and Possible Future Paths of AI in Crop Monitoring

2.5 Conclusion

References

3 A Survey on Reconfigurable Co-Processors Computing Linear Transformations

3.1 Different Linear Transforms

3.2 Reconfigurable Computing

3.3 Field Programmable Gate Array

3.4 Survey of Existing Work

3.5 Performance Comparison of Different Reconfigurable Co-Processors Implementing Linear Transformation(s)

3.6 Conclusions and Future Work

References

4 Conversational AI Model for Effective Responses with Augmented Retrieval (CAMERA) Based Chatbot on NVIDIA Jetson Nano

4.1 Introduction

4.2 Background

4.3 Literature Review

4.4 Proposed Framework

4.5 Results

4.6 Conclusion and Future Scope

References

5 Edge Computing in Educational Technology: The Power of Edge AI for Dynamic and Personalized Learning

5.1 Introduction: Unveiling the Potential of Edge AI in Educational Technologies

5.2 Challenges of Traditional Education in the Digital Age

5.3 Edge Computing and AI: Revolutionizing Educational Dynamics

5.4 Enhancing Education Through Video Lecture Summarization: An Exemplary Scenario

5.5 Benefits of the Edge AI for Learning

5.6 Discussions on Edge AI for Education

5.7 Ethical Considerations in Edge AI for Educational Settings

5.8 Future of Education with Edge AI

5.9 Conclusion

References

6 Edge Computing Revolution: Unleashing Artificial Intelligence Potential in the World of Edge Intelligence

6.1 Introduction

6.2 Definitions

6.3 Concepts and Architecture

6.4 Algorithms for Artificial Intelligence in Edge Computing

6.5 Optimization of Edge Devices Using a Class of Neural Networks

6.6 Bio-Inspired Algorithms for Edge Computing

6.7 Real-Time Intelligence-Based Edge Device

6.8 Conclusion

References

7 Ensuring Privacy and Security in Machine Learning: A Novel Approach to Efficient Data Removal

7.1 Introduction

7.2 Related Works

7.3 Objectives

7.4 System Design

7.5 Experimental Results

7.6 Conclusion and Future Scope

References

8 Federated Learning in Secure Smart City Sensing: Challenges and Opportunities

8.1 Introduction

8.2 Related Work

8.3 Federated Learning-Based Smart Cities Sensing Architecture for IoT-Enabled Smart Cities Sensing

8.4 Open Issues, Related Challenges and Opportunities

8.5 Conclusions

Acknowledgment

References

9 Fusion of Blockchain and Edge Computing for Seamless Convergence

9.1 Introduction to Blockchain and Edge Computing

9.2 Key Components of Blockchain and Edge Integration

9.3 Challenges and Opportunities in Integration

9.4 Security Considerations in a Converged Environment

9.5 Use Cases and Applications

9.6 Benefits of Blockchain and Edge Integration

9.7 Regulatory and Compliance Issues

9.8 Future Trends and Innovations

9.9 Recommendations

9.10 Conclusion

References

10 Industry Adapting the Machine Learning Scenario in Recruitment and Selection of Employees

10.1 Introduction

10.2 Evolution of Machine Learning in Recruitment

10.3 Methodological Insights and Study Contexts

10.4 Ensuring Reliability and Replicability

10.5 Ethical Implications of ML in Hiring

10.6 Addressing Ethical Concerns in Real-World Applications

10.7 Ensuring Data Privacy in ML Models for Hiring

10.8 Areas for Future Research in ML for Hiring

References

11 Machine Learning for Nano Process Optimization

Introduction

Literature Review

Conclusion

References

12 Quantum Computing for Cryptography: An Extensive Survey

12.1 Introduction

12.2 Related Works

12.3 Statistical Analysis

12.4 Comparative Analysis

12.5 Conclusion and Future Scope

References

13 Role of Blockchain Technology in e-HRM in the Era of Artificial Intelligence: Focus on the Indian Market

13.1 Introduction

13.2 Literature Review

13.3 Blockchains for Business and EHRM

13.4 Case Studies

13.5 Integration of Blockchain with Industry 4.0 Technologies in HRM

13.6 Ethical Implications of Implementing Blockchain in HRM

13.7 Conclusion

References

14 Smart City Innovations and IoT as a Frontier of AI at the Edge of Intelligence

Introduction: Smart City Innovations and Internet of Things for Data Analytics

Concept of Smart Cities and the Significance of Data-Driven Decision-Making

Uses of IoT-Enabled Data Analytics in Smart Cities

Future Prospects and Emerging Trends of Smart City Innovations and Internet of Things (IoT) for Data Analytics

Conclusion

References

15 Synergies Unleashed: The Convergence of AI and Edge Computing in Transformative Technologies

15.1 Introduction

15.2 Related Study

15.3 Reduction of Latency

15.4 Bandwidth Efficiency

15.5 Privacy and Security

15.6 Real-Time Decision-Making: Decision Made with an Example

15.7 Distributed Architecture: Decentralized Processing Occurs with an Example

15.8 Edge Computing Use Cases

15.9 Challenges and Advancements

15.10 Future Trends

15.11 Conclusion

References

Index

End User License Agreement

List of Tables

Chapter 1

Table 1.1 Existing surveys in vehicular task offloading.

Table 1.2 Summary of game theory-based task offloading strategies.

Table 1.3 Summary of mathematical optimization approaches of task offloading.

Table 1.4 Summary of custom-tailored approaches of task offloading strategies.

Table 1.5 Summary of value-based DRL methods for task-offloading strategies.

Table 1.6 Summary of policy-based DRL methods for task offloading strategies.

Table 1.7 Summary of MADRL methods for task-offloading strategies.

Table 1.8 SDN in the vehicular network for task offloading.

Table 1.9 UAV in the vehicular network for task offloading.

Table 1.10 Secure task offloading.

Chapter 3

Table 3.1 Comparison table of reconfigurable processor implementing transforma...

Chapter 4

Table 4.1 Literature review of LLMs and conversational AI systems.

Chapter 6

Table 6.1 Comparison of cloud computing and edge computing.

Table 6.2 Various AI techniques with its goals and contributions.

Table 6.3 Summary of AI algorithms and architectures.

Chapter 8

Table 8.1 Federated learning technologies for smart city sensing.

Table 8.2 Federated learning technologies and IoT technologies in smart city s...

Table 8.3 Application security issues and solutions.

Table 8.4 Outlines open issues and related challenges and opportunities.

Chapter 12

Table 12.1 Comparison between traditional and quantum computation [12].

Table 12.2 Relationship between spin and bit values used in quantum cryptograp...

Table 12.3 Comparison of different quantum key distribution (QKD) schemes.

Table 12.4 Evolution in quantum cryptography over the years.

Table 12.5 Comparison of classical cryptography and quantum cryptography.

List of Illustrations

Chapter 1

Figure 1.1 Vehicular network.

Figure 1.2 Task offloading.

Figure 1.3 Taxonomy of the survey.

Figure 1.4 Algorithm-based strategies.

Figure 1.5 DRL-based strategies.

Figure 1.6 SDN-VANET architecture.

Figure 1.7 UAV-VANET architecture.

Figure 1.8 Security techniques in VANET.

Chapter 2

Figure 2.1 AI applications in crop monitoring.

Figure 2.2 Computer vision applications in agriculture.

Figure 2.3 Edge computing in precision agriculture [13].

Chapter 3

Figure 3.1 (a) The dataflow diagram of FFT algorithm [3], (b) The dataflow dia...

Figure 3.2 A discrete wavelet transform system [4].

Figure 3.3 Structure of FPGA

Chapter 4

Figure 4.1 NVIDIA Jetson Nano developer kit.

Figure 4.2 Different steps involved in the development and deployment of a CAM...

Figure 4.3 Logical flow of a chatbot.

Figure 4.4 Main function.

Figure 4.5 Graphical user interface of CAMERA chatbot.

Chapter 5

Figure 5.1 The various challenges faced in traditional education.

Figure 5.2 Difference between edge computing and cloud computing.

Figure 5.3 AI-powered adaptive learning platform deployed at the edge enabling...

Figure 5.4 Activity diagram.

Figure 5.5 Data flow diagram.

Figure 5.6 Flowchart of audio extraction and summarization.

Figure 5.7 Auto-chapter examples.

Figure 5.8 Important content highlighted.

Figure 5.9 Accurate analysis of board contents.

Figure 5.10 Impartus lecture video audio summary.

Figure 5.11 Benefits of the system.

Chapter 6

Figure 6.1 Edge intelligence new possibilities.

Figure 6.2 Paradigm of fog computing.

Figure 6.3 Concept of cloudlet.

Figure 6.4 Three levels of edge computing.

Figure 6.5 Edge intelligence in smart city.

Figure 6.6 Smart portable medical equipment.

Figure 6.7 An example of a smart city’s smart energy management framework.

Figure 6.8 Primitive IoV structure.

Chapter 7

Figure 7.1 SISA Model’s architecture diagram.

Figure 7.2 Distribution aware sharding module diagram.

Figure 7.3 Module design for isolated training module.

Figure 7.4 Module design of aggregative prediction module.

Figure 7.5 Removing request and retraining the model.

Figure 7.6 Techniques vs. metrics.

Figure 7.7 Different methods used for unlearning.

Figure 7.8 No. of shards vs. total accuracy.

Figure 7.9 No. of shards vs. retraining time.

Figure 7.10 No. of shards vs. retraining rate.

Figure 7.11 Retraining rate vs. avg. retraining time.

Figure 7.12 No. of shards vs. avg. accuracy.

Figure 7.13 No. of shards vs. no. of requests vs. retraining rate.

Figure 7.14 No. of shards vs. no. of requests vs. retraining rate.

Chapter 8

Figure 8.1 Convergence of federated smart learning and IoT technologies.

Figure 8.2 Represent of four models of smart city sensing.

Figure 8.3 Illustration of federated learning framework architecture.

Figure 8.4 Demonstrates the applications of smart city sensing.

Figure 8.5 Challenges of federated learning for smart city sensing for IoT tec...

Figure 8.6 Service scenarios of federated learning in smart city.

Chapter 9

Figure 9.1 Blockchain technology.

Figure 9.2 Edge computing architecture.

Figure 9.3 Key component of blockchain and edge computing integration.

Figure 9.4 Key components of edge computing.

Figure 9.5 Integration of blockchain technology with edge computing for the he...

Chapter 11

Figure 11.1 Predictive control of a plasma etch process.

Figure 11.2 Closed loop optimization of nanoparticle synthesis.

Figure 11.3 Nanoimprint lithography.

Figure 11.4 Collective carbon nanotube growth.

Chapter 12

Figure 12.1 Qubits and classical bits comparison.

Figure 12.2 Quantum cryptography using Alice-Bob model.

Figure 12.3 Beam splitter and coin state analogy using photon beam and beam sp...

Figure 12.4 Flow of the paper.

Figure 12.5 Share of organizations with quantum security in 2022, by different...

Figure 12.6 Predicted quantum security market revenue from 2021 to 2030.

Figure 12.7 Quantum Cryptography publications and patents in the past 20 years...

Figure 12.8 Number of reviewed papers published in journals and conferences in...

Figure 12.9 Number of reviewed papers published in journals and conferences fr...

Figure 12.10 Number of reviewed papers published in journals and conferences f...

Figure 12.11 Quantum machine learning model and traditional model comparison.

Figure 12.12 Neural cryptography model.

Chapter 14

Figure 14.1 Smart city data analytics.

Figure 14.2 An overview of waste management.

Chapter 15

Figure 15.1 Overview of converging and the implications of AI and edge computi...

Figure 15.2 Healthcare monitoring device.

Figure 15.3 Smart home automation.

Figure 15.4 Distributed architecture system with decentralized processing.

Figure 15.5 Integration of edge computing within Internet of Things.

Figure 15.6 5G technology integrated with Edge computing.

Figure 15.7 Hybrid cloud-edge architectures.



Scrivener Publishing
100 Cummings Center, Suite 541J
Beverly, MA 01915-6106

Publishers at Scrivener
Martin Scrivener ([email protected])
Phillip Carmical ([email protected])

Edge of Intelligence

Exploring the Frontiers of AI at the Edge

Edited by

Shubham Mahajan

Amity School of Engineering and Technology (ASET), Amity University, Gurugram, Panchgaon, Haryana

Sathyan Munirathinam

ASML Corporation, San Diego, California, USA

and

Pethuru Raj

Reliance Jio Platforms Ltd., Bangalore, India

This edition first published 2025 by John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, USA and Scrivener Publishing LLC, 100 Cummings Center, Suite 541J, Beverly, MA 01915, USA.

© 2025 Scrivener Publishing LLC

For more information about Scrivener publications please visit www.scrivenerpublishing.com.

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, except as permitted by law. Advice on how to obtain permission to reuse material from this title is available at http://www.wiley.com/go/permissions.

Wiley Global Headquarters
111 River Street, Hoboken, NJ 07030, USA

For details of our global editorial offices, customer services, and more information about Wiley products visit us at www.wiley.com.

Limit of Liability/Disclaimer of Warranty

While the publisher and authors have used their best efforts in preparing this work, they make no representations or warranties with respect to the accuracy or completeness of the contents of this work and specifically disclaim all warranties, including without limitation any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives, written sales materials, or promotional statements for this work. The fact that an organization, website, or product is referred to in this work as a citation and/or potential source of further information does not mean that the publisher and authors endorse the information or services the organization, website, or product may provide or recommendations it may make. This work is sold with the understanding that the publisher is not engaged in rendering professional services. The advice and strategies contained herein may not be suitable for your situation. You should consult with a specialist where appropriate. Neither the publisher nor authors shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages. Further, readers should be aware that websites listed in this work may have changed or disappeared between when this work was written and when it is read.

Library of Congress Cataloging-in-Publication Data

ISBN 978-1-394-31437-9

Front cover image courtesy of Adobe Firefly
Cover design by Russell Richardson

Preface

This book explores the transformative world of Edge AI, examining how artificial intelligence is integrated into edge computing to create innovative and powerful solutions across various domains. As we approach the 6G era, the convergence of AI and edge computing promises to revolutionize our interaction with technology, delivering faster, more efficient, and highly personalized experiences.

Edge AI refers to the deployment of AI algorithms and models directly on devices at the edge of the network, closer to where data is generated. This approach reduces latency, enhances privacy, and enables real-time decision-making without heavy reliance on centralized cloud infrastructures. As the demand for intelligent and autonomous systems grows, Edge AI is becoming crucial in sectors ranging from automotive and agriculture to education and smart cities.

This book is structured into fifteen comprehensive chapters, each addressing a unique aspect of Edge AI and its applications.

A Review on Computational Optimization Strategies and Collaborative Techniques of Vehicular Task Offloading in the Era of Internet of Vehicles and 6G: Explores how vehicular networks and 6G technology enhance task offloading, improving efficiency and connectivity in smart transportation systems.

A Study on EDGE AI Application in Crop Monitoring: Shows how Edge AI is revolutionizing agriculture by providing real-time crop monitoring solutions, leading to more efficient and sustainable farming practices.

A Survey on Reconfigurable Co-Processors Computing Linear Transformations: Investigates advancements in reconfigurable co-processors that optimize computational tasks, driving performance improvements across various applications.

Conversational AI Model for Effective Responses with Augmented Retrieval (CAMERA) Based Chatbot on NVIDIA Jetson Nano: Describes the development of an advanced chatbot that leverages Edge AI for natural and effective communication.

Edge Computing in Educational Technology: The Power of Edge AI for Dynamic and Personalized Learning: Examines how Edge AI transforms educational technology by delivering personalized learning experiences and dynamic content.

Edge Computing Revolution: Unleashing Artificial Intelligence Potential in the World of Edge Intelligence: Explains the broader impact of Edge AI across industries, highlighting its potential to drive innovation and efficiency.

Ensuring Privacy and Security in Machine Learning: Delves into strategies and technologies designed to protect data privacy and ensure security in AI-driven systems.

Federated Learning in Secure Smart City Sensing: Challenges and Opportunities: Explores the role of federated learning in smart cities, addressing challenges and opportunities in creating secure and intelligent urban environments.

Fusion of Blockchain and Edge Computing for Seamless Convergence: Investigates how integrating blockchain with edge computing enhances security and transparency across various applications.

Industry Adapting the Machine Learning Scenario in Recruitment and Selection of Employees: Shows how machine learning is being applied to streamline and improve recruitment and selection processes in the corporate world.

Machine Learning for Nano Process Optimization: Explores how machine learning optimizes nanoscale processes, driving advancements in nanotechnology and materials science.

Quantum Computing for Cryptography: An Extensive Survey: Examines the potential of quantum computing to revolutionize cryptography, providing enhanced security solutions.

Role of Blockchain Technology in E-HRM in the Era of Artificial Intelligence: Focus on the Indian Market: Examines the impact of blockchain and AI on electronic human resource management, with a focus on the Indian market.

Smart City Innovations and IoT as a Frontier of AI at the Edge of Intelligence: Explores the role of IoT and AI in developing smart city innovations, enhancing urban living and sustainability.

Synergies Unleashed: The Convergence of AI and Edge Computing in Transformative Technologies: Shows how AI and edge computing are driving transformative technologies and shaping the future of industries.

As you journey through the chapters of this book, you will gain insights into the latest research, trends, and applications of Edge AI. The convergence of AI and edge computing is not just a technological evolution but a paradigm shift that will redefine how we interact with the digital world. We extend our gratitude to everyone who contributed to this important work, and to Martin Scrivener and Scrivener Publishing for making its publication possible.

Welcome to the edge of intelligence.

The Editors
December 2024

1A Review on Computational Optimization Strategies and Collaborative Techniques of Vehicular Task Offloading in the Era of Internet of Vehicles and 6G

Aishwarya R.*, V. Vetriselvi and Meignanamoorthi D.

Department of Computer Science and Engineering, Anna University, Guindy, Chennai, Tamil Nadu, India

Abstract

The Internet of Vehicles (IoV) and emerging 6G communication technology have recently advanced, empowering intelligent vehicles to support pervasive services while also providing an efficient and convenient driving experience. Furthermore, massive amounts of data are being generated by vehicular applications. The in-vehicle computing capability is insufficient to meet vehicular applications’ time-sensitive and computation-intensive demands. In such a scenario, task offloading towards other resource-rich computing devices can be considered to process vehicular tasks, thereby improving the application’s Quality of Service (QoS). In this paper, a comprehensive review of task-offloading strategies and collaborative techniques for task offloading is presented. Computational optimization strategies are classified according to the solutions provided for task offloading via various methods such as algorithmic techniques and Deep Reinforcement Learning (DRL) techniques. Collaborative techniques such as caching, Software Defined Networks (SDN), and Unmanned Aerial Vehicles (UAV), along with a vehicular network for task offloading, are extensively reviewed. The security aspect of vehicular task offloading is discussed as well. Furthermore, open issues and future directions of vehicular task offloading are highlighted.

Keywords: IoV, vehicular edge computing, task offloading, security, 6G, multiaccess edge computing, VANET

1.1 Introduction

In the past two decades, there has been a noticeable trend towards the development of intelligent vehicles with substantial developments in communication and computing technologies [1]. It is estimated that the automobile sector will provide the biggest market opportunity for 5G Internet of Things (IoT) solutions by 2023 with the development of intelligent vehicles [2]. IoV, a typical IoT technology application in the Intelligent Transportation System (ITS), is a widely distributed system for information exchange and wireless communication that intelligently supports traffic management, dynamic information services, and vehicle control [3]. Vehicular technology and Road Side Units (RSUs) have progressed rapidly, and both now comprise computing units and storage capacity. Utilizing 6G technology, the IoV can achieve seamless connectivity through Space Air Ground Integrated Networks (SAGIN), enabling interoperability between terrestrial and non-terrestrial networks and providing ubiquitous coverage.

With the advent of IoV and 6G, vital developments have emerged in vehicular applications by providing global coverage. Vehicular applications include image-aided navigation, online games, intelligent vehicle control, and other social media applications. These applications improve traffic efficiency, enhance road safety, and provide convenient and comfortable user services [4]. Each of these applications requires ultra-low latency, massive connectivity, high mobility, and scalability support that can be provided by Beyond 5G and Next Generation Networks [5, 6]. Incorporating various applications such as advanced driver assistance systems in smart vehicles poses a significant challenge for in-vehicle computing systems, as they grapple with the escalating demand for processing power. Due to space and power constraints, integrating a supercomputer directly into vehicles is impractical. The limited computing resources, including CPU, memory, and storage, may prove inadequate to meet the rising computational requirements. This necessitates task offloading [7].

Initially, Cloud computing was proposed as an effective solution for resource-constrained vehicles to offload tasks to geographically centralized data centers, which improves computation performance and resource utilization [8]. However, the cloud computing architecture makes it very hard to satisfy the real-time processing demands of emerging vehicular applications due to the long propagation delay [9]. Hence, to extend the processing capacity of cloud computing to the edge of a network near the vehicles, Multi-Access Edge Computing (MEC) [10] and Fog Computing have been introduced [11]. Vehicular Edge Computing (VEC) provides processing and storage resources close to vehicular users by integrating MEC and vehicular networks [12]. Figure 1.1 represents the VEC architecture consisting of smart vehicles and infrastructures. Smart vehicles incorporate onboard units with computing resources that permit close-range wireless transmission, i.e., communication with each other and with RSUs. For network accessibility, RSUs are often dispersed along the roads and connected to the backbone network [13]. The illustration of task offloading is depicted in Figure 1.2. Vehicles may thus transfer latency-sensitive and computationally heavy tasks to neighboring MEC servers with little overhead, which can significantly alleviate the overload of resource-constrained vehicles. MEC servers can be placed near the network’s edge at cellular Base Stations (BSs), RSUs, or both [14]. Using MEC services with the aid of 6G technology will improve the Quality of Experience (QoE) for vehicular applications. Yet, because of the peculiar features of vehicular networks, particularly the rapid mobility of nodes and the fluctuating channel conditions, it is quite challenging to create an effective edge-enabled task offloading strategy [15].
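The basic offloading trade-off described above can be illustrated with a toy latency model (a sketch, not a method from this chapter; the function names, link-rate and CPU-frequency parameters, and all numbers are illustrative assumptions): a task is worth offloading only when the upload delay over the vehicle-to-infrastructure link plus execution on the MEC server undercuts local execution on the onboard unit.

```python
# Toy offloading-decision sketch: compare local execution latency on the
# vehicle's onboard unit against (upload delay + remote execution) on a
# nearby MEC server. All parameters are illustrative assumptions.

def local_latency(cycles: float, f_local_hz: float) -> float:
    """Execution time if the task runs on the vehicle's onboard unit."""
    return cycles / f_local_hz

def offload_latency(data_bits: float, rate_bps: float,
                    cycles: float, f_mec_hz: float) -> float:
    """Upload delay over the wireless link plus execution on the MEC server."""
    return data_bits / rate_bps + cycles / f_mec_hz

def should_offload(data_bits: float, rate_bps: float, cycles: float,
                   f_local_hz: float, f_mec_hz: float) -> bool:
    """Offload only when it strictly beats local execution."""
    return (offload_latency(data_bits, rate_bps, cycles, f_mec_hz)
            < local_latency(cycles, f_local_hz))

# Example: 2 Mb of input data, 10 Mbps uplink, a 1e9-cycle task,
# 0.5 GHz onboard CPU vs. a 5 GHz MEC server.
print(should_offload(2e6, 10e6, 1e9, 0.5e9, 5e9))  # offloading wins here
```

Real VEC schedulers must additionally account for node mobility, fluctuating channel rates, and time-varying server load, which is precisely what makes the strategies surveyed in this chapter necessary.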

The MEC servers on distinct BSs may offer a range of services, and the workload on the MEC servers varies over time. MEC servers are less resourceful than cloud servers. To effectively utilize the MEC servers’ resources, consistent load distribution across the MEC servers has to be guaranteed while offloading the tasks from vehicles. The storage and computing capabilities of edge devices are typically constrained. To ensure effective resource usage of the MEC server, some tasks may still need to be performed either locally or on the cloud platform depending on their QoS. Offloading, therefore, requires absolute cooperation between the cloud and the edge. The tasks can also differ in terms of processing overhead, advance, urgency, and other related factors depending on the necessary QoE criteria. As a result, the issue of selecting the best task-offloading strategy for achieving the best performance of an application while effectively using MEC resources arises [16]. It is conceivable to employ both algorithm-based and DRL-based strategies to address this multi-objective optimization problem.
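As a rough illustration of the load-distribution concern raised above, the following greedy toy scheduler (a sketch under simplifying assumptions, not a strategy from the survey; it optimizes completion time only, ignoring energy, deadlines, and the other QoE factors mentioned) places each offloaded task on whichever MEC server would finish it earliest given the load already queued there.

```python
# Greedy load-distribution sketch across heterogeneous MEC servers:
# place each task (largest first) on the server that finishes it earliest,
# accounting for work already queued on that server. Illustrative only.

def assign_tasks(task_cycles, server_speeds_hz):
    """Return (placement, makespan): placement[i] is the server index chosen
    for the i-th task in largest-first order; makespan is the latest finish."""
    finish_times = [0.0] * len(server_speeds_hz)  # queued work per server (s)
    placement = []
    for cycles in sorted(task_cycles, reverse=True):
        best = min(range(len(server_speeds_hz)),
                   key=lambda s: finish_times[s] + cycles / server_speeds_hz[s])
        finish_times[best] += cycles / server_speeds_hz[best]
        placement.append(best)
    return placement, max(finish_times)

# Three tasks offloaded to two MEC servers (the second twice as fast):
placement, makespan = assign_tasks([4e9, 2e9, 2e9], [2e9, 4e9])
print(placement, makespan)  # the fast server takes the heaviest task
```

Algorithm-based and DRL-based strategies replace this myopic rule with game-theoretic, mathematical-optimization, or learned policies that can trade off several objectives at once.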

Figure 1.1 Vehicular network.

Figure 1.2 Task offloading.

1.1.1 Study of Existing Surveys

Several existing surveys [1, 17–20] have addressed vehicular task offloading. Ahmed et al. [17] highlighted the classification of vehicular task offloading based on V2V, V2I, and V2X communication models. Liu et al. [1] presented a classification of vehicular task offloading based on DRL methods as value-based and policy-based solutions leveraging MEC servers, nearby vehicles, or both as edge clouds.

Hamdi et al. [18] mainly analyzed task offloading in vehicular fog computing and elaborated on fog node selection for task offloading. Boukerche and Soto [19] categorized each step involved in the task offloading process, i.e., partitioning, scheduling, and data retrieval, and analyzed the various methods used for these processes. In contrast, this survey classifies works by the optimization strategies used for task offloading and incorporates the associated techniques that enhance vehicular task offloading.

Table 1.1 Existing surveys in vehicular task offloading.

Paper | Year | Categorization criteria | Collaborative techniques | Security
Ahmed et al. [17] | 2022 | Vehicular communication modes | No | No
Hamdi et al. [18] | 2022 | Fog node selection | No | No
Boukerche and Soto [19] | 2020 | Task offloading process | No | No
Liu et al. [1] | 2022 | RL/DRL algorithm | No | No
Dziyauddin et al. [20] | 2021 | Optimization objective (QoS, energy, revenue) | Caching | Yes
Our survey | 2023 | Solution of optimization strategies for task offloading | Caching, SDN, UAV | Yes

Although Dziyauddin et al. [20] also discussed content caching and security along with computational offloading, a research gap remains in categorizing the optimization strategies applied to the task offloading problem and the collaborative techniques that improve its performance. Table 1.1 summarizes the existing surveys on vehicular task offloading. This survey systematically categorizes the optimization strategies used to address the task offloading problem, the techniques associated with task offloading, and the related security aspects.

1.1.2 Contributions

The key contributions of this review are:

We articulate and examine the computational offloading strategies of vehicular task offloading under each subcategory of algorithm-based strategies (i.e., game theory, mathematical methods, and custom-tailored algorithms) and DRL-based strategies (i.e., value-based DRL methods, policy-based DRL methods, and Multi-Agent Deep Reinforcement Learning (MADRL) methods).

6G technology empowers the IoV with interconnected intelligence and widespread connectivity, enabling a variety of intelligent vehicular applications. The heightened demand for task offloading to meet user QoE requirements prompts the integration of collaborative techniques such as caching, SDN, and UAVs into vehicular task offloading to enhance its efficiency and performance. We discuss the relevant papers on these techniques.

The security-related works on task offloading are also analyzed in terms of privacy and trust management. Open issues, challenges, and future research prospects for vehicular task offloading are discussed.

1.1.3 Survey Organization

Figure 1.3 illustrates the structure of this survey. The rest of the paper is organized as follows. Section 1.2 describes vehicular networks and task offloading, Section 1.3 presents the types of optimization strategies for computational offloading, and Section 1.4 presents the collaborative techniques of task offloading. Section 1.5 discusses the security aspects of vehicular task offloading. Section 1.6 emphasizes open issues and future work. Section 1.7 concludes the chapter.

Figure 1.3 Taxonomy of the survey.

1.2 Computational Optimization Strategies

Intelligent vehicles will operate in a 6G full-coverage communication environment and can interconnect with peripheral facilities such as nearby vehicles, BSs, streetlights, and more [21]. The vehicular network has also grown increasingly dynamic and large-scale as the number of intelligent vehicles rises and Vehicle-to-Everything (V2X) connections multiply. Edge AI plays an important role in achieving an intelligent vehicular network through 6G communications, supporting AI/machine learning optimization approaches [22].

It is challenging for individual vehicles with constrained computing resources to execute rapidly evolving, low-latency, and computationally heavy vehicular applications. Depending on the QoS requirements of the application, vehicles transfer their computation to other resource-rich destinations, which may include MEC servers, neighboring vehicles, or clouds, using the V2V, V2I, and V2X communication modes. A task is the fundamental unit of work that must be carried out to accomplish any vehicular application. Formally, a task can be defined [23–25] using three parameters: S, C, and T. S represents the size of the task's input data in bits; C represents the computing resources needed by the task in CPU cycles; and T represents the task's maximum allowable delay; if the time taken to receive a result exceeds T, the task times out and fails. A task also has additional attributes, such as priority and degree of dependence. Depending on the complexity and requirements of the vehicular application, a task may be divided into several subtasks, which may either operate independently or depend on one another. Depending on the task demands and characteristics, a choice can be made between binary offloading and proportion offloading. Simple tasks that cannot be broken down into sub-tasks must be completed as a whole, either locally at the vehicle or by offloading them to MEC servers, a process known as binary offloading. Other tasks can be divided into numerous dependent or independent sub-tasks that can be executed locally, on the MEC server, or in the cloud, a process known as proportion offloading. To achieve optimum computation efficiency, vehicles must make the offloading choice under dynamic network conditions. The dynamic network topology and rapidly changing channel characteristics make task offloading challenging because of the vehicles' rapid mobility and the devices' energy constraints.
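
The three-parameter task model (S, C, T) and the binary offloading choice described above can be sketched as follows. This is a minimal illustration; the function names, link rates, and CPU capacities are assumptions for the example, not values from the surveyed papers.

```python
from dataclasses import dataclass

@dataclass
class Task:
    """Task model from the text: input size S (bits), required CPU cycles C,
    and maximum allowable delay T (seconds)."""
    size_bits: float   # S
    cpu_cycles: float  # C
    deadline_s: float  # T

def local_delay(task, local_cps):
    """Computation-only delay when the task runs on the vehicle itself."""
    return task.cpu_cycles / local_cps

def binary_offload_delay(task, uplink_bps, server_cps):
    """Delay when the whole task is offloaded: transmission + computation."""
    return task.size_bits / uplink_bps + task.cpu_cycles / server_cps

def decide_binary(task, local_cps, uplink_bps, server_cps):
    """Pick the destination with the smaller delay; the task fails (times out)
    if neither destination can meet the deadline T."""
    d_local = local_delay(task, local_cps)
    d_edge = binary_offload_delay(task, uplink_bps, server_cps)
    best, delay = ("local", d_local) if d_local <= d_edge else ("edge", d_edge)
    return (best, delay) if delay <= task.deadline_s else ("timeout", delay)
```

For example, a 2 Mb task needing 5x10^8 cycles on a 1 GHz vehicle CPU versus a 20 Mbps uplink to a 10 GHz edge server is offloaded (0.15 s) rather than run locally (0.5 s), provided the deadline allows it.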

It is vital to employ a competent computation offloading technique to execute tasks in a wide range of circumstances. The key performance metric, and the most challenging QoS constraint among vehicular applications, is delay, which consists of transmission and computation time. The transmission time is dictated by the volume of data to transmit and the transmission rate; the computation time depends on the wait time and the processing capacity of the offloading destination. An intolerable delay in vehicular safety applications may cause serious harm to human lives. Hence, effective and expeditious offloading strategies are needed to meet the demands of future vehicular applications. The existing research is manifold and varied in terms of optimization objective parameters, task model assumptions, and task offloading problem formulations. The offloading strategies are broadly classified into algorithm-based and DRL-based strategies, which are further subdivided into various methods based on their problem-solving approaches.

The main aim of this paper is to investigate and assess diverse computational optimization approaches tailored to the task offloading challenges within the IoV. The IoV ecosystem is rapidly evolving, marked by the emergence of resource-demanding applications and personalized user experiences, particularly in the context of advancing 6G technology. As 6G-enabled IoV intensifies the demand for efficient task-offloading solutions, this study also explores potential collaborative techniques to enhance task-offloading performance.

Figure 1.4 Algorithm-based strategies.

1.2.1 Algorithm-Based Strategies

This section summarizes the research on algorithm-based strategies for the task offloading problem. The algorithm-based strategies are classified into game theory, mathematical methods, and custom-tailored algorithms for solving the task offloading problem in IoV, as shown in Figure 1.4.

1.2.1.1 Game Theory

Many rational players (i.e., vehicles with tasks to offload) participate in the game, and the task-offloading decision problem of minimizing delay, cost, or energy among multiple players can be effectively solved by designing decentralized mechanisms. The literature on game theory-based decision-making for task offloading is summarized in Table 1.2, and the details are explained in this section.

A distributed game [26] is employed for distributed computation offloading decisions that take into account energy use, communication costs, and computation costs as the offloading costs, to reduce delay and the associated costs. The distributed computation offloading scenario is represented as G = (N, A, U), comprising three elements: N denotes the set of players, A represents the set of actions the players can take, and U stands for the utility function. Each vehicle assumes the role of a player and decides whether to offload in order to reduce its own joint cost. The game is designed as a distributed computation offloading game in which players freely choose their offloading decisions until a Nash Equilibrium (NE) is reached, i.e., a mutually agreeable solution from which no vehicle is willing to deviate. To make the best possible response decision, each vehicle learns from its own data as well as from the decision patterns of other vehicles. The better-response update process is finite and converges to an NE.
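
The better-response dynamics described above can be illustrated with a toy congestion game: each vehicle either runs its task locally at a fixed cost or offloads to a shared edge server whose cost grows with the number of other offloaders. The cost model and constants are illustrative assumptions, not the joint cost of [26].

```python
def cost(action, others_offloading, local_cost=5.0, base_edge=2.0, congestion=2.0):
    """Joint cost for one vehicle: action 0 = execute locally, action 1 =
    offload; offloading pays a congestion penalty per other offloader."""
    if action == 0:
        return local_cost
    return base_edge + congestion * others_offloading

def better_response_dynamics(n_vehicles, max_rounds=50):
    """Vehicles update one at a time to their best response against the
    others' current choices; the process stops at a Nash Equilibrium."""
    actions = [0] * n_vehicles
    for _ in range(max_rounds):
        changed = False
        for i in range(n_vehicles):
            others = sum(actions) - actions[i]
            best = min((0, 1), key=lambda a: cost(a, others))
            if best != actions[i]:
                actions[i] = best
                changed = True
        if not changed:
            return actions  # NE: no vehicle wants to change its policy
    return actions
```

With three vehicles and these constants, exactly two offload at equilibrium: a third offloader would face a congestion cost above the local cost, so it stays local.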

The Stackelberg game model [27] is introduced to analyze the interaction dynamics between task vehicles and service vehicles, adapting the pricing of service vehicles to the real demand from task vehicles. To achieve balance and optimize unit-cost advantages, a cost model is developed that accounts for task vehicles' sensitivity to both cost and delay. The game has two parts. In the first, task vehicles compete against one another to determine the allocation of computing resources across the service vehicles, guided by each service vehicle's unit price for computing resources. In the second, service vehicles adjust their unit prices in response to the purchasing demands of task vehicles, aiming to maximize their revenue. The revenue from selling computing resources determines a service vehicle's utility, while both time and cost affect a task vehicle's utility; each vehicle's main objective is to maximize its own utility. Both service and task vehicles arrive at an NE after several iterations of the distributed gradient iterative method.

Table 1.2 Summary of game theory-based task offloading strategies.

Paper | Modes | Type | Objective | Method
[26] | V2R | Binary | Minimization of latency and offloading cost | Distributed computation offloading game based on self-learning
[27] | V2V | Binary | Maximization of the utility function | Stackelberg game
[28] | V2R | Binary | Reducing the latency of time-sensitive tasks while ensuring best-effort tasks are not neglected | Coalition formation game
[29] | V2V/V2R | Binary | Minimization of task residence time | Non-cooperative strategic form game
[30] | V2R | Binary | Maximization of computational efficiency | Game theory approach

A latency-aware task-offloading scheme [28] is proposed to minimize the delay of time-critical tasks while saving best-effort tasks from starvation. A task service framework is described that takes into account resource availability at the fog nodes and the task requirements. Fog nodes, the entities providing the service, have constrained resources; consequently, they decide collectively whether to accept a task based on its needs. A coalition formation game models effective task servicing by an appropriate fog node. Coalitions are established among tasks according to their QoS requirements, aiming to optimize the effectiveness of the coalitions, which depends on efficient task servicing. Tasks are allocated to a coalition within the maximum permitted waiting time. A latency-aware task offloading scheme is employed to solve the coalition formation game. Time-critical tasks and best-effort tasks are assigned different priorities; the priority of a best-effort task is incremented after a certain percentage of its waiting time has elapsed, so that it does not starve. As its priority increases, it joins a coalition before its maximum allowable waiting time expires and receives service.
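
The priority-aging idea for best-effort tasks might be sketched as follows. The field names and the 50% boost fraction are hypothetical; [28] defines its own thresholds.

```python
def age_priorities(tasks, elapsed, boost_fraction=0.5):
    """Bump a best-effort task's priority once `boost_fraction` of its maximum
    allowable waiting time has elapsed, so it joins a coalition before starving.
    Returns tasks ordered highest-priority first (time-critical tasks start
    with higher priorities)."""
    for t in tasks:
        if t["class"] == "best-effort" and elapsed >= boost_fraction * t["max_wait"]:
            t["priority"] += 1
    return sorted(tasks, key=lambda t: -t["priority"])
```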

A decentralized model lacking collective resource intelligence may lead to an uneven distribution of workload. For effective resource use, Shabir et al. [29] presented a distributed, non-cooperative task offloading paradigm in which an offloading decision profile incorporating contextual information is treated as a non-cooperative strategic-form game, aiming to minimize overall service delay and enhance QoE in a diverse resource-sharing vehicular network. Vehicles communicate with their neighbors to obtain contextual data, such as resource category, task retention time, system cost, and offloading inference, to revise their offloading choices. The existence of an NE is established with a brief proof.

In the VEC scenario, computation efficiency, defined as the ratio of computed bits to the total energy consumed, is examined by Raza et al. [30], where vehicles offload to maximize computational efficiency. To balance time and energy usage, optimizing task offloading and resource allocation is crucial; this involves solving an optimization problem that maximizes overall system utility. Since this problem is a complex mixed-integer program, it is divided into two parts, task offloading decisions and computation resource allocation, which are tackled with the Lagrange multiplier method and game theory, respectively. Vehicles with tasks act as players in a game, competing for resources to maximize their own benefit. The task offloading strategy is refined iteratively until reaching a Nash Equilibrium (NE), where each vehicle sticks to its chosen offloading approach.

1.2.1.2 Mathematical Optimization Methods

Typically, the task offloading problem entails a balance between different factors, including energy usage, latency, and computation duration. Mathematical optimization techniques like Lagrangian dual decomposition, semi-definite relaxation, probabilistic approaches, and others can effectively manage these trade-offs to attain the desired QoS for various vehicular tasks. The relevant literature is summarized in Table 1.3 and the details are explained in this section.

Dai et al. [31] examined a novel technique that takes into account task upload coordination across many vehicles, task migration between MEC/cloud servers, and the diverse computation capabilities of MEC/cloud servers. The collaborative computation offloading problem, which seeks to minimize the expected service delay by determining the optimal allocation probabilities, is framed within a queuing-theoretic framework. A probabilistic strategy with offline and online phases is proposed, and the convexity of the objective function is examined. In the offline phase, the objective function is transformed into an augmented Lagrangian by introducing dual variables, and the alternating direction method of multipliers (ADMM) iteratively produces the optimal solution. In the online phase, a probabilistic method based on the optimal allocation probabilities decides the scheduling of each new task.
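
The online phase then reduces to sampling a destination for each arriving task from the offline-computed allocation probabilities. A minimal sketch, assuming the probability vector is already given (the names and destinations are illustrative):

```python
import random

def schedule_task(alloc_probs, rng=random):
    """Pick an offloading destination by inverse-transform sampling from the
    allocation probability vector computed in the offline phase."""
    r = rng.random()
    cum = 0.0
    for dest, p in alloc_probs.items():
        cum += p
        if r < cum:
            return dest
    return dest  # guard against floating-point round-off
```

Called as `schedule_task({"local": 0.2, "mec": 0.5, "cloud": 0.3})`, each new task is routed proportionally to the optimal allocation.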

Table 1.3 Summary of mathematical optimization approaches of task offloading.

Paper | Modes | Type | Objective | Method
[31] | V2R | Binary | Minimization of system service delay | Probabilistic approach
[32] | V2V | Binary | Minimization of response delay | Semi-definite relaxation approach
[33] | V2V | Binary | Minimization of delay | Levy-Kopt algorithm
[34] | V2V, V2Cloud | Binary | Minimization of average response time | Binary linear programming
[35] | V2V, V2R | Binary | Minimization of average completion time | Multidimensional multichoice knapsack problem, branch and bound method
[36] | V2V/V2R | Proportion | Minimization of average cost | Matching theory and Lagrangian-based algorithm

Based on vehicle mobility analysis, Liu et al. [32] proposed a task offloading scheme whose offloading policy is distinctive in taking service vehicle mobility into account. Task processing involves service vehicle discovery, task assignment, and task execution. If the distance between vehicles, estimated with beacons, is within their communication range, a link connection exists and tasks are assigned. The multi-hop task offloading problem, taking mobility into account, is framed as a utility minimization challenge to improve the QoE for client vehicles. The utility function, the weighted average of task processing latency and total cost, is expressed in Equation 1.1 as follows

(1.1)

where λc is a positive weighting factor that accounts for the vehicle's preference between task execution time t(Y) and computation cost c(Y). N denotes the set of tasks, while M represents the set of service vehicles. y_ki denotes the offloading variable of a task, T_ki its total execution time, and C_ki the total cost to process it. The formulated problem is addressed using a semidefinite relaxation method alongside an adaptive adjustment algorithm, leading to significant improvements in response delay.

Tasks can be offloaded from the task vehicle to surrounding parked vehicles for execution. An optimal task offloading path [33] is found using the Levy-Kopt algorithm. Interacting with every vehicle individually would cause long delays from connection interruptions; hence, the selected offloading vehicles are connected into a path so that the MEC server interacts with only one vehicle. The connection path for offloading to parked vehicles is determined using the K-opt neighborhood of the Levy flight method, which relies on generating random numbers from a normal distribution.

The idea of parked edge computing [35] can overcome the resource constraints of physical edge servers by utilizing the rich and underutilized resources of parked vehicles to help edge servers handle offloaded tasks. Parked vehicles are clustered and treated as virtual edge servers, providing additional options for task offloading. Depending on the task scheduling algorithm, tasks can be offloaded to either physical or virtual edge servers. The task scheduling challenge is subdivided into two parts: optimal resource allocation and optimal server selection. Optimal resource allocation is addressed as a multidimensional multichoice knapsack problem, and the server selection problem, represented in Equation 1.2, is solved using the branch and bound method.

(1.2)

where L represents the set of edge servers, N the set of moving vehicles, T the set of tasks, y_k,ij the server choice, and t_k,ij the total time to finish the task. A Random Forest algorithm is used for driving trajectory prediction, which helps return task results to the source vehicle when it moves out of communication range.

Khadir et al. [34] considered that vehicles offload compute-intensive, low-response-time tasks to fog nodes, but the limited resources of fog nodes can make it impossible to meet the demand. In such cases, tasks need to be offloaded to other suitable destinations. Initially, the vehicle attempts to offload its task to a fog node within its coverage area. If the average completion time of the task meets the deadline, the request is considered feasible and executed at the corresponding fog node; otherwise, the infeasible request is forwarded to the SDN controller. The possible offloading destinations for infeasible requests are the cloud, parked vehicles, and moving vehicles. To decide the optimal destination, the stretch time, defined as the time between a task's deadline and the typical response time of the destination node, is used as the optimization parameter. This decision-making problem is formulated as a binary linear program and solved using CPLEX software.
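
One plausible reading of the stretch-time criterion is to keep only destinations whose typical response time fits within the deadline and prefer the one with the largest slack. A sketch under that assumption (the exact selection rule in [34] comes from its linear program):

```python
def stretch_time(deadline, typical_response):
    """Slack between the task deadline and the destination's typical
    response time; larger is safer, negative means infeasible."""
    return deadline - typical_response

def pick_destination(deadline, candidates):
    """Among cloud / parked / moving vehicles, pick a feasible destination
    with the largest stretch time; None if no candidate meets the deadline."""
    feasible = {d: stretch_time(deadline, t) for d, t in candidates.items()
                if stretch_time(deadline, t) >= 0}
    return max(feasible, key=feasible.get) if feasible else None
```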

Liu et al. [36] designed an optimization problem to reduce the average cost of all task vehicles under latency and processing-capacity constraints. The problem is decoupled into offloading node selection and resource allocation subproblems and solved using a distributed iterative algorithm involving matching theory and a Lagrangian-based algorithm. A differential pricing scheme achieves greater server revenue by charging closer vehicles a higher price, thereby preventing multiple vehicles from offloading to the same server simultaneously. In matching theory, solving the optimization problem requires designing a preference function and then preparing a preference list for the participating agents. Revenue serves as the preference function for roadside units and server vehicles, as shown in Equations 1.3 and 1.4, and the reciprocal of the cost function serves as the preference function for task vehicles.

(1.3)
(1.4)

where K represents the set of RSUs, L the set of server vehicles, I the set of task vehicles, x_i,n and y_i,j the offloading decision variables, and e_i,n and e_i,j the offloading expenses. The preference list is generated by arranging the function values in descending order. An iterative matching process then determines the optimal offloading node, and resource allocation is optimized using the Lagrangian method.
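
A common way to run such an iterative matching over preference lists is deferred acceptance with unit server capacity: task vehicles propose down their cost-ranked lists, and each server keeps only the proposer it ranks highest by revenue. This is a sketch of that generic procedure, not the exact algorithm of [36].

```python
def match_offloading(vehicle_pref, server_pref):
    """Deferred-acceptance matching. vehicle_pref maps each task vehicle to
    its server list ordered by preference (e.g., ascending cost); server_pref
    maps each server to its vehicle list ordered by preference (e.g.,
    descending revenue). Each server accepts at most one vehicle."""
    next_choice = {v: 0 for v in vehicle_pref}
    engaged = {}                      # server -> matched vehicle
    free = list(vehicle_pref)
    while free:
        v = free.pop()
        if next_choice[v] >= len(vehicle_pref[v]):
            continue                  # v exhausted its list; stays unmatched
        s = vehicle_pref[v][next_choice[v]]
        next_choice[v] += 1
        if s not in engaged:
            engaged[s] = v
        elif server_pref[s].index(v) < server_pref[s].index(engaged[s]):
            free.append(engaged[s])   # server prefers v; previous match retries
            engaged[s] = v
        else:
            free.append(v)            # server rejects v; v tries its next choice
    return engaged
```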

1.2.1.3 Custom-Tailored Algorithms

Custom-tailored algorithms like token-based predictive schemes, dynamic programming for finding the shortest path, matching theory, and fuzzy logic techniques can enhance the execution of dependent and independent subtasks of vehicular applications while adhering to their respective constraints. The literature focusing on task offloading decision-making by custom-tailored techniques is summarized in Table 1.4 and the details are explained in this section.

The execution of lengthy computation-intensive tasks in heterogeneous vehicular applications, such as safety, infotainment, gaming, AR, and smart driving services, within their delay constraints poses challenges due to the high mobility of vehicles. To address this, tasks can be divided into sub-tasks to enable parallel processing. Distributed task offloading is performed efficiently [4] by selecting service vehicles and making offloading decisions based on the link’s lifetime. Task offloading cost minimization is formulated as an optimization problem, where the offloading cost is defined as the weighted sum of latency and processing cost. Service vehicle selection is based on performance values calculated using a custom fuzzy logic algorithm, considering communication and computation factors of VANETs such as distance, relative velocity, link reliability, and available computational resources. The interaction pattern of selected service vehicles is analyzed to estimate the level of trust among users.
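
A toy illustration of fuzzy-style scoring for service vehicle selection follows; the triangular membership shapes and the weights are invented for this example, and the fuzzy algorithm in [4] is more elaborate.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def service_vehicle_score(distance_m, rel_velocity_ms, link_reliability, cpu_free):
    """Weighted aggregation of per-factor memberships; link_reliability and
    cpu_free are assumed to be already normalized to [0, 1]."""
    close = tri(distance_m, -1, 0, 300)       # nearer vehicles score higher
    slow = tri(rel_velocity_ms, -1, 0, 30)    # smaller relative speed scores higher
    return 0.3 * close + 0.2 * slow + 0.3 * link_reliability + 0.2 * cpu_free
```

The service vehicle with the highest score is selected as the offloading target.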

Table 1.4 Summary of custom-tailored approaches of task offloading strategies.

Paper | Modes | Type | Objective | Method
[37] | V2R | Binary | Minimization of cost | Threshold-based parameter
[38] | V2R | Binary | Minimization of average delay of task execution | Token-based predictive scheme
[4] | V2V | Proportion | Minimization of task offloading cost | Link lifetime, fuzzy logic algorithm
[39] | V2R | Binary | Minimization of latency | Matching theory
[40] | V2V, V2R, V2Cloud | Proportion | Minimization of average service delay of tasks | Shortest-path finding using dynamic programming
[41] | V2R | Binary | Improved task completion rate and reduced average service time | Multi-period task offloading and transmission method

A token-based predictive offloading scheme [38] is proposed to offload a vehicle's computational tasks to an MEC server to reduce the average delay. Two queues are maintained at the MEC server: Queue 1 holds at most as many tasks as there are tokens, and the remaining tasks wait in Queue 2. A task from Queue 2 is transferred to Queue 1 once a task in Queue 1 is processed. Round-robin scheduling is used to process the tasks in the queue. Information on consumed and available tokens is stored in individual tables for each MEC server, and vehicles maintain a separate table to track which vehicles are within the range of each server. A vehicle transfers a task to the MEC server by requesting a token; if the request is declined, the vehicle uses V2V communication to send the task to another server.
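
The two-queue token mechanism can be sketched as follows; the class and method names are hypothetical, and the V2V fallback on a declined request is left to the caller, as in the scheme's description.

```python
from collections import deque

class TokenMEC:
    """Two-queue token scheme: Queue 1 holds at most `tokens` admitted tasks,
    served in arrival (round-robin head) order; overflow waits in Queue 2 and
    is promoted when a Queue 1 task finishes."""
    def __init__(self, tokens):
        self.tokens = tokens
        self.q1 = deque()
        self.q2 = deque()

    def request(self, task):
        """Grant a token if one is free; otherwise park the task in Queue 2
        and report the decline (the vehicle may then try another server)."""
        if len(self.q1) < self.tokens:
            self.q1.append(task)
            return True
        self.q2.append(task)
        return False

    def finish_one(self):
        """Complete the head task of Queue 1, then promote one from Queue 2."""
        done = self.q1.popleft() if self.q1 else None
        if self.q2 and len(self.q1) < self.tokens:
            self.q1.append(self.q2.popleft())
        return done
```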

NOMA, a burgeoning technology for vehicular networks, allows multiple users to utilize the same wireless resources, boosting spectrum utilization and system capacity. By jointly optimizing offloading decision-making, Vehicular User Equipment (VUE) clustering for subchannel allocation, computation resource allocation, and transmission power control, Du et al. [37] build a cost minimization problem on the NOMA-based VECN model. As this is a mixed-integer nonlinear program, the task offloading and resource assignment issues are decoupled and addressed individually using tailor-made heuristic methods. The offloading decision is made by adjusting a parameter that accounts for task characteristics and wireless channel quality and comparing it against a threshold; resource allocation is then performed based on cost-benefit analysis.

In cellular V2X technology, the Uu cellular interface facilitates Vehicle-to-Network (V2N) communication, while the sidelink PC5 interface supports V2V communication. An efficient task offloading scheme is proposed in which task assignments are made using custom matching theory [39] to reduce latency and improve offloading reliability. Vehicles are grouped into clusters to ease the communication overhead on the cellular network; hence, only the cluster heads communicate with the network through the eNodeB. A cluster is formed using a greedy iterative algorithm, and the cluster head is selected based on metrics such as velocity, link lifetime, and mean distance between communicating vehicles. Pairing a set of subtasks with a set of servers (i.e., VEC or MEC servers) is done by matching theory; the preference list is obtained from the utility values with which members of one set are mapped to members of the other.

A dependency-aware task offloading problem [40] is formulated to minimize the average service delay of tasks and characterized as a mixed-integer nonlinear program. A directed acyclic graph represents the task, and a three-step heuristic based on this graph model is outlined. First, the task graph's critical path is determined using the average processing capacity of the available nodes. A feasible solution along the critical path is then translated into a hierarchical directed graph, converting the problem into a shortest-path identification task that is solved with dynamic programming. Finally, the sub-tasks on the non-critical path are scheduled using a greedy method.
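
The critical-path step can be computed as a longest-path traversal over the sub-task DAG. A minimal sketch, assuming each sub-task's processing time on an average-capacity node is already estimated (the data structures are illustrative):

```python
import functools

def critical_path(tasks, deps, proc_time):
    """Length of the longest (critical) path through a sub-task DAG.
    deps maps a sub-task to its predecessors; proc_time gives each
    sub-task's estimated processing time on the average-capacity node."""
    @functools.lru_cache(maxsize=None)
    def finish(t):
        # earliest finish time = own processing time + latest predecessor finish
        return proc_time[t] + max((finish(p) for p in deps.get(t, [])), default=0.0)

    return max(finish(t) for t in tasks)
```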

Most research provides resource scheduling methods and task-offloading strategies within a single time frame; yet the scheduling of resource allocation at later times is influenced by the usage and release of resources at earlier instants. Zhang et al. [41] proposed a multi-period task offloading strategy in which the task offloading problem is sliced into multiple periods. The offloading request from a client vehicle in a given period is sent to the BS. After identifying appropriate offloading destinations for each task vehicle, the system evaluates whether the task can be offloaded in the current period and then selects the suitable destination based on service time and offloading cost. The destination node information for the period is updated before task requests are received in the next period.

This section reviewed algorithm-based computational offloading strategies for vehicular networks. Game theory methods, mathematical algorithms, and custom-tailored algorithms have produced successful computational offloading strategies under constraints such as latency, energy consumption, and resource availability. However, in game theory it is difficult to reach a final decision as the number of vehicles increases, and it is likewise challenging for mathematical and custom-tailored algorithms to sustain stable optimized performance over the long term in a large-scale, complex vehicular environment. The integration of connected intelligence and V2X communications in the 6G era significantly expands the task offloading challenge within the IoV environment. Traditional algorithm-based approaches struggle to achieve optimal results in this complex setting, underscoring the necessity for DRL-based strategies.

1.2.2 DRL-Based Strategies

AI has been leveraged in several domains, such as Industry 4.0, the IoT, and automotive networks, to maximize its advantages. The widespread and comprehensive integration of AI technologies with wireless systems can enhance network functionality and decision-making in a cost-effective manner [42], and distributed, pervasive AI is anticipated to be incorporated into 6G wireless communication networks. In ITS, AI has enhanced many functionalities [43], such as traffic flow optimization, speed prediction, anomalous driving detection, and cooperative lane changing. The apps and services used in modern vehicles are time-sensitive and based on AI algorithms. In general, machine learning and deep learning models demand substantial datasets for effective training; however, obtaining datasets for task offloading is often impractical, and these models may not readily adapt to the swiftly changing and dynamic vehicular environment. Hence, the effectiveness of vehicular task offloading can be improved by RL and DRL [44]. Traditional algorithms struggle to handle vehicular task offloading in vast, complicated environments, whereas RL/DRL systems perform better in such scenarios [1]. Most traditional approaches that use one-shot optimization cannot achieve consistent long-term optimal performance in the complex and dynamic VANET caused by rapidly changing channels and computation workload offloading circumstances.

While RL methods like Q-learning and State-Action-Reward-State-Action (SARSA) are effective, they face challenges in vehicular environments with large state-space complexity, such as continuous state and action spaces. DRL addresses this issue by leveraging neural networks as function approximators, enhancing the scalability of RL to complex vehicular environments. DRL determines the optimal offloading decision strategy based on past experience. Although training a DRL model consumes significant time and resources, once it converges it enables fast real-time decision-making. An optimization problem is formulated as a Markov Decision Process (MDP) for DRL algorithms to solve. The MDP is described by a 4-tuple (S, A, P, R), where S represents the set of states, A the set of actions, R the rewards, and P the transition probabilities. The transition probability P(s′ | s, a) indicates the likelihood of transitioning to a new state s′ given the current state s and action a. The reward function R : S × A → R reflects the reward obtained after taking an action. The discount factor γ, ranging from 0 to 1, weights future rewards and relates to the probability of the next state s′ and the next reward r, as shown in Equation 1.5.

p(s′, r | s, a) = Pr(s_{t+1} = s′, r_{t+1} = r | s_t = s, a_t = a)   (1.5)

The literature focusing on task-offloading decision-making by DRL techniques is classified as Value-based, Policy-based, and multi-agent DRL techniques as shown in Figure 1.5 and summarized below.

Figure 1.5 DRL-based strategies.

1.2.2.1 Value-Based DRL Algorithms

Value-based DRL algorithms aim to determine the best policy for agents by evaluating and improving the values associated with states and actions within an environment so as to maximize the expected cumulative reward over time. Every policy π is characterized by a state-value function Vπ : S → ℝ and an action-value function Qπ : S × A → ℝ, as described in Equations 1.6 and 1.7, respectively.

Vπ(s) = Eπ[ Σ_{k=0}^{∞} γ^k r_{t+k+1} | s_t = s ] (1.6)
Qπ(s, a) = Eπ[ Σ_{k=0}^{∞} γ^k r_{t+k+1} | s_t = s, a_t = a ] (1.7)

The optimal policy π*, as illustrated in Equation 1.8, is derived from the optimal action-value function Q*(s, a), which provides the highest expected reward for any state–action pair across all conceivable policies.

π*(s) = argmax_a Q*(s, a) (1.8)
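The relationship between V, Q, and the greedy policy of Equation 1.8 can be sketched with tabular value iteration on an assumed toy MDP (the transitions and rewards below are illustrative placeholders, not from the cited works):

```python
gamma = 0.9
# transition[s][a] = list of (prob, next_state, reward)
transition = {
    0: {0: [(1.0, 0, -2.0)], 1: [(1.0, 1, -1.0)]},
    1: {0: [(1.0, 0, -3.0)], 1: [(1.0, 1, -0.5)]},
}
states, actions = [0, 1], [0, 1]

# Iterate the Bellman optimality backup until V approximates V*.
V = {s: 0.0 for s in states}
for _ in range(200):
    V = {s: max(sum(p * (r + gamma * V[s2])
                    for p, s2, r in transition[s][a])
                for a in actions)
         for s in states}

# Q*(s, a) from the converged V*, then the greedy policy of Eq. 1.8.
Q = {(s, a): sum(p * (r + gamma * V[s2]) for p, s2, r in transition[s][a])
     for s in states for a in actions}
policy = {s: max(actions, key=lambda a: Q[(s, a)]) for s in states}
```

Here both states prefer action 1, since repeatedly collecting the small penalty −0.5 dominates every alternative; value-based DRL methods such as DQN replace the table `Q` with a neural network.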

Deep Q-Network (DQN), Double DQN, and Dueling DQN are value-based DRL algorithms, and their relevance is explained in this section. Table 1.5 summarizes the state, action, and reward formulations considered in the literature.

Edge–cloud computing cooperation in VEC involves task execution at three locations: locally, on an edge server, or on a cloud server. The task-offloading algorithm, leveraging DQN [47], aims to minimize the average task-processing delay. Each task’s completion time is determined by its offloading destination node. The offloading process is modeled as an MDP whose goal is to identify optimal offloading decisions with low computational complexity, thereby minimizing processing delay subject to computation and communication resource constraints. Upon receiving a task request from a vehicle, the DQN agent (the MEC server) observes the state and selects an action according to the current offloading strategy, receiving a reward and transitioning to the next state accordingly. The offloading strategy is iteratively updated using these rewards, employing an epsilon-greedy strategy to balance exploration and exploitation in action selection. Experience replay is utilized to improve the training rate.
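The two DQN mechanisms mentioned above, epsilon-greedy action selection and experience replay, can be sketched as follows. The Q-values and the transitions pushed into the buffer are illustrative placeholders under assumed settings, not the actual agent from [47]:

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-size store of (s, a, r, s') transitions for minibatch training."""
    def __init__(self, capacity=10000):
        self.buffer = deque(maxlen=capacity)

    def push(self, s, a, r, s_next):
        self.buffer.append((s, a, r, s_next))

    def sample(self, batch_size):
        # Uniform random minibatch breaks the correlation between
        # consecutive vehicular observations.
        return random.sample(list(self.buffer), batch_size)

def epsilon_greedy(q_values, epsilon, rng=random):
    """Explore a random action with prob. epsilon, else exploit argmax Q."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

buf = ReplayBuffer(capacity=100)
for t in range(5):                       # agent interacts with the environment
    a = epsilon_greedy([0.1, 0.7, 0.2], epsilon=0.1)
    buf.push(t, a, -1.0, t + 1)          # store the observed transition
batch = buf.sample(3)                    # minibatch for one training step
```

In a full DQN, each sampled minibatch would be used to regress the network’s Q-values toward the bootstrapped targets r + γ·max_a′ Q(s′, a′).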

Table 1.5 Summary of value-based DRL methods for task-offloading strategies.

| Paper | Modes | Type | State | Action | Reward |
|-------|-------|------|-------|--------|--------|
| [45] | V2Fog | Binary | Computing power, load on each fog node | Selection of fog node | Difference between the utility function and the sum of the traffic-load probability function and end-to-end delay |
| [46] | V2Fog | Binary | Servers and task information | Selection of offloading node | Task deadline and delay |
| [47] | V2R | Binary | Available communication and computation resources | Selection of offloading node | Reduction in task-processing delay |
| [48] | V2V | Proportion | Remaining resources of vehicles | Task-offloading strategy | Negative of the optimization function |
| [49] | V2Fog | Binary | Task to be allocated and the remaining tasks in the queue | Offloading tasks to fog nodes and determining the quantity of tasks to offload | Difference between utility and the sum of delay and overhead |

A framework for AI-based V2X [45] is proposed to provide ultra-reliable low-latency communications in a highly dynamic environment. Fog nodes at RSUs or BSs are connected to an SDN controller, which is responsible for collecting information and making decisions. The proposed AI-based resource allocation and task offloading in vehicular networks is done in