AUTONOMOUS VEHICLES Addressing the current challenges, approaches, and applications relating to autonomous vehicles, this groundbreaking new volume presents research and techniques in this growing area using the Internet of Things (IoT), machine learning (ML), deep learning, and artificial intelligence (AI). Several self-driving or autonomous ("driverless") cars, trucks, and drones incorporate a variety of IoT devices and sensing technologies, such as sensors, gyroscopes, cloud computing, and the fog layer, allowing the vehicles to sense, process, and maintain massive amounts of data on traffic, routes, suitable times to travel, potholes, and sharp turns, as well as robots for pipe inspection in the construction and mining industries. Few books are available on the practical applications of unmanned aerial vehicles (UAVs) and autonomous vehicles from a multidisciplinary approach, and those that are available cover only a few applications and designs in a very limited scope. This volume covers real-life applications, business modeling, issues, and solutions that the engineer or industry professional faces every day, all of which can be transformed using the intelligent systems design of autonomous systems. Whether for the student, veteran engineer, or other industry professional, this book and its companion volume are must-haves for any library.
Page count: 489
Publication year: 2022
Cover
Series Page
Title Page
Copyright Page
Preface
1 Anomalous Activity Detection Using Deep Learning Techniques in Autonomous Vehicles
1.1 Introduction
1.2 Literature Review
1.3 Artificial Intelligence in Autonomous Vehicles
1.4 Technologies Inside Autonomous Vehicle
1.5 Major Tasks in Autonomous Vehicle Using AI
1.6 Benefits of Autonomous Vehicle
1.7 Applications of Autonomous Vehicle
1.8 Anomalous Activities and Their Categorization
1.9 Deep Learning Methods in Autonomous Vehicle
1.10 Working of Yolo
1.11 Proposed Methodology
1.12 Proposed Algorithms
1.13 Comparative Study and Discussion
1.14 Conclusion
References
2 Algorithms and Difficulties for Autonomous Cars Based on Artificial Intelligence
2.1 Introduction
2.2 In Autonomous Cars, AI Algorithms are Applied
2.3 AI’s Challenges with Self-Driving Vehicles
2.4 Conclusion
References
3 Trusted Multipath Routing for Internet of Vehicles against DDoS Assault Using Brink Controller in Road Awareness (TMRBC-IOV)
3.1 Introduction
3.2 Related Work
3.3 VANET Grouping Algorithm (VGA)
3.4 Extension of Trusted Multipath Distance Vector Routing (TMDR-Ext)
3.5 Conclusion
References
4 Technological Transformation of Middleware and Heuristic Approaches for Intelligent Transport System
4.1 Introduction
4.2 Evolution of VANET
4.3 Middleware Approach
4.4 Heuristic Search
4.5 Reviews of Middleware Approaches
4.6 Reviews of Heuristic Approaches
4.7 Conclusion and Future Scope
References
5 Recent Advancements and Research Challenges in Design and Implementation of Autonomous Vehicles
5.1 Introduction
5.2 Modules/Major Components of Autonomous Vehicles
5.3 Testing and Analysis of An Autonomous Vehicle in a Virtual Prototyping Environment
5.4 Application Areas of Autonomous Vehicles
5.5 Artificial Intelligence (AI) Approaches for Autonomous Vehicles
5.6 Challenges to Design Autonomous Vehicles
5.7 Conclusion
References
6 Review on Security Vulnerabilities and Defense Mechanism in Drone Technology
6.1 Introduction
6.2 Background
6.3 Security Threats in Drones
6.4 Defense Mechanism and Countermeasure Against Attacks
6.5 Conclusion
References
7 Review of IoT-Based Smart City and Smart Homes Security Standards in Smart Cities and Home Automation
7.1 Introduction
7.2 Overview and Motivation
7.3 Existing Research Work
7.4 Different Security Threats Identified in IoT-Used Smart Cities and Smart Homes
7.5 Security Solutions For IoT-Based Environment in Smart Cities and Smart Homes
7.6 Conclusion
References
8 Traffic Management for Smart City Using Deep Learning
8.1 Introduction
8.2 Literature Review
8.3 Proposed Method
8.4 Experimental Evaluation
8.5 Conclusion
References
9 Cyber Security and Threat Analysis in Autonomous Vehicles
9.1 Introduction
9.2 Autonomous Vehicles
9.3 Related Works
9.4 Security Problems in Autonomous Vehicles
9.5 Possible Attacks in Autonomous Vehicles
9.6 Defence Strategies against Autonomous Vehicle Attacks
9.7 Cyber Threat Analysis
9.8 Security and Safety Standards in AVs
9.9 Conclusion
References
10 Big Data Technologies in UAV’s Traffic Management System: Importance, Benefits, Challenges and Applications
10.1 Introduction
10.2 Literature Review
10.3 Overview of UAV’s Traffic Management System
10.4 Importance of Big Data Technologies and Algorithm
10.5 Benefits of Big Data Techniques in UTM
10.6 Challenges of Big Data Techniques in UTM
10.7 Applications of Big Data Techniques in UTM
10.8 Case Study and Future Aspects
10.9 Conclusion
References
11 Reliable Machine Learning-Based Detection for Cyber Security Attacks on Connected and Autonomous Vehicles
11.1 Introduction
11.2 Literature Survey
11.3 Proposed Architecture
11.4 Experimental Results
11.5 Analysis of the Proposal
11.6 Conclusion
References
12 Multitask Learning for Security and Privacy in IoV (Internet of Vehicles)
12.1 Introduction
12.2 IoT Architecture [5]
12.3 Taxonomy of Various Security Attacks in Internet of Things [5]
12.4 Machine Learning Algorithms for Security and Privacy in IoV
12.5 A Machine Learning-Based Learning Analytics Methodology for Security and Privacy in Internet of Vehicles
12.6 Conclusion
References
13 ML Techniques for Attack and Anomaly Detection in Internet of Things Networks
13.1 Introduction
13.2 Internet of Things
13.3 Cyber-Attack in IoT
13.4 IoT Attack Detection in ML Technics
13.5 Conclusion
References
14 Applying Nature-Inspired Algorithms for Threat Modeling in Autonomous Vehicles
14.1 Introduction
14.2 Related Work
14.3 Proposed Mechanism
14.4 Performance Results
14.5 Future Directions
14.6 Conclusion
References
15 The Smart City Based on AI and Infrastructure: A New Mobility Concepts and Realities
15.1 Introduction
15.2 Research Method
15.3 Vehicles that are Both Networked and Autonomous
15.4 Personal Aerial Automobile Vehicles and Unmanned Aerial Automobile Vehicles
15.5 Mobile Connectivity as a Service
15.6 Major Role for Smart City Development with IoT and Industry 4.0
15.7 Conclusion
References
Index
End User License Agreement
Chapter 1
Table 1.1 Comparative analysis.
Chapter 2
Table 2.1 SAE level of automation [17].
Chapter 3
Table 3.1 Simulation setup.
Chapter 5
Table 5.1 Companies manufacturing and researching autonomous vehicles.
Chapter 6
Table 6.1 Survey of security attacks and vulnerabilities in drones.
Table 6.2 Security vulnerabilities of various types of networks.
Table 6.3 Frequency of occurrence of security vulnerabilities in various types...
Table 6.4 Vulnerabilities and their countermeasures in drone technology protoc...
Chapter 7
Table 7.1 Current research work and its main focus regions.
Chapter 10
Table 10.1 Analysis of UAV traffic management system.
Chapter 11
Table 11.1 AI algorithm used to select the appropriate database.
Table 11.2 Parameters used in the proposal.
Chapter 13
Table 13.1 Comparing the performance of Ml techniques for intrusion detection ...
Chapter 15
Table 15.1 Possibilities and difficulties in the development, adoption, and ut...
Chapter 1
Figure 1.1 Levels of automation in vehicles.
Figure 1.2 Possible vehicle action prediction based on features.
Figure 1.3 Original YOLO framework. (Source: [19].)
Figure 1.4 Prediction of bounding box for each grid cell.
Figure 1.5 Coordinates bx,by,bh and bw specifies the bounding box.
Figure 1.6 The zigzag drive of a nearby vehicle can be an unsafe drive for an ...
Figure 1.7 Flow of proposed method.
Chapter 3
Figure 3.1 Assaults in VANET.
Figure 3.2 Vehicle movement area.
Figure 3.3 Simulation area.
Figure 3.4 Packet delivery ratio.
Figure 3.5 Throughput.
Figure 3.6 End-to-end delay.
Chapter 4
Figure 4.1 Vehicular Ad hoc networks.
Figure 4.2 Heuristic search methods.
Chapter 5
Figure 5.1 Features of autonomous vehicles.
Figure 5.2 Levels of automation present in autonomous vehicles.
Figure 5.3 Components present in the autonomous vehicles.
Figure 5.4 Working of Sim-ATAV simulation model.
Figure 5.5 Autonomous vehicle detecting pedestrians on their way.
Figure 5.6 Flowchart of the traditional approach of detecting pedestrians.
Figure 5.7 Flowchart of the modern approach of detecting pedestrians.
Figure 5.8 Some general road signs.
Figure 5.9 Flowchart describing the road and traffic signs detection algorithm...
Figure 5.10 Lane detection algorithm involving noise reduction and edge detect...
Chapter 6
Figure 6.1 Types of attacks in drones.
Chapter 7
Figure 7.1 The four layer IoT architecture in smart environment.
Chapter 8
Figure 8.1 Detection of objects using a faster R-CNN.
Figure 8.2 Identify all vehicle and scan traffic on road.
Figure 8.3 Identify cars (vehicles) on road.
Figure 8.4 RMSE value in respect of number of hidden layers.
Chapter 9
Figure 9.1 Demonstrating different cyber attacks.
Chapter 10
Figure 10.1 Characteristics of UTM.
Figure 10.2 Big data techniques in UTM.
Figure 10.3 Benefits of big data technologies in UTM.
Figure 10.4 Applications of big data techniques in UTM.
Chapter 11
Figure 11.1 Representation of the comparison of both work w.r.t reliability.
Figure 11.2 Graphical representation of the speed to overcome the attack.
Chapter 12
Figure 12.1 Internet of Things in real-life applications.
Figure 12.2 Security and privacy challenges in IoV.
Figure 12.3 Three layer architecture.
Figure 12.4 Taxonomy of various securities related attacks in IoT based on arc...
Figure 12.5 Result comparison of classifiers.
Chapter 13
Figure 13.1 IoT in action.
Figure 13.2 The architecture of IoT layers.
Figure 13.3 Complete list of IoT attacks, including various attacks, attack su...
Figure 13.4 Different forms of cyber-attacks.
Figure 13.5 Active and passive attacks.
Figure 13.6 Clarification of different type point wise, cooperative, prescribe...
Figure 13.7 Classify the category of machine learning.
Chapter 14
Figure 14.1 Effect of tuning factors against the percentage of attack severity...
Figure 14.2 Generation of threats against minimum cost related to threats.
Figure 14.3 Comparative performance of popular threat models with threat model...
Figure 14.4 Comparative performance of popular threat models with threat model...
Chapter 15
Figure 15.1 Application model of artificial intelligence.
Figure 15.2 Analysis or different type research publication for Smart Industry...
Figure 15.3 Smart industry role for internet technologies.
Scrivener Publishing, 100 Cummings Center, Suite 541J, Beverly, MA 01915-6106
Publishers at Scrivener: Martin Scrivener ([email protected]) and Phillip Carmical ([email protected])
Edited by
Romil Rawat, A. Mary Sowjanya, Syed Imran Patel, Varshali Jaiswal, Imran Khan and Allam Balaram
This edition first published 2023 by John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, USA and Scrivener Publishing LLC, 100 Cummings Center, Suite 541J, Beverly, MA 01915, USA
© 2023 Scrivener Publishing LLC
For more information about Scrivener publications please visit www.scrivenerpublishing.com.
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, except as permitted by law. Advice on how to obtain permission to reuse material from this title is available at http://www.wiley.com/go/permissions.
Wiley Global Headquarters, 111 River Street, Hoboken, NJ 07030, USA
For details of our global editorial offices, customer services, and more information about Wiley products visit us at www.wiley.com.
Limit of Liability/Disclaimer of Warranty
While the publisher and authors have used their best efforts in preparing this work, they make no representations or warranties with respect to the accuracy or completeness of the contents of this work and specifically disclaim all warranties, including without limitation any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives, written sales materials, or promotional statements for this work. The fact that an organization, website, or product is referred to in this work as a citation and/or potential source of further information does not mean that the publisher and authors endorse the information or services the organization, website, or product may provide or recommendations it may make. This work is sold with the understanding that the publisher is not engaged in rendering professional services. The advice and strategies contained herein may not be suitable for your situation. You should consult with a specialist where appropriate. Neither the publisher nor authors shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages. Further, readers should be aware that websites listed in this work may have changed or disappeared between when this work was written and when it is read.
Library of Congress Cataloging-in-Publication Data
ISBN 9781119871958
Cover image: Pixabay.com
Cover design by Russell Richardson
This groundbreaking new volume provides and addresses the current challenges, approaches, and applications relating to autonomous vehicles using Internet of Things (IoT), machine learning (ML), deep learning, data science, cloud, fog and edge computing, computer vision, ANN, and artificial intelligence (AI) techniques, and it covers applications in driverless intelligent systems based on advanced computer science algorithms applicable to business engineering, automation, health informatics, military applications, transportation, education, and so on. Several companies are working on autonomous systems and developing cars, trucks, drones, and even aero-taxis (unmanned aerial vehicles [UAVs]) to offer an easy, flexible, and fast mode of transportation on the road or in the air while sharing real-time data and activities. Several self-driving or autonomous ("driverless") cars, trucks, and drones incorporate a variety of IoT devices and sensing technologies, such as sensors, gyroscopes, cloud computing, and the fog layer, allowing the vehicles to sense, process, and maintain massive amounts of data on traffic, routes, suitable times to travel, potholes, and sharp turns, as well as robots for pipe inspection in the construction and mining industries. Autonomous vehicles are a broad class of automation that includes farming vehicles such as irrigators, tractors, and buggies; mining vehicles such as drilling rigs; and industrial vehicles such as forklifts and car crash testing vehicles.
The Autonomous Vehicles project is a long-term scientific research program that aims to study and understand these worldwide phenomena through a computational, data-driven methodology. We have created various multilingual data mining, text mining, and web mining procedures to perform link analysis, content analysis, web metrics (technical sophistication) analysis, sentiment analysis, authorship analysis, and video analysis in our research. The approaches and techniques developed in this project contribute to advancing the field of autonomous vehicle applications.
Autonomous Vehicles: Using Machine Intelligence is organized into distinct sections that provide comprehensive coverage of important topics. The sections are:
Activity and Threat Analysis Using Machine Intelligence
Algorithms for Autonomous Cars and Vehicles
Drone Technology for Internet of Vehicles
IoT-Based Smart City Automation for Connected Vehicles
Traffic Management Systems for UAV’s and Smart City Models
This book aims to convey an overview of autonomous vehicle methodologies, suggest a systematic, computational approach to problem-solving, and lay out various tactics and strategies. Researchers, security professionals, experts in autonomy, and strategists will all benefit from the knowledge it carries. The monograph can likewise serve as reference material or a textbook in graduate-level courses related to state-of-the-art models of Autonomous Vehicles: Using Machine Intelligence.
The book is targeted at academic researchers, research scholars, practitioners, students, traffic engineers, city planners, consultants, traffic planners, vehicle designers, and all key stakeholders working on intelligent engineering solutions. Contributions are based on conceptual and theoretical solutions, quantitative or empirical results, and model/simulation-based experimental studies.
Amit Juyal1,2, Sachin Sharma1 and Priya Matta1*
1Department of Computer Science and Engineering, Graphic Era (Deemed to be University), Dehradun, Uttarakhand, India
2School of Computing, Graphic Era Hill University, Dehradun, Uttarakhand, India
Autonomous driving is self-driving without the intervention of a human driver. A self-driving autonomous vehicle is designed with the help of high-technology sensors that can sense the traffic and traffic signals in its surroundings and move accordingly. It is necessary for a self-driving vehicle to make the right decision at the right time in an uncertain traffic environment: any unusual anomalous activity or unexpected obstacle that an autonomous vehicle fails to detect can lead to a road accident. For decision-making in autonomous vehicles, precisely designed and optimized software is developed, intensively trained, and installed in the vehicle's computer system. But despite this trained software, some anomalous activities may still be hard to detect promptly during self-driving. Therefore, automatic detection and recognition of anomalies in autonomous vehicles is critical to a safe drive. In this chapter, we discuss and propose a deep learning method for detecting anomalous activities of other vehicles that can endanger safe driving in an autonomous vehicle. The chapter focuses on the various conditions and possible anomalies that must be handled when developing software for autonomous vehicles using deep learning models. A variety of deep learning models were tested for detecting abnormalities, and we found that deep learning models can detect anomalies in real time. We also observed that incremental development of YOLO (You Only Look Once) makes it more accurate and agile in object detection. We suggest that anomalies should be detected in real time, and YOLO can play a vital role in anomalous activity detection.
Keywords: Autonomous self-driving, AI, deep learning, YOLO, R-CNN, Fast R-CNN, Faster R-CNN, SSD
A crucial problem for the success of autonomous vehicles is ensuring safe driving. Before being released to the general public, self-driving cars must be thoroughly trained and tested, and they must not compromise the safety of passengers or of other traffic objects such as vehicles, bikers, cyclists, and pedestrians. Self-driving cars are controlled by software, and that software must be trained in such a way that it performs well under all circumstances and conditions. The following points need to be considered while developing software for autonomous vehicles.
Infrastructure: In the case of self-driving vehicles, infrastructure can be crucial. The vast majority of the world's roads and transportation infrastructure are designed for human use, and autonomous vehicles will be required to operate within this existing infrastructure. Using the current infrastructure is a challenging task for a self-driving vehicle, so the software should be trained in such a way that it can easily adapt to the existing road infrastructure.
Traffic conditions: In real time, it is very difficult to predict what traffic will do next, and it is almost impossible to accommodate every traffic scenario while developing software for autonomous vehicles. AI-based algorithms should therefore be developed in such a way that they can learn by themselves with time and experience.
Weather conditions: Weather can affect driving ability in autonomous vehicles. Inputs from various sensors and cameras may be degraded by bad weather, and in heavy rain or a snowstorm, road markings and lane information can be hidden. An autonomous vehicle navigation system should be developed with weather in mind and should be trained and tested in all weather conditions.
Software security: Self-driving cars depend completely on software, and software can be hacked or infected by viruses (malicious computer code). Computer viruses can cause unexpected glitches in self-driving cars, and these glitches can be harmful, especially while driving at high speed. The software therefore needs to be secured against unauthorized access and viruses to ensure safe driving.
The rest of the chapter is outlined as follows. Section 1.2 gives the literature review. Section 1.3 describes an artificial intelligence approach in autonomous vehicles, Section 1.4 outlines technologies inside an autonomous vehicle, Section 1.5 shows major tasks in an autonomous vehicle using AI, Section 1.6 shows the benefits of autonomous vehicles, and Section 1.7 describes applications of autonomous vehicles. In Section 1.8, anomalous activities and their categorization are described, while Section 1.9 describes deep learning methods in an autonomous vehicle. Section 1.10 shows the working of YOLO, Section 1.11 presents the proposed method, and Section 1.12 presents the proposed algorithms, while Section 1.13 is a comparative study and discussion, and Section 1.14 presents the conclusion of this chapter.
A security model has been suggested that can deal with three types of cyber-attacks on electronic control units (ECUs). Over the years, the automobile sector has improved its technology and advanced car production. To make vehicles more comfortable and automated, companies are researching new technology; one advancement is replacing some mechanical parts with electronic components to introduce automation into vehicles. An ECU is an electronic control unit that can communicate with other ECUs via messages. Modern ECUs rely upon the Controller Area Network (CAN) and ensure that all the critical parts of a vehicle, such as braking, engine, airbag, steering wheel, fuel indication, and acceleration, work properly. Due to the lack of security on the CAN bus network, it can be hacked, and attackers can perform malicious activities in the ECU. The authors' security mechanism can handle three types of message attacks: fuzzy, denial-of-service (DoS), and impersonation attacks. A deep learning-based network, the deep denoising autoencoder, was adopted in the proposed security framework, and the ecogeography-based optimization (EBO) algorithm was integrated with it. For experimental data, malicious messages injected into CAN traffic were used. The experimental results showed that the proposed deep denoising autoencoder method outperforms other machine learning models on three different CAN traffic datasets, achieving the highest hit rate and lowest miss rate [1].
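The reconstruction-error idea behind that autoencoder approach, learning to reconstruct normal CAN traffic and flagging messages that reconstruct poorly, can be sketched with a linear stand-in. The snippet below substitutes PCA (via SVD) for the paper's deep denoising autoencoder; the four-dimensional message features, the injected message, and the 99th-percentile threshold are all invented for illustration.

```python
import numpy as np

def fit_pca(X, k):
    """Fit a k-component linear 'autoencoder' on normal traffic features."""
    mu = X.mean(axis=0)
    # Principal directions of the centred data
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def reconstruction_error(X, mu, W):
    """Squared error after projecting onto the learned subspace and back."""
    Z = (X - mu) @ W.T          # encode
    Xhat = Z @ W + mu           # decode
    return ((X - Xhat) ** 2).sum(axis=1)

rng = np.random.default_rng(0)
# Toy "normal" messages vary along one dominant direction, plus small noise
normal = rng.normal(size=(500, 1)) @ np.array([[1.0, 0.5, 0.2, 0.1]])
normal += rng.normal(scale=0.01, size=normal.shape)
mu, W = fit_pca(normal, k=1)

# Threshold on the error distribution of normal traffic
threshold = np.percentile(reconstruction_error(normal, mu, W), 99)
injected = np.array([[0.0, 5.0, 0.0, 0.0]])   # off-manifold injected message
print(reconstruction_error(injected, mu, W)[0] > threshold)  # → True
```

The same flag-if-poorly-reconstructed logic carries over when the linear encoder/decoder is replaced with a trained deep network.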
A network called mIoUNet was proposed for detecting failure cases in semantic segmentation. Image segmentation identifies identical pixels and labels them with their corresponding class. In a predicted semantic segmentation map, some pixels may be labeled with the wrong class; for real-time applications like autonomous vehicles, this type of anomaly can lead to unsafe driving and result in accidents. The authors proposed a neural network that predicts the mean intersection over union (mIoU) to check whether the pixels were accurately classified. CNN and FCN architectures were used in mIoUNet. Experimental results revealed that the proposed method achieved an accuracy of 93.21% for mIoU prediction and 84.8% for failure detection. In another experiment with HMG's SVM camera acquisition dataset, the method achieved 90.51% mIoU prediction accuracy and 83.33% failure detection accuracy [2].
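mIoU, the quantity mIoUNet is trained to predict, is computed per class from the overlap of predicted and ground-truth label maps. A minimal reference implementation (the two-class toy maps are invented for illustration):

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """Mean intersection-over-union across classes present in pred or gt."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:                      # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

gt   = np.array([[0, 0, 1], [0, 1, 1]])
pred = np.array([[0, 0, 1], [1, 1, 1]])   # one pixel of class 0 mislabelled
print(mean_iou(pred, gt, num_classes=2))  # ≈ 0.708: mean of 2/3 (class 0) and 3/4 (class 1)
```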
Road infrastructure will play a key role in the success of self-driving cars, and one of the many causes of traffic accidents is bad road conditions. An Android application was developed using the OpenCV library to detect potholes and cracks in roads in real time. The proposed Automatic Pavement Distress Recognition (APDR) system was built by combining the Android framework with the OpenCV library and can detect road anomalies such as fatigue cracks, longitudinal cracks, potholes, and transversal cracks. The Local Binary Pattern (LBP) feature cascade classifier was employed to train the system on positive and negative samples. A custom image dataset of the streets of Rome (Italy) was constructed for the experimental work. Using the LBP feature cascade classifier, the proposed Android system can detect road anomalies directly from live video frames. The system was tested on three Android devices, and results showed that it performed well on an older-version Android device as well as on a new device, demonstrating the portability of the proposed system [3].
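The Local Binary Pattern feature underlying that cascade classifier compares each pixel with its eight neighbours and packs the comparisons into a byte. A minimal sketch of the basic 3×3 LBP code (the cascade training itself is handled by OpenCV and is not shown; the sample patch is invented):

```python
import numpy as np

def lbp_code(patch):
    """Basic 3x3 LBP: threshold the 8 neighbours against the centre pixel,
    reading clockwise from the top-left, and pack the bits into 0..255."""
    c = patch[1, 1]
    # clockwise neighbour order starting at the top-left corner
    order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (i, j) in enumerate(order):
        if patch[i, j] >= c:
            code |= 1 << bit
    return code

patch = np.array([[6, 5, 2],
                  [7, 6, 1],
                  [9, 8, 7]])
print(lbp_code(patch))  # → 241
```

Histograms of these per-pixel codes form the texture features the cascade stages test.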
Connected and automated vehicles (CAVs) capture surrounding information from various sensors and cameras. Accurate information is very important for self-driving cars because autonomous vehicles are controlled by software, and a sensor can produce an anomalous reading due to a fault or a cyber-attack. A faulty reading in an autonomous vehicle can lead to accidents, so real-time detection of anomalies is important. The experimental results of an anomaly detection method using a CNN and Kalman filtering showed that the proposed approach can detect anomalies and identify their sources with high accuracy, sensitivity, and F1 score [4].
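The Kalman-filtering side of that approach can be sketched with a 1-D constant-velocity filter: a reading whose innovation (measurement minus prediction) lies far outside the predicted uncertainty is flagged as anomalous. All noise levels, the 3-sigma gate, and the spoofed trajectory below are illustrative choices, not the paper's parameters.

```python
import numpy as np

def detect_anomalies(zs, dt=1.0, q=1e-3, r=0.1, gate=3.0):
    """Flag measurements whose innovation exceeds `gate` standard deviations."""
    F = np.array([[1, dt], [0, 1]])        # constant-velocity dynamics
    H = np.array([[1.0, 0.0]])             # we observe position only
    Q = q * np.eye(2)                      # process noise
    x = np.array([zs[0], 0.0])             # initial state [position, velocity]
    P = np.eye(2)
    flags = []
    for z in zs:
        x = F @ x                          # predict
        P = F @ P @ F.T + Q
        innov = z - (H @ x)[0]             # measurement residual
        S = (H @ P @ H.T)[0, 0] + r        # innovation variance
        flags.append(abs(innov) > gate * np.sqrt(S))
        K = (P @ H.T / S).ravel()          # Kalman gain
        x = x + K * innov                  # update
        P = (np.eye(2) - np.outer(K, H)) @ P
    return flags

zs = [float(t) for t in range(20)]         # smooth trajectory: position = t
zs[12] = 40.0                              # spoofed/faulty reading
flags = detect_anomalies(zs)
print(flags.index(True))  # → 12
```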
In autonomous vehicles, LiDAR, RADAR, cameras, and various sensors work together to collect surrounding traffic information, and self-driving cars depend entirely upon these electronic devices for decision-making. An electronic device in an autonomous vehicle may fail, and such a failure can be dangerous while driving. A federated sensor data fusion architecture was proposed to deal with sensor faults; it can detect a faulty sensor, using an SVM to identify the fault during obstacle detection. For the experimental work, the authors used the KITTI dataset, and the results showed that the proposed architecture can detect soft and hard faults from a particular sensor [5].
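The SVM-based fault flagging can be sketched as follows. The two toy features (reading mean and reading variance), the cluster locations, and the labels are invented for illustration; they are not derived from the KITTI experiments.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Toy per-sensor features: [mean reading, reading variance].
# Faulty sensors drift high and go nearly constant (low variance).
healthy = rng.normal(loc=[0.0, 1.0], scale=0.1, size=(50, 2))
faulty  = rng.normal(loc=[3.0, 0.1], scale=0.1, size=(50, 2))
X = np.vstack([healthy, faulty])
y = np.array([0] * 50 + [1] * 50)          # 0 = healthy, 1 = faulty

clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict([[3.1, 0.1], [0.1, 0.9]]))  # → [1 0]  (faulty, healthy)
```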
In an autonomous vehicle environment, vehicle identification is a challenging task. An impeded vehicle on the road can block traffic and cause a traffic jam, so such roadside anomalies should be detected on time to allow roadside assistance to be provided to the stalled vehicle. A deep learning-based method was proposed for detecting anomalous vehicles and for vehicle reidentification. For reidentification, the input image was cropped in the image pre-processing step; in the next step, features were extracted using adaptive attention; and in the final post-processing step, relatively discriminative features were generated. Anomalous vehicles move more slowly than normal vehicles. Mask R-CNN was used as the detector, with immobile target vehicles considered vehicles of interest for anomalous vehicle detection. The authors proposed a two-stage method: in the first stage, the model was trained to identify vehicles slower than normal by searching for sequential bounding boxes that overlap spatially; in the second stage, the initial proposals were refined by establishing a cuboid search radius over the initial search [6].
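The first stage, spotting vehicles that barely move by looking for sequential bounding boxes with high spatial overlap, can be sketched as follows; the IoU threshold, frame count, and tracks are illustrative choices, not the paper's values.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def is_stalled(track, min_frames=5, iou_thresh=0.8):
    """A track is a per-frame list of boxes; flag it if consecutive detections
    overlap heavily for at least `min_frames` frames (vehicle barely moving)."""
    run = 0
    for prev, cur in zip(track, track[1:]):
        run = run + 1 if iou(prev, cur) >= iou_thresh else 0
        if run >= min_frames:
            return True
    return False

stalled = [(100, 50, 140, 80)] * 8                            # immobile vehicle
moving  = [(100 + 20 * t, 50, 140 + 20 * t, 80) for t in range(8)]
print(is_stalled(stalled), is_stalled(moving))  # → True False
```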
A highway traffic anomaly (HTA) dataset was proposed for detecting anomalous traffic incidents and patterns. The HTA dataset was systematically constructed from the Berkeley DeepDrive dataset, which contains high-resolution dash cam videos. CGAN, FlowNet, PredNet N+1, and PredNet N+6 models were evaluated on the HTA dataset, with AUC scores used to measure accuracy. The experimental results showed that state-of-the-art models achieved low AUC scores, which is not acceptable for real-time applications like self-driving cars. Except for impeded vehicle detection, PredNet N+6 achieved the highest AUC score among the models for detecting anomalies [7].
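The AUC metric used in that evaluation can be computed directly from anomaly scores via the rank (Mann-Whitney) formulation; a minimal reference implementation with made-up scores:

```python
def roc_auc(scores, labels):
    """AUC = probability that a random positive outscores a random negative,
    counting ties as half, via the pairwise (Mann-Whitney) formulation."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.7, 0.3, 0.2, 0.1]   # model anomaly scores
labels = [1,   1,   0,   1,   0,   0  ]   # 1 = true anomaly
print(roc_auc(scores, labels))  # → 8/9 ≈ 0.889
```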
A deep learning-based fault detection method was proposed. Autonomous vehicles may malfunction due to electrical or mechanical failure. For safe driving in autonomous vehicles, faults must be automatically detected so that appropriate action can be taken by self-driving cars. Simulation data was generated. Using wavelet transform, the generated one-dimensional data was converted into a two-dimensional image. Three different channels of these 2D signals were fed into the deep learning model to classify whether the signal is faulty or not [8].
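The 1-D-signal-to-2-D-image step can be illustrated with a short-time Fourier spectrogram, used here as a simple stand-in for the wavelet transform described above; the signal, window length, and hop size are invented for illustration.

```python
import numpy as np

def spectrogram(signal, win=64, hop=32):
    """Turn a 1-D signal into a 2-D time-frequency magnitude image."""
    window = np.hanning(win)
    frames = [signal[i:i + win] * window
              for i in range(0, len(signal) - win + 1, hop)]
    # magnitude of the one-sided FFT of each frame -> (freq, time) image
    return np.abs(np.fft.rfft(frames, axis=1)).T

t = np.linspace(0, 1, 1024)
sig = np.sin(2 * np.pi * 50 * t)                 # healthy 50 Hz component
sig[700:] += np.sin(2 * np.pi * 200 * t[700:])   # fault appears late in the signal
img = spectrogram(sig)
print(img.shape)  # → (33, 31): a 2-D image ready to feed a CNN classifier
```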
Intelligent traffic monitoring systems can be helpful for safe driving in autonomous vehicles: they can prevent traffic jams and further accidents by providing information about the time and place of accidents to other vehicles. A decision tree-based method was proposed to extract road accident anomalies from video, with deep learning (YOLOv5) used in the proposed pipeline. Traffic videos were first sorted using a video sorting system; then foreground object detection and background estimation were performed concurrently. Background images were fed to an object detector to characterize potential anomalies, and a decision tree algorithm based on predefined rules was then used to classify the anomalies. The proposed method achieved an F1 score of 0.8571 and an S4 score of 0.5686 [9].
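The final rule-based classification step can be sketched as a decision procedure over detector outputs. The attributes, thresholds, and class names below are invented for illustration; the cited work applies a decision tree over its own predefined rules rather than these hand-written ones.

```python
def classify_event(obj):
    """Toy decision rules over a detected object's attributes:
    speed (km/h), seconds spent stationary, and whether it sits on the roadway."""
    if not obj["on_road"]:
        return "normal"                 # parked off the roadway: not an anomaly
    if obj["stationary_s"] > 30:
        return "stalled_vehicle"        # likely breakdown or accident
    if obj["speed_kmh"] < 5:
        return "slow_traffic"
    return "normal"

events = [
    {"on_road": True,  "speed_kmh": 0,  "stationary_s": 120},
    {"on_road": True,  "speed_kmh": 3,  "stationary_s": 0},
    {"on_road": False, "speed_kmh": 0,  "stationary_s": 999},
]
print([classify_event(e) for e in events])
# → ['stalled_vehicle', 'slow_traffic', 'normal']
```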
One study [10] proposed a novel unsupervised learning model based on a deep autoencoder to detect self-reported location anomalies in connected and autonomous vehicles (CAVs), using vehicle location and the Received Signal Strength Indicator (RSSI) as features. The proposed model was trained and tested on simulation data generated by OMNeT++, a modified INET framework, and the SUMO traffic generator, in an area of Bristol, UK. The DAE approach proposed in the study is useful for detecting CAV location anomalies in a complicated scenario, and the model connects unsupervised learning and anomaly detection in CAVs, which may be beneficial given the powerful properties of unsupervised learning in ITSs. In past years, many researchers introduced intrusion detection methods for CAVs using deep learning and discriminant analysis [11, 12], and a CAV misbehavior detection method was also proposed for service management [13]. Synthetic datasets, generated by means of simulation frameworks, are usually used for this kind of research [14].
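Why location plus RSSI makes a good feature pair can be illustrated with a simple physical consistency check: under a log-distance path-loss model, a vehicle's claimed distance implies an expected RSSI, and a large mismatch suggests a spoofed position. This is an illustrative check, not the study's autoencoder, and all model constants are invented.

```python
import math

def expected_rssi(distance_m, rssi_1m=-40.0, path_loss_exp=2.7):
    """Log-distance path-loss model: RSSI falls off with log10(distance)."""
    return rssi_1m - 10 * path_loss_exp * math.log10(distance_m)

def location_suspicious(claimed_distance_m, measured_rssi, tol_db=10.0):
    """Flag a claimed position whose implied RSSI disagrees with measurement."""
    return abs(expected_rssi(claimed_distance_m) - measured_rssi) > tol_db

# Honest vehicle ~100 m away; expected RSSI = -40 - 27*2 = -94 dBm
print(location_suspicious(100, -95))    # → False
# Spoofer claims 1000 m, but its signal is far too strong for that range
print(location_suspicious(1000, -95))   # → True
```

The autoencoder in the study learns such normal location/RSSI relationships from data instead of assuming a fixed propagation model.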
Another study proposed a Hidden Markov Model (HMM)-based behavior analysis method to assess the driving behavior of vehicles on the road and detect anomalous patterns. The algorithm used the real-time velocity and position of the surrounding vehicles, provided by the Conditional Monte Carlo Dense Occupancy Tracker (CMCDOT) framework. Each vehicle was classified into one of several observation states, in association with road information: Approaching, Braking, Lane Changing, and Lane Keeping. In this way, the ego-vehicle could observe the movements of nearby vehicles and infer their driving behavior by tracking and analyzing their motion and position. The CARLA simulator was used to generate several typical abnormal movements for two vehicles, and the results showed that the proposed method can successfully detect the risky moments [15].
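Inferring the most likely sequence of such driving states from observations is the classic Viterbi decoding problem for an HMM. The sketch below uses the four state names from the study, but the transition matrix, emission matrix, and discrete observation symbols are illustrative assumptions, not the parameters of [15].

```python
import numpy as np

states = ["Approaching", "Braking", "LaneChanging", "LaneKeeping"]

# Illustrative parameters over 3 discrete observation symbols:
# 0 = closing fast, 1 = slowing down, 2 = lateral drift
A = np.array([[0.6, 0.2, 0.1, 0.1],    # transition probabilities
              [0.2, 0.6, 0.1, 0.1],
              [0.1, 0.1, 0.6, 0.2],
              [0.2, 0.1, 0.2, 0.5]])
B = np.array([[0.7, 0.2, 0.1],         # emission probabilities
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8],
              [0.4, 0.3, 0.3]])
pi = np.array([0.25, 0.25, 0.25, 0.25])

def viterbi(obs):
    """Most likely state sequence for a discrete-observation HMM (log domain)."""
    T, N = len(obs), len(pi)
    logd = np.log(pi) + np.log(B[:, obs[0]])
    back = np.zeros((T, N), dtype=int)
    for t in range(1, T):
        scores = logd[:, None] + np.log(A)   # scores[i, j]: state i -> state j
        back[t] = scores.argmax(axis=0)
        logd = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(logd.argmax())]
    for t in range(T - 1, 0, -1):            # backtrack best predecessors
        path.append(int(back[t, path[-1]]))
    return [states[s] for s in reversed(path)]

print(viterbi([0, 0, 1, 1, 2, 2]))
# ['Approaching', 'Approaching', 'Braking', 'Braking', 'LaneChanging', 'LaneChanging']
```

Anomaly detection then amounts to flagging decoded sequences (or their likelihoods) that deviate from expected driving patterns.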
For detecting anomalous activity and ensuring safe driving in an autonomous vehicle, one line of work focuses on an anomaly segmentation framework that combines two complementary approaches to anomaly detection: uncertainty estimation and resynthesis, which regenerates the image from the semantic label map to find dissimilarities with the input image. The work combines both methodologies to produce robust predictions for anomaly segmentation [16]. A pixel-wise anomaly detection framework for complex driving scenes (i.e., urban landscapes) was designed that uses uncertainty maps, such as softmax entropy, softmax distance, and perceptual differences, to improve over existing resynthesis methods in finding dissimilarities between the input and generated images. The method also works with lighter segmentation and synthesis networks, making it ready for deployment in autonomous machines. It is the best overall method across datasets on the Fishyscapes benchmark and can be used with any already-trained state-of-the-art segmentation model [16].
To handle anomalies such as signal interference, software and hardware errors, cyber-attacks, and more, a robust real-time deep learning model was proposed. An LSTM autoencoder was used to extract signal features and perform classification. Channel boosting was applied to improve the model, which achieved an accuracy of 95.5% and a precision of 94.2% [17].
In a real-world scenario, an autonomous vehicle also needs to track the motion of other vehicles, since sudden braking by another vehicle may cause an accident. One way to track another vehicle’s motion is to detect its brake lights, which signal to other vehicles that it is about to slow down. Intelligent brake light detection and recognition can therefore help self-driving cars achieve flawless and secure driving. A two-stage method has been proposed to detect vehicles and identify their tail brake lights in real time from a single image. BVLC AlexNet, an 8-layer CNN model, was trained on a rear-vehicle database. In the vehicle detection algorithm, road segmentation and vanishing point detection improve the accuracy of the proposed method, and lidar and vision are fused at the detection level. This two-stage method is capable of detecting and recognizing vehicle tail brake lights even in noisy images [18].
Although many industries and researchers are working on autonomous vehicles, the technology still needs considerable improvement before it matures. An autonomous vehicle is one that can be used for transport in the real world without human intervention; however, the autonomous vehicles launched by automobile manufacturers so far have been tested only in controlled environments. In the real world, distance from other vehicles, pedestrians, animals, speed-breakers, traffic signals, and other unpredictable dynamic surroundings all play a role in safe driving. The autonomous car must therefore be equipped with a reliable and sophisticated system that can correctly interpret information from numerous sensors and devices in order to understand what is going on around it and drive accordingly. Figure 1.1 shows the levels of automation in vehicles: type VI represents the “fully autonomous” scenario, type I represents “fully human driving”, and types I through VI span the spectrum between human and machine driving.
Many industries, such as agriculture, textiles, health care, and automobiles, already use Artificial Intelligence (AI) to automate tasks. The automobile industry uses AI in vehicle manufacturing and is exploring AI to build autonomous vehicles; AI can make self-driving cars a reality for everyday transportation and is a revolutionary component in their manufacture. Mimicking human driving skills is a challenging and difficult task for autonomous vehicles. They rely on various sensors, cameras, and other electronic equipment to collect surrounding traffic information, and they need intelligent software that can process the collected data and plan a trajectory for a safe autonomous drive. AI techniques such as deep learning and machine learning can be used to recognize surrounding objects, detect lanes, plan routes, and control the mechanical components of a vehicle. For safe driving in an autonomous vehicle, object detection is critical. Object detection can be accomplished using a variety of deep learning techniques such as R-CNN, Fast R-CNN, and Faster R-CNN; however, autonomous vehicles must identify barriers and objects in real time, so they require algorithms that are agile in object detection. You Only Look Once (YOLO) and Single-Shot Detector (SSD) are deep learning algorithms that can detect objects in real time and can therefore be used in autonomous vehicles. Deep learning algorithms must interact with various hardware components, such as sensors and cameras, and process the resulting data in real time, which demands intensive computational power. A traditional CPU with limited memory is not sufficient to execute deep learning algorithms; therefore, autonomous vehicles require extra processing power, such as Graphics Processing Units (GPUs), for real-time object detection.
Figure 1.1 Levels of automation in vehicles.
Hardware and software are the main components of an autonomous vehicle and are primarily responsible for driving. The hardware (cameras, sensors, LiDAR, and RADAR) collects data from the surroundings, and the software processes it. Computer vision, machine learning, LiDAR, and RADAR are essential technologies for autonomous vehicle functioning. In an autonomous vehicle, machine learning algorithms continuously learn from the surrounding environment and predict possible outcomes.
A. HARDWARE TECHNOLOGIES
Cameras for Computer Vision: The camera is the main source of visual data or live video in an autonomous vehicle. This captured data is used by deep learning algorithms to detect objects.
Radio Detection and Ranging (RADAR): RADAR technology is used to detect objects such as aircraft, ships, and spacecraft. It uses radio waves to determine the range, velocity, and angle of objects. In an autonomous vehicle, radar is used to detect other vehicles, pedestrians, and obstacles, and it works in inclement weather and low-light conditions.
Light Detection and Ranging (LiDAR): LiDAR is a remote sensing technology that uses laser light pulses to measure the distance to objects such as pedestrians, other vehicles, and immovable obstacles. LiDAR is better than RADAR at detecting small objects and can build an exact 3D monochromatic image of an object. However, LiDAR does not perform well in bad weather or at night, and it is more expensive than RADAR.
Sensors: A sensor is a small device that reacts to environmental stimuli such as light, speed, pressure, humidity, and temperature. The autonomous vehicle uses sensors to detect what is actually happening around it. It also uses the traditional Global Positioning System (GPS) for path planning.
Computer resources: A fully autonomous vehicle needs extreme computational power. A Graphics Processing Unit (GPU) is required alongside the traditional Central Processing Unit (CPU) to execute and process complex machine learning and deep learning algorithms.
B. SOFTWARE TECHNOLOGIES
In autonomous vehicles, machine learning algorithms continuously interpret the inputs from various devices and predict possible changes. As shown in Figure 1.2, the image captured by the camera goes to the processing unit, where important features are selected for object detection. Based on the detected object and its features, the vehicle then performs the appropriate action.
Images captured by cameras may contain irrelevant data. Pattern recognition algorithms such as Support Vector Machines (SVM), k-nearest neighbours, and decision trees are used to remove irrelevant data points. Pattern recognition helps in object classification and recognition.
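As a concrete illustration of one of these pattern recognition methods, the sketch below implements k-nearest-neighbour classification from scratch. The two-dimensional "features" (an aspect ratio and an edge-density value per detected region) and the two class clusters are invented for the example.

```python
import numpy as np

def knn_predict(X_train, y_train, X_query, k=3):
    """Classify each query by majority vote among its k nearest training points."""
    preds = []
    for q in X_query:
        d = np.linalg.norm(X_train - q, axis=1)        # Euclidean distances
        nearest = y_train[np.argsort(d)[:k]]           # labels of the k closest
        preds.append(np.bincount(nearest).argmax())    # majority vote
    return np.array(preds)

# Toy features per detected region: (width/height ratio, mean edge density)
rng = np.random.default_rng(1)
cars = rng.normal([2.0, 0.3], 0.1, size=(20, 2))           # label 0
pedestrians = rng.normal([0.4, 0.6], 0.1, size=(20, 2))    # label 1
X = np.vstack([cars, pedestrians])
y = np.array([0] * 20 + [1] * 20)

print(knn_predict(X, y, np.array([[1.9, 0.35], [0.5, 0.55]])))  # [0 1]
```

The same vote-among-neighbours principle applies in higher-dimensional feature spaces extracted from real images.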
Figure 1.2 Possible vehicle action prediction based on features.
Regression Algorithms: Regression algorithms are used for object detection and localization. Neural network regression, random forest regression, and Bayesian regression are well-known examples.
Clustering Algorithms: Clustering algorithms are mainly used for object recognition. Classification algorithms may predict false results in object recognition due to deformed, low-resolution, or blurred images; clustering algorithms instead group similar data points into clusters. Commonly used examples are k-means and the multi-class neural network.
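The k-means grouping step can be sketched in a few lines with Lloyd's algorithm. The two well-separated synthetic blobs below stand in for feature vectors of two object types; the data and parameters are illustrative.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's algorithm: assign to nearest centre, then recompute centres."""
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.linalg.norm(X[:, None] - centres, axis=2).argmin(axis=1)
        # Recompute each centre; keep the old centre if its cluster is empty
        centres = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centres[j] for j in range(k)])
    return labels, centres

rng = np.random.default_rng(2)
blob_a = rng.normal([0, 0], 0.2, (30, 2))   # features of one object type
blob_b = rng.normal([5, 5], 0.2, (30, 2))   # features of another
labels, centres = kmeans(np.vstack([blob_a, blob_b]), k=2)
print(len(set(labels[:30])), len(set(labels[30:])))   # each blob -> one cluster
```

Because clustering needs no labels, it can group together blurred or deformed instances of an object that a classifier might reject.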
Decision-Making Algorithms: After an object has been detected and classified, decision-making algorithms choose the appropriate action for a safe drive. Commonly used examples are decision trees and random forests.
Object classification, localization, and detection are computer vision problems. Object classification identifies the class of an object present in a digital image; object localization finds the object’s exact position in the image; and object detection classifies and localizes more than one object in a single image. Recent studies show that deep learning algorithms achieve remarkable performance in object classification, localization, and detection.
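Localization quality is commonly scored with intersection-over-union (IoU) between a predicted box and the ground-truth box; a minimal sketch, with the (x1, y1, x2, y2) corner convention chosen here for illustration:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle (width/height clamp to zero if boxes do not intersect)
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))   # 25 / 175 ~= 0.1429
```

A detection is typically counted as correct when its IoU with the ground truth exceeds a threshold such as 0.5, which is the "0.5 IOU" criterion referenced in detection benchmarks later in this chapter.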
Object detection is very important for safe driving in autonomous vehicles. After detecting an object, a vehicle can decide its next action: turn left, turn right, accelerate, or brake. By detecting objects in real time, deep learning algorithms can make self-driving viable for everyday transportation in the future.
Path planning and execution: Planning safe and secure paths is important for autonomous vehicles. Accurate path planning can prevent accidents and help reduce traffic congestion, letting the vehicle choose a safe, convenient, and economically efficient path. Machine learning, as a part of AI, can learn from past experience and hence predict the best path plan for the vehicle, one that is both accurate and safe.
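Route planning over a road network can be sketched with Dijkstra's shortest-path algorithm. The toy road graph and edge costs below are invented for illustration; in practice the weights could encode distance, congestion, or risk estimates learned from past experience.

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm; graph maps node -> list of (neighbour, cost)."""
    queue = [(0.0, start, [start])]     # priority queue ordered by path cost
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
    return float("inf"), []             # goal unreachable

# Toy road network between four junctions
roads = {
    "A": [("B", 2.0), ("C", 5.0)],
    "B": [("C", 1.0), ("D", 4.0)],
    "C": [("D", 1.0)],
    "D": [],
}
print(shortest_path(roads, "A", "D"))   # (4.0, ['A', 'B', 'C', 'D'])
```

Replacing static edge weights with continually updated, learned cost estimates is what turns this classical search into the adaptive path planning described above.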
The next step is to follow the planned path safely. For a safe drive, autonomous vehicles have to detect traffic objects such as cars, bicycles, pedestrians, traffic signs, and signals. The main inputs for autonomous vehicles are digital images, and vehicles must detect the objects within them. With the help of deep learning algorithms, a vehicle can predict the class and exact location of an object in a given image.
Vehicle monitoring: The vehicle monitoring system is an important component of an autonomous vehicle. It reports the current condition of the vehicle and can predict problems before they arise; machine learning algorithms can be used for this type of prediction.
Improving traffic safety: Vehicle crashes have many causes: high speed, adverse weather, rash driving, drunk driving, careless driving, and napping during long drives. Vehicle crashes are a leading cause of human casualties. In comparison to a human-driven vehicle, an autonomous vehicle could provide increased safety while traveling, because software rather than a human instructs the vehicle’s control system, and software makes fewer errors than humans.
Easy transportation for physically handicapped and elderly people: An autonomous vehicle can be a facility for those who are not able to drive. They can use an autonomous vehicle for transportation without being a burden on others.
Follow traffic rules: Every year, road accidents caused by violations of traffic rules result in loss of life as well as economic losses. Breaking traffic rules is a natural human tendency, sometimes caused by circumstances and other times by a lack of knowledge of the rules, and it can result in traffic accidents. Such accidents can be reduced by an intelligent traffic sign and signal recognition system in an autonomous vehicle.
Reduced traffic congestion: Human driving tendencies such as frequent lane changes, stop-and-go driving, and rash driving are main causes of traffic congestion, and rash driving is also a main cause of accidents, which lead to further congestion. In congestion, a vehicle generates more CO2 than in normal traffic, which harms the environment and wastes time and money. Fewer crashes would also mean less congestion. An autonomous vehicle can reduce the frequency of crashes because its built-in intelligent software removes human driving behaviors such as rash and stop-and-go driving; by following traffic rules and causing fewer accidents, it can therefore reduce traffic congestion.
Efficient parking: Car parking is one of the most challenging tasks, and organizations often dedicate a large share of their land to it. Autonomous vehicles can reduce parking problems because they can drop passengers at one location and park themselves away from it.
Reduced transportation costs: Transportation costs include driver fees along with fuel and other charges. Autonomous vehicles can reduce transportation costs because they do not require a driver.
Emergency warning system: The autonomous vehicle can detect accidents on roads and can notify emergency services and other vehicles.
Automatic speed control: The autonomous vehicle can automatically control the vehicle’s speed according to the speed limit signal on the road. This feature can reduce the likelihood of vehicle accidents.
Transportation accessibility: Autonomous vehicles can reduce driving effort and can be easy and accessible transportation for seniors and people with disabilities.
Safe at night and in bad weather: An autonomous vehicle can be safe to drive at night and in bad weather. This contrasts with human-driven cars, where drivers may become fatigued or exhausted, or even fall asleep while driving, which may cause nighttime accidents.
In autonomous vehicles, anomalous activity detection from video is a very challenging task. Safe driving requires anomalous activities to be detected in real time so that autonomous vehicles can make their own decisions at the right time and prevent road accidents. In the following section we describe possible types of anomalous activity in autonomous vehicles.
Hacking anomalies: In an intelligent transport system, autonomous vehicles communicate with each other through the internet. For a smooth and safe drive, vehicles need to be connected to the internet, forming the Internet of Vehicles (IoV). Once an autonomous vehicle is connected to the internet, it becomes vulnerable to cyber-attacks, a major class of anomalous activity that endangers safe driving. Sniffing, denial of service, and distributed denial of service are some examples. In the IoV, deep learning-based intrusion detection systems (IDS) can help detect suspicious anomalous activity between vehicles.
Faulty sensors: Autonomous vehicles depend entirely on inputs such as LiDAR, RADAR, sensors, and cameras for information about their surroundings. A faulty sensor can cause abnormal behavior in an autonomous vehicle and lead to an accident.
Other vehicles’ behavior: The safety of a vehicle on the road also depends on how nearby vehicles are being driven. There is always a possibility that the driver of a nearby vehicle is intoxicated or in poor health, and driving near such vehicles can lead to road accidents. Such anomalous behavior should be detected in real time so that the self-driving car can maintain a safe distance from these vehicles.
False object detection: There is always a probability of false object detection in an image sequence. Advertising companies use public transport to promote their products by pasting product pictures on buses and taxis, and these pictures may contain images of people, animals, vehicles, etc. Wet roads are another example, as reflections can make the image of one object appear on another vehicle. The deep learning model in an autonomous vehicle should be trained to differentiate between actual objects and images of objects.
Road marking anomalies: Some traffic signs, such as zebra crossings, stop lines, and lane-change markings (rumble strips), are painted on roads. Vehicles should detect these road markings in real time so they can decide what action to take next. Markings may, however, be covered with snow or water due to weather conditions; such road marking anomalies affect not only autonomous driving but also human driving, and detecting them is difficult for autonomous vehicles.
Anomalies such as potholes, road cracks, and the behavior of other vehicles can disrupt the normal, safe drive of an autonomous vehicle, and analyzing the video stream is a useful way to detect them. Videos are nothing but a continuous flow of still images, and recognizing or detecting an object in a digital image is a computer vision problem. In recent years, researchers have shown that deep learning is helpful for object classification, localization, and detection. The convolutional neural network (CNN) is the backbone of most object detection models; CNN-family models such as R-CNN, Fast R-CNN, Faster R-CNN, YOLO, and SSD are the most popular.
R-CNN was proposed by R. Girshick et al. [19] in 2014. Selective search was used to propose candidate regions, with two thousand region proposals extracted from each test image; compared to exhaustively applying a CNN over the image, R-CNN uses a limited number of region proposals. AlexNet, a deep CNN, was used as the feature extractor: a fixed-length feature vector was extracted from each region using the CNN, and a linear SVM was then used to classify each region. A regression model was used to predict the bounding box and reduce the localization error. On a multicore CPU, R-CNN achieved 53.7 mAP in 10 seconds.
Fast R-CNN: R. Girshick et al. proposed an extension of R-CNN named Fast R-CNN, which was faster than its predecessor. In Fast R-CNN, instead of 2,000 region proposals, the entire image along with the regions of interest (ROIs) is fed into the network. A pretrained CNN is used for feature extraction, followed by a custom ROI pooling layer at the end of the network that extracts features from each region. Each feature vector is then processed by fully connected layers whose output splits into two branches: a softmax layer performs classification in one branch, while the other predicts the object’s bounding box coordinates. The model achieved 66% mAP [20].
Faster R-CNN: Most CNN-based object detection algorithms used selective search to generate region proposals; Faster R-CNN by Ren et al., on the other hand, used a separate convolutional network, the region proposal network (RPN), for this purpose. The ConvNet and the RPN share computation across the entire object detection process. For each region, the ROI pooling layer is used for object classification and the bounding box is predicted.
The role of deep learning in autonomous vehicles is not only to detect objects accurately but to do so with agility. CNN-family methods like R-CNN, Fast R-CNN, and Faster R-CNN are based on region proposals: most of them first use selective search to find possible regions and then perform detection, which slows down the process. Real-time applications such as autonomous vehicles require accurate object detection within milliseconds so that the vehicle can take the necessary action. Because of their agility in object detection, YOLO and SSD models can be used in autonomous vehicles [21]. When Faster R-CNN, SSD, and YOLOv2 were evaluated using mAP and other metrics such as recall and precision, YOLOv2 and SSD were found to be four times faster than Faster R-CNN in object detection, and YOLOv2 achieved a better mAP than SSD [22].
YOLO was originally proposed by J. Redmon et al. [23] in 2016 as a single neural network that predicts bounding boxes and class probabilities directly from an image. The proposed YOLO architecture is a unified architecture that is extremely fast and can detect objects at 45 frames per second, which can be considered real-time detection. The YOLO architecture, shown in Figure 1.3, has 24 convolutional layers and 2 fully connected layers.
The original YOLO achieved 51-57% mAP on the COCO benchmark dataset, but J. Redmon et al. noted some problems: compared to Faster R-CNN, YOLO makes more localization errors, and it achieved lower recall than the region proposal methods [23]. In 2017, J. Redmon introduced another version of YOLO, known as YOLO9000 or YOLOv2. It was trained jointly on detection and classification data across thousands of object categories and was capable of detecting over 9,000 object classes in real time. Experimental results showed that YOLOv2 achieved 76.8 mAP at 67 FPS and 78.6 mAP at 40 FPS on VOC2007 [24]. In 2018, J. Redmon et al. proposed the third version, YOLOv3. At a 320 × 320 input size, YOLOv3 achieved 28.2 mAP; on the 0.5 IOU detection metric, it achieved 57.9 AP in 51 ms, comparable to 57.5 AP in 198 ms for competing detectors, making it 3.8 times faster [25]. In 2020, YOLOv4 was proposed by Alexey Bochkovsky et al. [26]; this time J. Redmon was not part of the team. In experimental results, YOLOv4 performed the fastest and most accurate real-time object detection, using BoF (bag of freebies) and several BoS (bag of specials) techniques, and achieved 43.5 AP on the COCO dataset at 65 FPS. Two months after the YOLOv4 release in 2020, Glenn Jocher [27] released YOLOv5, implemented in PyTorch; data augmentation and auto-learned anchor boxes were its major improvements. Studies [28, 30] found that deep learning methods can be used in autonomous vehicles for real-time object detection, and YOLO and SSD can also help with real-time traffic sign detection and recognition [29, 31].
Figure 1.3 Original YOLO framework. (Source: [19].)
The entire image is fed into the network for classification and bounding box prediction. The image is divided into an S × S grid, and a feature map is created from the entire image; the feature map is used to predict a bounding box for each class. For each grid cell, N bounding boxes are generated with confidence scores. The confidence score determines the presence or absence of an object in a grid cell: a score of zero represents the absence of an object, while a class probability and bounding box above a certain confidence score indicate an object’s presence (Figure 1.4 and Figure 1.5).
For each grid cell, YOLO predicts an output y. If there are four classes, then the output y has dimension 3 × 3 × 1 × 9, where 3 × 3 is the grid, 1 is the number of bounding boxes, and 9 is the length of each cell’s vector. Pc represents whether an object is present in the grid cell; bx, by, bh, bw are the bounding box coordinates; and c1, c2, c3, c4 are the classes. A Pc value of 0 indicates the absence of an object in that grid cell, in which case there is no need to calculate the bounding box and other values; a Pc value of 1 represents the presence of an object. Figure 1.5 depicts how YOLO predicts the bounding box coordinates.
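The per-cell output just described can be decoded as in the sketch below. The 3 × 3 grid, one box per cell, four classes, and the 0.5 confidence threshold mirror the example above, but the helper and its values are illustrative, not part of the YOLO reference implementation.

```python
import numpy as np

def decode_grid(y, conf_thresh=0.5):
    """Extract detections from an (S, S, B, 5 + C) YOLO-style output tensor.

    Each cell vector is [Pc, bx, by, bh, bw, c1..cC]; cells whose Pc falls
    below the threshold are skipped, since no box needs to be computed there."""
    S1, S2, B, _ = y.shape
    detections = []
    for i in range(S1):
        for j in range(S2):
            for b in range(B):
                pc = y[i, j, b, 0]
                if pc < conf_thresh:
                    continue                      # Pc ~ 0: no object here
                bx, by, bh, bw = y[i, j, b, 1:5]
                cls = int(y[i, j, b, 5:].argmax())
                detections.append((i, j, float(pc), (bx, by, bh, bw), cls))
    return detections

# 3x3 grid, 1 box per cell, 4 classes -> vectors of length 9
y = np.zeros((3, 3, 1, 9))
y[1, 2, 0] = [0.9, 0.5, 0.4, 0.6, 0.3, 0.1, 0.7, 0.1, 0.1]  # object of class 1
print(decode_grid(y))
```

Only the one cell with Pc = 0.9 survives the threshold, yielding a single detection of class 1 at grid position (1, 2), exactly the "skip cells with Pc = 0" behaviour described above.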
Figure 1.4 Prediction of bounding box for each grid cell.
Figure 1.5 Coordinates bx, by, bh, and bw specify the bounding box.
The serpent motion of a nearby vehicle can be a danger to the safe, smooth drive of an autonomous vehicle and can be considered an anomalous activity. In this section, we propose a method to detect nearby vehicles that are driving in a serpent manner. Figure 1.6 shows the serpent/turbulent drive of a nearby vehicle.
In Figure 1.6, vehicle A is an autonomous vehicle and vehicle B is a normal vehicle; it is clearly shown that vehicle B has a serpent motion. The camera at the front of the autonomous vehicle captures all other vehicles that come into view. Because of its serpent motion, vehicle B is considered an anomalous vehicle that could cause an accident for other vehicles; after detecting such vehicles, the autonomous vehicle should maintain a safe distance from them. Detecting such potentially accident-causing vehicles is important for a safe drive. Figure 1.7 shows the overall working of the proposed method for detecting nearby anomalous vehicles, whose main aim is to detect a vehicle with serpent motion. Our approach is based on YOLOv2 as the base detector, which has 19 convolutional layers, 5 max-pooling layers, and a softmax layer for object classification. YOLOv2 first takes the input image and divides it into S × S grid cells, then performs object classification and localization for each cell. The cell containing the center point of an object is responsible for predicting the bounding box with a confidence score; the bounding box coordinates include the center of the object (bx, by) and its width and height (bw, bh) relative to the cell.
Each video frame is fed into YOLOv2 to detect vehicles and license plates. The method detects the vehicles in each frame and extracts the vehicle registration number from the license plate. Each detected vehicle is entered in a primary table whose fields are the vehicle ID (Vid), the time of detection, and the number of times the vehicle has been detected (count). In parallel, the system maintains a secondary table recording those vehicles whose count value in the primary table exceeds a threshold. The system then checks the count values of all vehicles present in the secondary table; if a count value is greater than a further threshold, that vehicle is considered a dangerous vehicle that could cause accidents, and the system alerts the autonomous vehicle to maintain an appropriate safe distance. Table 1.1 presents a comparative analysis of threat assessment approaches, focusing on methods and state-of-the-art techniques for anomalous activities.
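The primary/secondary table bookkeeping described above can be sketched with plain dictionaries. The class name, the two thresholds, and the vehicle IDs below are illustrative assumptions, not values from the proposed method.

```python
from collections import Counter

class SerpentMotionTracker:
    """Count per-vehicle serpent-motion detections across frames and flag repeat offenders."""

    def __init__(self, watch_after=3, alarm_after=6):
        self.primary = Counter()        # primary table: vehicle id -> detection count
        self.secondary = set()          # secondary table: vehicles under observation
        self.watch_after = watch_after  # count needed to enter the secondary table
        self.alarm_after = alarm_after  # count needed to raise an alarm

    def observe(self, vehicle_id):
        """Register one serpent-motion detection for a vehicle in the current frame."""
        self.primary[vehicle_id] += 1
        if self.primary[vehicle_id] >= self.watch_after:
            self.secondary.add(vehicle_id)

    def dangerous(self):
        """Vehicles in the secondary table whose count exceeds the alarm threshold."""
        return {v for v in self.secondary if self.primary[v] >= self.alarm_after}

tracker = SerpentMotionTracker()
for frame_vehicles in [["KA01"], ["KA01", "MH12"], ["KA01"], ["KA01"],
                       ["KA01"], ["KA01"], ["MH12"]]:
    for vid in frame_vehicles:
        tracker.observe(vid)
print(tracker.dangerous())   # {'KA01'}
```

Vehicle KA01 is flagged after six serpent-motion detections, while MH12 never reaches the watch threshold; in the full system, the flagged set would trigger the safe-distance alert.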
Figure 1.6 The zigzag drive of a nearby vehicle can be unsafe for an autonomous vehicle.
Figure 1.7 Flow of proposed method.
Algorithm 1: To detect an anomalous vehicle
Algorithm 2: Reading the vehicle IDs with the highest count values from the secondary table.
Table 1.1 Comparative analysis.
Methods | Anomalous activity | Result
Security model based on a deep-learning network (deep denoising autoencoder) [1] | Deals with cyber-attacks on the ECU | Outperforms other ML models; achieved the highest hit rate and lowest miss rate
mIoUNet network [2] | Detecting failure cases in semantic segmentation | Achieved 93.21% mIoU accuracy and 84.8% failure detection
Automatic Pavement Distress Recognition (APDR) system [3] | Road anomalies such as potholes and cracks | Able to perform road anomaly detection
Anomaly detection method using CNN and Kalman filtering [4] | Detection of anomalous sensor readings | Achieved high accuracy, sensitivity, and F1 score
Federated sensor data fusion architecture [5] | Deals with sensor faults | Architecture can detect soft and hard faults from a particular sensor
Deep learning-based method [6] | Detection and re-identification of anomalous vehicles | Model can detect anomalous vehicles
Proposed highway traffic anomaly dataset [7] | |