173,99 €
Advanced Analytics and Deep Learning Models
The book provides readers with an in-depth understanding of the concepts and technologies behind advanced analytics and deep learning and of their importance in real-world applications such as e-healthcare, transportation, agriculture, and the stock market.
Advanced analytics is an approach to data analysis that combines machine learning, artificial intelligence, graph analysis, text mining, data mining, and semantic analysis. Going beyond traditional business intelligence, it is the semi-autonomous or fully autonomous analysis of data using a variety of techniques and tools.
Deep learning and data analysis are both at the heart of data science. Almost all private and public organizations collect large amounts of domain-specific data, and many companies, small and large, are exploring these data for existing and future technologies. Deep learning can also exploit large amounts of unsupervised data, which makes it especially beneficial and effective for big data. It can be used to address problems and challenges such as handling unlabeled and uncategorized raw data, extracting complex patterns from large volumes of data, fast information retrieval, and data tagging.
This book contains 16 chapters on artificial intelligence, machine learning, deep learning, and their uses in sectors such as stock market prediction, recommender systems for better service selection, e-healthcare, telemedicine, and transportation. There are also chapters on innovations and future opportunities involving fog/cloud computing and artificial intelligence.
Audience
Researchers in artificial intelligence, big data, computer science, and electronic engineering, as well as industry engineers in healthcare, telemedicine, transportation, and the financial sector. The book will also be a great source for software engineers and advanced students who are beginners in the field of advanced analytics in deep learning.
Page count: 573
Year of publication: 2022
Cover
Title Page
Copyright
Preface
Part 1: Introduction to Computer Vision
1 Artificial Intelligence in Language Learning: Practices and Prospects
1.1 Introduction
1.2 Evolution of CALL
1.3 Defining Artificial Intelligence
1.4 Historical Overview of AI in Education and Language Learning
1.5 Implication of Artificial Intelligence in Education
1.6 Artificial Intelligence Tools Enhance the Teaching and Learning Processes
1.7 Conclusion
References
2 Real Estate Price Prediction Using Machine Learning Algorithms
2.1 Introduction
2.2 Literature Review
2.3 Proposed Work
2.4 Algorithms
2.5 Evaluation Metrics
2.6 Result of Prediction
References
3 Multi-Criteria–Based Entertainment Recommender System Using Clustering Approach
3.1 Introduction
3.2 Work Related Multi-Criteria Recommender System
3.3 Working Principle
3.4 Comparison Among Different Methods
3.5 Advantages of Multi-Criteria Recommender System
3.6 Challenges of Multi-Criteria Recommender System
3.7 Conclusion
References
4 Adoption of Machine/Deep Learning in Cloud With a Case Study on Discernment of Cervical Cancer
4.1 Introduction
4.2 Background Study
4.3 Overview of Machine Learning/Deep Learning
4.4 Connection Between Machine Learning/Deep Learning and Cloud Computing
4.5 Machine Learning/Deep Learning Algorithm
4.6 A Project Implementation on Discernment of Cervical Cancer by Using Machine/Deep Learning in Cloud
4.7 Applications
4.8 Advantages of Adoption of Cloud in Machine Learning/ Deep Learning
4.9 Conclusion
References
5 Machine Learning and Internet of Things–Based Models for Healthcare Monitoring
5.1 Introduction
5.2 Literature Survey
5.3 Interpretable Machine Learning in Healthcare
5.4 Opportunities in Machine Learning for Healthcare
5.5 Why Combining IoT and ML?
5.6 Applications of Machine Learning in Medical and Pharma
5.7 Challenges and Future Research Direction
5.8 Conclusion
References
6 Machine Learning–Based Disease Diagnosis and Prediction for E-Healthcare System
6.1 Introduction
6.2 Literature Survey
6.3 Machine Learning Applications in Biomedical Imaging
6.4 Brain Tumor Classification Using Machine Learning and IoT
6.5 Early Detection of Dementia Disease Using Machine Learning and IoT-Based Applications
6.6 IoT and Machine Learning-Based Diseases Prediction and Diagnosis System for EHRs
6.7 Machine Learning Applications for a Real-Time Monitoring of Arrhythmia Patients Using IoT
6.8 IoT and Machine Learning–Based System for Medical Data Mining
6.9 Conclusion and Future Works
References
Part 2: Introduction to Deep Learning and its Models
7 Deep Learning Methods for Data Science
7.1 Introduction
7.2 Convolutional Neural Network
7.3 Recurrent Neural Network
7.4 Denoising Autoencoder
7.5 Recursive Neural Network (RCNN)
7.6 Deep Reinforcement Learning
7.7 Deep Belief Networks (DBNS)
7.8 Conclusion
References
8 A Proposed LSTM-Based Neuromarketing Model for Consumer Emotional State Evaluation Using EEG
8.1 Introduction
8.2 Background and Motivation
8.3 Related Work
8.4 Methodology of Proposed System
8.5 Results and Discussions
8.6 Conclusion
References
9 An Extensive Survey of Applications of Advanced Deep Learning Algorithms on Detection of Neurodegenerative Diseases and the Tackling Procedure in Their Treatment Protocol
9.1 Introduction
9.2 Story of Alzheimer’s Disease
9.3 Datasets
9.4 Story of Parkinson’s Disease
9.5 A Review on Learning Algorithms
9.6 A Review on Methodologies
9.7 Results and Discussion
9.8 Conclusion
References
10 Emerging Innovations in the Near Future Using Deep Learning Techniques
10.1 Introduction
10.2 Related Work
10.3 Motivation
10.4 Future With Deep Learning/Emerging Innovations in Near Future With Deep Learning
10.5 Open Issues and Future Research Directions
10.6 Deep Learning: Opportunities and Challenges
10.7 Argument with Machine Learning and Other Available Techniques
10.8 Conclusion With Future Work
Acknowledgement
References
11 Optimization Techniques in Deep Learning Scenarios: An Empirical Comparison
11.1 Introduction
11.2 Optimization and Role of Optimizer in DL
11.3 Various Optimizers in DL Practitioner Scenario
11.4 Recent Optimizers in the Pipeline
11.5 Experiment and Results
11.6 Discussion and Conclusion
References
Part 3: Introduction to Advanced Analytics
12 Big Data Platforms
12.1 Visualization in Big Data
12.2 Security in Big Data
12.3 Conclusion
References
13 Smart City Governance Using Big Data Technologies
13.1 Objective
13.2 Introduction
13.3 Literature Survey
13.4 Smart Governance Status
13.5 Methodology and Implementation Approach
13.6 Outcome of the Smart Governance
13.7 Conclusion
References
14 Big Data Analytics With Cloud, Fog, and Edge Computing
14.1 Introduction to Cloud, Fog, and Edge Computing
14.2 Evolution of Computing Terms and Its Related Works
14.3 Motivation
14.4 Importance of Cloud, Fog, and Edge Computing in Various Applications
14.5 Requirement and Importance of Analytics (General) in Cloud, Fog, and Edge Computing
14.6 Existing Tools for Making a Reliable Communication and Discussion of a Use Case (with Respect to Cloud, Fog, and Edge Computing)
14.7 Tools Available for Advanced Analytics (for Big Data Stored in Cloud, Fog, and Edge Computing Environment)
14.8 Importance of Big Data Analytics for Cyber-Security and Privacy for Cloud-IoT Systems
14.9 An Use Case with Real World Applications (with Respect to Big Data Analytics) Related to Cloud, Fog, and Edge Computing
14.10 Issues and Challenges Faced by Big Data Analytics (in Cloud, Fog, and Edge Computing Environments)
14.11 Opportunities for the Future in Cloud, Fog, and Edge Computing Environments (or Research Gaps)
14.12 Conclusion
References
15 Big Data in Healthcare: Applications and Challenges
15.1 Introduction
15.2 Analytical Techniques for Big Data in Healthcare
15.3 Challenges
15.4 What is the Eventual Fate of Big Data in Healthcare Services?
15.5 Conclusion
References
16 The Fog/Edge Computing: Challenges, Serious Concerns, and the Road Ahead
16.1 Introduction
16.2 Motivation
16.3 Background
16.4 Fog and Edge Computing–Based Applications
16.5 Machine Learning and Internet of Things–Based Cloud, Fog, and Edge Computing Applications
16.6 Threats Mitigated in Fog and Edge Computing–Based Applications
16.7 Critical Challenges and Serious Concerns Toward Fog/Edge Computing and Its Applications
16.8 Possible Countermeasures
16.9 Opportunities for 21st Century Toward Fog and Edge Computing
16.10 Conclusion
References
Index
Wiley End User License Agreement
Chapter 2
Table 2.1 Columns of dataset.
Table 2.2 Different evaluation metrics.
Table 2.3 Comparison of algorithm.
Chapter 3
Table 3.1 Dataset statistics.
Table 3.2 Result comparison.
Table 3.3 Dataset.
Table 3.4 Comparison among clustering and non-clustering approach.
Table 3.5 Comparison among existing methods in MCRS.
Chapter 4
Table 4.1 Comparison of DL and ML.
Chapter 6
Table 6.1 Literature review of existing technological works on Alzheimer’s disea...
Chapter 7
Table 7.1 Model prediction accuracy.
Table 7.2 Comparison between RCNN variants.
Table 7.3 Comparison between Markov decision model and Q learning model.
Chapter 8
Table 8.1 Percentage-wise usage of machine learning algorithms.
Chapter 9
Table 9.1 Comparison on commonly utilized deep learning models [2].
Table 9.2 Comparison of various algorithms on detection of AD.
Table 9.3 Comparison of various algorithms on detection of Parkinson’s disease.
Table 9.4 Results of the algorithm on detection of attacks on deep brain stimula...
Chapter 12
Table 12.1 Comparison of 2018 vs. 2019 data security [1].
Table 12.2 Data breach from February to June 2020 [1].
Chapter 1
Figure 1.1 Chatbot responding to the user contextually.
Figure 1.2 Chatbot responding to the user contextually.
Chapter 2
Figure 2.1 Flow of work.
Figure 2.2 Missing values.
Figure 2.3 Visualizing missing values using heatmap.
Figure 2.4 Different BHK attribute.
Figure 2.5 Bath visualization.
Figure 2.6 BHK visualization.
Figure 2.7 Scatter plot for 2 and 3 BHK flat for total square feet.
Figure 2.8 Scatter plot for 2 And 3 BHK flat for total square feet after removin...
Chapter 3
Figure 3.1 Working principle of MCRS.
Figure 3.2 Phases of MCRS.
Figure 3.3 Filtering techniques of MCRS.
Figure 3.4 Result comparison.
Figure 3.5 Experimental result.
Figure 3.6 Result.
Chapter 4
Figure 4.1 Advancement of artificial intelligence.
Figure 4.2 AI, ML, and DL.
Figure 4.3 Working network of deep learning.
Figure 4.4 Difference between ML and DL.
Figure 4.5 Types of ML.
Figure 4.6 Supervised learning algorithm.
Figure 4.7 Unsupervised learning algorithm.
Figure 4.8 Reinforcement algorithm.
Figure 4.9 Supervised, unsupervised, and reinforcement learning.
Figure 4.10 Regression algorithms.
Figure 4.11 Instance-based algorithms.
Figure 4.12 Regularization algorithms.
Figure 4.13 Decision algorithms.
Figure 4.14 Bayesian algorithms.
Figure 4.15 Clustering algorithms.
Figure 4.16 Association rule learning algorithms.
Figure 4.17 Artificial neural network algorithms.
Figure 4.18 Deep learning algorithms.
Figure 4.19 Dimensional reduction algorithms.
Figure 4.20 Ensemble algorithms.
Figure 4.21 Convolutional Neural Networks.
Figure 4.22 How our DL algorithm sees an image.
Figure 4.23 Convolution layers.
Figure 4.24 DL terminology examples.
Figure 4.25 Neural network.
Figure 4.26 AI or real Shakespeare?
Figure 4.27 GAN.
Figure 4.28 GAN examples.
Figure 4.29 GAN example.
Figure 4.30 GAN used to create painting.
Figure 4.31 AI in chatbots.
Figure 4.32 Behavior of the sentiment neuron. Colors show the type of sentiment.
Figure 4.33 Flowchart of the methodology.
Figure 4.34 Flowchart includes training and testing.
Figure 4.35 Values of the trained dataset matrices.
Figure 4.36 Values of the tested datasets matrices.
Figure 4.37 Sample cervical cancer magnetic resonance image (MRI).
Figure 4.38 Loading the MRI image from datasets.
Figure 4.39 Contrast enhancement.
Figure 4.40 Image segmentation.
Figure 4.41 Segmented region of interest (ROI).
Figure 4.42 After classification, cervical cancer (ROI) tumor is found.
Figure 4.43 After classification, cervical cancer region of interest (ROI) tumor...
Chapter 7
Figure 7.1 (A) Kannada Main Aksharas.
Figure 7.1 (B) Kannada Vatt Aksharas.
Figure 7.2 Training of CNN for kannada characters.
Figure 7.3 (A) Sample image.
Figure 7.3 (B) Output edible text.
Figure 7.4 Recurrent neural network architecture.
Figure 7.5 Simple RNN.
Figure 7.6 Long short-term memory networks.
Figure 7.7 Fully gated version.
Figure 7.8 Type 1 GRU.
Figure 7.9 Type 2 GRU.
Figure 7.10 Type 3 GRU.
Figure 7.11 Architecture of denoising autoencoder.
Figure 7.12 Architecture of RCNN.
Figure 7.13 Architecture of deep belief networks.
Chapter 8
Figure 8.1 Classification of deep neural network.
Figure 8.2 Valence arousal model.
Figure 8.3 EEG setup.
Figure 8.4 Training and validation accuracy for DEAP dataset.
Chapter 9
Figure 9.1 Comparison of healthy brain and AD-affected brain.
Figure 9.2 (a) sMRI example and [20] (b) fMRI example [21].
Figure 9.3 Example of PET [22].
Figure 9.4 OASIS example images [25].
Figure 9.5 CNN—Example [30].
Figure 9.6 Architecture—UUNet [34].
Figure 9.7 Architecture of CNN presented by Santos et al. [3].
Figure 9.8 Architecture of detection model presented by Alejandro et al. [4].
Figure 9.9 Architecture of the hybrid model [5].
Figure 9.10 Architecture of DCssCDBM [6].
Figure 9.11 “Siamese Net” Architecture [6].
Figure 9.12 Architecture of Lin Liu that utilizes spectrogram [7].
Figure 9.13 “MRICloud” representation [8].
Figure 9.14 “Siamese Net” architecture of [8].
Figure 9.15 Architecture of deep learning technique for Parkinson’s [9].
Figure 9.16 Architecture of MDS-UPDRS [10].
Chapter 10
Figure 10.1 Concepts and theories in deep learning.
Figure 10.2 Process of deep neural network (DNN).
Figure 10.3 Convolution layers feeding image data into a fully-connected layer [...
Chapter 11
Figure 11.1 An optimizer framework.
Figure 11.2 Proposed choices for training of a NN.
Figure 11.3 Three steps toward generalization.
Figure 11.4 Optimization: Issues and challenges.
Figure 11.5 A 3-D representation with local and global minima (maxima).
Figure 11.6 A MAS mixing logo.
Figure 11.7 Sample output showing three different classes.
Figure 11.8 (a) SGD with “lrate” and accuracy with number of epochs. (b) ADAM wi...
Figure 11.9 MNIST handwriting data.
Figure 11.10 (a, b) Different optimizers with loss and accuracy.
Chapter 12
Figure 12.1 Characteristics of Big Data [2].
Figure 12.2 One dimensional illustration (COVID-19 cases maximum in Mumbai and P...
Figure 12.3 Two-dimensional representation of Covid Cases in Pune [15].
Figure 12.4 Cartogram of India where each color represents different states of I...
Figure 12.5 Distribution of population.
Figure 12.6 Proportional map for Highest cases in different zones [11]. Left are...
Figure 12.7 Contour map for COVID-19 patients [16].
Figure 12.8 Three-Dimensional Visualization of Zones and the count of Patients i...
Figure 12.9 COVID-19 cases in three major cities of Maharashtra from March to Ju...
Figure 12.10 COVID-19 cases from March to June in timeline chart [17].
Figure 12.11 COVID-19 cases in different areas [16].
Figure 12.12 Death rate of Maharashtra in metro cities [16].
Figure 12.13 Most case in March-April (Mumbai) [16].
Figure 12.14 Weather prediction [16].
Figure 12.15 Major cases in three states of India [16].
Figure 12.16 Covid cases in Mumbai and Pune (March-July) [9]. We can say that tw...
Figure 12.17 Dendrogram data analysis of COVID-19 [12].
Figure 12.18 Spreading of Corona virus from root node to last node [16].
Figure 12.19 Hierarchy of data visualization according to department [12].
Figure 12.20 Two-dimensional histogram (different colour) shows age group and to...
Figure 12.21 Two-Dimensional Contour with different age [9].
Figure 12.22 Polar scatter in which age and new confirmed cases is shown in 0° t...
Chapter 13
Figure 13.1 Addressing e-governance.
Figure 13.2 Urban population fraction in various geographical regions.
Figure 13.3 Smart cities’ mission and housing in India.
Figure 13.4 Apache Hadoop framework architecture adopted for smart city governan...
Figure 13.5 Layered architecture of big data system (from bottom to up).
Figure 13.6 Three stages of data acquisition system.
Figure 13.7 An overview model flow for smart governance for citizen services and...
Figure 13.8 Worldwide big data and Hadoop market size.
Chapter 14
Figure 14.1 (a) Cloud computing to edge computing transition. (b) Cloud computin...
Figure 14.2 The timeline of DC.
Figure 14.3 Taxonomy of fog framework.
Figure 14.4 Layered CloudSim architecture.
Figure 14.5 SPECI package.
Figure 14.6 Architecture of the OCT cloud simulator.
Chapter 16
Figure 16.1 Evolution of computational methods.
Scrivener Publishing, 100 Cummings Center, Suite 541J, Beverly, MA 01915-6106
Next-Generation Computing and Communication Engineering
Series Editors: Dr. G. R. Kanagachidambaresan and Dr. Kolla Bhanu Prakash
Developments in artificial intelligence are made more challenging because the involvement of multi-domain technology creates new problems for researchers. Therefore, in order to help meet the challenge, this book series concentrates on next generation computing and communication methodologies involving smart and ambient environment design. It is an effective publishing platform for monographs, handbooks, and edited volumes on Industry 4.0, agriculture, smart city development, and new computing and communication paradigms. Although the series mainly focuses on design, it also addresses analytics and investigation of industry-related real-time problems.
Publishers at Scrivener: Martin Scrivener ([email protected]) and Phillip Carmical ([email protected])
Edited by
Archana Mire
Computer Engineering Department, Terna Engineering College, Navi Mumbai, India
Shaveta Malik
Computer Engineering Department, Terna Engineering College, Nerul, India
and
Amit Kumar Tyagi
Vellore Institute of Technology (VIT), Chennai Campus, India
This edition first published 2022 by John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, USA and Scrivener Publishing LLC, 100 Cummings Center, Suite 541J, Beverly, MA 01915, USA
© 2022 Scrivener Publishing LLC
For more information about Scrivener publications please visit www.scrivenerpublishing.com.
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, except as permitted by law. Advice on how to obtain permission to reuse material from this title is available at http://www.wiley.com/go/permissions.
Wiley Global Headquarters
111 River Street, Hoboken, NJ 07030, USA
For details of our global editorial offices, customer services, and more information about Wiley products visit us at www.wiley.com.
Limit of Liability/Disclaimer of Warranty
While the publisher and authors have used their best efforts in preparing this work, they make no representations or warranties with respect to the accuracy or completeness of the contents of this work and specifically disclaim all warranties, including without limitation any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives, written sales materials, or promotional statements for this work. The fact that an organization, website, or product is referred to in this work as a citation and/or potential source of further information does not mean that the publisher and authors endorse the information or services the organization, website, or product may provide or recommendations it may make. This work is sold with the understanding that the publisher is not engaged in rendering professional services. The advice and strategies contained herein may not be suitable for your situation. You should consult with a specialist where appropriate. Neither the publisher nor authors shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages. Further, readers should be aware that websites listed in this work may have changed or disappeared between when this work was written and when it is read.
Library of Congress Cataloging-in-Publication Data
ISBN 978-1-119-79175-1
Cover image: Pixabay.com
Cover design by Russell Richardson
Set in size of 11pt and Minion Pro by Manila Typesetting Company, Makati, Philippines
Printed in the USA
10 9 8 7 6 5 4 3 2 1
Advanced analytics is an approach to data analysis that combines machine learning, artificial intelligence, graph analysis, text mining, data mining, and semantic analysis. Going beyond traditional business intelligence, it is the semi-autonomous or fully autonomous analysis of data using a variety of techniques and tools. Deep learning and data analysis are both at the heart of data science. Almost all private and public organizations collect large amounts of domain-specific data, and many companies, small and large, are exploring these data for existing and future technologies. Deep learning also makes use of large amounts of unsupervised data.
In fact, this is a key benefit for big data, where much of the raw data collected is unlabeled and uncategorized. Big data also poses challenges such as extracting complex patterns from large volumes of data, fast information retrieval, and data tagging; deep learning can be used to address these kinds of problems.
The purpose of this book is to help teachers present the concepts of analytics in deep learning and to show how big data technologies manage massive amounts of data with the help of Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL). In this book, readers will find both the utility and the challenges of big data. Those who are keen to learn about the different models of deep learning and the connections between AI, ML, and DL will find this book a valuable source of knowledge.
This book contains chapters on artificial intelligence, machine learning, deep learning, and their uses in sectors such as stock market prediction, recommender systems for better service selection, e-healthcare, telemedicine, and transportation. The last few chapters discuss innovations, open issues, and future opportunities involving fog computing, cloud computing, and artificial intelligence for future readers and researchers.
Hence, this book will be useful to undergraduate and graduate students planning careers in either industry or research. It will also serve as a valuable source of learning for software engineers who are beginners in the field of advanced analytics in deep learning.
Dr. Archana Mire
Dr. Shaveta Malik
Dr. Amit Kumar Tyagi
January 2022
Khushboo Kuddus
School of Humanities (English), KIIT Deemed to be University, Bhubaneswar, Odisha, India
Abstract
The Fourth Industrial Revolution, which features the rapid expansion of technology and digital applications, is influencing almost all spheres of our lives. Artificial Intelligence (AI) has made an impact on the way we live and work, from floor cleaning to instructing Alexa. AI has great potential in the field of education, where it is an emerging area of educational technology. It has enormous potential to provide digitalized and completely personalized learning to each learner. However, the idea of using AI in education intimidates many educators because of widespread misconceptions and misunderstandings about its use. This is mainly because educators are unaware of its pedagogical implications for education in general and language learning in particular, and because of the lack of critical reviews of these implications and of new approaches to adopting AI in education. Therefore, the present study attempts to explore how AI can be used to enhance language learning experiences. It discusses the tools that can be used to teach English effectively. It further aims to explain how AI can be used to foster learner autonomy. It essentially envisions AI-embedded learning in classrooms to enhance the English language teaching and learning experience and to help teachers deliver their lessons effectively. The findings bring to light some practical and innovative ways in which AI can be integrated into the ELT classroom to enhance the language teaching and learning experience. The chapter focuses on teaching pronunciation and increasing fluency by mimicking sound patterns and using speech recognition and speech editing features. Moreover, it highlights a personalized approach to language learning using chatbots that provide text-to-speech and speech-to-text conversion, using technology to transcribe speech in order to check pronunciation and translate speech, and practicing conversation using voice commands with assistants such as Google Assistant. Hence, the paper examines the potential application of AI in education and language learning in particular. Further, it explores the possibilities of implementing AI in classrooms through new learning approaches and pedagogical modifications.
Keywords: Artificial intelligence, intelligent computer-assisted language learning, natural language processing, networked learning, English language teaching, pedagogies, digital tools
The English language is one of the universal languages of our time. It is not only the language of science, technology, higher education, aviation, travel, and tourism but also the language of the internet and information technology. There have been unprecedented changes in the field of English teaching and learning with the continuous advancement of Information and Communication Technology (ICT) [1, 2]. The rapid advancement of technology has had a significant impact on the field of education, particularly language learning and teaching. The adoption of ICT alongside present-day technical trends in language teaching is extraordinary [3]. Teaching and learning have been made easier, more active, personalized, authentic, and effective by integrating ICT into second language acquisition and foreign language learning. This has also resulted in a paradigm shift in the teaching and learning process, as well as changes in teachers' roles [4].
The importance of applying technology to second language learning was recognized as far back as the 1930s, giving rise to Computer-Assisted Language Learning (CALL), which was initially used only for drilling exercises. Later, with the advancement of technology, CALL became more interactive through multimedia and the language laboratory. In the 21st century, the social dimensions of ICT expanded with its exponential growth, which led to the revitalization of CALL in the form of Web 2.0 tools, Mobile-Assisted Language Learning (MALL), Networked Learning (NL), and, later, Intelligent CALL (ICALL) [5]. The latest step in this evolution is the application of Artificial Intelligence (AI) to language learning, together with computational linguistics, machine learning, and Natural Language Processing (NLP).
This chapter discusses the potential application of AI in education and in language learning in particular, adopting new learning approaches and pedagogical modifications. Further, it explores the ways in which AI can be used to enhance language learning experiences by fostering learner autonomy and adaptability. It also discusses the AI tools that can be used to teach English effectively. It focuses on teaching pronunciation and increasing fluency by mimicking sound patterns, using speech recognition and speech editing features, and taking a personalized approach to language learning through chatbots. Furthermore, the chapter concentrates on the implications of AI-embedded learning for establishing a new trend in foreign language learning as well as the shift in teachers' roles. Hence, the aim of this chapter is to provide a few substantial examples of how AI may be used to improve the language learning experience, and to show why language teachers should embrace and incorporate AI into the teaching process rather than fear it.
In the 1960s, the audio-lingual method was introduced for English language teaching (ELT); it relied heavily on drill and practice, which became much easier with the incorporation of computers into language teaching and learning [6, 7]. Over the course of the 20th century, CALL had a great impact on language teaching and learning. CALL between the 1960s and 1980s can be termed Structural CALL, as during this period computers were used in language learning mainly for drills and practice. Following the structural approach and the behaviorist theory of learning, the computer programs focused more on structured and rote learning than on interactivity. During this period, accuracy in grammar and sentence structure was the primary aim of language learning. The second phase of CALL, between the 1980s and 1990s, is known as Communicative CALL. Computers during this period were used mainly for constructing exercises to develop effective communication. The integration of computers into language learning was aimed not only at accuracy but also at achieving fluency, and it encouraged interaction not only with the computer but also with fellow learners. From the 1990s to the early 21st century, Integrative CALL was marked by increased access to digital resources and the Internet [8]. The advancement of technology and easy access to the internet encouraged educators to design flexible classroom language learning lessons that could easily be accessed even outside the classroom. In the early 21st century, Integrative CALL continued to have an impact on language learning. Davies et al. argue that the field was infused with the era's "Web 2.0 fever" [9]. This included the emergence of a slew of new communities based on Web 2.0 tools like wikis, social networking sites, discussion boards, and virtual worlds; the best examples are Facebook groups, Instagram, and Twitter.
Technology has become an integral part of our lives. Its presence is ubiquitous, from the time we wake up until the time we sleep, in forms such as alarms, smartphones, smart TVs, smart air conditioners, laptops, tablets, WhatsApp, YouTube, and many others. Technology has become so prevalent in every sector and at every level of education that it is impossible to imagine one's existence without it in some form. The majority of language learners around the world today use technology to access materials in their second and foreign languages, communicate with people all over the world, learn at their own pace, and take language tests such as the TOEFL and IELTS [10–12]. Technology connects us with anyone in any part of the world, so language learners can easily join larger networks of native speakers and learn a language through direct exposure to the target language. Hence, it would not be wrong to say that the significant development of ICT has changed the way we understand learning and has consequently led to a shift from traditional approaches to teaching toward networked teaching and learning.
In this context of language learning and teaching, it is worth noting that Warschauer and Kern coined the concept of Network-Based Language Teaching (NBLT), which focuses on communication [13]. According to Sharples et al., there is a fundamental correlation between learner-centered, personalized, interactive, collaborative, situated, lifelong, and ubiquitous New Learning and New Technology, which is well known for being mobile, user-friendly, and ubiquitous [14]. According to Jones, Networked Learning (NL) has emerged as a significant paradigm in which ICT is used to foster interaction and connections between teachers and learners, learners and other learners, and a learning community and its learning resources [15]. Further, the Fourth Industrial Revolution, which features the rapid expansion of technology and digital applications, is influencing all spheres of our lives. AI has made an impact on the way we live and work, from floor cleaning, automatic induction heaters, and driverless cars to instructing Alexa. According to Manns, the Fourth Industrial Revolution is being driven by the integration and amplification of emerging breakthroughs in AI, automation, and robotics, as well as the far-reaching connection of billions of people with mobile devices that provide unparalleled access to data and information [16]. Furthermore, AI now has major applications in language studies, thanks to advances in NLP, the advent of NL, and the technological ability to manage large amounts of data.
Artificial Intelligence (AI) is a branch of science that studies and develops devices aimed at simulating human intelligence processes. The primary aim of AI is to improve the speed and efficacy of routine processes. As a result, the number of industries implementing AI is growing globally [17].
The term AI as defined by Russell and Norvig is Computational Intelligence, or Machine Intelligence, which encompasses a wide range of subfields in which “specific tasks, such as playing chess, proving mathematical theorems, writing poetry, and diagnosing diseases, can be performed” [18]. According to Housman, “AI is capable of two things: (1) automating repetitive tasks by predicting outcomes on data that has been labeled by human beings, and (2) enhancing human decision-making by feeding problems to algorithms developed by humans” [19]. To put it another way, AI registers assigned commands by performing the tasks repeatedly and then generates a decision pathway for humans by presenting alternatives. Moreover, Nabiyev describes AI as a computer-controlled device’s ability to execute tasks in a human-like manner [20]. According to the author, human-like features include mental processes like reasoning, meaning formation, generalization, and learning from prior experiences. Nilsson goes on to describe AI as the full algorithmic edifice that mimics human intellect [21]. According to him, AI encompasses the development of the information-processing theory of intelligence.
AI has evolved in its philosophical approach over time. Intelligent Tutoring Systems (ITSs) were the first to incorporate AI into language learning, in the 1980s, with the aim of personalized and autonomous learning. Early iterations of ITS were programs that sought to cater to the needs of learners by facilitating communication [22]. Another significant benefit of ITS was that it allowed for unlimited repetition and practice, something that could never be done with a human instructor. It was designed for the individual learner who wanted to improve their language skills using tutoring systems. Despite its advantages, several studies on the integration of ITS in higher education found that it had only a moderately positive impact on the academic learning of college students [23]. However, after four decades, more advanced and updated forms of AI have revitalized the potential for personalized learning [24].
Although ITS made extensive use of the drill and rote-learning mechanisms built into computer-based learning systems, today's AI applications are much more advanced, with the same aim of catering to personalized learning. The fundamental difference between the earlier model of ITS and the current one is that the former involved a student working in isolation with an ITS, whereas the latter engages students in a networked environment. This exposes the learner to authentic and natural learning scenarios, providing a social context for language learning.
As mentioned earlier, the remarkable advancement in AI has brought a significant and inevitable shift from CALL to ICALL. With advancements in mobile technologies and their applications in language learning, CALL paved the way for MALL, and similarly, development in AI has led to the rise of a new academic field called ICALL. NLP technologies’ language processing capabilities have numerous implications in the field of CALL, and the field of study that investigates and integrates such implementations is known as ICALL [25].
In the early 2000s, Massive Open Online Courses (MOOCs) offered a much-needed and cost-effective alternative to expensive higher education in the US and beyond. However, such courses could not facilitate learners' participation, peer learning, scaffolding, or large-scale connections with global learners. Because of these constraints, the MOOC movement stalled when it came to delivering education on a wide scale. In contrast, many well-known ongoing MOOC initiatives, such as Coursera, Khan Academy, Udemy, EdX, and Udacity, have used AI and NLP techniques to improve learners' engagement, active learning, and autonomy. This resurgence of AI, along with its strong NLP potential, has had a significant impact on second language education, as NLP-based tutoring systems can provide corrective input and adapt and customize instructional materials [5].
There are myriad implications of AI for language teaching and learning, and a multitude of ways in which language learners and teachers can gain from integrating this technology. Some of the most relevant are the following.
Cultural variation is one of the predominant barriers to communication, arising mainly from the difficulty of decoding a language one is not familiar with. In such a scenario, being bilingual or multilingual is a blessing that paves the way for enormous career opportunities and communication across the world. The language barrier is easily overcome by innovative AI-based translation technologies like Google Translate. On a wide scale, such innovations have made significant progress in helping second language and foreign language learners. Google Translate initially supported only a few languages, but by 2016, it supported 103 languages at different levels, with over 500 million total users and over 100 billion words translated daily [26]. Since this translation service is so easily and widely accessible, second language learners use it to extend their learning beyond the four walls of the classroom. On the other hand, Google's machine translation has been criticized for its accuracy, because the translations are based on statistical machine translation rather than grammatical rules. More advanced, revised versions of Google Translate, however, have exhibited higher accuracy [27].
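Programmatic access to machine translation of this kind is also possible with open-source NLP models. The following is a minimal Python sketch, assuming the Hugging Face transformers package and the publicly available Helsinki-NLP/opus-mt-de-en model; it illustrates NLP-based translation in general and is not the Google Translate service discussed above.

# Minimal machine-translation sketch (illustrative only, not Google Translate).
# Assumes: pip install transformers sentencepiece torch
from transformers import pipeline

# Load a pre-trained German-to-English translation model from the Hugging Face Hub.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-de-en")

sentence = "Ich lerne jeden Tag eine neue Sprache."
result = translator(sentence)

# The pipeline returns a list of dictionaries with a "translation_text" key.
print(result[0]["translation_text"])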
Learners can communicate with and learn from language chatbots in a natural way when chatbots are integrated into mobile apps, which enhances the autonomy of the learning process. Duolingo is the most common language learning chatbot, with AI algorithms that can understand the context of use and respond contextually and uniquely to users. Chatbots have helped thousands of learners learn languages without being embarrassed or feeling uncomfortable. There are other language learning chatbots, such as Andy, Mondly, and Memrise.
Figures 1.1 and 1.2 show how the chatbots respond to users contextually and uniquely.
Speech recognition tools identify spoken language, analyze it, and convert it into text. They are of great help to students with physical disabilities or those who are not comfortable with a keyboard. The Dragon transcription software was one of the first AI applications to transcribe voice into text, and it is widely used for second language acquisition, especially for improving pronunciation. Furthermore, using Automatic Speech Recognition (ASR) and NLP techniques, software and online systems such as Carnegie Speech and Duolingo have provided foreign language education. These systems not only transcribe speech to text but also identify and correct language errors for users.
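As a concrete illustration of the speech-to-text step described above, the following is a minimal Python sketch assuming the third-party SpeechRecognition package and a short recording named sample.wav (both are assumptions for illustration); it is not the Dragon or Duolingo pipeline itself.

# Minimal automatic speech recognition (ASR) sketch.
# Assumes: pip install SpeechRecognition, and an audio file "sample.wav".
import speech_recognition as sr

recognizer = sr.Recognizer()

# Load the recording and capture its audio data.
with sr.AudioFile("sample.wav") as source:
    audio = recognizer.record(source)

try:
    # Send the audio to the free Google Web Speech API for transcription.
    text = recognizer.recognize_google(audio)
    print("Transcription:", text)
except sr.UnknownValueError:
    print("The speech could not be understood.")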
Figure 1.1 Chatbot responding to the user contextually.
Figure 1.2 Chatbot responding to the user contextually.
In addition, Google Assistant can be constructively integrated to enhance learners' proficiency and pronunciation. Students can ask the Google Assistant simple questions such as "How's the weather today in…?", "How far is Delhi from Agra?", "When was the Taj Mahal built?", and "What time is it in Malaysia now?". This is an excellent way to improve and assess students' communication skills while also ensuring that their pronunciation is intelligible. Some of the well-known speech-recognizing applications are mentioned in the timeline below.
Applications like autocorrect can be used to obtain feedback on text. The feedback is actionable, covering claims and sources, topic development, coherence, English conventions, and word choice. Such tools also provide synonyms for unfamiliar words that learners may encounter while reading external sources. Widely used AI-based applications of this kind are Writing Mentor and Grammarly, which provide feedback on punctuation, sentence construction, and accuracy.
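A rough sense of how such automated writing feedback works can be given with an open-source grammar checker. The sketch below assumes the language_tool_python package (a wrapper around LanguageTool) and stands in for the commercial tools named above; it is not Grammarly or Writing Mentor.

# Minimal automated writing-feedback sketch using an open-source checker.
# Assumes: pip install language_tool_python (LanguageTool is downloaded on first use).
import language_tool_python

tool = language_tool_python.LanguageTool("en-US")

sentence = "She go to school every days."
matches = tool.check(sentence)

for match in matches:
    print(match.message)       # description of the detected issue
    print(match.replacements)  # suggested corrections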
There are applications with machine learning algorithms that provide a completely personalized set of study materials by adapting to the learning pattern and analyzing each learner's vocabulary strengths and weaknesses. Alphary is an AI-based application that helps students acquire and strengthen English vocabulary. It uses the Oxford suite of learner's dictionaries and an integrated AI named FeeBu (Feedback Butterfly) to mimic the behavior of a human English tutor who gives automated, intelligent feedback. It also automatically evaluates writing and analyzes grammatical mistakes [32].
The widely known word processor Google Docs is a free and mobile-friendly tool to which a speech editing feature has recently been added. Its voice recognition feature has evolved and can help teachers provide feedback on conversational activities. It can also be used to evaluate the intelligibility of learners' speech by providing direct feedback in the form of text. The application can be used creatively for the maximum benefit of students.
The Language Muse™ Activity Palette is a fun way to improve learners' language skills. It is a web-based language-instruction program that uses NLP algorithms and lexical tools to generate language activities automatically and to help English language learners with content comprehension and language skills enhancement. The software's online interface for activity generation, assessment, and feedback is adaptable to MOOCs and several other online learning environments [33].
AI promotes personalized and autonomous learning. With the integration of AI, students always have easy access to learning and can study at their own pace, at any place and time they wish. They can clear their doubts at their own speed with AI-powered chatbots or virtual personal assistants such as Google Assistant and Siri, without being humiliated in front of the entire class. AI personalizes studies to meet the needs of individual students, resulting in increased efficiency, and it has the potential to personalize digital language learning for each learner, catering to their individual needs.
The use of AI in education has paved the way for modern learning methods such as visualization, simulation, and web-based learning environments. Learners become more involved and engaged as a result of these new learning methods. Not only that, but AI also assists in the development and updating of lesson material, as well as customizing it for various learning goals and learners.
Evaluation is a time-consuming task that could be easily automated by the instructor using AI. It can automatically grade tests and even review essays, highlighting mistakes and recommending ways to prevent them in the future.
The innovative integration of AI in education broadens the range of constructive pedagogies for teaching and learning for students with learning disabilities. It also ensures access to education for physically challenged students, such as those who are deaf or visually impaired. AI systems can be effectively trained to assist any group of special needs students.
If we look back, we realize that the AI we live with today was just a part of science fiction movies some years ago. Today, AI has become an inseparable part of our lives and has made its place in almost every sphere we can think of, from business, banking, health, and aviation to marketing, and it is now slowly paving its way into academics. AI in education is playing a significant role in augmenting teaching and learning. AI learning platforms facilitate autonomous learning and provide flexibility of space, pace, and time. Further, they facilitate personalized learning that focuses on learners' areas of interest and considers factors such as strengths, weaknesses, interests, and cultural background. AI-integrated online learning has augmented second language education across the globe manifold.
Despite AI's enormous potential for enhancing the teaching and learning of foreign or second languages, certain challenges still need to be addressed. According to Lovett, despite advances in translation technology, there have been questions regarding Google Translate's grammatical accuracy and how it might affect the learner's process of building proficiency [34]. Furthermore, NLP is a complicated process, and accurately capturing all linguistic information is difficult. Voice recognition also needs adjustment, as it sometimes cannot understand heavy accents, speech impediments, and soft voices. Therefore, further studies can be carried out to identify solutions to the problems stated above. In addition, further research can examine the impact of AI on personalized learning, on the grammar of the target language, and on proficiency in the language learned. Moreover, a study assessing the efficiency of AI in grading and evaluation can also be undertaken.
AI is slowly paving its way into the sphere of academics through various technologies like Machine Learning and NLP, and it is surely going to foster the teaching and learning of second and foreign languages in the days to come.
1. Warschauer, M., Of digital divides and social multipliers: Combining language and technology for human development, in: Information and communication technologies in the teaching and learning of foreign languages: State of the art, needs and perspectives, p. 46, 2004.
2. Khan, N.M. and Kuddus, K., Integrating ICT in English Language Teaching in Bangladesh: Teachers’ Perception and Challenges. Rupkatha J. Interdiscip. Stud. Humanit., 12, 5, 1, 2020.
3. Chatterjee, B. and Kuddus, K., Second Language Acquisition through Technology: A Need for Underdeveloped Regions like Jharkhand. Res. Sch.-An International Referred e- J. Lit. Explor., 2, 2, 252, 2014.
4. Kuddus, K., Emerging Technologies and the Evolving Roles of Language Teachers: An Overview. Lang. India, 18, 81, 2018.
5. Kannan, J. and Munday, P., New trends in second language learning and teaching through the lens of ICT, networked learning, and artificial intelligence, in: Vías de transformación en la enseñanza de lenguas con mediación tecnológica. Círculo de Lingüística Aplicada a la Comunicación, vol. 76, Fernández Juncal, C. and Hernández Muñoz, N. (Eds.), p. 13, 2018, http://dx.doi.org/10.5209/CLAC.62495.
6. Davies, G., Walker, R., Rendall, H., Hewer, S., Introduction to new technologies and how they can contribute to language learning and teaching (CALL). Module 1.1, in: Information and Communications Technology for Language Teachers (ICT4LT), G. Davies (Ed.), Thames Valley University, Slough, 2011, http://www.ict4lt.org/en/en_mod1-1.htm.
7. Levy, M., CALL: context and conceptualisation, Oxford: Oxford University Press, New York, 1997.
8. Warschauer, M., CALL for the 21st century. Paper presented at the IATEFL and ESADE Conference, Barcelona, Spain, 2 July, 2000, http://education.uci.edu/uploads/7/2/7/6/72769947/cyberspace.pdf.
9. Davies, G., Otto, S.E., Rüschoff, B., Historical perspectives on CALL, in: Contemporary computer-assisted language learning, p. 19, 2013.
10. Kuddus, K., Web 2.0 Technology in Teaching and Learning English as a Second Language. Int. J. Engl. Lang. Lit., 1, 4, 292, 2013.
11. Dash, A. and Kuddus, K., Leveraging the Benefits of ICT Usage in Teaching of English Language and Literature, in: Smart Intelligent Computing and Applications. Smart Innovation, Systems and Technologies, vol. 160, S. Satapathy, V. Bhateja, J. Mohanty, S. Udgata (Eds.), pp. 225–232, Springer, Singapore, 2020.
12. Chatterjee, B. and Kuddus, K., Mass media Approach to Second Language Acquisition. J. Engl. Stud., 10, 1, 10, 2015.
13. Warschauer, M. and Kern, R., Network-based language teaching: Concepts and practice, Cambridge University Press, New York, 2000.
14. Sharples, M., Taylor, J., Vavoula, G., A theory of learning for the mobile age, in: Medienbildung in neuen Kulturräumen, pp. 87–99, VS Verlag für Sozialwissenschaften, Switzerland, 2010.
15. Jones, C., Networked learning: an educational paradigm for the age of digital networks, Springer, Cham, Switzerland, 2015.
16. Manns, UNESCO, Artificial Intelligence: Opportunities, threats and the future of learning, Asia and Pacific Regional Bureau for Education, UNESCO Bangkok 2017.
17. Goksel, N. and Bozkurt, A., Artificial Intelligence in Education: Current Insights and Future Perspectives, in: Handbook of Research on Learning in the Age of Transhumanism, Sisman-Ugur, S. and Kurubacak, G. (Eds.), p. 224, 2019.
18. Russell, S.J., and Norvig, P., Artificial intelligence, A modern approach, 2nd ed, Pearson Education Inc., Upper Saddle River, New Jersey, 2003.
19. Housman, M., Why ‘augmented intelligence’ is a better way to describe AI, AINews, United Kingdom, 2018, https://www.artificialintelligence-news.com/2018/05/24/why-augmented-intelligence-is-a-betterway-to-describe-ai/.
20. Nabiyev, V.V., Yapay zeka: İnsan bilgisayar etkileşimi, Seçkin Yayıncılık, Ankara, 2012.
21. Nilsson, J., Voice interfaces: Assessing the potential, Nielsen Norman Group, USA, 2003, Retrieved from http://www.useit.com/alertbox/20030127.htm.
22. Self, J., The defining characteristics of intelligent tutoring systems research: ITSs care, precisely. Int. J. Artif. Intell. Educ. (IJAIED), 10, 350, 1998.
23. Steenbergen-Hu, S. and Cooper, H., A meta-analysis of the effectiveness of intelligent tutoring systems on college students’ academic learning. J. Educ. Psychol., 106, 2, 331, 2014.
24. Reiland, R., Is Artificial Intelligence the Key to Personalized Education?, Smithsonian Magazine, Smithsonian Magazine, USA, 2018. https://www.smithsonianmag.com/innovation/artificial-intelligencekey-personalized-education-180963172/, on March 15 2018.
25. Lu, X., Natural Language Processing and Intelligent Computer-Assisted Language Learning (ICALL), The TESOL Encyclopedia of English Language Teaching, USA, 2018.
26. Turovsky, B., Ten years of Google translate, Google Translate Blog, Google, USA, 2016. https://blog.google/products/translate/ten-years-of-google-translate/
27. Turovsky, B., Found in translation: More accurate, fluent sentences in Google Translate, Blog. Google, USA, 15, 2016, https://www.blog.google/products/translate/found-translation-more-accurate-fluentsentences-google-translate/.
28. https://medium.com/@alejandra.riveraUX/adding-a-chat-feature-to-duolingoa-ux-case-study-73175b612120
29. https://images.app.goo.gl/3g7rVCnfyYBBVJZMA
30. https://www.google.co.in/url?sa=i&url=https%3A%2F%2Fwww.smartsheet.com%2Fvoice-assistants-artificial-intelligence&psig=AOvVaw35WNWG91EdKuqWmYVQcvdI&ust=1617361257271000&source=images&cd=vfe&ved=0CAIQjRxqFwoTCJC8hq7y3O8CFQAAAAAdAAAAABAD
31. https://www.researchgate.net/profile/Michelle-Cavaleri/publication/320618419/figure/fig2/AS:697745916571656@1543366998696/Grammarly-feedback-Free-version.jpg
32. https://www.intellias.com/ai-nlp-driven-language-learning-app/
33. Burstein, J., Madnani, N., Sabatini, J., McCaffrey, D., Biggers, K., Dreier, K., Generating Language Activities in Real-Time for English Learners using Language Muse, in: Proceedings of the Fourth ACM Conference on Learning Scale (L@S’17), Association for Computing Machinery, NY, USA, pp. 213–215, 2017, https://dl.acm.org/doi/10.1145/3051457.3053988.
34. Lovett, D., Is Machine Translation a threat to language learning?, The Chronicle of Higher Education, Washington D.C., 2018.
Email: [email protected]
Palak Furia* and Anand Khandare†
Department of Computer Engineering, Thakur College of Engineering and Technology, Mumbai, India
Abstract
The buying and selling of houses and land has existed since the very beginning. A person's wealth is often judged by the kind of house he or she buys, but this process has traditionally involved multiple intermediaries. With advances in technology, this system has changed considerably. PropTech is the new force disrupting the real estate market, and using technology to complete these operations has made buying property much simpler. It is seen as part of a digital transformation in the real estate industry, focuses on both the technological and psychological changes of the people involved, and could lead to new capabilities such as transparency, unprecedented data, statistics, machine learning, blockchain, and sensors, all of which are part of PropTech.
In India, there are a number of websites that collect data on properties listed for sale, but the price of the same apartment often varies across sites, and as a result there is a lot of obscurity [1, 2]. This project uses machine learning to predict house prices. A dataset commonly used in the analysis of housing prices is the Bangalore (Bengaluru) suburban housing data, and recent analysis has found that prices in that dataset depend strongly on size and location. Basic algorithms such as linear regression can reduce errors using both intrinsic and locational features. Previous work on forecasting housing prices has been based on regression analysis and machine learning [6, 7], including linear regression and decision tree models. In addition, multi-attribute models with two components have been used to evaluate house prices, where one component predicts the "intrinsic" cost of a house and the other accounts for neighborhood preferences. The aim here is to solve a regression problem in which the target variable is the price and the independent variables describe the property and its location. We have used one-hot encoding for each of the categorical features. The business application of this work is that classified websites can directly use the trained model to predict correct and appropriate values for newly listed properties from their attributes.
Keywords: Machine learning, clustering algorithm, linear regression, LASSO regression, decision tree, support vector machine, random forest regressor
Accurate prediction is needed in the real estate and housing market. A familiar process runs through house buying and selling, and buying a house may be a lifetime goal for most people. Many individuals make big mistakes when buying houses; most buy homes from people they know or through classified advertisements appearing across India. One common problem is buying properties that are too expensive and not worth the price [3]. Valuation systems and related techniques reflect the nature of the asset and the conditions under which it is offered [8, 9]. Property values may change in the open market under many situations and circumstances; people who are unaware of current conditions start losing their money [10]. Changes in house prices affect both ordinary people and the economy of the country; to avoid such situations, price prediction is needed, and many techniques can be used for it.
Statistical models have long been used to analyze and predict property prices. Fik et al. (2003) carried out a study to explain housing price variation by analyzing the impact of location features on property prices [11] (Piazzesi and Schneider, 2009). For those who forecast prices in different ways, the relationships can be quite complicated. Price forecasts are central in the trading sector, but forecasting from supply and demand can be complex because there may be confounding forces along the way. Neural network models have also been used to predict stock prices, which shows the overlap between these domains and their benefits.
Selim (2009) [12] compared regression and artificial neural network approaches to residential price estimation, using 60% of the residential price data for model estimation, and evaluated their performance across different training set sizes and selections of statistical variables.
Wu and Brynjolfsson (2009) [15] from MIT studied how Google searches can forecast housing loans, prices, and sales. The authors observed a close relationship between search interest and house prices, including interest in high-priced houses. The data were drawn from Google search queries combined with national housing statistics gathered for each state.
Liaw et al. (2002) provide a brief overview of how the random forest algorithm is used for regression and classification, with boosting and bagging as the underlying ensemble methods. The algorithm generates many decision trees; the difference between boosting and bagging, as stated by Liaw et al. (2002), is that boosting grows successive trees that reweight the observations, with the majority of tree predictions taken as the final output. In 2001, Nghiep and Al (2001) proposed a random forest approach that combines bagging with additional randomization of the tree-growing and prediction process, which is reviewed here.
Eric Slone et al. (2014) examined the relationship between various home attributes and residential prices using simple and multiple linear regression with the ordinary least squares method. House square footage was used as the explanatory variable in the simple regression, while the multiple linear regression added the size of the parcel of land, the number of bedrooms, the year of construction, and other descriptive variables.
On classified websites, property pricing is often inconsistent; similar apartments are sometimes listed at different price points, which creates a lot of opacity. Consumers sometimes feel that the price of a particular listed apartment is not justified, but there is no way to confirm that either. We propose to use three machine learning algorithms: linear regression, LASSO regression, and the decision tree algorithm. The tools required for the project are as follows: Python and scikit-learn (sklearn) for model building, Jupyter Notebook, Visual Studio Code, and PyCharm as IDEs, Python Flask for the HTTP server, HTML/CSS/JavaScript for the UI, NumPy and pandas for data cleaning, and Matplotlib for data visualization.
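As a sketch of the proposed comparison, the snippet below trains the three algorithms with scikit-learn. A synthetic regression dataset stands in for the cleaned Bengaluru housing data, so the numbers are illustrative only; the real project would use the one-hot encoded features described later in this chapter.

# Minimal sketch comparing linear regression, LASSO, and a decision tree.
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.linear_model import LinearRegression, Lasso
from sklearn.tree import DecisionTreeRegressor

# Synthetic stand-in data: 1,000 samples, 10 numeric features, continuous target.
X, y = make_regression(n_samples=1000, n_features=10, noise=15.0, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

models = {
    "Linear regression": LinearRegression(),
    "LASSO regression": Lasso(alpha=1.0),
    "Decision tree": DecisionTreeRegressor(max_depth=8, random_state=42),
}

for name, model in models.items():
    cv_r2 = cross_val_score(model, X_train, y_train, cv=5).mean()  # 5-fold CV R^2
    model.fit(X_train, y_train)
    print(f"{name}: CV R^2 = {cv_r2:.3f}, hold-out R^2 = {model.score(X_test, y_test):.3f}")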
Figure 2.1 Flow of work.
The selected dataset covers the metropolis of Bengaluru; it consists of nine columns, described in Table 2.1, and has 13,321 instances. With the enforcement of real estate regulation and distrust of real estate builders in the city, property sales across India dropped by 7% in 2017. As an example, for a potential house owner, more than 9,000 apartments and flats for sale are priced between 42 and 52 lakh, and more than 7,100 apartments fall within the 52 to 62 lakh budget, according to the property listing website Makaan.
Table 2.1 Columns of dataset.
Column name: Description
Area type: The kind of area the flat/plot is in.
Availability: Whether the property is currently available or not.
Location: Location of the land/plot.
Size: Number of bedrooms, hall, and kitchen (BHK) in the flat.
Society: Name of the housing society.
Total square feet: Area of the plot in square feet.
Bath: Number of bathrooms in the flat.
Balcony: Number of balconies in the flat.
Price: Price of the plot/flat.
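As a sketch of how a dataset with the columns in Table 2.1 might be loaded and prepared, including the one-hot encoding mentioned in the abstract, the snippet below uses pandas. The file name, column names, and cleaning steps are illustrative assumptions, not the authors' exact code.

# Minimal data-preparation sketch for the Bengaluru housing data (illustrative).
import pandas as pd

df = pd.read_csv("bengaluru_house_prices.csv")  # assumed file name

# Drop sparse columns and rows missing essential fields (assumed column names).
df = df.drop(columns=["society", "availability"])
df = df.dropna(subset=["location", "size", "bath", "price"])

# Extract the number of bedrooms from strings such as "2 BHK" or "3 Bedroom".
df["bhk"] = df["size"].str.split().str[0].astype(int)

# One-hot encode the categorical location and area-type columns.
df = pd.get_dummies(df, columns=["location", "area_type"], drop_first=True)

print(df.shape)
print(df.head())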
