Are you excited about Artificial Intelligence and want to get started? Are you excited about Machine Learning and want to learn how to implement it in Python?
This book is the answer.
Given the large amounts of data we use every day, whether on the web, in supermarkets, or on social media, data analysis has become integral to our daily lives. The ability to analyze data effectively can propel your career or business to great heights. Machine Learning is one of the most effective data analysis tools. While it is a complex topic, it can be broken down into simpler steps, as shown in this book. We use Python, which is a great programming language for beginners.
Python is a great language that is commonly used for Machine Learning. It is also used extensively in mathematics, gaming, and graphic design. It is fast to develop and prototype in. It is web capable, meaning we can use Python to gather data from the web. It is adaptable and has a great community of users.
Here's What's Included In This Book:
What is Machine Learning?
Why use Python?
Regression Analysis using Python, with an example
Clustering Analysis using Python, with an example
Implementing an Artificial Neural Network
Backpropagation
A 90 Day Plan to Learn and Implement Machine Learning
Conclusion
You can read this e-book in Legimi apps or in any app that supports the following format:
Page count: 100
Year of publication: 2019
Machine Learning In Python
Hands on Machine Learning with Python Tools, Concepts and Techniques
Copyright © Abiprod 2018
All Rights Reserved
No section of this book may be transferred or reproduced in print, electronic, audio, photocopy, scanning, mechanical, or recording form without prior written consent from Abiprod Pty Ltd.
The author and publisher have taken great effort to ensure the accuracy of this written content. However, readers are advised to follow the information in this book at their own risk. The author and publisher cannot be held responsible for any personal, commercial, or family damage caused by this information. All readers should seek professional advice for their specific situation.
Disclaimer
What is Machine Learning?
Why use Python?
Regression Analysis using Python
Implementing an Artificial Neural Network
A 90 Day Plan for Machine Learning with Python
Conclusion
How Programming Normally Works
The usual method of programming is quite linear, even in places where it seems nonlinear. The most common "insult" that some programmers aim at machine learning is that it is just a bunch of if... else statements, and that the machine is not actually learning. It is easy to see how these programmers reach this conclusion, but it is important to realize that they are only half right.
Let's look at how a website and Photoshop each work, considering how widely different their manner of operation is. A website is a collection of HTML, CSS, and JavaScript, along with whatever backend implementation its developers choose. The website itself does not normally install anything on the user's desktop; it utilizes features that are already there.
The only mechanism that provides change is the web browser itself, and only when the web browser supports changes in those languages do they really gain access to new features. To construct the front end of the website, the browser loads the HTML, which then loads the CSS in the Head or Body of the page and loads the JavaScript, usually in the Body near the footer. Therefore, the page is loaded linearly no matter how interconnected it may seem.
In Photoshop, the implementation is definitely different because it is a program that must be installed on a computer. To the average individual, Photoshop looks like a self-contained unit that can be used on every platform. However, Photoshop must utilize and have access to graphical standards found only in Graphics Card drivers. To draw a line, Photoshop normally has to call the DirectX 11, DirectX 12, Vulkan, or OpenGL libraries. Which library it calls, or whether it calls all of them, is not obvious from the outside, but all graphics-based programs have to call on existing libraries. This doesn't become apparent until the program encounters an error.
You might ask how I know this, and it really has to do with the variety of Graphics Cards on the market. Intel, AMD, and NVidia all make their own versions of Graphics Chips, with each version running on the previously mentioned libraries and even older ones. With AMD alone, the past 10 years have seen DirectX 9, DirectX 10, DirectX 11, and Vulkan chip libraries. These libraries provide a consistent basis for function calls across the variety of Graphics Chips on the market. It would be impractical for Adobe, the developer of Photoshop, to build its software from scratch for every Graphics Chip in existence when there are pre-existing libraries, maintained by other companies, that cut the workload significantly.
Therefore, for a program like Photoshop to even work, it has to have linear access to already-implemented resources. Photoshop itself is very modular, but still linear. You can see this in how it structures its menus: I click on Filter to find the Blur category, where I can use the Gaussian Blur equation. Photoshop can be seen as a library of different image-related equations, with sub-equations, that ultimately build what Photoshop calls a linear stack of Layers. Therefore, while the tools are modular, they are nested linearly and applied to the image in a chronologically linear way.
With this in mind, and having seen programs and websites work like this for decades, it is understandable that Machine Learning could be seen as nothing more than if... else statements. The problem doesn't lie in how programming works, but in how if... else statements are seen. For instance, "if true then this, else that" is a valid way to teach new programmers how to understand if... else statements. The programmers who compare Machine Learning to this could say "if (feature has curve) then feature is a, b, c... else feature is L, A, E...", and this could very well be a valid representation of how a network might work. However, that is also how the human mind works, and we learn all the time, so what's the problem?
How We Define Learning
The problem, therefore, is the definition of what it means to learn, and this is indeed a philosophical discussion. You might ask why I have laid this out in such a manner, but it is truly important to understand that machine learning works differently from programming as it has usually been practiced. It is different not because of how it is programmed, but because of the intent it is programmed with. This is why the philosophy is also important: it determines how one goes about making and implementing machine learning.
How does the human mind learn? It learns by practicing until it gets things mostly right. Our recognition usually fails us the first few times we attempt to apply it; it is only through repeated failure that human minds find their Gradient Descent. Gradient Descent is how Machine Learning works, but what exactly is it? It is a mathematical technique given to us by Calculus, and while it has many applications, Machine Learning uses it to measure the amount of error an algorithm has and to move towards less error.
The philosophy behind Machine Learning is to define, mathematically and at their most fundamental levels, how human minds would normally classify Features. Once we have this definition, we write algorithms designed to find these features in a more general sense, because we humans don't make things with the perfection of the computational world. For instance, while a circle is a circle in the human world, it will eventually boil down into a line if we zoom in close enough; the computational world views it as a mathematical equation, which means it will never become a line no matter how far we zoom in. Once we have written the drafts of these programs, these if... else statements, we repeatedly test them to see how accurate they are when applied. This produces an error rate with each test, and the goal is to make the error rate drop via Gradient Descent.
In Calculus terms, this Gradient Descent is really just an X and Y plot line that curves with hills and valleys, and each error rate represents a plot point. The goal for those developing Machine Learning algorithms is to reach an error rate that sits inside the lowest possible valley. However, this is still very much a linear program, because we make it, test it, and change it to make it more correct, and that doesn't constitute learning. Learning requires that an algorithm is able to review past mistakes, use those mistakes to get better results, and make fewer mistakes. Thus, the key to unlocking a Learning Algorithm is making the algorithm remember and change its own algorithm for better results.
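The descent into the lowest valley can be sketched in a few lines of Python. This is a minimal illustration, not a real training setup: the error curve and its derivative below are made up for the example, with a single valley placed at w = 3.

```python
# Minimal sketch of gradient descent on a one-dimensional error curve.

def error(w):
    # hypothetical error curve: a single valley with its bottom at w = 3
    return (w - 3) ** 2 + 1

def error_gradient(w):
    # derivative of the error curve above (calculus gives us the slope)
    return 2 * (w - 3)

w = 0.0              # start with an arbitrary weight
learning_rate = 0.1  # how big a step to take downhill

for step in range(50):
    # move against the gradient, i.e. downhill towards less error
    w -= learning_rate * error_gradient(w)

print(round(w, 3))  # w has descended into the valley, close to 3.0
```

Each pass through the loop is one plot point on the error curve, and the update rule always steps in the direction that lowers the error.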
The Cleverness of Recursive Programming
When looking at Machine Learning programs, the most common theme you will see is that programmers will often run these programs thousands of times to see what they do. While we could explain the different methodologies for teaching an algorithm, the most important facet is that the programmer is looking for the correct weights and biases to get the best Gradient Descent. Here, I am going to discuss one of the many types of algorithms used, which will help you understand why most programmers make their programs recursive.
Let us say that we have a dataset of 100 randomized characters and we want our algorithm to recognize letters. The first method is Supervised Learning, where we know the correct answer for every character that goes into our Machine Learning algorithm. The goal is to feed each character through the algorithm, see whether it guessed correctly, and change values if it got it wrong. We could do this by hand, but that is time-consuming and prone to human error, like reusing values by accident. When it comes to individual feature detection, this is manageable: you may only have to test 100 times per feature to make sure it detects the fundamentals.
However, when you have to check whether the Machine Learning algorithm can use those feature detection nodes in unison, it becomes a mathematical nightmare to do by hand. You can think of it as a factorial equation, with each feature detection node adding one more to the factorial. Therefore, if you are testing for seven features, you would need to test by hand seven factorial, or 5,040, times. Instead, we are much better off having the program detect when it is wrong, change its own values, and then reattempt to guess correctly. This, by definition, is a recursive algorithm, which is the most common way Machine Learning is practiced. However, a known database with known values is still Supervised Learning whether or not it is recursive; recursion just makes the process faster.
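The self-correcting loop described above can be sketched with a simple perceptron-style update rule. The toy dataset and threshold rule here are hypothetical, chosen only to show the essential move: the program compares its guess to the known correct answer and changes its own values when the guess is wrong, instead of a human doing it by hand.

```python
# Sketch of a supervised, self-correcting training loop.

def train(samples, epochs=100, lr=0.1):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, label in samples:              # label is the known answer
            guess = 1 if w * x + b > 0 else 0  # the algorithm's attempt
            error = label - guess              # 0 when the guess was right
            w += lr * error * x                # change values only when wrong
            b += lr * error
    return w, b

# toy supervised dataset: inputs above 2 are labelled 1, the rest 0
data = [(0, 0), (1, 0), (3, 1), (4, 1)]
w, b = train(data)
print(all((1 if w * x + b > 0 else 0) == y for x, y in data))
```

Running the loop many times over the same labelled data is exactly the "test batch" pattern: every wrong guess nudges the weights, and the human only inspects the end result.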
The importance of recursion inside Machine Learning cannot be overstated, because this is how the algorithm teaches itself from then on. Imagine having to correct by hand every error Google Voice Recognition made; it's simply impossible for one human. Thus, recursion allows the programmer to run test batches, Supervised or Unsupervised, and glean whether the algorithm is working well or something in the fundamentals needs to be changed. While I may not have defined every point, this is generally how Machine Learning works and is applied.
The Core of ML is Feature Detection
Now, I have talked a lot about Feature Detection without actually defining it, and this is because it is an abstract concept rather than a defined item. For instance, when you look at the letter A, it has different features than the letter a. Instead of looking for a direct definition, you look for features like a straight line or a curve in the letter to determine parts of a whole. "Parts of a whole" is a great way to think about how Feature Detection works, because that is how all Machine Learning algorithms sift through data.
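As a toy illustration of "parts of a whole", here is a hypothetical sketch in which each letter is defined by a small set of hand-named features, and an observed character is classified by whichever letter's feature set overlaps most with what was detected. The feature names are invented for this example; a real system would learn such features rather than list them by hand.

```python
# Hypothetical feature sets: each letter as "parts of a whole".
FEATURES = {
    "A": {"straight_line", "crossbar"},
    "O": {"curve"},
    "L": {"straight_line", "right_angle"},
}

def classify(observed):
    # pick the letter whose feature set best matches the detected parts
    return max(FEATURES, key=lambda letter: len(FEATURES[letter] & observed))

print(classify({"curve"}))                      # best match: "O"
print(classify({"straight_line", "crossbar"}))  # best match: "A"
```

Notice that detecting a curve does not identify the letter by itself; it only narrows the candidates down to the smaller set of letters that contain a curve, which is exactly the point made below.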
To create a feature, you have to Classify or Categorize the features you define. The point of creating features is ultimately to determine "what does it mean?", because what does it mean if the letter has a curve? The natural answer is that it could only be part of a smaller set of structures. Let us go through the process of detecting or recognizing
