Hands-On Neural Network Programming with C# - E-Book

Matt R. Cole

Description

Neural networks have made a surprise comeback in the last few years and have brought tremendous innovation in the world of artificial intelligence.
The goal of this book is to provide C# programmers with practical guidance in solving complex computational challenges using neural networks and C# libraries such as CNTK and TensorFlowSharp. This book will take you on a step-by-step practical journey, covering everything from the mathematical and theoretical aspects of neural networks to building deep neural networks into your applications with C# and the .NET Framework.

This book begins by giving you a quick refresher on neural networks. You will learn how to build a neural network from scratch using packages such as Encog, AForge, and Accord. You will learn about various concepts and techniques, such as deep networks, perceptrons, optimization algorithms, convolutional networks, and autoencoders. You will also learn ways to add intelligent features to your .NET apps, such as facial and motion detection, object detection and labeling, language understanding, knowledge, and intelligent search.

Throughout this book, you will be working on interesting demonstrations that will make it easier to implement complex neural networks in your enterprise applications.

You can read this e-book in Legimi apps or in any app that supports the following formats:

EPUB
MOBI

Page count: 261

Year of publication: 2018




Hands-On Neural Network Programming with C#
Add powerful neural network capabilities to your C# enterprise applications
Matt R. Cole
BIRMINGHAM - MUMBAI

Hands-On Neural Network Programming with C#

Copyright © 2018 Packt Publishing

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

Commissioning Editor: Pravin Dhandre
Acquisition Editor: Divya Poojari
Content Development Editor: Unnati Guha
Technical Editor: Dinesh Chaudhary
Copy Editor: Safis Editing
Project Coordinator: Manthan Patel
Proofreader: Safis Editing
Indexer: Rekha Nair
Graphics: Jisha Chirayil
Production Coordinator: Nilesh Mohite

First published: September 2018

Production reference: 1270918

Published by Packt Publishing Ltd., Livery Place, 35 Livery Street, Birmingham B3 2PB, UK.

ISBN 978-1-78961-201-1

www.packtpub.com

This book is dedicated to my always supportive and loving wife, Nedda. I also want to thank the professional team at Packt for their hard work and dedication to bringing all my books to market.
mapt.io

Mapt is an online digital library that gives you full access to over 5,000 books and videos, as well as industry-leading tools to help you plan your personal development and advance your career. For more information, please visit our website.

Why subscribe?

Spend less time learning and more time coding with practical eBooks and Videos from over 4,000 industry professionals

Improve your learning with Skill Plans built especially for you

Get a free eBook or video every month

Mapt is fully searchable

Copy and paste, print, and bookmark content

Packt.com

Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.packt.com and as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at [email protected] for more details.

At www.packt.com, you can also read a collection of free technical articles, sign up for a range of free newsletters, and receive exclusive discounts and offers on Packt books and eBooks.

Contributors

About the author

Matt R. Cole is a developer and author with 30 years' experience. Matt is the owner of Evolved AI Solutions, a provider of advanced Machine Learning/Bio-AI, Microservice, and Swarm technologies. Matt is recognized as a leader in Microservice and Artificial Intelligence development and design. As an early pioneer of VOIP, Matt developed the VOIP system used by NASA for the International Space Station and the Space Shuttle. Matt also developed the first Bio Artificial Intelligence framework, which completely integrates mirror and canonical neurons. In his spare time, Matt authors books and continues his education, taking every available course in advanced mathematics, AI/ML/DL, Quantum Mechanics/Physics, String Theory, and Computational Neuroscience.

About the reviewers

Gaurav Aroraa has an M.Phil in computer science. He is a Microsoft MVP, certified as a scrum trainer/coach, XEN for ITIL-F, and APMG for PRINCE-F and PRINCE-P. Gaurav serves as a mentor at IndiaMentor and webmaster at dotnetspider, and he cofounded Innatus Curo Software LLC. Over more than 19 years of his career, he has mentored over a thousand students and professionals in the industry. You can reach Gaurav via Twitter @g_arora.

Rich Pizzo has many years of experience in the design and development of software and systems. He has served as a senior architect and project lead, especially in the realm of financial engineering and trading systems, and was the chief technologist at two companies. His knowledge of and expertise in digital electronics have left their mark in the software domain, as well as in providing heterogeneous solutions to tough optimization problems. He has come up with many unique solutions for maximizing computing performance, utilizing Altera FPGAs and the Quartus development environment and test suite.

Packt is searching for authors like you

If you're interested in becoming an author for Packt, please visit authors.packtpub.com and apply today. We have worked with thousands of developers and tech professionals, just like you, to help them share their insight with the global tech community. You can make a general application, apply for a specific hot topic that we are recruiting an author for, or submit your own idea.

Table of Contents

Title Page

Copyright and Credits

Hands-On Neural Network Programming with C#

Dedication

Packt Upsell

Why subscribe?

Packt.com

Contributors

About the author

About the reviewers

Packt is searching for authors like you

Preface

Who this book is for

What this book covers

To get the most out of this book

Download the example code files

Download the color images

Code in Action

Conventions used

Get in touch

Reviews

A Quick Refresher

Technical requirements

Neural network overview

Neural network training

A visual guide to neural networks

The role of neural networks in today's enterprises

Types of learning

Supervised learning

Unsupervised learning

Reinforcement learning

Understanding perceptrons

Is this useful?

Understanding activation functions

Visual activation function plotting

Function plotting

Understanding back propagation

Forward and back propagation differences

Summary

References

Building Our First Neural Network Together

Technical requirements

Our neural network

Neural network training

Synapses

Neurons

Forward propagation

Sigmoid function

Backward propagation

Calculating errors

Calculating a gradient

Updating weights

Calculating values

Neural network functions

Creating a new network

Importing an existing network

Importing datasets

Testing the network

Exporting the network

Training the network

Testing the network

Computing forward propagation

Exporting the network

Exporting a dataset

The neural network

Neuron connection

Examples

Training to a minimum

Training to a maximum

Summary

Decision Trees and Random Forests

Technical requirements

Decision trees

Decision tree advantages

Decision tree disadvantages

When should we use a decision tree?

Random forests

Random forest advantages

Random forest disadvantages

When should we use a random forest?

SharpLearning

Terminology

Loading and saving models

Example code and applications

Saving a model

Mean squared error regression metric

F1 score

Optimizations

Sample application 1

The code

Sample application 2 – wine quality

The code

Summary

References

Face and Motion Detection

Technical requirements

Facial detection

Motion detection

Code

Summary

Training CNNs Using ConvNetSharp

Technical requirements

Getting acquainted

Filters

Creating a network

Example 1 – a simple example

Example 2 – another simple example

Example 3 – our final simple example

Using the Fluent API

GPU

Fluent training with the MNIST database

Training the network

Testing the data

Predicting data

Computational graphs

Summary

References

Training Autoencoders Using RNNSharp

Technical requirements

What is an autoencoder?

Different types of autoencoder

Standard autoencoder

Variational autoencoders

De-noising autoencoders

Sparse autoencoders

Creating your own autoencoder

Summary

References

Replacing Back Propagation with PSO

Technical requirements

Basic theory

Swarm intelligence

Particle Swarm Optimization

Types of Particle Swarm Optimizations

Original Particle Swarm Optimization strategy

Particle Swarm Optimization search strategy

Particle Swarm Optimization search strategy pseudo-code

Parameter effects on optimization

Replacing back propagation with Particle Swarm Optimization

Summary

Function Optimizations: How and Why

Technical requirements

Getting started

Function minimization and maximization

What is a particle?

Swarm initialization

Chart initialization

State initialization

Controlling randomness

Updating the swarm position

Updating the swarm speed

Main program initialization

Running Particle Swarm Optimization

Our user interface

Run button

Rewind button

Back button

Play button

Pause button

Forward button

Hyperparameters and tuning

Function

Strategy

Dim size

Upper bound

Lower bound

Upper bound speed

Lower bound speed

Decimal places

Swarm size

Max iterations

Inertia

Social weight

Cognitive weight

Inertia weight

Understanding visualizations

Understanding two-dimensional visualizations

Understanding three-dimensional visualizations

Plotting results

Playing back results

Updating the information tree

Adding new optimization functions

The purpose of functions

Adding new functions

Let's add a new function

Summary

Finding Optimal Parameters

Technical requirements

Optimization

What is a fitness function?

Maximization

Gradient-based optimization

Heuristic optimization

Constraints

Boundaries

Penalty functions

General constraints

Constrained optimization phases

Constrained optimization difficulties

Implementation

Meta-optimization

Fitness normalization

Fitness weights for multiple problems

Advice

Constraints and meta-optimization

Meta-meta-optimization

Optimization methods

Choosing an optimizer

Gradient descent (GD)

How it works

Drawbacks

Pattern Search (PS)

How it works

Local Unimodal Sampling (LUS)

How it works

Differential Evolution (DE)

How it works

Particle Swarm Optimization (PSO)

How it works

Many Optimizing Liaisons (MOL)

Mesh (MESH)

Parallelism

Parallelizing the optimization problem

Parallel optimization methods

Necessary parameter tuning

And finally, the code

Performing meta-optimization

Computing fitness

Testing custom problems

Base problem

Creating a custom problem

Our Custom Problem

Summary

References

Object Detection with TensorFlowSharp

Technical requirements

Working with Tensors

TensorFlowSharp

Developing your own TensorFlow application

Detecting images

Minimum score for object highlighting

Summary

References

Time Series Prediction and LSTM Using CNTK

Technical requirements

Long short-term memory

LSTM variants

Applications of LSTM

CNTK terminology

Our example

Coding our application

Loading data and graphs

Loading training data

Populating the graphs

Splitting data

Running the application

Training the network

Creating a model

Getting the next data batch

Creating a batch of data

How well do LSTMs perform?

Summary

References

GRUs Compared to LSTMs, RNNs, and Feedforward Networks

Technical requirements

QuickNN

Understanding GRUs

Differences between LSTM and GRU

Using a GRU versus an LSTM

Coding different networks

Coding an LSTM

Coding a GRU

Comparing LSTM, GRU, Feedforward, and RNN operations

Network differences

Summary

Activation Function Timings

Function Optimization Reference

The Currin Exponential function

Description

Input domain

Modifications and alternative forms

The Webster function

Description

Input distributions

The Oakley & O'Hagan function

Description

Input domain

The Gramacy function

Description

Input domain

Franke's function

Description

Input domain

The Lim function

Description

Input domain

The Ackley function

Description

Input domain

Global minimum

The Bukin function N6

Description

Input domain

Global minimum

The Cross-In-Tray function

Description

Input domain

Global minima

The Drop-Wave function

Description

Input domain

Global minimum

The Eggholder function

Description

Input domain

Global minimum

The Holder Table function

Description

Input domain

Global minimum

The Levy function

Description

Input domain

Global minimum

The Levy function N13

Description

Input domain

Global minimum

The Rastrigin function

Description

Input domain

Global minimum

The Schaffer function N.2

Description

Input domain

Global minimum

The Schaffer function N.4

Description

Input domain

The Shubert function

Description

Input domain

Global minimum

The Rotated Hyper-Ellipsoid function

Description

Input domain

Global minimum

The Sum Squares function

Description

Input domain

Global minimum

The Booth function

Description

Input domain

Global minimum

The McCormick function

Description

Input domain

Global minimum

The Power Sum function

Description

Input domain

The Three-Hump Camel function

Description

Input domain

Global minimum

The Easom function

Description

Input domain

Global minimum

The Michalewicz function

Description

Input domain

Global minima

The Beale function

Description

Input domain

Global minimum

The Goldstein-Price function

Description

Input domain

Global minimum

The Perm function

Description

Input domain

Global minimum

The Griewank function

Description

Input domain

Global minimum

The Bohachevsky function

Description

Input domain

Global minimum

The Sphere function

Description

Input domain

Global minimum

The Rosenbrock function

Description

Input domain

Global minimum

The Styblinski-Tang function

Description

Input domain

Global minimum

Summary

Keep reading

Other Books You May Enjoy

Leave a review - let other readers know what you think

Preface

This book will help users learn how to develop and program neural networks in C#, as well as how to add this exciting and powerful technology to their own applications. Using many open source packages as well as custom software, we will work our way from simple concepts and theory to powerful technology that everyone can use.

Who this book is for

This book is for the C# .NET developer looking to learn how to add neural network technology and techniques to their applications.

What this book covers

Chapter 1, A Quick Refresher, gives you a basic refresher on neural networks.

Chapter 2, Building Our First Neural Network Together, shows what activation functions are, what their purpose is, and how they appear visually. We will also present a small C# application to visualize each, using open source packages such as Encog, AForge, and Accord.

Chapter 3, Decision Trees and Random Forests, helps you to understand what decision trees and random forests are and how they can be used.

Chapter 4, Face and Motion Detection, will have you use the Accord.Net machine learning framework to connect to your local video recording device and capture real-time images of whatever is within the camera's field of view. Any face in the field of view will then be tracked.

Chapter 5, Training CNNs Using ConvNetSharp, will focus on how to train CNNs with the open source package ConvNetSharp. Examples will be used to illustrate the concepts for the user.

Chapter 6, Training Autoencoders Using RNNSharp, will have you use the autoencoders of the open source package RNNSharp to parse and handle various corpora of text.

Chapter 7, Replacing Back Propagation with PSO, shows how particle swarm optimization can replace neural network training methods such as back propagation.

Chapter 8, Function Optimizations: How and Why, introduces you to function optimization, which is an integral part of every neural network.

Chapter 9, Finding Optimal Parameters, will show you how to easily find optimal parameters for your neural network functions using numeric and heuristic optimization techniques.

Chapter 10, Object Detection with TensorFlowSharp, will expose the reader to the open source package TensorFlowSharp.

Chapter 11, Time Series Prediction and LSTM Using CNTK, will see you using the Microsoft Cognitive Toolkit (CNTK) and long short-term memory (LSTM) networks to accomplish time series prediction.

Chapter 12, GRUs Compared to LSTMs, RNNs, and Feedforward Networks, deals with Gated Recurrent Units (GRUs), including how they compare to other types of neural network.

Appendix A, Activation Function Timings, shows different activation functions and their respective plots.

Appendix B, Function Optimization Reference, includes different optimization functions.

To get the most out of this book

In this book, we assume the reader has a basic knowledge and familiarity with C# .NET software development and knows their way around Microsoft Visual Studio.

Download the example code files

You can download the example code files for this book from your account at www.packt.com. If you purchased this book elsewhere, you can visit www.packt.com/support and register to have the files emailed directly to you.

You can download the code files by following these steps:

1. Log in or register at www.packt.com.

2. Select the SUPPORT tab.

3. Click on Code Downloads & Errata.

4. Enter the name of the book in the Search box and follow the onscreen instructions.

Once the file is downloaded, please make sure that you unzip or extract the folder using the latest version of:

WinRAR/7-Zip for Windows

Zipeg/iZip/UnRarX for Mac

7-Zip/PeaZip for Linux

The code bundle for the book is also hosted on GitHub at https://github.com/PacktPublishing/Hands-On-Neural-Network-Programming-with-CSharp. In case there's an update to the code, it will be updated on the existing GitHub repository.

We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!

Download the color images

We also provide a PDF file that has color images of the screenshots/diagrams used in this book. You can download it here: http://www.packtpub.com/sites/default/files/downloads/9781789612011_ColorImages.pdf.

Code in Action

Visit the following link to check out videos of the code being run: http://bit.ly/2DlRfgO.

Get in touch

Feedback from our readers is always welcome.

General feedback: If you have questions about any aspect of this book, mention the book title in the subject of your message and email us at [email protected].

Errata: Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you have found a mistake in this book, we would be grateful if you would report this to us. Please visit www.packt.com/submit-errata, select your book, click on the Errata Submission Form link, and enter the details.

Piracy: If you come across any illegal copies of our works in any form on the Internet, we would be grateful if you would provide us with the location address or website name. Please contact us at [email protected] with a link to the material.

If you are interested in becoming an author: If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, please visit authors.packtpub.com.

Reviews

Please leave a review. Once you have read and used this book, why not leave a review on the site that you purchased it from? Potential readers can then see and use your unbiased opinion to make purchase decisions, we at Packt can understand what you think about our products, and our authors can see your feedback on their book. Thank you!

For more information about Packt, please visit packt.com.

A Quick Refresher

Welcome to Hands-On Neural Network Programming with C#. I want to thank you for purchasing this book and for taking this journey with us. It seems as if, everywhere you turn, everywhere you go, all you hear and read about is machine learning, artificial intelligence, deep learning, neuron this, artificial that, and on and on. And, to add to all that excitement, everyone you talk to has a slightly different idea about the meaning of each of those terms.

In this chapter, we are going to go over some very basic neural network terminology to set the stage for future chapters. We need to be speaking the same language, just to make sure that everything we do in later chapters is crystal clear.

I should also let you know that the goal of the book is to get you, a C# developer, up and running as fast as possible. To do this, we will use as many open source libraries as possible. We do have to build a few custom applications, but we've provided the source code for those as well. In all cases, we want you to be able to add this functionality to your applications with maximal speed and minimal effort.

OK, let's begin.

Neural networks have been around for many years but have made a resurgence over the past few years and are now a hot topic. And that, my friends, is why this book is being written. The goal here is to help you get through the weeds and into the open so you can navigate your neural path to success. There is a specific focus in this book on C# .NET developers. I wanted to make sure that the C# developers out there had handy resources that could be of some help in their projects, rather than the Python, R, and MATLAB code we more commonly see. If you have Visual Studio installed and a strong desire to learn, you are ready to begin your journey.

First, let's make sure we're clear on a couple of things. In writing this book, the assumption was made that you, the reader, had limited exposure to neural networks. If you do have some exposure, that is great; you may feel free to jump to the sections that interest you the most. I also assumed that you are an experienced C# developer and have built applications using C#, .NET, and Visual Studio, although I made no assumptions as to which versions of each you may have used. This book is not about C# syntax, the .NET Framework, or Visual Studio itself. Once again, the purpose is to get as many valuable resources into the hands of developers, so they can embellish their code and create world-class applications.

Now that we've gotten that out of the way, I know you're excited to jump right in and start coding, but to make you productive, we first must spend some time going over some basics. A little bit of theory, some fascinating insights into the whys and wherefores, and we're going to throw in a few visuals along the way to help with the rough-and-tough dry stuff. Don't worry; we won't go too deep on the theory, and, in a few pages from here, you'll be writing and going through source code!

Also, keep in mind that research in this area is rapidly evolving. What is the latest and greatest today is old news next month. Therefore, consider this book an overview of different research and opinions. It is not the be-all-and-end-all bible of everything neural network-related, nor should it be perceived to be. You are very likely to encounter someone with opinions different from those of the writer. You're going to find people who will write apps and functions differently. That's great—gather all the information that you can, and make informed choices on your own. Only by doing that will you increase your knowledge base.

This chapter will include the following topics:

Neural network overview

The role of neural networks in today's enterprises

Types of learning

Understanding perceptrons

Understanding activation functions

Understanding back propagation

Technical requirements

Basic knowledge of C# is a must for understanding the applications that we will develop in this book. Microsoft Visual Studio (any version) is the preferred software for developing them.

Neural network overview

Let's start by defining exactly what we are going to call a neural network. Let me first note that you may also hear a neural network called an Artificial Neural Network (ANN). Although personally I do not like the term artificial, we'll use those terms interchangeably throughout this book.

"Let's state that a neural network, in its simplest form, is a system comprising several simple but highly interconnected elements; each processes information based upon their response to external inputs."

Did you know that neural networks are more commonly, but loosely, modeled after the cerebral cortex of a mammalian brain? Why didn't I say that they were modeled after humans? Because many biological and computational studies draw on the brains of rats, monkeys, and, yes, humans. A large neural network may have hundreds or maybe even thousands of processing units, whereas a mammalian brain has billions. It's the neurons that do the magic, and we could in fact write an entire book on that topic alone.

Here's why I say they do all the magic: If I showed you a picture of Halle Berry, you would recognize her right away. You wouldn't have time to analyze things; you would know based upon a lifetime of collected knowledge. Similarly, if I said the word pizza to you, you would have an immediate mental image and possibly even start to get hungry. How did all that happen just like that? Neurons! Even though the neural networks of today continue to gain in power and speed, they pale in comparison to the ultimate neural network of all time, the human brain. There is so much we do not yet know or understand about this neural network; just wait and see what neural networks will become once we do!

Neural networks are organized into layers made up of what are called nodes or neurons. These nodes are interconnected (throughout this book, we use the terms node and neuron interchangeably). Information is presented to the input layer, processed by one or more hidden layers, and then given to the output layer for final (or continued further) processing—lather, rinse, repeat!

But what is a neuron, you ask? Using the following diagram, let's state this:

"A neuron is the basic unit of computation in a neural network"

As I mentioned earlier, a neuron is sometimes also referred to as a node or a unit. It receives input from other nodes or external sources and computes an output. Each input has an associated weight (w1 and w2 below), which is assigned based on its relative importance to the other inputs. The node applies a function f (an activation function, which we will learn more about later on) to the weighted sum of its inputs. Although that is an extreme oversimplification of what a neuron is and what it can do, that's basically it.
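To make that description concrete, here is a minimal C# sketch of a single neuron with weighted inputs and a sigmoid activation function. The class shape, names, and example values are our own illustrative assumptions, not code from any particular library used in this book:

using System;

// A minimal, illustrative neuron: it computes f(w1*x1 + w2*x2 + bias),
// where f is the sigmoid activation function.
public class Neuron
{
    public double[] Weights { get; set; }
    public double Bias { get; set; }

    // Sigmoid squashes any real number into the range (0, 1).
    private static double Sigmoid(double x) => 1.0 / (1.0 + Math.Exp(-x));

    public double Compute(double[] inputs)
    {
        double sum = Bias;
        for (int i = 0; i < inputs.Length; i++)
            sum += Weights[i] * inputs[i];  // weighted sum of the inputs
        return Sigmoid(sum);                // apply the activation function
    }
}

For example, a neuron with weights w1 = 0.5 and w2 = -0.6 and a bias of 0.1, given the inputs (1.0, 0.0), returns Sigmoid(0.6), which is roughly 0.65.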

Let's look visually at the progression from a single neuron into a very deep learning network. Here is what a single neuron looks like visually based on our description:

Next, the following diagram shows a very simple neural network comprised of several neurons:

Here is a somewhat more complicated, or deeper, network:

Neural network training

Now that we know what a neural network and neurons are, we should talk about what they do and how they do it. How does a neural network learn? Those of you with children already know the answer to this one. If you want your child to learn what a cat is, what do you do? You show them cats (pictures or real). You want your child to learn what a dog is? Show them dogs. A neural network is conceptually no different. It has a form of learning rule that will modify the incoming weights from the input layer, process them through the hidden layers, put them through an activation function, and hopefully will be able to identify, in our case, cats and dogs. And, if done correctly, the cat does not become a dog!

One of the most common learning rules with neural networks is what is known as the delta rule. This is a supervised rule that is invoked each time the network is presented with another learning pattern. Each time this happens it is called a cycle or epoch. The invocation of the rule will happen each time that input pattern goes through one or more forward propagation layers, and then through one or more backward propagation layers.

More simply put, when a neural network is presented with an image it tries to determine what the answer might be. The difference between the correct answer and our guess is the error or error rate. Our objective is that the error rate gets either minimized or maximized. In the case of minimization, we need the error rate to be as close to 0 as possible for each guess. The closer we are to 0, the closer we are to success.
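As a rough sketch of that idea (the exact update term depends on the activation function and on the library you use), a delta-rule weight update in C# might look like the following; all names and values here are illustrative assumptions:

double learningRate = 0.1;       // size of each correction step
double[] weights = { 0.5, -0.6 };
double[] inputs  = { 1.0, 0.0 };
double target = 1.0;             // the correct answer
double output = 0.65;            // the network's guess from the forward pass

double error = target - output;  // how far off the guess was
for (int i = 0; i < weights.Length; i++)
{
    // Nudge each weight in proportion to the error and to the input that
    // contributed to it. (For a sigmoid unit, this term is also scaled
    // by the derivative output * (1 - output).)
    weights[i] += learningRate * error * inputs[i];
}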

As we progress, we undertake what is termed gradient descent, meaning we move steadily toward what is called the global minimum, our lowest possible error, which hopefully is tantamount to success. We descend toward the global minimum.
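Stripped of all the neural network machinery, gradient descent can be sketched in a few lines of C#. The toy error function below, E(w) = (w - 3)^2 with its global minimum at w = 3, is purely an assumption for illustration:

using System;

double w = 0.0;                          // arbitrary starting weight
double learningRate = 0.1;

for (int epoch = 0; epoch < 50; epoch++)
{
    double gradient = 2.0 * (w - 3.0);   // slope of E(w) = (w - 3)^2 at the current w
    w -= learningRate * gradient;        // step downhill, against the slope
}

Console.WriteLine(w);                    // prints a value very close to 3.0, the global minimum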

Once the network itself is trained, and you are happy, the training cycle can be put to bed and you can move on to the testing cycle. During the testing cycle, only forward propagation is used. The output of this process is the model that will be used for further analysis. Again, no back propagation occurs during testing.

A visual guide to neural networks

In this section, I could type thousands of words trying to describe all of the combinations of neural networks and what they look like. However, no amount of words would do any better than the diagram that follows:

Reprinted with permission, Copyright Asimov Institute Source: http://www.asimovinstitute.org/neural-network-zoo/

Let's talk about a few of the more common networks from the previous diagram:

Perceptron:

This is the simplest feed-forward neural network available, and, as you can see, it does not contain any hidden layers:

Feed-forward network: