-by gagan

< This is a beginner-friendly blog, where the concepts are explained at a surface level. There will be follow-up blogs that dive deeper into the math behind them. >

Preview

Have you ever wondered how Google image search can scan and recognise text, how your digital assistant can recognise your voice, how your phone unlocks with your face, or how Tesla's Autopilot works? Well, it turns out that the main factor behind all of these technologies is something known as Neural Networks.

The term Neural Network, as the name indicates, was originally inspired by biological neurons, and neural networks as a whole were inspired by how the brain works. In today's world, however, artificial neural networks (ANNs) are quite different from biological ones. In fact, the human brain is so complicated that even today there is no definite answer for how it works completely.

Introduction

Before we move on to learn about Neural Networks, let us briefly learn how the biological neuron works.

The neuron consists of three main parts: the Dendrites, which receive signals from other neurons; the Soma (cell body), which processes those signals; and the Axon, which carries the resulting signal onward to other neurons.

Now that we know how a biological neuron works, let's move on to Artificial Neural Networks. If you want to know more about the history of Neural Networks, here is a good read:

A Very Short History of Artificial Neural Networks

The Perceptron

Many events led to the development of ANNs (the link above covers that history), but the most significant of them is the Perceptron. Let's learn briefly about the Perceptron before we move on to ANNs.

Brief Introduction

Imagine a machine that can learn to recognize patterns, like telling the difference between a cat and a dog, by adjusting itself based on experience. This was the vision behind the perceptron, introduced by Frank Rosenblatt in 1957: one of the earliest and simplest forms of artificial neural networks, and the basis for many future innovations.

What Is a Perceptron?

A perceptron is a computational model inspired by the biological neurons in our brains. It takes several input values (features), multiplies each by a corresponding weight (think of these as "importance" factors), adds them up, and then passes the result through an activation function, usually a simple step function. If the result crosses a certain threshold, the perceptron "fires" and outputs one class (say, 1 for "cat"); otherwise, it outputs the other (0 for "dog").
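The description above can be sketched in a few lines of Python. This is a minimal illustration, not a full implementation: the threshold is folded into a bias term (a common, equivalent formulation), and the example features and weights are made up for demonstration rather than learned from data.

```python
def step(z):
    """Step activation: output 1 if the weighted sum crosses the threshold (here 0)."""
    return 1 if z >= 0 else 0

def perceptron(inputs, weights, bias):
    """Multiply each input by its weight, sum them up, add the bias,
    then pass the result through the step activation."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return step(weighted_sum + bias)

# Two hypothetical features (say, ear shape and whisker length),
# with hand-picked weights and bias purely for illustration:
print(perceptron([1.0, 0.5], [0.6, 0.4], -0.5))  # weighted sum 0.8 - 0.5 = 0.3, so it "fires": 1 ("cat")
print(perceptron([0.2, 0.1], [0.6, 0.4], -0.5))  # weighted sum 0.16 - 0.5 = -0.34, so it stays off: 0 ("dog")
```

What makes the perceptron interesting is that the weights and bias are not fixed by hand like this; Rosenblatt's learning rule adjusts them from labelled examples, which later blogs in this series can cover in more depth.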