{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Biological and Artificial Neurons"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before going ahead, we will first explore what neurons are and how the neurons in our brain\n",
    "actually work, and then we will learn about artificial neurons.\n",
    "\n",
    "A neuron can be defined as the basic computational unit of the human brain. Neurons are\n",
    "the fundamental units of our brain and nervous system. Our brain encompasses\n",
    "approximately 100 billion neurons. Neurons are connected to one another through\n",
    "structures called synapses, which are responsible for receiving input from the\n",
    "external environment and the sensory organs, for sending motor instructions to our muscles, and\n",
    "for performing other activities.\n",
    "\n",
    "A neuron can also receive inputs from other neurons through branchlike structures\n",
    "called dendrites. These inputs are strengthened or weakened; that is, they are weighted\n",
    "according to their importance, and then they are summed together in the cell body, called\n",
    "the soma. From the cell body, these summed inputs are processed, move through the\n",
    "axon, and are sent to the other neurons.\n",
    "\n",
    "The basic single biological neuron is shown in the following diagram:\n",
    "\n",
    "Now, let's see how artificial neurons work. Let's suppose we have three inputs $x_1,ドル $x_2,ドル and $x_3$\n",
    "to predict output $y$. These inputs are multiplied by weights $w_1,ドル $w_2,ドル and $w_3$ and\n",
    "summed together as follows:\n",
    "\n",
    "$$x_{1} \\cdot w_{1}+x_{2} \\cdot w_{2}+x_{3} \\cdot w_{3}$$"
   ]
  },
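  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The weighted sum above can be sketched in plain Python. The input and weight values here are assumed purely for illustration; they are not from the text:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustrative example values (assumptions, not from the text)\n",
    "x = [0.2, 0.5, 0.1]   # inputs x1, x2, x3\n",
    "w = [0.4, 0.8, 0.6]   # weights w1, w2, w3\n",
    "\n",
    "# x1*w1 + x2*w2 + x3*w3\n",
    "weighted_sum = sum(x_i * w_i for x_i, w_i in zip(x, w))\n",
    "print(weighted_sum)"
   ]
  },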
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "But why are we multiplying these inputs by weights? Because not all of the inputs are\n",
    "equally important in calculating the output $y$. Let's say that $x_2$ is more important in\n",
    "calculating the output than the other two inputs. Then, we assign a higher value to $w_2$\n",
    "than to the other two weights. So, upon multiplying weights with inputs, the term $x_2 \\cdot w_2$ will\n",
    "have a higher value than the other two terms. In simple terms, weights are used for strengthening\n",
    "the inputs. After multiplying the inputs with the weights, we sum them together and add a\n",
    "value called the bias, $b$:\n",
    "\n",
    "$$z=\\left(x_{1} \\cdot w_{1}+x_{2} \\cdot w_{2}+x_{3} \\cdot w_{3}\\right)+b$$\n",
    "\n",
    "If you look at the preceding equation closely, it may look familiar. Doesn't $z$ look like the\n",
    "equation of linear regression? Isn't it just the equation of a straight line? We know that the\n",
    "equation of a straight line is given as:\n",
    "\n",
    "$$z=m x+b$$\n",
    "\n",
    "Here, $m$ is the weight (coefficient), $x$ is the input, and $b$ is the bias (intercept).\n",
    "\n",
    "Well, yes. Then, what is the difference between neurons and linear regression? In neurons,\n",
    "we introduce non-linearity to the result, $z,ドル by applying a function $f(\\cdot)$ called the activation\n",
    "or transfer function. Thus, our output becomes:\n",
    "\n",
    "$$y=f(z)$$\n",
    "\n",
    "A single artificial neuron is shown in the following diagram:\n",
    "\n",
    "So, a neuron takes the input, $x,ドル multiplies it by the weights, $w,ドル and adds the bias, $b,ドル to form $z,ドル and\n",
    "then we apply the activation function on $z$ and get the output, $y$."
   ]
  }
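,
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The full artificial neuron described above can be sketched in plain Python. This is a minimal illustration with assumed example values for the inputs, weights, and bias, and it uses the sigmoid as the activation function $f$ (one common choice; the text does not fix a particular $f$):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import math\n",
    "\n",
    "def sigmoid(z):\n",
    "    # Sigmoid activation: squashes z into the range (0, 1)\n",
    "    return 1.0 / (1.0 + math.exp(-z))\n",
    "\n",
    "def neuron(x, w, b):\n",
    "    # z = (x1*w1 + x2*w2 + x3*w3) + b\n",
    "    z = sum(x_i * w_i for x_i, w_i in zip(x, w)) + b\n",
    "    # y = f(z)\n",
    "    return sigmoid(z)\n",
    "\n",
    "# Assumed example values, purely illustrative\n",
    "y = neuron(x=[0.2, 0.5, 0.1], w=[0.4, 0.8, 0.6], b=0.5)\n",
    "print(y)"
   ]
  }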
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python [conda env:anaconda]",
   "language": "python",
   "name": "conda-env-anaconda-py"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 2
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython2",
   "version": "2.7.11"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}