Artificial Neural Network (ANN) in Machine Learning

Artificial Neural Networks – Introduction

Artificial Neural Networks (ANNs), or simply neural networks, are computational algorithms intended to simulate the behavior of biological systems composed of “neurons”.

ANNs are computational models inspired by an animal's central nervous system. They are capable of machine learning as well as pattern recognition, and are presented as systems of interconnected “neurons” that can compute values from inputs.

A neural network is an oriented graph. It consists of nodes, which in the biological analogy represent neurons, connected by arcs, which correspond to dendrites and synapses. Each arc is associated with a weight. At each node, an activation function is applied to the values received along the incoming arcs, adjusted by the weights of those arcs.
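As a sketch of this idea, a single node can be modeled as a weighted sum of its incoming values passed through an activation function. The function and variable names below are illustrative, not part of any particular library:

```python
import math

def sigmoid(x):
    # A common choice of activation function: squashes any real value into (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def neuron_output(inputs, weights, bias):
    # Weighted sum of the values arriving along the incoming arcs,
    # adjusted by the weights of those arcs, then passed through
    # the activation function.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return sigmoid(z)

print(neuron_output([1.0, 0.5], [0.4, -0.2], 0.1))  # ≈ 0.5987
```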

A neural network is a machine learning algorithm based on the model of a human neuron. The human brain consists of millions of neurons, which send and process signals in the form of electrical and chemical impulses. These neurons are connected by special structures known as synapses, which allow neurons to pass signals to one another. A neural network is formed from large numbers of such simulated neurons.

An Artificial Neural Network is an information-processing technique that works the way the human brain processes information. An ANN includes a large number of connected processing units that work together to process information and generate meaningful results from it.

Neural networks can be applied not only to classification but also to the regression of continuous target attributes.

Neural networks find wide application in data mining across sectors such as economics and forensics, and in pattern recognition. After careful training, they can also be used to classify large amounts of data.

A neural network may contain the following 3 layers:

  • Input layer – The activity of the input units represents the raw information that is fed into the network.
  • Hidden layer – The activity of each hidden unit is determined by the activities of the input units and the weights on the connections between the input and the hidden units. There may be one or more hidden layers.
  • Output layer – The behavior of the output units depends on the activity of the hidden units and the weights between the hidden and output units.

Artificial Neural Network Layers

An artificial neural network is typically organized in layers. Layers are made up of many interconnected ‘nodes’, each containing an ‘activation function’. A neural network may contain the following 3 layers:

a. Input layer

The purpose of the input layer is to receive as input the values of the explanatory attributes for each observation. Usually, the number of input nodes is equal to the number of explanatory variables. The input layer presents the patterns to the network, which communicates them to one or more hidden layers.

The nodes of the input layer are passive, meaning they do not change the data. They receive a single value on their input and duplicate that value to their many outputs, sending a copy to every hidden node.

b. Hidden layer

The hidden layers apply given transformations to the input values inside the network. Each hidden node is connected by incoming arcs from input nodes or from other hidden nodes, and by outgoing arcs to output nodes or to other hidden nodes. In the hidden layer, the actual processing is done via a system of weighted ‘connections’. There may be one or more hidden layers. The values entering a hidden node are multiplied by weights, a set of predetermined numbers stored in the program. The weighted inputs are then added to produce a single number.

c. Output layer

The hidden layers then link to an ‘output layer’, which receives connections from the hidden layers or from the input layer. It returns an output value that corresponds to the prediction of the response variable. In classification problems, there is usually only one output node. The active nodes of the output layer combine and change the data to produce the output values.
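Putting the three layers together, a minimal forward pass might look like the sketch below. The weights and layer sizes are arbitrary illustrative values, not trained parameters:

```python
import math

def sigmoid(x):
    # Activation function mapping any real value into (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def layer_forward(values, weight_matrix, biases):
    # Each unit in the layer takes a weighted sum of all incoming
    # values, then applies the activation function.
    return [sigmoid(sum(v * w for v, w in zip(values, row)) + b)
            for row, b in zip(weight_matrix, biases)]

# Input layer: passive, it just passes the raw values through.
inputs = [0.8, 0.2]

# Hidden layer: 3 units, each connected to both input nodes.
hidden = layer_forward(inputs,
                       [[0.5, -0.3], [0.1, 0.8], [-0.6, 0.4]],
                       [0.0, 0.1, -0.1])

# Output layer: a single unit producing the prediction.
output = layer_forward(hidden, [[0.3, -0.2, 0.7]], [0.05])
print(output)  # a single value in (0, 1)
```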

The ability of the neural network to perform useful data manipulation lies in the proper selection of the weights. This is what distinguishes it from conventional information processing.

Structure of a Neural Network

The structure of a neural network, also referred to as its ‘architecture’ or ‘topology’, consists of the number of layers, the elementary units, the interconnections, and the weight-adjustment mechanism. The choice of structure determines the results that will be obtained and is the most critical part of implementing a neural network.

The simplest structure is one in which the units are distributed in two layers: an input layer and an output layer. Each unit in the input layer has a single input and a single output equal to that input. The output unit has all the units of the input layer connected to its input, with a combination function and a transfer function. There may be more than one output unit. In this case, the resulting model is a linear or logistic regression, depending on whether the transfer function is linear or logistic, and the weights of the network are the regression coefficients.
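Under this view, a network with no hidden layer and a logistic transfer function computes exactly a logistic regression. The sketch below (with illustrative, untrained weights) makes the correspondence explicit:

```python
import math

def logistic_network(inputs, weights, bias):
    # A single output unit connected directly to every input unit.
    # The combination function is the weighted sum; the transfer
    # function is logistic, so the network's weights play the role
    # of logistic-regression coefficients.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# With a linear transfer function instead, the same structure
# reduces to ordinary linear regression: the prediction is z itself.
print(logistic_network([2.0, -1.0], [0.7, 0.3], 0.2))  # ≈ 0.7858
```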

Adding one or more hidden layers between the input and output layers, along with units in those layers, increases the predictive power of a neural network. However, the number of hidden layers should be kept as small as possible, so that the network does not merely store all the information from the learning set but can generalize from it, avoiding overfitting.

Overfitting occurs when the weights make the system learn details of the learning set instead of discovering its underlying structures. This happens when the size of the learning set is too small in relation to the complexity of the model.

Whether or not a hidden layer is present, the output layer of the network can sometimes have many units, for instance when there are many classes to predict.

Advantages and Disadvantages of Neural Networks

Let us look at a few advantages and disadvantages of neural networks:

  • Neural networks perform well with both linear and nonlinear data, but a common criticism of neural networks, particularly in robotics, is that they require a large diversity of training examples for real-world operation. This is because any learning machine needs sufficient representative examples in order to capture the underlying structure that allows it to generalize to new cases.
  • Neural networks keep working even if one or a few units fail to respond, but implementing large and effective software neural networks requires committing substantial processing and storage resources. While the brain has hardware tailored to the task of processing signals through a graph of neurons, simulating even the most simplified form on Von Neumann technology may compel a neural network designer to fill millions of database rows for its connections, which can consume vast amounts of computer memory and hard disk space.
  • A neural network learns from the analyzed data and does not require reprogramming, but neural networks are referred to as “black box” models and provide very little insight into what they really do. The user just feeds in input, watches the network train, and awaits the output.


ANNs are considered simple mathematical models that enhance existing data analysis technologies. Although not comparable with the power of the human brain, they are still a basic building block of artificial intelligence.


Vulnerability – Introducing 10th V of Big Data

There is huge hype around Big Data and its features, most of which have been summed up in 9 different Vs of Big Data: Volume, Velocity, Variety, Veracity, Validity, Volatility, Value, Variability, and Viscosity.

In a recently published white paper by the credit reference agency Experian, a proposal has been made to add another “V” to the Big Data features: Vulnerability. With the increasing amount of personal data held about them, people have started feeling that it is being used by commercial websites to pry into their behavior in order to sell them things.

Some people dislike this and may stop doing business with organizations where their private data is at risk, while others aren't worried, as they are comfortable with a transaction that exchanges an amount of privacy for an amount of convenience or value.

John Roughley, Experian's head of strategy for credit services, said: “We think about things emotionally … and the emotion that's associated with data is sometimes of nervousness, anticipation or vulnerability. Stories have been heard about data breaches, and most people have experienced their data being misused as well – people getting calls asking about payment protection insurance, or telling them they had an accident when it wasn't the case.”

In the current scenario, with industries chasing Big Data, when data comes up in casual conversation we generally get excited discussing the amazing new things we can do with the ocean of data available to us, and the ways that Big Data and analytics are changing the world for the better.

But sometimes the tone can be markedly different when discussing personal data being used by businesses and organizations: why do they need to know so much? What will happen if this data gets out? Won't it be easy for criminals to steal money from our bank accounts, or even take our identities?

In order to diminish this fear, organizations need to reassure customers about the safety of their personal data: that it won't be lost, misused or misplaced. This will require achieving a level of “data stewardship” far beyond that offered by most data businesses today.

Once that is done, people will be more interested in hearing about another V – value. “We can help people understand how to extract the most value from their data, such as finding a cheaper energy tariff or sharing their Fitbit data with their doctor to get better medical advice.” All this depends on how we are currently conditioned to think about data.

Advice in Experian's publication, A Data Powered Future, includes taking a careful overview of data security and having procedures in place for monitoring change, so that an organization's use of data can be more transparent to its customers. By adopting a proactive policy of transparency, organizations don't just increase their customers' trust but also open a channel for another conversation about the value that their services can bring.

But we are currently far from this approach. Challenges need to be faced in addressing people's concerns about their personal data, particularly when it comes to medical or financial information. Organizations have a big responsibility to protect our personal data and to be more transparent about its usage. This can be addressed by adding “Vulnerability” as another essential consideration with regard to every piece of data that is collected, which would be a pragmatic step towards addressing the personal data problem.

