In the last decade, Artificial Neural Networks have gained a lot of popularity, mainly thanks to the rise of Big Data combined with increasing computing power.
This blog post was written by our Machine Learning Engineer Ward Van Laer on Medium, and goes over the mechanics of neural networks and the problems they try to solve.
A neural network (NN) is a learning algorithm inspired by the human brain. It consists of multiple connected layers of perceptrons, with a weight attached to each connection. These weights increase or decrease the strength of the signal flowing through the connections; the values the perceptrons then produce are called their "activations". The first and last layers are called the input and output layers, while all layers in between are hidden layers. The hidden layers are unobserved, and their weights change with the sole purpose of making the output layer match the expected output.
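To make this concrete, here is a minimal sketch of a forward pass in plain Python: two inputs flow through one hidden layer of three perceptrons into a single output. The weights and inputs are made up for illustration; a real network would learn them.

```python
def relu(z):
    # A common activation: passes positive signals through, blocks negative ones
    return max(0.0, z)

def layer(inputs, weights, biases, activation):
    # Each perceptron takes a weighted sum of all inputs, adds a bias,
    # and applies its activation function
    return [activation(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# Hypothetical weights: 2 inputs -> hidden layer of 3 perceptrons -> 1 output
hidden = layer([1.0, 2.0],
               [[0.2, 0.4], [-0.5, 0.1], [0.3, 0.3]],
               [0.0, 0.1, -0.2], relu)
output = layer(hidden, [[1.0, -1.0, 0.5]], [0.0], lambda z: z)
```

The hidden layer's activations are intermediate values the user never observes directly; only `output` is visible.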
A basic artificial neural network (NN)
The learning process of an artificial neural network consists of adjusting the weights in each layer so that each input is mapped to the correct output. This requires a lot of data, all of which we send through the network. Based on the results, we can compute how to change the weights to reduce the difference between the output the network produces and the expected output.
This training process is called backpropagation (combined with gradient descent), and it is done over multiple iterations. A deeper network (more hidden layers) will need more iterations and more data to train properly, because it contains more weights to learn.
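A single backpropagation step can be sketched in a few lines. The network below is deliberately tiny (one input, one sigmoid hidden perceptron, one linear output) so the chain rule is visible; the starting weights, input, and target are invented for illustration.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical tiny network: input -> sigmoid hidden neuron -> linear output
x, target = 1.0, 0.5
w1, w2 = 0.4, 0.7   # made-up starting weights
lr = 0.1            # learning rate

for _ in range(200):
    # Forward pass
    h = sigmoid(w1 * x)
    y = w2 * h
    # Backward pass: chain rule from the squared error back to each weight
    d_y = y - target              # d(error)/d(output) for error = 0.5*(y-t)^2
    d_w2 = d_y * h                # gradient for the output weight
    d_h = d_y * w2                # error signal propagated to the hidden layer
    d_w1 = d_h * h * (1 - h) * x  # through the sigmoid derivative h*(1-h)
    # Gradient descent: step each weight against its gradient
    w2 -= lr * d_w2
    w1 -= lr * d_w1
```

Each extra hidden layer adds another link to this chain of derivatives, which is why deeper networks take more iterations to train.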
Take a simple example: predicting the price a house will sell for, based on its age, size, height, and so on. In this case we only need a single perceptron in the output layer, which gives us the price. To train this network we use data from sold houses; the network slowly changes its weights so that, given a house's features, the output matches the expected price.
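The house-price idea can be sketched with a single output perceptron trained by gradient descent. The dataset below is entirely invented (age in decades, size in hundreds of m², price in hundreds of thousands); a real model would use many more examples and features.

```python
# Toy, hand-made data: (age, size) -> price. Values are invented for illustration.
data = [
    ((1.0, 1.2), 2.8),
    ((5.0, 0.8), 1.2),
    ((0.5, 2.0), 4.5),
]

weights = [0.0, 0.0]
bias = 0.0
lr = 0.02  # learning rate: how big each weight adjustment is

for _ in range(5000):
    for features, price in data:
        # Forward pass: one output perceptron, no activation (price is unbounded)
        pred = sum(w * x for w, x in zip(weights, features)) + bias
        error = pred - price
        # Nudge each weight against its error gradient
        weights = [w - lr * error * x for w, x in zip(weights, features)]
        bias -= lr * error
```

After training, the perceptron's weights encode how much each feature contributes to the predicted price.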
For complex models and more complicated use cases, there can be over 1,000 features and a dozen hidden layers. It is also possible to add more output perceptrons, e.g. to also predict how long a house will remain unsold. Of course, adding such complexity to a network has a big influence on training time and the amount of data needed.
One of the current projects of IxorThink is in the field of digital pathology. Given images of scanned tissue, the designed algorithm should be able to classify this tissue as normal or abnormal. Based on these detected abnormalities, diseases like cancer can be detected.
Let's use this example to explain how a neural network can be used for basic classification. The output we expect is binary: tissue is either normal or abnormal. In this case a single output value provides enough information: when its value is close to one, we mark the tissue as normal, and when it is close to zero, we decide it is abnormal. To keep the output between zero and one, we add a sigmoid activation function. This function maps every possible value to an output between zero and one, exactly what we need. If we provide the network with a generous amount of correctly labeled images, it should also be able to make predictions for images it has never seen.
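The sigmoid output and the binary decision can be sketched as follows; the raw scores fed in are hypothetical values a trained network might produce, and the 0.5 threshold is the usual convention.

```python
import math

def sigmoid(z):
    # Maps any real value into the open interval (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

def classify(output):
    # Threshold at 0.5: close to one -> normal, close to zero -> abnormal
    return "normal" if output >= 0.5 else "abnormal"

# Hypothetical raw scores from the network's output perceptron
label_a = classify(sigmoid(3.2))   # strongly positive signal
label_b = classify(sigmoid(-1.7))  # negative signal
```

However extreme the raw score, the sigmoid keeps the output interpretable as a value between zero and one.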
A Neural network used for classification
At IxorThink we are constantly trying to improve our methods to create state-of-the-art solutions. As a software company, we can provide stable and fully developed solutions.