In a previous post, I wrote about the biggest failures in predicting the future, especially in the technology domain. It's a fun post to check out. In 1964, Sir Arthur Clarke gave this famous quote:
Trying to predict the future is a discouraging and hazardous occupation, because the prophet invariably falls between two stools:
- If his predictions sound at all reasonable, you can be quite sure that in twenty or, at most, fifty years, the progress of science and technology has made him seem ridiculously conservative.
- On the other hand, if, by some miracle, a prophet could describe the future exactly as it was going to take place, his predictions would sound so absurd, so far-fetched, that everybody would laugh him to scorn. …
Simply speaking, a hash function is a mathematical function that takes an input of any size and converts it to a fixed-size output. Consider this simple hash function: H(X) = last digit of X.
So no matter what the input is or how large it is, we return a one-digit output. Another important property is that the same input will always give the same output: H(24) will always be 4.
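As a minimal sketch, here is that toy hash function in Python; the name `h` and the `% 10` trick for extracting the last digit are my illustrative choices, not from the original post:

```python
def h(x: int) -> int:
    """Toy hash function: map an integer of any size to its last digit (0-9)."""
    return abs(x) % 10

# The output size is fixed (one digit) no matter how large the input is,
# and the same input always produces the same output.
assert h(24) == 4
assert h(24) == 4           # deterministic: repeated calls agree
assert h(1234567890) == 0   # arbitrarily large input, still one digit
```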
In summary: …
In a previous post, we explained the mechanics behind neural networks. In this post, we will show a basic implementation in pure NumPy and in TensorFlow.
As we previously explained, neural network execution has 4 main steps: …
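As a rough sketch, assuming the four steps are the usual ones (forward pass, loss computation, backward pass, and weight update), here is a minimal training loop in pure NumPy; the toy data and learning rate are illustrative:

```python
import numpy as np

# Toy single-unit network trained to learn y = 2x.
rng = np.random.default_rng(0)
w, b = rng.normal(), 0.0
x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2.0 * x

lr = 0.01
for _ in range(500):
    # 1. Forward pass
    y_hat = w * x + b
    # 2. Loss (mean squared error)
    loss = np.mean((y_hat - y) ** 2)
    # 3. Backward pass: gradients of the loss w.r.t. w and b
    grad_w = np.mean(2 * (y_hat - y) * x)
    grad_b = np.mean(2 * (y_hat - y))
    # 4. Weight update (gradient descent)
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # w converges to ~2, b to ~0
```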
This post is meant to be read after: …
For all the previously introduced layers, the same output is generated if we repeat the same input several times. For instance, take a linear layer with f(x) = 2x: each time we ask it to predict f(3), we get 6. So if we ask ten times in a row for the output when the input is 3, the NN will always give 6:
f(3) = 6; f(3) = 6; f(3) = 6; f(3) = 6; f(3) = 6; …
Now imagine we are training an algorithm to detect repetitions: we want f(3) = 0 the first time (no repetition detected), then f(3) = 1 the second time. We can't achieve this behavior with non-recurrent layers, since by definition they always give the same output for the same input. One hacky workaround is to take a vector of 2 variables, so we can treat the first variable differently from the second: f([3; 0]) = 0 (no repetition detected) but f([3; 3]) = 1 (repetition detected). …
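To make the limitation concrete, here is a minimal Python sketch of a stateful function whose output depends on past inputs as well as the current one. It is my own illustration of the idea behind recurrent layers, not a real recurrent layer:

```python
class RepetitionDetector:
    """Stateful 'layer' sketch: the output depends on the current input
    and on what was seen before, unlike a stateless layer such as f(x) = 2x."""

    def __init__(self):
        self.last = None  # hidden state: the previous input

    def __call__(self, x):
        repeated = 1 if x == self.last else 0
        self.last = x  # update the hidden state
        return repeated

f = RepetitionDetector()
print(f(3))  # 0: first time we see 3, no repetition
print(f(3))  # 1: same input again, repetition detected
```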
After introducing neural networks and linear layers, and after stating the limitations of linear layers, we now introduce dense (non-linear) layers.
In general, they have the same formula as linear layers, w·x + b, but the result is passed through a non-linear function called an activation function:
y = f(w*x + b)  // learn w and b, with f a linear or non-linear activation function
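As a minimal sketch of that formula in NumPy (the shapes and the choice of tanh as the activation are illustrative assumptions, not from the original post):

```python
import numpy as np

def dense(x, w, b, activation=np.tanh):
    """Dense layer sketch: y = f(w.x + b), with f a non-linear activation."""
    return activation(w @ x + b)

rng = np.random.default_rng(0)
x = rng.normal(size=3)        # 3 input features
w = rng.normal(size=(4, 3))   # weights for 4 output units
b = np.zeros(4)               # biases
print(dense(x, w, b))         # 4 outputs, each squashed into (-1, 1) by tanh
```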
The “Deep” in deep learning comes from the increased complexity that results from stacking several consecutive (hidden) non-linear layers. Here are some graphs of the most famous activation functions:
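To accompany those graphs, here is a short NumPy sketch of three of the most common activation functions; the selection is mine, and the original post may graph others:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))   # squashes any input into (0, 1)

def relu(x):
    return np.maximum(0.0, x)         # zero for negatives, identity for positives

x = np.array([-2.0, 0.0, 2.0])
print(sigmoid(x))   # [0.1192 0.5    0.8808]
print(np.tanh(x))   # [-0.964  0.     0.964]
print(relu(x))      # [0. 0. 2.]
```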
Many people perceive neural networks as black magic. We all sometimes have the tendency to think that there is no rationale or logic behind a neural network's architecture, and we would like to believe that all we can do is try a random selection of layers, throw some computational power (GPUs/TPUs) at it, and wait, lazily.
Although there is no strong formal theory on how to select neural network layers and configuration, and although some hyper-parameters can only be tuned by trial and error (meta-learning, for instance), there are still heuristics, guidelines, and theories that can considerably reduce the search space of suitable architectures. In a previous blog post, we introduced the inner mechanics of neural networks. …
Machine learning (ML) is one of the hottest fields in computer science. Many people jump in with the false idea that it's just about running ten lines of Python code and expecting things to work by magic in any situation. This blog post is about all the things I learned the hard way. I hope it saves you some of the time I lost making the same mistakes.
Machine learning is like any scientific field: it has its own rules, logic, and limitations. Believing that it's some sort of black magic doesn't help you improve your ML skills; this belief works against the scientific curiosity needed to understand how each model and layer type works. …
The dimension of a mathematical object is the number of independent variables needed to fully describe it. A point has 0 dimensions, a line has 1 dimension, a square has 2 dimensions, and a cube has 3 dimensions. On a line, we need one variable, say the distance from a starting point, in order to pinpoint our position. On a square, we need at least 2 pieces of information (x and y). In a cube, we need 3 coordinates (x, y, z).