10 steps to bootstrap your machine learning project (part 2)

Thomas Olivier
Published in metaflow-ai
Nov 2, 2016


In the first article of this series, we laid out the best steps to follow when working on a new machine learning project.

As a reminder, the first five steps are:

  1. Define your task
  2. Define your dataset
  3. Split your dataset
  4. Define your metrics
  5. Establish a baseline

Your baseline is your lower boundary: you want to get better results than this dumb model. You are now aiming at another boundary: the human error rate, which is the error rate someone familiar with your task can easily achieve. You want to reach this human error rate and, if possible, go beyond it.

6. Know some absolute metrics about neural nets 📊

Before implementing your neural network, it is worth knowing a few absolute metrics about its computation.

As stated by Yoshua Bengio at the Bay Area Deep Learning School (video here), if you have a network with N features and each one needs O(K) (an order of K) parameters in the final model, you’ll need at least O(K*N) examples for your network to be able to generalize well. That’s an interesting metric, as it gives you a first idea of the required dataset size.
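As a back-of-the-envelope illustration of this rule of thumb (the numbers below are made up, not a recommendation):

```python
# Rough dataset-size estimate from the O(K * N) rule of thumb.
n_features = 1000  # N: number of input features (an assumption)
k = 10             # K: order of parameters needed per feature (an assumption)

min_examples = k * n_features
print(f"Plan for at least ~{min_examples} training examples")  # ~10,000
```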

A more technical metric you can look at is the Vapnik–Chervonenkis dimension (VC dimension), which measures the complexity of a neural network. It gives you a tool to measure the learning capacity of the network. Although it’s not a mandatory step and it’s used more as a theoretical tool, it gives you a sense of what’s happening in terms of learning.

7. Implement an existing neural network model ⚙

You know the kind of neural network you want to implement. That is awesome. Don’t rush into developing it from scratch: you will probably be able to download it, already trained. This will save you tons of hours (both in engineering time and in GPU time). As Andrej Karpathy says: “Don’t be a hero” 😎.

Set it up with your favorite machine learning framework and run it!
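For instance, with Keras (one possible framework; the exact call depends on your library and its version), downloading a pre-trained network takes a couple of lines:

```python
from tensorflow.keras.applications import VGG16

# Download VGG16 with weights pre-trained on ImageNet.
# include_top=False drops the original classification head so the
# convolutional base can be reused for your own task.
base_model = VGG16(weights="imagenet", include_top=False,
                   input_shape=(224, 224, 3))
base_model.summary()
```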

You will also want to fine-tune your model so that it works well with your data. Fine-tuning a model means playing with its hyperparameters, the size of the network, the processing of your data…
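A minimal fine-tuning sketch, continuing the Keras example above and assuming a hypothetical 10-class task: freeze the pre-trained base and train only a new classification head.

```python
from tensorflow.keras import layers, models

# Freeze the pre-trained convolutional base so its weights stay fixed.
base_model.trainable = False

# Stack a new, task-specific head on top of the frozen base.
model = models.Sequential([
    base_model,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dense(10, activation="softmax"),  # 10 classes: an assumption
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels,
#           validation_data=(dev_images, dev_labels), epochs=5)
```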

Compute the error on the train and dev datasets, fine-tune it to your specific task and analyse the output errors.

Try to always stay close to your data:

  • Visualize your inputs / outputs (you could, for example, plot your data points along with their predicted categories)
  • Collect summary statistics
  • Play with your hyperparameters and see how they affect your results (plot the error rates obtained with different hyperparameters on the same graph, as sketched below)
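For example, with matplotlib (the error curves below are hypothetical placeholders for whatever your training runs record):

```python
import matplotlib.pyplot as plt

# Hypothetical dev-error curves recorded for three learning rates.
histories = {
    "lr=0.1":   [0.50, 0.42, 0.40, 0.39, 0.39],
    "lr=0.01":  [0.55, 0.38, 0.30, 0.27, 0.26],
    "lr=0.001": [0.60, 0.52, 0.45, 0.40, 0.36],
}

for label, errors in histories.items():
    plt.plot(errors, label=label)

plt.xlabel("Epoch")
plt.ylabel("Dev error rate")
plt.legend()
plt.show()
```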

8. Other architectures?

If you had doubts when choosing your model, it’s worth giving other architectures a try and comparing their results.

For example, if you plan to translate from one language to another, you could have a look at the Seq2Seq model, but also at some simpler RNN architectures.

If you are working with words, you could try different embeddings.
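For instance, gensim’s downloader ships several pre-trained embeddings you can swap in and compare (a sketch; the model names are those published by gensim, and the downloads are large):

```python
import gensim.downloader as api

# Two alternative pre-trained embeddings for the same task.
glove = api.load("glove-wiki-gigaword-100")      # GloVe, 100 dimensions
word2vec = api.load("word2vec-google-news-300")  # word2vec, 300 dimensions

# Compare how each embedding organizes the vocabulary.
print(glove.most_similar("translation", topn=3))
print(word2vec.most_similar("translation", topn=3))
```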

For a full list of classic networks, The Asimov Institute came up with an awesome blog post: The Neural Network Zoo.

If you do not find anything ready to use, you can gain intuition by reading papers related to your problem. It will help you get insights into how a specific task is solved and could lead you to try new models.

9. It’s not working, what should I do next? 🤔

To understand what you should do next, first compare the human error rate to your model’s error rate, on both the training and the dev sets.

Plot the different error lines on a chart.

To know what kind of action to take, we have to know whether we are currently facing a high-bias and/or a high-variance problem.

Remember that we split our dataset into three groups: the training set, the dev (or cross validation) set and the test set.
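A crude diagnostic sketch, assuming you have already measured the three error rates (the `tolerance` threshold is an arbitrary choice):

```python
def diagnose(human_error, train_error, dev_error, tolerance=0.02):
    """Flag high bias and/or high variance from three error rates."""
    if train_error - human_error > tolerance:
        print("High bias: the model underfits even the training set.")
    if dev_error - train_error > tolerance:
        print("High variance: the model does not generalize to the dev set.")

diagnose(human_error=0.01, train_error=0.08, dev_error=0.10)  # -> high bias
```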

It’s important not to take the following actions all at the same time, as a single action could solve all your problems. Do the first one, look at the results, and see if you are still facing the same issue. If so, try another solution!

If your train error is high compared to the human-level error (high bias):

  • Try a bigger (deeper) model
  • Add polynomial features / change your hyperparameters
  • Train your model longer
  • Try a new model architecture

If your dev error is high compared to your train error (high variance):

  • Get more data
  • Generate more data (add some noise, transpose,…)
  • Use a smaller set of features
  • Try regularization (see the sketch after this list)
  • Use early stopping
  • Try a new model architecture
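In Keras, for example, regularization and early stopping take only a few lines (a sketch; the layer sizes, penalty, and patience are arbitrary):

```python
from tensorflow.keras import layers, models, regularizers
from tensorflow.keras.callbacks import EarlyStopping

model = models.Sequential([
    layers.Dense(128, activation="relu", input_shape=(100,),  # 100 features: an assumption
                 kernel_regularizer=regularizers.l2(0.01)),   # L2 weight penalty
    layers.Dropout(0.5),                                      # dropout regularization
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy")

# Stop training once the dev (validation) loss stops improving.
early_stop = EarlyStopping(monitor="val_loss", patience=3,
                           restore_best_weights=True)
# model.fit(x_train, y_train, validation_data=(x_dev, y_dev),
#           epochs=100, callbacks=[early_stop])
```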

If your test error is high but your dev error is normal (you have overfit your dev set):

  • Get more dev data, either by adding more data to your dataset or by changing how you split it (see the sketch below)
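With scikit-learn, re-splitting is straightforward (a sketch assuming features `x` and labels `y` are already loaded; the 80/10/10 ratios are arbitrary):

```python
from sklearn.model_selection import train_test_split

# Carve out the test set first, then split the rest into train and dev.
x_tmp, x_test, y_tmp, y_test = train_test_split(x, y, test_size=0.10,
                                                random_state=42)
x_train, x_dev, y_train, y_dev = train_test_split(x_tmp, y_tmp, test_size=0.11,
                                                  random_state=42)
# 0.11 of the remaining 90% is roughly 10% of the full dataset.
```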

At some point, your machine learning algorithm may become better than the human error rate, but you can still experience variance problems.

What you should know is that once you have reached the human-level error rate, it gets harder and harder to improve your algorithm and to reach the Bayes rate, which is the lowest possible error rate for any classifier of a random outcome.

10. To sum it up 🏁

Your goal is now to iterate over different models quickly to beat the baseline and reach your goal metrics.

It’s also important to gain intuition about how neural networks work, to avoid unnecessary work and to stop repeating the same mistakes over and over.

If you have any thoughts or want to share your own methodology, please do so in the comments! 😀
