FastAI Installation and Setup

(Updated April 2019)

The most up-to-date installation instructions are available on GitHub and the docs site; I recommend starting there. A list of common troubleshooting issues is also available.

I list the steps I followed for personal reference, including the minor issues I solved while setting up a full deep learning environment on a GPU-equipped laptop running Ubuntu 18.04.

If you are installing FastAI to do one of the deep learning courses, I recommend using one of the various cloud solutions available instead of setting up a CUDA/Anaconda environment as described below.

The instructions below install FastAI v1 into a freshly created Anaconda virtual environment. They assume you already have Anaconda and NVIDIA CUDA (along with an appropriate NVIDIA driver) installed.

First, ensure conda itself is up to date; otherwise conda may fail with a PackagesNotFoundError.

conda update conda

I recommend installing into a virtual environment to prevent interference from other libraries and system packages. You can create a Python 3.7 virtual environment to install FastAI in as follows:

conda create -n fastai python=3.7 mypy pylint jupyter scikit-learn pandas
source activate fastai

Next, if you are planning on installing the GPU version, verify which CUDA you have installed:

nvcc --version # Cuda compilation tools, release 10.0, V10.0.130
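If you want to check the CUDA version from a script rather than by eye, the release number can be parsed out of the `nvcc --version` output. This is just a convenience sketch; it returns None when nvcc is not on the PATH:

```python
import re
import subprocess

def cuda_version():
    # Run `nvcc --version` and extract the release number
    # (e.g. "10.0" from "Cuda compilation tools, release 10.0, V10.0.130").
    # Returns None if nvcc is not installed or the output is unexpected.
    try:
        out = subprocess.run(["nvcc", "--version"],
                             capture_output=True, text=True).stdout
    except FileNotFoundError:
        return None
    match = re.search(r"release (\d+\.\d+)", out)
    return match.group(1) if match else None

print(cuda_version())
```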

You can find the corresponding conda package using:

conda search "cuda*" -c pytorch

Look for the cudatoolkit package that matches the CUDA version reported by nvcc.

You can now install PyTorch and fastai using conda:

conda install cudatoolkit=10.0 -c pytorch -c fastai fastai
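Once the install finishes, a quick way to confirm that the fastai package is importable is to query its version. A minimal sketch (returns None if the conda install did not succeed):

```python
def fastai_version():
    # Report the installed fastai version string, or None if the
    # package is not importable in the current environment.
    try:
        import fastai
    except ImportError:
        return None
    return fastai.__version__

print(fastai_version())
```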

A note on CUDA versions: I recommend installing the latest CUDA version supported by PyTorch if possible (10.0 at the time of writing). However, to avoid potential issues, stick with the CUDA version your installed driver supports.

You can verify that the CUDA installation went smoothly and that PyTorch is using your GPU with the following command:

python -c "import torch; print(torch.cuda.get_device_name(torch.cuda.current_device()))"

It should print the name of the device (GPU) you have attached to the machine.
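The one-liner above will raise an error on a CPU-only install. A slightly more defensive version of the same check, which reports what went wrong instead of crashing, might look like this:

```python
def describe_device():
    # Return the name of the GPU PyTorch will use, or a short
    # diagnostic string if torch or CUDA is unavailable.
    try:
        import torch
    except ImportError:
        return "torch is not installed"
    if torch.cuda.is_available():
        return torch.cuda.get_device_name(torch.cuda.current_device())
    return "CUDA not available; PyTorch will run on CPU"

print(describe_device())
```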

Note for NLP (using FastAI v1 for text): if you plan on using FastAI for NLP, I recommend also downloading the relevant language models for spacy; otherwise you may hit obscure errors when attempting to parse textual data.

python -m spacy download en
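To confirm the model downloaded correctly, you can load it and tokenize a short string. This sketch assumes the `en` shortcut model from the command above; it returns None rather than raising if spacy or the model is missing:

```python
def tokenize(text):
    # Load the spacy English model (requires `python -m spacy download en`)
    # and return the token texts; returns None if spacy or the
    # model is unavailable in this environment.
    try:
        import spacy
        nlp = spacy.load("en")
    except Exception:
        return None
    return [token.text for token in nlp(text)]

print(tokenize("FastAI makes NLP approachable."))
```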

Cloud Environments

A number of cloud services have first-class support for FastAI; I've used several of them personally and can recommend them. If you are looking for a VM-based option (which gives you a little more control over your environment), I recommend Google Cloud Platform or Microsoft Azure.


The FastAI v1 docs are really great and well worth reading through.

Andrich van Wyk

I'm Andrich van Wyk, a software architect and ML specialist. This is my personal blog; I write here about data science, machine learning, deep learning, and software engineering. All opinions are my own.