
Friday, April 01, 2022

How to Get a Factor of Ten Speedup in Google Colab Jupyter Notebooks, and Other Tricks

I don't work for Google. I'm just a Ph.D. student trying to get by. This evening at 1 AM, I was working with an autoencoder example that took a long time to train. Autoencoders are cool, but I don't have all day to wait around.

I use Google Colab for Python Jupyter Notebooks because I don't have to install any software on my machine. This also saves gobs of time and disk space.

Google maintains a host of library versions, ensuring compatibility, which is an enormous convenience since machine learning changes by the day, even by the hour.

Google Colab is technically free, but $10 a month buys you access to GPUs, TPUs, and the promise that your job will actually finish. I figure it's my tithe to Google for all the good they do.



At the top of the notebook is the Runtime menu; go to the bottom item, 'Change runtime type' (yellow arrow):


When you run a notebook, you can choose a bare CPU, a GPU, or a TPU; in addition, you can request extra RAM using the questionably named 'Runtime shape' menu:


The option dialogs look like this:



Don't use Standard; pick High-RAM.
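To confirm a runtime change actually took effect, a quick sanity check from inside the notebook helps. A minimal sketch, assuming a TensorFlow runtime (COLAB_TPU_ADDR is the environment variable Colab used to advertise a TPU at the time of writing):

import os
import psutil
import tensorflow as tf

# What the runtime actually sees after a configuration change.
print("GPUs:", tf.config.list_physical_devices("GPU"))
print("TPU address:", os.environ.get("COLAB_TPU_ADDR", "none"))
print(f"RAM: {psutil.virtual_memory().total / 2**30:.1f} GiB")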

I was curious which configuration of devices was the fastest for my autoencoder experiments. Intuition says GPU and extra RAM, but I don't trust my intuition when I can just measure something and know for sure. One caveat: you have to restart and run all to make sure that runtime configuration changes stick.
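Each configuration was timed end to end with a simple stopwatch pattern. A minimal sketch of the idea, not my actual logging code, using a toy Keras autoencoder on random stand-in data:

import time
import numpy as np
import tensorflow as tf

# Stand-in data: 10,000 random 784-dimensional vectors.
x_train = np.random.rand(10000, 784).astype("float32")

# Toy autoencoder: 784 -> 32 -> 784.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(784, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="mse")

start = time.perf_counter()
model.fit(x_train, x_train, epochs=5, verbose=0)
print(f"run took {time.perf_counter() - start:.1f} s")

Here is the data generated by logging all possible combinations: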



To avoid analysis paralysis, these values are thrown into a Python dictionary, with a quick crunch to compute the means, round the results, and sort them from fastest to slowest:
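In sketch form (the timing values here are placeholders standing in for the logged data, chosen only so the ordering matches the measured results):

# Three timed runs (seconds) per configuration. PLACEHOLDER values,
# not the measured data; only the ordering reflects the real results.
runs = {
    "GPU, extra RAM": [41, 43, 42],
    "GPU, standard":  [55, 57, 56],
    "CPU, extra RAM": [230, 240, 235],
    "CPU, standard":  [260, 270, 265],
    "TPU, extra RAM": [380, 390, 385],
    "TPU, standard":  [420, 430, 425],
}

# Mean of the runs, rounded, sorted from fastest to slowest.
means = {cfg: round(sum(t) / len(t)) for cfg, t in runs.items()}
for cfg, mean in sorted(means.items(), key=lambda kv: kv[1]):
    print(f"{cfg:>14}: {mean} s")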




Intuition turned out to be right, but there was a surprise: a TPU with the standard memory configuration was the slowest. This is a for-sure, since each case was run three times to account for process noise. I could easily have convinced myself that the TPU was a reasonable choice and taken TEN times longer to get done.

So GPUs with extra RAM are the fastest, by a factor of ten, for my particular problem, which is a fairly run-of-the-mill machine learning task.

Other Tricks
1) Running Colab in a Chrome incognito window starts up much faster, for reasons I do not understand. The difference is significant. I don't have time to chase it down either, so I wrote the billing department at colab-billing@google.com. That is a lot faster than trying to get tech support, for reasons I completely sympathize with. But hey, I'm a paying customer.

2) To have Colab show cell execution times automatically, insert this code at the top of the notebook:
# Install and enable the autotime extension; every cell then reports its run time.
!pip install ipython-autotime
%load_ext autotime
