I use Google Colab for Python Jupyter Notebooks because I don't have to install any software on my machine. This also saves gobs of time and disk space.
Google maintains a host of library versions, ensuring compatibility, which is an enormous convenience since machine learning libraries change by the day, even by the hour.
Google Colab is technically free, but $10 a month buys you access to GPUs, TPUs, and the promise your job will actually finish. I figure it's my tithe to Google for all the good they do.
I was curious, for my autoencoder experiments, which configuration of devices was the fastest. Intuition says GPU and extra RAM, but I don't trust my intuition when I can just measure something and know for sure. Here is the data generated by logging all possible combinations. You have to restart and run all to make sure that runtime configuration changes stick.
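The logging itself is nothing fancy. A minimal sketch of what I mean, with a cheap compute loop standing in for the actual autoencoder training run, is:

```python
import time

def run_experiment():
    # Stand-in for the real autoencoder training run.
    total = 0
    for i in range(1_000_000):
        total += i * i
    return total

# Wall-clock time for one run; log this once per runtime configuration.
start = time.perf_counter()
run_experiment()
elapsed = time.perf_counter() - start
print(f"elapsed: {elapsed:.2f}s")
```

Change the runtime type in Colab, restart and run all, and write down the elapsed time for each configuration.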
To avoid analysis paralysis these values are thrown into a Python dictionary, with a quick crunch to compute the mean, round the results and sort them from fastest to slowest:
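The crunch looks something like this; the timing values below are placeholders, not my actual measurements:

```python
from statistics import mean

# Seconds per run for each runtime configuration (hypothetical values).
times = {
    "CPU": [812.4, 798.1, 805.9],
    "CPU + high RAM": [790.2, 801.7, 795.5],
    "GPU": [92.3, 90.8, 91.5],
    "GPU + high RAM": [80.1, 79.6, 80.9],
}

# Mean per configuration, rounded, sorted fastest to slowest.
ranked = sorted(
    ((name, round(mean(vals))) for name, vals in times.items()),
    key=lambda pair: pair[1],
)
for name, avg in ranked:
    print(f"{name}: {avg}s")
```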
So GPUs with extra RAM are the fastest by a factor of ten for my particular problem, which is a fairly run-of-the-mill machine learning task.
Other Tricks
1) Running Colab in a Chrome incognito window starts up much faster, for reasons I do not understand. The difference is significant. I don't have time to chase it down either, so I wrote the billing department at colab-billing@google.com. That is a lot faster than trying to get tech support, for reasons I completely sympathize with. But hey, I'm a paying customer.
2) To have Colab show cell execution times automatically, insert this code at the top of the notebook:
!pip install ipython-autotime
%load_ext autotime