Despite the ongoing tragedy of the pandemic, we continue to live in a
period of remarkable technical advance. Fortunately our society has advanced
to the point where even when confined to home, we can continue to innovate.
If we let our minds run a little bit, I wonder what we can come up
with.
Any innovation these days is likely to involve both collaboration and the
most modern arrays of hardware that can be assembled. As the proverb says,
"Many hands make light work." This is true for both processors and people.
Both get viruses, but I digress.
For the sake of argument (my euphemism for stimulating discussion), let's
assume someone has plunked down in front of us the fastest airplane money
can buy. We immediately ask ourselves, "Where could we go, and more
importantly, where should we go with it?"
John F. Kennedy said, “For of those to whom much is given much is
required”, echoing the writings of Luke the Physician who said “For unto
whomsoever much is given, of [them] shall be much required.”
The question is, "How can we bring the most benefit into the world from
the gifts we have been given?" Among these gifts is our ability to reason
and communicate nearly instantly worldwide.
Three Observations Motivated by Personal History
1) Time Sharing is Back
I began my computing career with the help of Dr. Carl Sneed, an associate
professor at the University of Missouri, one of five I attended over the
years. I had signed up for an introductory computing course which was
taught on the IBM 360 TSO mainframe in 1975.
IBM 360 with peripherals
Dr. Sneed was kind enough to walk me through the following process:
Dr. Carl Sneed, University of Missouri
a) Write one's Fortran IV program on 80-character paper.
b) Transfer each line on the paper to a punched card, using a keypunch
machine that announced each character with a kerchunk, like a sewing
machine placing a stitch.
The IBM 029 Card Punch
c) Place the deck of cards on the card reader.
IBM System/360 Model 20 Card Reader
d) Press the button, which made a Las Vegas card-fanning sound as the deck
was read.
Card Reader Panel Buttons
e) In those days, the size of one's deck was very much the status symbol, but I digress.
Card Deck
f) I specifically remember two programming assignments I had to get running:
- The 3,4,5 triangle problem
- The parabola problem
The parabola problem mattered most to me personally, having grown up in a
family where such figures were important. The assignment did not ask for
it, but I was compelled, even obsessed, by the unassigned task of DRAWING
the parabola whose roots the program computed. This drawing was to take
place on a Calcomp plotter.
Despite multiple attempts, I never succeeded in completing this task on
the IBM 360, but the drive to do it never left me. It became a central
focus of all my future computing, and led me from aerospace engineering to
computer science to computer graphics at the University of Utah.
It would eventually result in this,
which you can click on if you like animations.
Various Animations
g) After the card deck was read, the next activity I clearly remember was
the WAIT.
One had to wait to collect the printout that resulted from the execution
of one's program, to find out whether it had functioned correctly, or even
at all.
Wide form computer printout
As in the Tom Petty song, the waiting was the hardest part. The wait would
range from 5 minutes on a good day, to 30 minutes or even "Pick it up
tomorrow" on a busy day.
h) In those days, the priority with which one's jobs ran was very much a
status symbol, but I digress.
i) On obtaining the tree-consuming, fan-folded printout of nearly
poster-size proportions, one would deduce, usually in seconds, any
shortcomings the program had, which would lead to a repetition of the
steps above.
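As an aside, the heart of the parabola assignment from step f), finding the roots of a quadratic, takes only a few lines in a modern language. Here is a sketch in Python; the function name and example coefficients are my own, not from the original Fortran:

```python
import math

def parabola_roots(a, b, c):
    """Real roots of a*x^2 + b*x + c = 0, via the quadratic formula."""
    disc = b * b - 4 * a * c      # discriminant
    if disc < 0:
        return []                 # no real roots
    r = math.sqrt(disc)
    return sorted(((-b - r) / (2 * a), (-b + r) / (2 * a)))

# Example: x^2 - 5x + 6 = (x - 2)(x - 3)
print(parabola_roots(1, -5, 6))   # → [2.0, 3.0]
```

Drawing the parabola itself, the part that needed a Calcomp plotter in 1975, is now a one-liner in any plotting library.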
Now why do I present, in such excruciating detail, the above series of
steps? Because if we skip over the personal computing revolution to the
current state of machine learning we find we have arrived at the same place
again.
Enter Machine Learning
Fast forward 45 years. Besides all the mish-mash of algorithm design and
coding, machine learning (ML) consists of three principal steps:
1) Training the neural network from
data
2) Testing the neural network on data
3) Deploying the resulting inference engine for general
use
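As a toy illustration of the three steps above, here is a sketch in plain Python. The "network" is a single weight and bias, and the data, learning rate, and target function are all hypothetical choices of mine:

```python
# 1) Train: gradient descent on squared error for a one-weight model.
# The training data samples the (assumed unknown) function y = 2x + 1.
train_data = [(x, 2 * x + 1) for x in range(-5, 6)]
w, b = 0.0, 0.0     # initial weights
lr = 0.01           # learning rate

for epoch in range(2000):
    for x, y in train_data:
        err = (w * x + b) - y   # prediction error
        w -= lr * err * x       # gradient step for the weight
        b -= lr * err           # gradient step for the bias

# 2) Test: evaluate on points the model never saw during training.
test_data = [(10, 21), (-7, -13)]
max_err = max(abs((w * x + b) - y) for x, y in test_data)
print(f"w={w:.3f}  b={b:.3f}  max test error={max_err:.5f}")

# 3) Deploy: freeze the learned weights into an inference function.
def infer(x):
    return w * x + b
```

Real training replaces the single weight with billions of them, and the loop with days of cloud time, but the shape of the three steps is the same.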
The most time-consuming step by far is training the network. The Waiting
problem has reappeared, since for most problems of current interest,
training a network cannot realistically be done on a user's personal
computer in a reasonable amount of time. So it has to be farmed out to a
CPU, TPU, GPU, or APU in the cloud via Microsoft Azure, IBM Cloud, Google
Cloud, Amazon Web Services, and the like. The machines that execute the
jobs sit in racks, and those racks sit in server farms.
They process our jobs, and we wait, and we pay. An example of a massively
parallel job is GPT-3, a language inference engine that has 175 billion
weights in its neural network and cost an estimated $12 million to train.
So to follow Dr. Sneed's kind example, how do we make machine learning as
easy as possible to learn and execute? How can we minimize the number of
steps, and the administrative and emotional overhead, needed to bring ML
into our computational lives? ML is already available on demand using
services like Google Home Assistant, Microsoft Cortana, Apple Siri, and
Amazon Echo. These enable positively useful C3PO-like conversations with
machines, whose only lack is a robotic delivery mechanism.
C3PO - Ready to Answer Questions
Transforming the current generation of personal assistants into more
robotically enabled ones would seem to be a natural direction for growth
and development. At this writing, one can already purchase a robotic
canine from Boston Dynamics for $75,000 USD, while a Google Assistant to
use for a head is roughly three orders of magnitude less expensive, at
$29 USD. So there is one idea for an interesting project, although I
personally would prefer a more anthropomorphic version, since hands come
in handy for robotic assistants.