
Tuesday, April 03, 2018

#GTC2018 Sipping from the Firehose



After a protracted absence from the conference scene, I got the chance to drink from the firehose at #GTC2018, nVIDIA's annual GPU Technology Conference. As drinks go, it was a chocolate malt.




GPU stands for "Graphics Processing Unit" and rhymes with CPU, or "Central Processing Unit", the heart of our personal computers. The GPU, the square chunk of silicon shown below, measures 1.5 inches on a side and nestles into its graphics card home. You can see my fingerprints on the discard below, and honestly it feels like touching a piece of Mars. It is the heart and soul of nVIDIA's latest offering, containing a stunning 9 billion transistors:


nVIDIA has taken 16 of these and glommed them together in its latest offering, the DGX-2 graphics and artificial intelligence supercomputer. It's basically somebody's brain floating in space. At 350+ pounds it's a little heavy to float, but I promised Jensen Huang that if he hired me I would put it on a diet and cut its weight in half. After all, this is the kind of thing that humans like to fling into space.

The nVIDIA DGX-2 Supercomputer - source: nVIDIA


The conference, in toto, was quite a spectacle, reminiscent of my tours of SIGGRAPH conferences during the heyday of computer graphics innovation. GPUs originally started out as add-on cards to enable 3D computer graphics in personal computers and workstations. Jim Clark deserves mention for getting the ball rolling in this field with Silicon Graphics, but I digress. The SIGGRAPH thrust curve, shown below, is one model for what happens when a disruptive technical paradigm shift comes along. The thirty-year period between 1982 and 2012 shows growth followed by reversion to a baseline of progress. Consider that 15,000 people is an army, 50,000 is a city. It takes a village and all. This year about 8,500 people attended #GTC2018, compared to 6,500 last year, so we are still inflecting upward.


SIGGRAPH Attendance - source: wikipedia


Market income appears as a three-legged stool for nVIDIA. Those three legs are Cryptocurrencies, Gaming, and Artificial Intelligence (AI), in addition to the core business of enabling computer graphics on PCs and workstations. Below is a historical stock chart, and it is clear that things have been going gangbusters since 2016.

nVIDIA Stock Price vs. Time - source: Google Finance

Cryptocurrencies use GPUs to search for numbers that validate blockchains. These virtual currencies are not currently backed by any good or service of tangible value, but that doesn't seem to stop nVIDIA GPUs from selling like hotcakes.
 
"Look at my pretty prime numbers."

Commoditization of AI

The commoditization of AI was, for me, the lasting excitement of #GTC2018. I made my way through a series of lecture sessions, labs, and an enormous equipment expo to get a sense of where things are and to refine my own data science and robotics skills. If you are wondering, "What good is AI?", remember that every time you say "Hey Google," "Hey Siri," "Hey Alexa," or "Hey Cortana," you are using AI that is becoming so much a part of our life it is invisible. Amazon uses it in your every encounter. I don't even go to the store anymore, do you?

Other AI concerns include Autonomous Vehicles, Healthcare, Education, Finance, Robotics, Human Resources, Marketing, Media, Music, News, Customer Service, Personal Assistants, Toys and Games, and Aviation. I expect it to follow the SIGGRAPH curve, but over a more compressed time scale due to improvements in telecommunication. For me it's fun to surf big technical waves, because that is what I've done all my life.

My first thesis advisor, Art Hale, a visionary mathematician and engineer, emphasized not getting swept up in the river of change for its own sake, but remaining attentive to the first principles that dictate productive change and fundamental directions of endeavor. "It's better to stay up on the bank if you can," he would say. He would also say, "There are millions of people," for some reason.

Thus, as I sat in the sessions I made notes of the key principles that emerged as invariant, or "here to stay." Here they are, in roughly chronological order, biased by my own interests:

AI Accountability Equals Explainability:

The first emergent theme is that AIs need to explain themselves. At present these magic algorithms are somewhat opaque and difficult to audit when it comes to answering questions like, "How did you get to that answer?" If you were hired or fired by an AI, this is something you might want to know.

There was an entire session devoted to the explainability of AI, but it was paradoxical, since the lecturer was a stakeholder in a company that uses proprietary methods to enable AI to explain itself. AI that explains itself using secret techniques remains unexplainable by definition.

Fortunately there is work being done that pulls the curtain back, such as heatmaps for neural network weights. In the example below, I drew a blobby dot in each corner to see which digit would pop out of the neural net. It classified my sketch as a 1, but most humans would not read that as a one; they might read it as a four. To be fair, the neural net wasn't trained on things that appear on the face of a die, but it should not have returned a 1 either. Note that the heatmap of things this neural net classifies as a 1 doesn't look anything like a 1, so that is a problem too. I hope to look at this in more detail this year.


Heatmaps for Explainability - source: LRP
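LRP itself takes some machinery, but a simpler cousin, gradient-times-input saliency, gives a feel for how these heatmaps work. Here is a toy sketch assuming nothing but NumPy, with a linear "classifier" and made-up weights standing in for a trained digit network; the point is only that the heatmap is the input weighted by how much each pixel pushes the chosen class.

```python
import numpy as np

# Stand-in for a trained digit classifier: one linear layer plus softmax.
# W and b would come from training; random values here just make it runnable.
rng = np.random.default_rng(0)
W = rng.normal(size=(10, 28 * 28))   # 10 digit classes, 28x28 pixel input
b = np.zeros(10)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def saliency_map(x, target_class):
    """Gradient of the class score w.r.t. each pixel, weighted by the pixel.

    For a linear model the gradient is just that class's weight row;
    gradient * input highlights which pixels actually drove the decision,
    which is the spirit of an LRP-style heatmap (not the LRP algorithm itself).
    """
    grad = W[target_class]
    return (grad * x).reshape(28, 28)

x = rng.random(28 * 28)              # a fake "drawn digit"
pred = int(np.argmax(softmax(W @ x + b)))
heat = saliency_map(x, pred)
print(f"predicted digit: {pred}, hottest pixel: {np.unravel_index(heat.argmax(), heat.shape)}")
```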

To nVIDIA's eternal credit, the technical sessions are posted here, which is a plus for addressing this.

Hyperparameter Optimization - this is a hotbed of current research. It is also too long a phrase for a very simple idea. If you go to TensorFlow Playground you can construct your own neural net and run it with various settings like "Learning Rate," "Amount of Noise," or "Test/Train Split," and other configuration details. These are the hyperparameters, and they really should be discovered by a machine rather than a human. By this time next year they will be. In the example below you are watching a computer think. I can't get enough of it.


TensorFlow Playground - Source: Smilkov & Carter
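In the spirit of letting the machine twiddle its own knobs, here is a minimal random-search sketch over Playground-style settings. The `train_and_score` function is a hypothetical stand-in: in a real setup it would build and fit the network at those settings and report held-out accuracy.

```python
import random

def train_and_score(lr, noise, train_split):
    """Hypothetical stand-in for a full training run; returns a validation score.
    A made-up response surface here, just so the example runs end to end.
    """
    return -(lr - 0.03) ** 2 - 0.1 * noise + 0.05 * train_split

search_space = {
    "lr": lambda: 10 ** random.uniform(-4, -1),    # log-uniform learning rate
    "noise": lambda: random.uniform(0.0, 0.5),
    "train_split": lambda: random.choice([0.5, 0.7, 0.9]),
}

best, best_score = None, float("-inf")
for _ in range(50):  # 50 random trials instead of a human twiddling knobs
    trial = {name: sample() for name, sample in search_space.items()}
    score = train_and_score(**trial)
    if score > best_score:
        best, best_score = trial, score

print("best hyperparameters:", best)
```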

Machine Learning is an Optimal Control Theory Problem. Watching the learning curve evolve over time reminds me of aircraft stability and control. There is a best way to proceed to a given flight configuration, and these are knowable sorts of things. Even machine learning (ML) algorithms need autopilots.

Autopilot Design - Source: Open Access Korea
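To make the autopilot analogy concrete, here is a toy sketch of one classic feedback law for the step size, sometimes called the "bold driver" heuristic: if the loss improved, nudge the learning rate up; if it got worse, reject the step and cut the rate sharply. The quadratic objective is a stand-in for any differentiable loss.

```python
def loss(x):
    """Toy quadratic objective; swap in any differentiable loss."""
    return (x - 3.0) ** 2

def grad(x):
    return 2.0 * (x - 3.0)

def bold_driver_descent(x0, lr=0.1, steps=100):
    """Gradient descent with a feedback law on the step size: a crude autopilot."""
    x, prev = x0, loss(x0)
    for _ in range(steps):
        candidate = x - lr * grad(x)
        cur = loss(candidate)
        if cur <= prev:          # on course: accept the step, speed up slightly
            x, prev = candidate, cur
            lr *= 1.05
        else:                    # overshoot: reject the step, slow down
            lr *= 0.5
    return x, lr

x_final, lr_final = bold_driver_descent(x0=-5.0)
print(f"x = {x_final:.4f}, final learning rate = {lr_final:.4f}")
```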
Another idea that emerged in the driverless car world is that marking intersections is really important for letting driverless cars know to stop. Since most wrecks occur at intersections, this makes sense in the driver-full world also.

Intersection Risk as a Function of Position - Source: Me
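A toy version of such a risk curve is easy to sketch: assume risk peaks at the intersection center and falls off with distance, Gaussian-style. The parameters below are made up for illustration, not fit to any crash data.

```python
import math

def intersection_risk(distance_m, peak_risk=1.0, scale_m=15.0):
    """Toy model: collision risk peaks at the intersection center
    and decays with distance from it, Gaussian-style.
    """
    return peak_risk * math.exp(-(distance_m / scale_m) ** 2)

for d in (0, 10, 25, 50):
    print(f"{d:3d} m from intersection: risk {intersection_risk(d):.2f}")
```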

So we should expect transportation automation to influence road design, and vice versa. Road signs are apparently good reflectors of radar, which causes glint in the sensors due to multipath RF propagation. Road signs made of fiberglass would be better, since they reduce RF glint. They would also be better if you happen to run into one. Nature has a funny way of breaking what does not bend, at least that's what Jewel says.






