
Thursday, March 07, 2019

Computing and the Future HW 6 Candidate Projects and Innovation Models

1. Project:
  1. If you have not yet identified a project topic, decide on one.
    Describe your project topic and format.
  2. Write 325 words or more for your project.
  3. Decide what would be a good thing to do next on the project. It does not have to be a big thing, but should be something. Describe it.
Answer 1a. Two Chosen Projects

Of the five candidate projects I examined in five previous homeworks, I have narrowed the list to two. The first is the TensorFlow neural network playground demonstration, which I have extended with custom basis functions. It is complete and ready for presentation to the class on an internet-equipped computer.

The second project deals with real neural networks, that is, the human brain. I have read with great interest the book "The Human Race to the Future" by Dr. Daniel Berleant, especially the section on magnetic stimulation of the brain. It has inspired me to continue an experiment I began in 2006: building a permanent-magnet repetitive transcranial magnetic stimulator, which I am now calling vTMS. It will be demonstrated with an instrumented scale model I call the vBrain.

Answer 1b. vTMS

My goal is to answer the question, "Can a person sense a changing magnetic field produced by moving permanent magnets in the vicinity of the head?" If they can sense it, does it benefit them? How does it compare with traditional rTMS?

I want to know whether powerful permanent magnets, mounted in rotors, can induce a complementary field in brain tissue in a manner similar to that of more expensive commercial rTMS units.

The project will consist of three components: the vBrain, a gelatin brain simulator housed in a model skull; a left-hemisphere stimulator; and a right-hemisphere stimulator. The two stimulators and their associated controllers together constitute the vTMS.

To ensure safety and reduce administrative overhead in demonstrating the concept, I have designed a simulated head, the vBrain, that will contain an array of magnetic field sensors in the form of miniature compasses. The vBrain is made of clear gelatin, glycerine and sodium chloride to simulate the impedance characteristics of the brain. The compasses will be embedded in the gelatin.

This will enable me to measure and compare the magnetic field strength of the old unit with that of the new unit and, as an extra, to estimate and compare the magnetic field strength of these units with that of rTMS machines.

I have redesigned the magnet rotor assembly to be more compact and to use more powerful magnets. There are two identical rotors, one for each side of the simulated head. The rotor has been fabricated using 3D printing and delivered, and I have ordered and received the magnets as well. The N52 grade is the most powerful available, and two stacks of them must be handled with some care, as they can otherwise create a bit of an arm-wrestling situation. The material is brittle under impact; I shattered one by letting it impinge on a spherical magnet, learning this lesson the hard way.




I have selected Pulse Width Modulated DC motors whose speed can be varied from 0 to 600 rpm and, importantly, can be reversed in direction. This will double the potential complexity and intensity of the interacting fields. These motors have been delivered and tested.







There are six magnets in the rotor, arranged so that their polarity alternates. This provides the maximum change in magnetic field per unit of rotation. My main concern was that the rotor, made of plastic 0.18" thick, would deform excessively in the presence of the alternating magnet poles. The rotor disks have turned out to be very strong; the material is a rigid, machinable plastic, and I am quite happy with how they came out. I could have had the screw holes and countersunk regions 3D printed, but I did those by hand, which took an afternoon of careful measuring, drilling and countersinking. The Shapeways 'versatile plastic' material is robust and tolerant of gradual machining. The magnets press-fit perfectly into each recessed area, secured by a drop of clear glue against the pull of the alternating poles.







One revolution of the rotor per second will produce six magnetic pulses. Since 600 rpm is ten revolutions per second, the rotor can produce magnetic signals at frequencies from 0 to 60 Hertz. This is about 30 Hertz above the fastest brainwaves, providing some experimental margin.
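The pulse-rate arithmetic above can be checked in a few lines of Python (the six-magnet count and 600 rpm ceiling come from the rotor design described here):

```python
# Pulse frequency of the rotor: six alternating-pole magnets passing a
# fixed point produce six magnetic pulses per revolution.
MAGNETS = 6
MAX_RPM = 600

revs_per_sec = MAX_RPM / 60            # 10 revolutions per second
max_pulse_hz = MAGNETS * revs_per_sec  # 60 pulses per second

print(max_pulse_hz)  # 60.0 Hz, about 30 Hz above the fastest (gamma) brainwaves
```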

Brainwave frequencies:

  • DELTA (0.1-3.5 Hz)
  • THETA (4-8 Hz)
  • ALPHA (8-12 Hz)
  • BETA (above 12 Hz)
  • GAMMA (above 30 Hz)
The motors will be mounted on L-brackets which contain adapters for the rotors. A set of six 4-40 pan-head screws will connect the rotors to the adapters for flush mounting, so that there is no protrusion on the vBrain-facing side of the rotor.

These components arrived exactly when promised on March 5, 2019 via Amazon.

For a controller I have selected a reversible unit that can supply enough voltage and current to produce one inch-pound of torque at the rotors. I am hoping this will be sufficient torque for two rotors separated by six inches. I am working with a 12-volt supply, and the expected current draw is less than one ampere, with margin for more current if necessary.
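As a sanity check on that current estimate, here is a rough power calculation. The 0.113 N·m figure is the standard conversion for one inch-pound; ignoring motor losses is my simplification, so the real draw will be somewhat higher:

```python
import math

# Mechanical power at full speed: P = torque * angular velocity.
torque_nm = 1 * 0.1130          # 1 inch-pound in newton-metres
omega = 600 * 2 * math.pi / 60  # 600 rpm in radians per second
power_w = torque_nm * omega     # ~7.1 W of mechanical power

current_a = power_w / 12.0      # ideal current from a 12 V supply, motor losses ignored
print(round(current_a, 2))      # ~0.59 A, consistent with "less than one ampere"
```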

The unit I have selected is the Quimat 7-30V DC 10A 300W PWM Speed Adjustable Reversible Switch DC Motor Driver Reversing Switch. These arrived March 5, 2019.

Operating at 12 V instead of line voltage enables a battery to be used in place of a mains-connected power supply, which increases safety significantly. I am hoping there is enough resolution on the PWM controller to achieve a nearly continuous range of stimulating frequencies. I will be using a strobe tachometer app on my iPhone to measure the frequency of rotation. I just installed it and tested it against the 2006 pmTMS unit; it works quite well, freezing the image of the rotor when the correct frequency is selected using a slider.


Answer 1c.
The next step on the project is to assemble the rotor components, wire the controllers, and begin work on the vBrain. I have ordered and received a medically accurate skull model; the gelatin will be cast within its boundaries.




To facilitate the casting, I originally intended to trepan the skull model at the apex of the right parietal plate and cut plug holes at the bottom of the braincase, which includes the occipital and the left and right temporal floors. I am now leaning towards inverting the skull and doing two 'pours' of the physiological gelatin material. There is a large hole at the base of the skull that facilitates this process without damage to the pristine upper surface. I have obtained the magnetic sensors and am working on an electric field sensor using these calculations. The conductance of fresh neocortex is well known. I used a characteristic dimension of 1 inch, since this is the diameter of the stimulating magnets. The result is that a piece of simulated neocortex this size should have a resistance between 355 and 597 Ohms.
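That resistance range can be reproduced by modeling the 1-inch piece of simulated cortex as a cylinder with R = L / (sigma * A). The conductivity bounds below (roughly 0.08 to 0.14 S/m) are my assumption, chosen to bracket published gray-matter values; they recover the 355-597 Ohm figures quoted above:

```python
import math

# Cylinder of simulated neocortex, 1 inch long and 1 inch in diameter
# (the diameter of the stimulating magnets).
L = 0.0254                    # length in metres (1 inch)
A = math.pi * (L / 2) ** 2    # cross-sectional area in m^2

for sigma in (0.084, 0.141):  # assumed conductivity bounds, S/m
    R = L / (sigma * A)       # resistance in ohms
    print(f"sigma={sigma} S/m -> R={R:.0f} ohms")
```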




2. Theories of Innovation

Pick an industry. Discuss it concisely from the perspective of each of the following theories/models of innovation:

  • Kline and Rosenberg (KR) Model: Market and Technical Forces
  • Abernathy and Utterback (AU) Model: Gas, Liquid, Solid + Transitions
  • Clark and Henderson (CH) Model: System Components vs. Architecture
  • Teece Model (T): Imitation
  • Christensen I (C1) Model: Resources, Processes and Value
  • Christensen II (C2) Model: Sustaining vs. Disruptive
  • Christensen III (C3) Model: Value Chain Evolution


Machine Learning as a Selected Industry


The industry I am picking is Artificial Intelligence, specifically Machine Learning: the revival of a revolution in progress. This revival is being facilitated by open-source software packages like TensorFlow, Caffe, PyTorch and Keras that are developed and run on personal computers, then deployed and run in the cloud via containers on Amazon Web Services and Google Cloud. ML is enabling fantastic advances, in parallel, through Big Data processing and the neural network model. This is so prevalent that some are repeating Clive Humby's quote that 'data is the new oil', and Andrew Ng compares the ubiquity of Machine Learning to electricity. ML is becoming a utility, like the phone pole that stands outside, that we take for granted until it stops working. For the Kline/Rosenberg model I compare and contrast the summary presented in class with a comprehensive outline obtained by a careful reading of the paper. For the remainder of the models I will use the abbreviated versions presented in class.

Method

Although the Kline and Rosenberg (KR) article is 33 years old (1986), the issues it raises and the examples it uses remain quite timely. The article predates the Web and the widespread adoption of personal computers, and the authors drew their examples from the transportation and power generation industries. The web and the personal computer have sped up the processes of innovation and of knowledge sharing, acquisition and growth, but the observations and conclusions of the paper remain mostly intact.

Kline and Rosenberg (KR) Class Model:
Market and Technical Forces
  • INNOVATION requires multiple inputs.
  • INNOVATION requires feedback. (compound statement cleaved)
  • INNOVATION creates knowledge.
  • INNOVATION is inseparable from its diffusion.
  • The MARKET improves the PRODUCT.
  • The MARKET improves the COMPANY.
KR Paper Outline
  • INNOVATION is controlled by MARKET FORCES
  • INNOVATION is controlled by TECHNICAL FORCES
  • Successful INNOVATION is:
    • 3/4 MARKET Need
    • 1/4 TECHNICAL Opportunity
  • Successful INNOVATION balances:
    • new PRODUCT REQUIREMENTS
    • MANUFACTURING CONSTRAINTS
    • ORGANIZATION SUSTAINABILITY
  • Successful INNOVATION demands right combination of:
    • affordable COST
    • technical PERFORMANCE
    • TIMING of introduction
    • rapid response to FEEDBACK
  • Canonical Examples:
    • Solar Energy had to wait for costs to drop
    • Concorde cost 15x per passenger mile
  • Models of INNOVATION
    • The Linear Model
      • Research, Development, Production, Marketing
      • Lacks FEEDBACK LOOPS
    • The Chain-Linked Model
      • Expanded Linear + FEEDBACK LOOPS
    • Radical vs Evolutionary INNOVATION and Support Organizations
    • Bicycle Dynamics is an unsolved problem. (random but interesting)
  • Identification and Reduction of UNCERTAINTY
  • Transition from CHAOS to ORDER (Similar to the Freezing Model)
  • Separates SCIENCE from ENGINEERING
  • Orthogonalizes SCIENCE vs INVENTION/DESIGN
  • INNOVATION is
    •  inherently UNCERTAIN
    • DISORDERLY
    • composed of COMPLEX SYSTEMS
    • subject to CHANGE/MODIFICATION
    • initiated by DESIGN instead of SCIENCE
    • enabled by FIVE IMPORTANT PATHWAYS
      • FEEDBACK that links R&D with Production/Marketing
      • PERIPHERAL links that serve the central INNOVATION
      • long-range RESEARCH
      • creation of new DEVICES or PROCESSES
      • SCIENTIFIC SUPPORT TOOLING and DEVICES
    • affects
      • MARKET ENVIRONMENT
      • PRODUCTION FACILITIES
      • PRODUCTION KNOWLEDGE
      • SOCIAL CONTEXTS
  • Two Major Variables:
    • UNCERTAINTY
    • LIFE CYCLE STAGE IDENTIFICATION
So how does the KR Model apply to machine learning?

The KR Model from class is the most useful starting point here:
  • The MARKET improves MACHINE LEARNING (ML).
  • The MARKET improves the COMPANY.
KR Discussion

The MARKET improves MACHINE LEARNING: Our proxy stand-ins for the MARKET will be Google, Anaconda, Guido and Stack Overflow. Google open-sourced and released TensorFlow, which gave everyone an advanced starting point for this programming change-of-paradigm. Joel Barker has a saying: "When there is a change-of-paradigm, everyone starts at zero." In the case of ML, the entity Google, a complex and many-splendored thing, provided everyone who used it the equivalent of a Formula One race car instead of a tricycle. The entity Anaconda, by providing a curated tool environment, further facilitated the use of TensorFlow. This phenomenon is epitomized by the phrase 'conda install numpy', three words which equip the user with a high-quality numerical analysis library nearly instantly. Consider: 'conda install how-to-fly-a-helicopter'. Guido van Rossum, the author of Python, provided the language to the world effectively license-free. This terse programming language, which makes white-space syntactically significant, has had explosive growth and relevance to ML. The web utility stackoverflow.com, which curates programming Q&A, has become an integral part of ML software development. If one has a programming error, one can copy and paste that error into Google search, which provides a solution with a high rate of positive outcomes.

The MARKET improves the COMPANY: Google is now recognized as one of the de facto leaders in machine learning. The Google Home Assistant, summoned by the phrase, "Hey Google", takes verbal queries, translates them into text and invokes the search engine and other company services to provide a response in real time. This author has used the service multiple times just in the interval of writing this piece. Anaconda, Python and stackoverflow.com enjoy virtual monopolies on the software and services they provide, without exploiting their user community financially or emotionally.


Abernathy and Utterback (AU) Class Model:
Fluid, Transitional, Specific Phases



INNOVATION occurs in three phases:
  • FLUID PHASE
    • CHAOS and EXPERIMENTATION
  • TRANSITIONAL PHASE
    • STANDARDS Set
    • PRODUCTIVITY Increases
  • SPECIFIC PHASE
    • One Technology Dominates

AU Modified Model:
Gas, Liquid, Solid + Transitions 

In this version the AU Model is modified to look more thermodynamically consistent.
  • GAS PHASE
    • CHAOS and EXPERIMENTATION
  • CONDENSATION TRANSITION
  • LIQUID
    • CONSTRAINTS EMERGE
    • PROTOTYPING and PRELIMINARY DESIGN
  • FREEZING TRANSITION
  • SOLID PHASE
    • Design is FROZEN
AU Discussion

Hardware engineers love freezing their designs by writing them in the stone of silicon. Once the designs are tested, the engineers can go home and play.

Software engineers never freeze their designs willingly, and are continuously tempted to go in and improve (monkey with) the code. This introduces bugs that are sometimes never found. Software engineers are never done, and they spend their weekends looking for missing semicolons and rewriting code that already works.

The evolution of machine learning through these GAS, LIQUID and SOLID phases can be seen through the timelines below.


Thermodynamics of Machine Learning

The picture below shows some key innovations that have taken place over the last seven decades, including an 'AI winter' between 1970 and 1995. In 1986, mathematician John Spagnuolo and I implemented neural networks at JPL, but the work never went anywhere because we did not include back-propagation. This winter corresponded to a transition between the GAS and LIQUID phases, and preceded the FROZEN period and the Cambrian explosion of TensorFlow applications.

Missing from this figure are some revolutionary developments identified by Ray Kurzweil (Markov models), Yann LeCun, and Andrew Ng, especially the formulation of back-propagation via the chain rule from calculus, which has greatly accelerated development. Paradoxically, it was published in 1970 by Finnish master's student Seppo Linnainmaa, but it has had its greatest effect relatively recently. Here is a curve showing the number of ML patents over time. Note that patents inform the condensation and freezing processes of the AU model I have modified to make it more thermodynamically consistent.

Machine Learning Patents


Here is a curve showing Machine Learning Queries to Google from 2004 to Present. Activity increased around 2014.
Machine Learning Google Queries 2004 to present (3/2/2019)

Here is a more recent timeline with attribution for recent innovations, including Generative Adversarial Networks (GANs) and Long Short-Term Memory cells (LSTMs). I think of GANs as the "Good Cop/Bad Cop" of ML, since they do their work by pairing an agent that synthesizes results with a critic that criticizes them until some kind of convergence is obtained. LSTMs are used in Recurrent Neural Networks (RNNs) and learn when they should remember and when they should forget! This is reminiscent of 'logarithmic forgetting'. RNNs are most useful for problems that evolve over time, while CNNs (Convolutional Neural Networks) are most useful for problems that evolve over space, as in the pixels of an image.
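The "Good Cop/Bad Cop" dynamic can be caricatured in a few lines of Python. This is a loose analogy of my own, not a real GAN (which pairs two neural networks): a 'generator' proposes a value and a 'critic' scores it until the two converge:

```python
# Toy generator/critic loop: the critic knows the "real" target and
# criticizes each proposal; the generator adjusts to reduce the critique.
target = 7.0                   # what the critic considers "real data"
guess, lr = 0.0, 0.1           # generator's proposal and learning rate

for _ in range(200):
    critique = target - guess  # critic: how far off is the fake?
    guess += lr * critique     # generator: nudge toward fooling the critic

print(round(guess, 4))         # converges to ~7.0
```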


Different Laws for Different Phases

A second primal principle that emerged in our rumination on these innovation models is that different rules apply at different phases of the innovation (creation) process. A business lesson from the best-practices department might be that it is bad to use the rules that govern one phase to guide the activities in another. Said thermodynamically, gases have one set of rules, liquids another, and solids yet another.

In other words there are different laws that apply as we transition from the gas phase of innovation (brainstorming, pulling ideas from the ether), to the liquid phase of innovation (selecting which ideas we are going to run with) then finally to the solid phase of innovation (creating and freezing the design). This suggests that if we wanted to devise a reversible innovation process we could melt the solid and boil the liquid. 

Clark Henderson (CH) Model: System Components vs. Architecture

Two Forms of System INNOVATION
  • Improvements to SYSTEM COMPONENTS
  • Changes to SYSTEM ARCHITECTURE
Applying the CH Model to Machine Learning we can make a few comparisons:

Improvements to Machine Learning Components

ML libraries are improving and evolving. When the call signature of a library function doesn't change but the internal implementation has improved, we say, "The SYSTEM COMPONENTS have improved." However, overarching architectural changes are occurring as well. TF2 (TensorFlow 2) was announced today with fewer APIs and a better implementation of eager execution. Eager execution is a new and different architecture for TensorFlow. The changes in TensorFlow are reiterated, with links, in the AI singularity chapter of my review of the book "The Human Race to the Future".

Changes to Machine Learning Architectures

Originally in TF, ML operations were executed in a lazy-evaluation model: the user would specify a graph of the problem and define beforehand the tensors that would flow through the network. (Solving an ML problem consists of training the network and then testing how well it works.) In a programming construct called a 'Session', this predefined graph was executed, and only then was it determined which resources were required, in a 'batch mode' of operation. Eager execution replaced this define-the-graph, then run-the-session-in-a-batch pattern with a "just do it right now" approach. This has the advantage of making AI problems more intuitive to code.
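The difference can be sketched in plain Python (a toy stand-in of my own, not real TensorFlow code): graph mode builds a deferred computation that only runs inside a session-like call, while eager mode computes each step immediately:

```python
class Node:
    """A deferred operation: constructing it performs no arithmetic."""
    def __init__(self, fn, *inputs):
        self.fn, self.inputs = fn, inputs

    def run(self):
        # Executing the graph resolves inputs recursively, like a Session.
        args = [i.run() if isinstance(i, Node) else i for i in self.inputs]
        return self.fn(*args)

# "Graph mode": define the whole computation first, execute later.
graph = Node(lambda a, b: a * b, Node(lambda x, y: x + y, 2, 3), 4)
print(graph.run())  # (2 + 3) * 4 = 20, computed only now

# "Eager mode": every operation runs the moment it is written.
s = 2 + 3
print(s * 4)        # 20, computed immediately at each step
```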


Teece (T) Model

Can a PRODUCT or SERVICE be IMITATED?
  • Companies will WIN MARKET if they have SIMILAR ASSETS:
    These Assets Include:
    • DISTRIBUTION CHANNELS
    • SUPPLIER RELATIONSHIPS
    • CUSTOMER RELATIONSHIPS
    • MARKETING CAPABILITIES
    • MANUFACTURING CAPABILITIES
Machine Learning Companies with Similar Assets

At the highest level, Google and Amazon are locked in a duel to control the Machine Learning Cloud. Hot on their heels is a plethora of entrepreneurs looking to develop the next 'killer app' for ML.

Early in the development of the automobile there were hundreds of companies vying to control the market. We are in a similar mode now, where hundreds of companies at all levels in the enterprise and value chain are competing to contribute AI and ML products. Some companies are creating 'code-free' approaches to AI programming to save time and include non-coding users and enterprises. Due to the growing ubiquity of open source, what it means to compete is changing. It is difficult to develop proprietary solutions when some teenager in their bedroom can duplicate the same capability from existing GitHub examples in a weekend. GitHub accomplishes instant distribution by connecting customer and supplier in a peer relationship. Google provides the marketing by indexing GitHub so customers and suppliers can be connected through common interest. Manufacturing is done by the programmers. The number of middlemen is dropping to zero. It is worth noting that Microsoft owns GitHub, but they do not appear (currently) to be flexing any of their muscle to control access to customers or suppliers. If they did, GitHub would immediately cease to exist, as programmers would pull their code and migrate it to some other open-source portal such as Bitbucket.


Christensen I (C1) Model

C1 is a RESOURCES, PROCESSES and VALUES Model (RPV). RPV determines what a company can and cannot do. Definitions:
  • RESOURCES:
    • People
    • Money
    • Equipment
  • PROCESSES:
    • PROCEDURES of getting things done
  • VALUES:
    • PRINCIPLES that determine HOW DECISIONS are made.
RPV Vary in the ease with which they may be changed:
  • RESOURCES - Easy to Change
  • PROCESSES - Moderately Difficult to Change
  • VALUES - Extremely Difficult to Change
Machine Learning RESOURCES are abundant on the web due to the open-source innovations mentioned above. The monetary cost (MONEY) of developing ML solutions is quite low, and there is excellent training available for free on YouTube and the web, and for very low cost through organizations like Udemy, which are giving the university system a serious run for its money. They do so by being first to market: by the time a university course is offered, AI epochs have come and gone, as have the opportunities for invention and innovation.

Equipment is interesting. Users can develop on their own machines in a platform-independent way; Windows, Linux and macOS solutions all look and run the same when environments such as Python and Jupyter Notebooks are employed for development. There is one caveat: machine learning models are extremely compute-intensive to train on large datasets. There are now straightforward PROCESSES for taking models that were developed on PCs and pushing them into the cloud. Training jobs can be outsourced to Google Cloud or Amazon Web Services, where users can rent custom hardware for a fee, including the latest nVidia Graphics Processing Units (GPUs), which are especially useful for training and testing ML codes. This leaves an opportunity for university, institutional and enterprise participation, by subsidizing the cost of training large models which individuals may not be able to afford. Thus there is a potential syzygy between the individual virtuoso ML programmer and the larger organization.

Organizations like OpenAI are attempting to institute a set of VALUES and ethics to keep AI from ending in catastrophe. One group recently declined to release a story-synthesizing AI for fear it could be used to create fake news.

Christensen II (C2) Model

C2 distinguishes between SUSTAINING and DISRUPTIVE INNOVATIONS. Definitions: 
  • SUSTAINING INNOVATION:
    • INCREMENTAL IMPROVEMENTS to PRODUCT or SERVICE
  • DISRUPTIVE INNOVATION:
    • ADDRESS a NEW NEED
C2 INNOVATIONS affect EXISTING and NEW CUSTOMER bases:
  • EXISTING CUSTOMERS:
    • who demand better PERFORMANCE and are willing to PAY for it.
  • NEW CUSTOMERS:
    •  yet to experience PERFORMANCE in new area.
As seen in the developmental timelines above, ML developed rather slowly and unspectacularly as SUSTAINING INNOVATION until back-propagation of neuron weights (along with Generative Adversarial Networks) was automated and distributed in packages like TensorFlow, PyTorch, and Caffe 1 and 2. The result has been a DISRUPTIVE spike in software development that is piggybacking on the more widespread availability of high-performance GPU hardware, both for rent and for sale. These software and hardware spikes are driving each other, although the collapse of cryptocurrencies like Bitcoin in the last year has impacted the GPU industry significantly.

Christensen III (C3) Model: Value Chain Evolution

C3 distinguishes between FULLY INTEGRATED and SPECIALIZED companies.

Definitions: 
  • FULLY INTEGRATED COMPANY:
    • Performs all COMPONENTS of a SYSTEM production in-house
  • SPECIALIZED COMPANY:
    • Produces one COMPONENT of a SYSTEM

C3 trade-offs are in INTEGRATION vs SPECIALIZATION:
  • FULLY INTEGRATED COMPANY:
    • One stop shop e.g.
      • Boom Box
      • Integrate the best one can do
  • SPECIALIZED COMPANY:
    •  Chooses partners to create a TOTAL SOLUTION e.g.
      • Component Stereo
      • Outsource solutions others do better
C3 further specifies that there is a hierarchy of consumer priorities when it comes to the purchase or adoption of new products. According to this slide from D. Berleant's university course, 'Information, Computing and the Future', those priorities are:


Fully Integrated vs. Specialized

In the case of Machine Learning, if one had to choose a one-stop shop it would be Google first, Amazon second, Microsoft third and Apple fourth, with Facebook as an also-ran. If one had to choose a specialized company, it would be one of those that curate or contribute specific libraries or hardware, such as Anaconda for Python libraries (thus the name) and nVidia for GPU hardware. The explainability problem in AI has also led to the emergence of small companies, often from university incubators, that address that single problem: how to get an ML code to explain to a court what it has done and why it made the selection or choice that it made. It is all sounding a bit free-will and non-deterministic, isn't it? Since ML is being used to hire (and possibly fire) people, explainability is an important unsolved problem that companies like Kyndi are addressing. Another specialized area is hyperparameter optimization: parameters such as the learning rate, which memory cell to use, and which topology of neurons to use can all be adjusted to produce more nearly optimal performance. Small companies like SigOpt and university research groups are addressing these more specialized concerns, while the big guns drive overall progress in the field.

Customer Choice Priority

Addressing these priorities in reverse order is easier:
  • Price: All the ML development tools are free,
    while training large ML models in the cloud costs money.
  • Customization: ML Software is extremely malleable,
    liquid and extensively customizable.
  • Usability: The advent of Jupyter Notebooks has made modular chunks of machine learning as easy to trade as baseball cards.
  • Reliability: Machine Learning models provide results that are fuzzy, often varying by several percent on different training runs. This is different from the cold, hard, high-precision determinism of traditional numerical analysis.
  • Functionality: With CNN's, RNN's, and Reinforcement Learning, functionality is high.


Footnote to Innovation Models 

Mutation Generates Knowledge via a Random Search

In class, we discovered, through the application of these theories of innovation, that nature uses mutation as a search algorithm for generating knowledge from a very primitive, first-principles level. Successful versions survive, and the knowledge of each successful architecture is preserved in its DNA. This is an amazing result with far-reaching ramifications. One important thing to know is whether systems are imbued with some base-level architecture, which is then tailored to the environment by mutation, or whether there is no base-level architecture, but rather things that work and therefore exist, and things that don't work and therefore don't exist. This would be the "there is no spoon" option. As we wind back the clock to before life existed, we get to planetary accretion and stellar evolution, and the same question applies in that context. As we back up further and further towards the Big Bang, we have to ask at what point the rules were articulated that determine how things interact. As my son points out in an existential gasp, "McDonald's logos came out of the Big Bang", which is fairly terrifying if you think about it. Orthogonally, we might wonder if we are living in an oscillating universe, which generates a conundrum that appears in my answer to a Facebook question posed by Deepak Gupta:


So in the inset to the figure above I am basically arguing that there is a copy of our universe that is identical to ours where time is running in the opposite direction. Maybe that's where my Nobel prize will come from.

3. Grad students:
Read 20 pages in the book you have obtained. Explain what you agree with, disagree with, learned from it, and how your views agree or disagree with the reviewers of the book that you are analyzing.

I have been informed this week that UALR does not consider me a graduate student, despite my two master's degrees. Nonetheless I soldier on in the hope that this outrage will be corrected, and I continue my detailed review of "The Human Race to the Future", maintained as a single curated document here. In the session for this homework I review Chapter Eleven, "The AI Singularity", and Chapter Twelve, "Deconstructing Nuclear Nonproliferation". These topics complement each other, as the lessons learned from one can be applied to the other.

Thursday, February 21, 2019

Computing and the Future HW 5 - Prediction Markets, Etc.


Q1)  Report on two prediction markets other than intrade.com.
The first prediction market under report, PredictIt.org, is operated by Victoria University of Wellington, New Zealand, with permission of the CFTC. It brokers a broad array of contracts on political elections and world-issue outcomes. It checks identity against a government-issued ID and will not register users it cannot verify. According to political pundit James Carville, "PredictIt is the most exciting engine in terms of political opinion."


PredictIt.org



One example is the set of contracts for the Democratic nominee in the US 2020 election. Amy Klobuchar, whose visibility increased after the Brett Kavanaugh hearings, stood in the snow campaigning when she announced. This was not enough to put her flush with the top contracts, Bernie Sanders and Kamala Harris, who are tied at 21 cents a share. Bernie announced his run three days ago as of this writing, while Kamala Harris has been in the race for 32 days. Bernie is doing pretty well given Harris's 29-day head start in announcing: he was trailing by a penny yesterday and has caught up in just three days.




Klobuchar entered 12 days ago and Cory Booker 20 days ago, so they could be headed for the "also-ran" category. I jump immediately to the specifics of current politics because it shows the power of prediction markets: when a tool allows us to focus on our problem rather than on its own usage, it becomes a true utility. With InTrade.com in the rear-view mirror, we see the power of prediction markets to become influencers of public opinion and participation themselves. Joe Biden has not announced and is sitting at an 18-cent share, down two cents from yesterday. Maybe the markets know something that Joe doesn't! I find this all very exciting. Drilling down past first impressions, we obtain the full ranking of US 2020 Democratic candidates, sorted by share price.

PredictIt.org



The presidential panel from PredictIt.org shows the current expectations of the 2020 presidential election:



Taking today's figures and summing them by party, we find that, if everyone shows up to vote, the odds are 59/41 in favor of a Democratic president in the next election. Honestly, this makes me feel more at ease for some reason. But there is reason for anxiety, and here's why: right now, it's a toss-up between Kamala Harris and Bernie Sanders. If the ticket splits because people we admire, like Jessica DeLoach Sabin, say that Bernie isn't a 'real' Democrat, then the odds shift to 46/41/14 for Democrat, Republican, and Bernie respectively. Bernie loses instantly, and the outcome of the election bogs down into noise, with a possible repetition of the 2016 outcome.
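The party-level aggregation described above is simple enough to sketch in a few lines of Python. The share prices below are hypothetical stand-ins for the day's PredictIt figures, not actual quotes:

```python
# Sum hypothetical candidate share prices (in cents) by party to get
# the implied odds of each party winning; prices are illustrative only.
prices = {
    "Kamala Harris":  ("D", 21),
    "Bernie Sanders": ("D", 21),
    "Joe Biden":      ("D", 18),
    "Donald Trump":   ("R", 41),
}

totals = {}
for name, (party, cents) in prices.items():
    totals[party] = totals.get(party, 0) + cents

grand_total = sum(totals.values())
for party in sorted(totals):
    print(f"{party}: {100 * totals[party] / grand_total:.0f}%")
```

With these illustrative numbers the split comes out 59/41, matching the figure quoted above.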


The second prediction market under report is actually an ensemble of three emerging prediction markets (PMs) surveyed by Ian Edwards in his Medium article.

Here is a comparison table generated from the information in the article:



Ian makes an interesting remark/distinction:

"The new generation of prediction market platforms are using trustless, public blockchains, such as Ethereum and Bitcoin, to allow for, in theory, greater transparency and remove the need for a central operator."
This lack of human control, a complete delegation to the blockchain, is a change of paradigm whose effects are yet to be fully realized. I dispute the term "trustless": blockchain algorithms achieve public confidence (aka trust) through a distributed accounting ledger and proof-of-work functions, which serve as stand-ins for traditional human and institutional trust, replacing it with redundant error-correcting codes and the watts expended computing hash codes.
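The proof-of-work idea can be illustrated with a toy example. This is a minimal sketch of hash-based work, not any specific blockchain's actual scheme:

```python
import hashlib

def proof_of_work(block_data: str, difficulty: int = 3) -> int:
    """Search for a nonce whose SHA-256 digest starts with `difficulty`
    hex zeros. The CPU cycles burned in this loop are the 'watts
    expended computing hash codes' standing in for human trust."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

nonce = proof_of_work("example block")
digest = hashlib.sha256(f"example block{nonce}".encode()).hexdigest()
print(nonce, digest[:8])
```

Finding the nonce is expensive; anyone can verify it with a single hash, which is what lets confidence be distributed rather than institutional.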

In the three panels below a terse screen capture is provided of the three sites reviewed:



The Augur.net site, while functioning, lacks the sophisticated user interface of PredictIt.org. It supports politics and sports prediction contracts:


Augur.net


The Gnosis website and its Olympia application are not yet functioning.




The stox.com site functions, but it also lacks the sophisticated user interface of PredictIt.org. It hosts mostly sports predictions from Europe, plus NBA basketball in the US.




Q2)  Critique the prediction market idea. Why might their predictions be wrong?

The InTrade.com Romney incident in 2013, covered in class, showed that people will try to manipulate prediction markets even if it costs them a great deal of money. In that incident, one trader ran the table to the tune of more than four million dollars... and lost it all! The market's prediction was wrong, but the system still rewarded those who bet the right way. So even when prediction markets err, they reward those who see things correctly.





Q3) Pausch claims, "... when you do the right thing, good stuff has a way of happening." And later, "It's not about how to achieve your dreams, it's about how to lead your life. If you lead your life the right way, the karma will take care of itself. The dreams will come to you [i.e., will happen automatically]."
This is an example of something called the "just world hypothesis." On the other hand, there is a well known book entitled When Bad Things Happen to Good People. And recall that Pausch himself gave this lecture after being diagnosed with terminal pancreatic cancer.     Discuss your opinions on this important issue.
The event I always return to when weighing the "just world" idea is the Jewish Holocaust of WWII. Six million Jews were murdered by the German Nazis, and more than twenty million Russians died. There is no "just world" in these situations, and as I heard one memorial attendee say, "It is beyond theology". Since that time there has been the Pol Pot genocide, in which nearly two million Cambodians (doctors, lawyers, and intellectuals, some 25% of the population) were murdered. There was the Rwandan genocide, in which nearly a million Tutsi were murdered at the hands of the Hutu. These and many more genocides throughout the world are enumerated with more clarity here. My point is that in a just world, mass genocides do not pop up every few years. So there is no just world at the global scope of human affairs.

There is also no justice at the regional scopes, where those who carry more military, political, economic, or religious influence are in control. We see the military and political injustices in Central America, with kidnappings, murders, and extortion of those who do not toe the line. In this country we see that those with wealth encounter a different flavor of justice than those without the means to assemble "Dream Teams" of defense attorneys when they are charged. Many get caught in a revolving door of contact with the justice system when they cannot pay punitive fines for minor offenses. Then there are those caught in a web of religious influence, like the victims sexually abused by some 6,000 priests in the US Catholic Church or the 700 victims in the Southern Baptist denomination.

Those who are lawyers or have legal training can gain more latitude in legal matters; they can exploit loopholes in the law to their own benefit in ways that those who are not so trained cannot. So there is no just world for those without legal expertise or the means to secure it.

Physicians and those with specialized medical training are able to marshal knowledge, supplies, drugs, and clinical tests for themselves and their families that those without access cannot. Their domain expertise confers on them an advantage, both in times of political stability and in social turmoil. Further, they are financially equipped to relocate, whereas those with fewer resources are not. Consequently there is no just world for those without medical expertise or the resources to pay for it.

I find it implausible, let me sharpen that, I find it impossible, that any significant number of the victims in the genocide, abuse, and injustice cases deserved what happened to them. So the world is not a just place, and there is no reason to believe anything has changed that will fix that, or that it will ever be fixed. Injustice is hard-wired into the system. So we must attempt to survive in the presence of it.

So does this injustice give us permission to emulate it? By no means. The first objective of any reasonable, caring, and moral person is to obstruct the harm that is in the world, and the harm that foments these situations. Reducing harm is the single most important thing any of us can do. As Edmund Burke said, "All that is necessary for the triumph of evil is that good men do nothing."

I have also noticed a peculiar 'complementarity', as Niels Bohr (1885-1962) put it, almost a conservation principle regarding the duality of good and evil (akin to the wave and particle natures in physics). When awful things are happening in one place, equally and oppositely wonderful things are happening in another. So the trick, if there is one, is to stay in the arena of the wonderful and avoid the arena of horror. But for many wonderful people, that has not been possible. As for myself, with limited time, I focus on the wonderful, hoping the evil will eat itself. Often, if one is patient, it does.

Q4) How your project topic will affect your personal future.
I have five candidate projects, and I have not heard a preference expressed for any of them. Soon the wave function of five will collapse to one, or possibly two. For the time being I will pursue this multiprocessing illusion and answer the question for each of them.

PMTMS: Permanent Magnet Transcranial Magnetic Stimulation
I do not know. On the positive side it could help me focus or provide some kind of therapeutic effect, or possibly lead to some kind of computer-brain interface. On the negative side, there is always the possibility of an accident, a memory erasure or injury of some kind.


CNN: feature extension of the TensorFlow Neural Network Playground

The effect that working with the convolutional neural networks in the TensorFlow playground has had on me is this: I have already seen it, understood it, and extended it. This proved my assertion about basis functions mentioned in a previous homework. Executing this project helped me become familiar with TypeScript, Node.js, and npm, the JavaScript package manager for Node.js. npm is in widespread use. This helps improve, extend, and maintain my machine learning literacy.

My objective in choosing this project would be to transmit that benefit to other computer and information science students.

RNN: predicting signals over time using machine learning.

The effect that working with recurrent neural networks in TensorFlow and Python Jupyter notebooks has had on me is this: I have seen it, understood it, and am in the process of codifying, extending, and generalizing it.

I have done a lot of work in time- and frequency-domain signal processing. In fact I have written a book on it. What interests me about RNNs is that they constitute a smart predictive filter that is calibrated using data but is not limited to a specific choice of basis functions, as Fourier analysis is. The machine learning model is trained against a very general set of signal inputs and learns to output the most likely signal as a result. That is a deeply powerful tool whose application could benefit me and others.
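The "predictive filter calibrated using data" idea can be sketched with a fixed linear predictor fitted by least squares; an RNN generalizes this by learning a nonlinear hidden-state update rather than fixed coefficients. Everything below is illustrative and uses only NumPy:

```python
import numpy as np

# Fit a linear autoregressive predictor from data: each sample is
# predicted from the 4 samples before it. This is the classical,
# fixed-basis version of what an RNN learns nonlinearly.
t = np.arange(200)
signal = np.sin(2 * np.pi * t / 25)

order = 4
X = np.column_stack([signal[i:len(signal) - order + i] for i in range(order)])
y = signal[order:]

coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)  # calibrate from data
pred = X @ coeffs

rms = np.sqrt(np.mean((pred - y) ** 2))
print(f"RMS prediction error: {rms:.2e}")
```

A pure sinusoid is exactly predictable by such a filter, so the error is near machine precision; real signals with changing structure are where the learned, nonlinear RNN version earns its keep.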

My objective in choosing this project would be to transmit that benefit to other computer and information science students.

SGL: Soft Gate Logic as a future computing architecture

The effect that defining and articulating Soft Gate Logic has had on me is this: I have defined it, solved some theoretical problems necessary for its correct definition, understood it, and am in the process of codifying and using it. 

What interests me about SGL is that it constitutes a dramatic extension to Boolean logic that in the limiting cases reverts to current computing, but enables fuzzy computing. It is also a gateway to quantum computing, both in terms of understanding it and using it.
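The "reverts to current computing" property can be sketched with the standard Zadeh min/max fuzzy operators standing in for soft gates; the actual SGL gate definitions are my own work in progress and may differ:

```python
# Fuzzy AND/OR/NOT on the interval [0, 1] (Zadeh operators).
# At the endpoints 0 and 1 these revert exactly to Boolean logic.
def f_and(a, b): return min(a, b)
def f_or(a, b):  return max(a, b)
def f_not(a):    return 1.0 - a

# Boolean endpoints behave classically:
assert f_and(1, 1) == 1 and f_and(1, 0) == 0
assert f_or(0, 0) == 0 and f_not(1) == 0

# Intermediate truth values pass through smoothly:
print(f_and(0.7, 0.4))  # 0.4
print(f_not(0.25))      # 0.75
```

Because the Boolean case falls out for free, any circuit built from such gates degrades gracefully to ordinary digital behavior when its inputs are crisp.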

My objective in choosing this project would be to participate in the revolution that such an extension would provide.

WQC: Warm Quantum Computer

The effect that defining and articulating the Warm Quantum Computer has had on me is this: I have defined some key pieces of it and have proposed how those might be built, but I am just at the very beginning of understanding its potential impact. My hope is that it could be used to create a more powerful imaging technology, a quantum camera if you will, with applications in medicine, photography, videography, communication, and entertainment. 

What interests me about WQC is that it could conceivably create a more affordable quantum computer, and thus a "Personal Quantum Computer".

My objective in choosing this project would be to participate in the revolution that such a technology would provide.


Q5) Identify 8+ sources of information about your project topic. Provide the URLs and give 2-3 sentences describing each one.
PMTMS: Permanent Magnet Transcranial Magnetic Stimulation
  • Description of my 2006 experiment.
    I dug up the transcript of a TMS experiment I did 13 years ago.
  • If I were going to design version 2, I would use these magnets.
    Rare earth neodymium magnets are now available in high intensity versions.
  • Shapeways
    A 3D printing service that can be used for rapid prototyping of TMS equipment.
  • Here is an article on dosing.
    It is important to consider dosing in any brain stimulating technology,
    whether electrical, magnetic or pharmacological.
  • Here is an article on the circuitry, note the 4.5 kV pulse voltage!
    rTMS pulses fed to low resistance coils produce very powerful
    magnetic transients. This is important to realize.
    To some degree rTMS is a blunt instrument.
  • Wiki on Repetitive Transcranial Magnetic Stimulation
    Risks increase at higher frequencies of operation.
  • RTMS Hardware
    The machinery for rTMS is available in a variety of models and form factors.
  • RTMS Overview
    Basic description of the medical technology including enumeration of those
    who should not have the procedure.
  • NIH Overview of Brain Stimulation Therapies
    This site compares and contrasts various brain stimulation therapies.
I decided to consider a gel-brain stimulator to show the effects of pmTMS without the thorny issue of human subjects. Here is a first step:


CNN: feature extension of the TensorFlow Neural Network Playground 
RNN: predicting signals over time using machine learning
  • Jupyter Widgets - the ipywidgets library
    Typical Invocation > from ipywidgets import *
    Provides interactivity for Python programs running in Jupyter Notebooks
  • NumPy - scientific computing library for Python
    Typical Invocation >  import numpy as np
    Provides numerical analysis data structures and routines
  • Pandas - Python Data Analysis Library
    Typical Invocation > import pandas as pd
    Provides Procedural 'Excel-Like' Data Tables and Operators
  • SciPy (“Sigh Pie”) - open-source S/W for math, science, and engineering
    Typical Invocation > import scipy as sp
    Example Invocation > from scipy.integrate import trapz, simps
    Numerical integration using trapezoidal and Simpson's rule
  • Scikit-learn - Machine Learning Library for Python
    Example Invocation > from sklearn.metrics import mean_squared_error
    Compute the mean squared error in a calculation
  • Matplotlib Library - Python Rich 2D Plotting Library
    Typical Invocation >  import matplotlib.pyplot as plt
    Special Invocation > %matplotlib inline enables inline plot rendering in the notebook
  • Mlab - Python scripting for 3D plotting with mayavi
    Typical Invocation > from mayavi import mlab
                       > mlab.init_notebook()
    Enables 3D visualization and rendering in the notebook
  • Math Library - Python Math Library
    Typical Invocation > import math
    This library is Python's version of math.h from the C language.
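As a tiny example of how a few of these libraries combine in a notebook, here is a hypothetical snippet that builds a signal with NumPy and integrates it with SciPy's trapezoidal rule (newer SciPy versions use the name `trapezoid` in place of `trapz`):

```python
import numpy as np
from scipy.integrate import trapezoid

# Build a test signal with NumPy, then integrate it numerically with
# SciPy's trapezoidal rule; the exact answer for sin on [0, pi] is 2.
t = np.linspace(0, np.pi, 1001)
signal = np.sin(t)

area = trapezoid(signal, t)
print(f"Integral of sin(t) over [0, pi]: {area:.4f}")  # ~2.0000
```

The same pattern, NumPy arrays flowing into SciPy routines, with Matplotlib for plotting, is the backbone of the RNN notebooks above.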
SGL: Soft Gate Logic as a future computing architecture
  • Analog Computers
    I am interested in these because prior to the digital age,
    computing with integrators and differentiators implemented as
    operational amplifiers was being developed, but this development
    was curtailed with the appearance of digital machinery.
    This is a shame because analog computing is real-time computing.
  • Missile guidance
    The V2 rocket was the first rocket to use an analog guidance system,
    so it is good to review the history of missile guidance for this technology.
    They were also incorporated into bomb sights.
  • Inertial guidance
    Inertial guidance systems were a generalization to other forms of transport.
  • GPS navigation
    GPS navigation is the terminal node of current development for guidance systems;
    it would be interesting to consider a soft-gate version of GPS and its Gold code.
  • Fuzzy Logic
    This is an overview of fuzzy logic, which defines logic on the continuous interval [0,1].
    It has been around since the 1920s in one form or another.
  • Fuzzy Thinking
    A book about thinking with Fuzzy Logic, as opposed to just being confused.
  • Complex Numbers
    Complex numbers and analysis can be implemented using two fuzzy bits.
    This is the natural gateway to quantum computing.
    Everyone should know how to add, subtract, multiply and divide complex numbers.
  • Wave Functions
    Wave functions are an alternate view of reality, contrasting with particles.
    As such they are fundamentally closer to reality, as well as being less intuitive.
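The complex arithmetic mentioned in the Complex Numbers entry above is built into Python natively, which makes it a convenient playground for the two-fuzzy-bit idea:

```python
# The four basic operations on complex numbers, built into Python.
a = 3 + 4j
b = 1 - 2j

print(a + b)   # (4+2j)
print(a - b)   # (2+6j)
print(a * b)   # (11-2j)
print(a / b)   # (-1+2j)
```

Division works by multiplying through by the conjugate of the denominator, which is the same identity one uses by hand.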
WQC: Warm Quantum Computer 
  • Gisin's New York Times Article on Entanglement
    This exciting 1997 article discusses Nicolas Gisin's experiment of that time.
    It was my first exposure to photon entanglement.
  • Definition of a qubit
    This wiki describes qubits and develops the Bloch sphere
    model of the qubit and bra-ket notation
  • The IBM quantum computer you can use
    This is a real code-and-go quantum computer
    I find QC results and ML results similar in that they are fuzzy,
    unlike the exact results we are used to in numerical analysis.
  • Lithium Niobate as a photon splitter (downconversion)
    It is this technology that Gisin and others have used to
    generate entangled photons.
  • Three solution photon combiner (upconversion)
    This article describes a novel three-dye system for
    combining two photons back into one.
    This is critical for me, providing an 'inverse-operator'.
  • Braket notation for quantum mechanics
    This video runs over an hour but is worth every second.
    The instructor starts with simple matrix notation and
    ends up explaining a good deal of quantum mechanics and Dirac notation.
  • How to add 1 and 1 on a quantum computer
    This blog entry I did earlier in preparing for this class
    addresses the, "so what's it good for?" question from first principles.
  • Solving Rubik's cube on a quantum computer
    There is a simultaneity and interconnectedness to the faces of a Rubik's cube.
    They are, to some degree, entangled with each other and to some degree
    independent. I'm interested in programming languages for QC that
    would provide for their rapid and intuitive embedding.
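The qubit and bra-ket sources above can be made concrete with a short state-vector sketch; only NumPy is assumed, and this is the textbook single-qubit picture rather than anything specific to the WQC design:

```python
import numpy as np

# |0> as a state vector, and the Hadamard gate H.
ket0 = np.array([1.0, 0.0])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

# H|0> = (|0> + |1>) / sqrt(2): an equal superposition.
state = H @ ket0
probs = np.abs(state) ** 2  # Born-rule measurement probabilities
print(probs)  # [0.5 0.5]
```

The 50/50 measurement outcome is exactly the kind of inherently fuzzy result noted in the IBM quantum computer entry above.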

Building Your Personal Future


Q6) ("Do"). Based on your work on this topic, identify something that will progress you toward your vision, such as a strategic action that helps address a strategic issue, such as a core strategic issue that is a prerequisite to others. (On your blog, simply note if you succeeded on this question, but don't put the answer there.)
I have ruminated on this question. A useful construct was to consider others who have done so as well, adding a people and experience dimension to the activity.
Q7) ("Monitor"). Consider the results of the "Do" step you just completed. How well did it work, or is it working now? If well, then you are on track. If instead any problems of any kind arose, see 3.  (On your blog, note if it worked or not, but don't put what it was.)
It worked out but I have sustained a minor injury, which may or may not be related, but could be a warning to proceed with care. 
Q8) ("Accountability"). Consider the results of the "Monitor" step you just completed. If it worked, reward yourself! If things didn't go as well as hoped for, analyze why and what you could do to circumvent them. Take notes on this for future reference. (On your blog, note the result of this question in general terms, but don't put any details.)
It is important to create sandboxes where innovations may be safely tested and evaluated before they are distributed or disclosed.
Q9) Grad students only. Read 20 pages in the book you have obtained. Explain what you agree with, disagree with, learned from it, and how your views agree with or disagree with the reviewers of the book.

I have moved this answer to my ongoing review of the book, "The Human Race to the Future", a single curated document that is here. In the session for this question I reviewed chapters nine and ten of the book.