
Thursday, March 07, 2019

Computing and the Future HW 6 Candidate Projects and Innovation Models

1. Project:
  1. If you have not yet identified a project topic, decide on one.
    Describe your project topic and format.
  2. Write 325 words or more for your project.
  3. Decide what would be a good thing to do next on the project. It does not have to be a big thing, but should be something. Describe it.
Answer 1a. Two Chosen Projects

Of the five candidate projects I examined in previous homeworks, I have narrowed the list to two. The first is the TensorFlow neural network playground demonstration, which I have extended with custom basis functions. It is done and ready for presentation in front of the class on an internet-equipped computer.

The second project deals with real neural networks, that is, the human brain. I have read with great interest the book "The Human Race to the Future" by Dr. Daniel Berleant, especially the section on magnetic stimulation of the brain. It has inspired me to continue an experiment I began in 2006: building a permanent-magnet repetitive transcranial magnetic stimulator, which I am now calling the vTMS. It will be demonstrated with an instrumented scale model I call the vBrain.

Answer 1b. vTMS

My goal is to answer the question, "Can a person sense a changing magnetic field produced by moving permanent magnets in the vicinity of the head?" If so, does it benefit them? How does it compare with traditional rTMS?

I want to know whether powerful permanent magnets, mounted in rotors, can induce a complementary field in brain tissue, in a manner similar to that of more expensive commercial rTMS units.

The project will consist of three components: the vBrain, a gelatin brain simulator housed in a model skull; and the two halves of the vTMS, a left-hemisphere stimulator and a right-hemisphere stimulator with their associated controllers.

To ensure safety and reduce administrative overhead in demonstrating the concept, I have designed a simulated head, the vBrain, that will contain an array of magnetic field sensors in the form of miniature compasses. The vBrain is made of clear gelatin, glycerine and sodium chloride to simulate the impedance characteristics of the brain. The compasses will be embedded in the gelatin.

This will enable me to measure and compare the magnetic field strength of the old unit with that of the new unit and, as an extra, to estimate and compare the magnetic field strength of these units with that of rTMS machines.

I have redesigned the magnet rotor assembly to be more compact and to use more powerful magnets. There are two identical rotors, one for each side of the simulated head. The rotors have been fabricated using 3D printing and delivered, and I have ordered and received the magnets as well. The N52 grade is the most powerful available, and two stacks of them must be handled with some care, as they can otherwise create a bit of an arm-wrestling situation. The material is brittle under impact; I learned this lesson by shattering one when I let it impinge on a spherical magnet.

I have selected pulse-width-modulated (PWM) DC motors whose speed can be varied from 0 to 600 rpm and which, importantly, can be reversed in direction. This reversibility doubles the potential complexity and intensity of the interacting fields. These motors have been delivered and tested.

There are six magnets in the rotor, arranged so that their polarity alternates. This provides the maximum change in magnetic field per unit of rotation. My main concern was that the rotor, made of plastic 0.18" thick, would deform excessively in the presence of the alternating magnet poles. The rotor disks have turned out to be very strong; the material is rigid and machinable, and I am quite happy with how they turned out. I could have had the screw holes and countersunk regions 3D printed, but I did those by hand, which took an afternoon of careful measuring, drilling and countersinking. The Shapeways 'versatile plastic' material is robust and tolerant of gradual machining. The magnets press-fit perfectly into each recessed area, secured by a drop of clear glue against the pull of the alternating poles.

One revolution of the rotor per second produces six magnetic pulses. Since 600 rpm is 10 revolutions per second, the rotor can produce magnetic signals at frequencies from 0 to 60 Hertz. This is about 30 Hertz above the fastest brainwaves, providing some experimental margin. (A short sketch after the band list below checks this arithmetic.)

Brainwave frequencies:

  • DELTA (0.1 to 3.5 Hz), the lowest frequencies
  • THETA (4 to 8 Hz)
  • ALPHA (8 to 12 Hz)
  • BETA (12 to 30 Hz)
  • GAMMA (above 30 Hz)
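As a quick check on this arithmetic, here is a minimal Python sketch. The six-pulses-per-revolution figure comes from the rotor design described above; the band edges are the ones listed here, with gamma capped at the rotor's 60 Hz maximum:

    # Six alternating-pole magnets -> six magnetic pulses per revolution.
    PULSES_PER_REV = 6

    def pulse_freq_hz(rpm):
        """Stimulus frequency produced at a given motor speed."""
        return rpm / 60.0 * PULSES_PER_REV      # 600 rpm -> 60 Hz

    def rpm_for_freq(freq_hz):
        """Motor speed needed to produce a given stimulus frequency."""
        return freq_hz * 60.0 / PULSES_PER_REV  # 60 Hz -> 600 rpm

    # Motor speeds needed to span each brainwave band listed above:
    bands = {"delta": (0.1, 3.5), "theta": (4, 8), "alpha": (8, 12),
             "beta": (12, 30), "gamma": (30, 60)}
    for name, (lo, hi) in bands.items():
        print(f"{name:6s} {rpm_for_freq(lo):6.1f} - {rpm_for_freq(hi):6.1f} rpm")

So the alpha band, for example, needs only 80 to 120 rpm, well within the motors' range.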
The motors will be mounted on L-brackets which contain adapters for the rotors. A set of six 4-40 pan-head screws will connect each rotor to its adapter for flush mounting, so that there is no protrusion on the vBrain-facing side of the rotor.

These components arrived exactly when promised on March 5, 2019 via Amazon.

For a controller I have selected a reversible unit that can supply voltage and current sufficient to produce one inch-pound of torque for the rotors. I am hoping this will be sufficient for two rotors separated by six inches. I am working with a 12-volt supply, and the expected current draw will be less than one ampere, but there is margin for more current if necessary.
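A quick power check supports the one-ampere estimate: mechanical power is torque times angular velocity. The sketch below assumes a lossless motor, so the real draw will be somewhat higher:

    import math

    # One inch-pound of torque delivered at the full 600 rpm, from a 12 V supply.
    # Assumes a lossless motor; real current draw will be somewhat higher.
    torque_nm = 1.0 * 0.11298              # 1 in-lb in newton-metres
    omega = 600 * 2 * math.pi / 60         # 600 rpm in rad/s (~62.8)

    power_w = torque_nm * omega            # mechanical power, ~7.1 W
    current_a = power_w / 12.0             # ~0.59 A at 12 V

    print(f"{power_w:.1f} W, {current_a:.2f} A")

Even if the motor were only 60 percent efficient, the draw would still be about one ampere.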

The unit I have selected is the Quimat 7-30V DC 10A 300W PWM Speed Adjustable Reversible Switch DC Motor Driver Reversing Switch. These arrived March 5, 2019.

Operating at 12 V instead of line voltage enables a battery to be used instead of a power supply connected to the mains. This increases safety significantly. I am hoping there is enough resolution on the PWM controller to achieve a nearly continuous range of stimulating frequencies. I will be using a strobe tachometer app on the iPhone to measure the frequency of rotation. I just installed it and tested it against the 2006 pmTMS unit, and it works quite well, freezing the image of the rotor when the correct frequency is selected using a slider.


Answer 1c.
The next things I will do on the project are to assemble the rotor components, wire the controllers, and begin work on the vBrain. I have ordered and received a medically accurate skull model, and the gelatin will be cast within its boundaries.

To facilitate the casting, I originally intended to trepan the skull model at the apex of the right parietal plate and plug holes at the bottom of the braincase, which includes the occipital and the left and right temporal floors. I am now leaning towards inverting the skull and doing two 'pours' of the physiological gelatin material; there is a large hole at the base of the skull that facilitates this without having to damage the pristine upper surface. I have obtained the magnetic sensors and am working on an electric field sensor using resistance calculations like the sketch below. The conductivity of fresh neocortex is well known. I used a characteristic dimension of 1 inch, since this is the diameter of the stimulating magnets. The result is that a piece of simulated neocortex this size should have a resistance between 355 and 597 Ohms.
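Those resistance figures follow from a one-line formula: for a cube of tissue with side L and conductivity sigma, the end-to-end resistance is R = L / (sigma × L²) = 1 / (sigma × L). The conductivity range in the sketch below is back-solved from the 355 and 597 Ohm figures above, not an independently measured value:

    # Resistance of a cube of conductive tissue: R = 1 / (sigma * L).
    # L is the 1-inch characteristic dimension (the magnet diameter).
    # The sigma range is back-solved from the 355-597 Ohm figures quoted
    # in the text, not a measured value.
    L = 0.0254                            # 1 inch in metres

    for sigma in (0.066, 0.11):           # assumed conductivity, S/m
        r_ohms = 1.0 / (sigma * L)
        print(f"sigma = {sigma:5.3f} S/m -> R = {r_ohms:.0f} Ohms")

This prints roughly 597 and 358 Ohms, bracketing the range quoted above.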

2. Theories of Innovation

Pick an industry. Discuss it concisely from the perspective of each of the following theories/models of innovation:

  • Kline and Rosenberg (KR) Model: Market and Technical Forces
  • Abernathy and Utterback (AU) Model: Gas, Liquid, Solid + Transitions
  • Clark and Henderson (CH) Model: System Components vs. Architecture
  • Teece Model (T): Imitation
  • Christensen I (C1) Model: Resources, Processes and Value
  • Christensen II (C2) Model: Sustaining vs. Disruptive
  • Christensen III (C3) Model: Value Chain Evolution


Machine Learning as a Selected Industry


The industry I am picking is Artificial Intelligence, specifically Machine Learning: the revival of a revolution in progress. This revival is being facilitated by open-source software packages like TensorFlow, Caffe, PyTorch and Keras that are developed and run on personal computers, then deployed in the cloud via containers on Amazon Web Services and Google Cloud. ML is enabling fantastic advances, in parallel, through Big Data processing and the neural network model. This is so prevalent that some are repeating Clive Humby's line that 'data is the new oil', and Andrew Ng compares the ubiquity of Machine Learning to electricity. ML is becoming a utility, like the phone pole outside that we take for granted until it stops working. For the Kline/Rosenberg model I compare and contrast the summary presented in class with a comprehensive outline obtained by a careful reading of the paper. For the remaining models I use the abbreviated versions presented in class.

Method

Although the Kline and Rosenberg (KR) article is 33 years old (1986), the issues it raises and the examples it uses remain quite timely. The article predates the web and the era of ubiquitous personal computing, and its authors drew their examples from the transportation and power generation industries. The web and the personal computer have sped up innovation, knowledge sharing, acquisition and growth, but the conclusions of the paper remain mostly intact.

Kline and Rosenberg (KR) Class Model:
Market and Technical Forces
  • INNOVATION requires multiple inputs.
  • INNOVATION requires feedback. (compound statement cleaved)
  • INNOVATION creates knowledge.
  • INNOVATION is inseparable from its diffusion.
  • The MARKET improves the PRODUCT.
  • The MARKET improves the COMPANY.
KR Paper Outline
  • INNOVATION is controlled by MARKET FORCES
  • INNOVATION is controlled by TECHNICAL FORCES
  • Successful INNOVATION is:
    • 3/4 MARKET Need
    • 1/4 TECHNICAL Opportunity
  • Successful INNOVATION balances:
    • new PRODUCT REQUIREMENTS
    • MANUFACTURING CONSTRAINTS
    • ORGANIZATION SUSTAINABILITY
  • Successful INNOVATION demands right combination of:
    • affordable COST
    • technical PERFORMANCE
    • TIMING of introduction
    • rapid response to FEEDBACK
  • Canonical Examples:
    • Solar Energy had to wait for costs to drop
    • Concorde cost 15x per passenger mile
  • Models of INNOVATION
    • The Linear Model
      • Research, Development, Production, Marketing
      • Lacks FEEDBACK LOOPS
    • The Chain-Linked Model
      • Expanded Linear + FEEDBACK LOOPS
    • Radical vs Evolutionary INNOVATION and Support Organizations
    • Bicycle Dynamics is an unsolved problem. (random but interesting)
  • Identification and Reduction of UNCERTAINTY
  • Transition from CHAOS to ORDER (Similar to the Freezing Model)
  • Separates SCIENCE from ENGINEERING
  • Orthogonalizes SCIENCE vs INVENTION/DESIGN
  • INNOVATION is
    •  inherently UNCERTAIN
    • DISORDERLY
    • composed of COMPLEX SYSTEMS
    • subject to CHANGE/MODIFICATION
    • initiated by DESIGN instead of SCIENCE
    • enabled by FIVE IMPORTANT PATHWAYS
      • FEEDBACK that links R&D with Production/Marketing
      • PERIPHERAL links that serve the central INNOVATION
      • long-range RESEARCH
      • creation of new DEVICES or PROCESSES
      • SCIENTIFIC SUPPORT TOOLING and DEVICES
    • affects
      • MARKET ENVIRONMENT
      • PRODUCTION FACILITIES
      • PRODUCTION KNOWLEDGE
      • SOCIAL CONTEXTS
  • Two Major Variables:
    • UNCERTAINTY
    • LIFE CYCLE STAGE IDENTIFICATION
So how does the KR Model apply to machine learning?

The KR Model from class is the most useful starting point here:
  • The MARKET improves MACHINE LEARNING (ML).
  • The MARKET improves the COMPANY.
KR Discussion

The MARKET improves MACHINE LEARNING: Our proxy stand-ins for the MARKET will be Google, Anaconda, Guido van Rossum and Stack Overflow. Google open-sourced and released TensorFlow, which gave everyone an advanced starting point for this change of programming paradigm. Joel Barker has a saying: "When there is a change of paradigm, everyone starts at zero." In the case of ML, the entity Google, a complex and many-splendored thing, provided everyone who used it the equivalent of a Formula One race car instead of a tricycle. The entity Anaconda, by providing a curated tool environment, further facilitated the use of TensorFlow. This phenomenon is epitomized by the phrase 'conda install numpy', three words which equip the user with a high-quality numerical analysis library nearly instantly. Consider: 'conda install how-to-fly-a-helicopter'. Guido van Rossum, the author of Python, provided the language to the world effectively license-free. This terse programming language, which makes whitespace syntactically significant, has seen explosive growth and relevance to ML. The web utility stackoverflow.com, which curates programming Q&A, has become an integral part of ML software development. If one has a programming error, one can copy and paste that error into Google search, which provides a solution with a high rate of positive outcomes.

The MARKET improves the COMPANY: Google is now recognized as one of the de facto leaders in machine learning. The Google Home Assistant, summoned by the phrase "Hey Google", takes verbal queries, translates them into text, and invokes the search engine and other company services to respond in real time. This author used the service multiple times just in the interval of writing this piece. Anaconda, Python and stackoverflow.com enjoy virtual monopolies on the software and services they provide, without exploiting their user communities financially or emotionally.


Abernathy and Utterback (AU) Class Model:
Fluid, Transitional, Specific Phases

INNOVATION occurs in three phases:
  • FLUID PHASE
    • CHAOS and EXPERIMENTATION
  • TRANSITIONAL PHASE
    • STANDARDS Set
    • PRODUCTIVITY Increases
  • SPECIFIC PHASE
    • One Technology Dominates

AU Modified Model:
Gas, Liquid, Solid + Transitions 

In this version the AU Model is modified to look more thermodynamically consistent.
  • GAS PHASE
    • CHAOS and EXPERIMENTATION
  • CONDENSATION TRANSITION
  • LIQUID
    • CONSTRAINTS EMERGE
    • PROTOTYPING and PRELIMINARY DESIGN
  • FREEZING TRANSITION
  • SOLID PHASE
    • Design is FROZEN
AU Discussion

Hardware engineers love freezing their designs by writing them in the stone of silicon. Once the designs are tested, the engineers can go home and play.

Software engineers never freeze their designs willingly, and are continuously tempted to go in and improve (monkey with) the code. This introduces bugs that are sometimes never found. Software engineers are never done, and they spend their weekends looking for missing semicolons and rewriting code that already works.

The evolution of machine learning through these GAS, LIQUID and SOLID phases can be seen through the timelines below.


Thermodynamics of Machine Learning

The picture below shows some key innovations over the last seven decades, including an 'AI winter' between 1970 and 1995. In 1986, mathematician John Spagnuolo and I implemented neural networks at JPL, but the work never went anywhere because we did not include back-propagation. This winter corresponded to a transition between the GAS and LIQUID phases, but preceded the FROZEN period and the Cambrian explosion of TensorFlow applications.

Missing from this figure are some revolutionary developments identified with Ray Kurzweil (Markov models), Yann LeCun, and Andrew Ng, especially the abstraction of back-propagation via the chain rule of calculus, which has greatly accelerated development. Paradoxically, back-propagation was published in 1970 by the Finnish master's student Seppo Linnainmaa, but it has had its greatest effect relatively recently.
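To make the chain-rule point concrete, here is a toy reverse-mode sketch of my own (not Linnainmaa's notation): for a two-parameter model, every gradient is just a product of local derivatives accumulated backwards through the forward pass.

    # Reverse-mode differentiation, by hand, of
    # loss = (w2 * relu(w1 * x) - y) ** 2.
    # Every backward line is one application of the chain rule.
    x, y = 1.5, 2.0
    w1, w2 = 0.8, -0.5

    # forward pass, saving intermediates
    a = w1 * x                   # pre-activation
    h = max(a, 0.0)              # ReLU
    p = w2 * h                   # prediction
    loss = (p - y) ** 2

    # backward pass: outermost function first
    d_p = 2 * (p - y)                        # d(loss)/dp
    d_w2 = d_p * h                           # dp/dw2 = h
    d_h = d_p * w2                           # dp/dh  = w2
    d_a = d_h * (1.0 if a > 0 else 0.0)      # ReLU derivative
    d_w1 = d_a * x                           # da/dw1 = x

    print(d_w1, d_w2)   # about 3.9 and -6.24 for these inputs

Frameworks like TensorFlow simply automate this bookkeeping over graphs with millions of parameters.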

Below is a curve showing the number of ML patents over time. Note that patents inform the condensation and freezing processes of the AU model modified above to be more thermodynamically consistent.

Machine Learning Patents


Here is a curve showing Machine Learning queries to Google from 2004 to the present. Activity increased markedly around 2014.
Machine Learning Google Queries, 2004 to present (3/2/2019)

Here is a more recent timeline with attribution for recent innovations, including Generative Adversarial Networks (GANs) and Long Short-Term Memory cells (LSTMs). I think of GANs as the "good cop/bad cop" of ML, since they do their work by pairing an agent that synthesizes results with a critic that criticizes them until some kind of convergence is obtained. LSTMs are used in Recurrent Neural Networks (RNNs) and learn when they should remember and when they should forget! This is reminiscent of 'logarithmic forgetting'. RNNs are most useful for problems that evolve over time, while CNNs (Convolutional Neural Networks) are most useful for problems that evolve over space, as in the pixels of an image.
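The "learn when to remember and when to forget" behavior lives in the LSTM's gates. In the standard textbook formulation (generic equations, not specific to any one library), the forget gate scales the previous cell state:

\[
\begin{aligned}
f_t &= \sigma(W_f\,[h_{t-1}, x_t] + b_f) &&\text{(forget gate, each entry in } (0,1)\text{)}\\
i_t &= \sigma(W_i\,[h_{t-1}, x_t] + b_i) &&\text{(input gate)}\\
\tilde{c}_t &= \tanh(W_c\,[h_{t-1}, x_t] + b_c) &&\text{(candidate memory)}\\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t &&\text{(keep the old vs. write the new)}
\end{aligned}
\]

When an entry of f_t is near 1 the cell remembers; near 0, it forgets. Both behaviors are learned from data rather than hard-coded, which is what lets the network decide what is worth keeping.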


Different Laws for Different Phases

A second primal principle that emerged in our rumination on these innovation models is that different rules apply at different phases of the innovation (creation) process. A business lesson from the best-practices department might be that it is bad to use the rules that govern one phase to guide the activities of another. Said thermodynamically: gases have one set of rules, liquids another, and solids yet another.

In other words, different laws apply as we transition from the gas phase of innovation (brainstorming, pulling ideas from the ether), to the liquid phase (selecting which ideas we are going to run with), and finally to the solid phase (creating and freezing the design). This suggests that if we wanted to devise a reversible innovation process, we could melt the solid and boil the liquid.

Clark and Henderson (CH) Model: System Components vs. Architecture

Two Forms of System INNOVATION
  • Improvements to SYSTEM COMPONENTS
  • Changes to SYSTEM ARCHITECTURE
Applying the CH Model to Machine Learning we can make a few comparisons:

Improvements to Machine Learning Components

ML libraries are improving and evolving. When the call signature of a library function doesn't change but the internal implementation has improved, we say the SYSTEM COMPONENTS have improved. However, overarching architectural changes are occurring as well. TF2 (TensorFlow 2) was announced today with fewer APIs and a better implementation of eager execution. Eager execution is a new and different architecture for TensorFlow. The changes in TensorFlow are reiterated, with links, in the AI singularity chapter of my review of the book "The Human Race to the Future".

Changes to Machine Learning Architectures

Originally in TF, ML operations were executed in a lazy-evaluation model: the user would specify a graph of the problem and define, beforehand, the tensors that would flow through the network. (Solving an ML problem consists of training the network and then testing how well it works.) This predefined graph was executed inside a programming construct called a 'Session', at which point the required resources were determined, in a 'batch mode' of operation. Eager execution replaced this define-the-graph, run-the-session-in-a-batch pattern with a "just do it right now" approach. This has the advantage of making AI problems more intuitive to code.
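A minimal illustration of the two styles, sketched from memory of the APIs (exact module paths vary between versions):

    import tensorflow as tf

    # TF1 style: describe the computation first, execute it later in a Session.
    # (Shown commented out; it requires TF1, or tf.compat.v1 with eager disabled.)
    # x = tf.placeholder(tf.float32)
    # y = x * 2.0
    # with tf.Session() as sess:
    #     print(sess.run(y, feed_dict={x: 3.0}))  # graph runs here, batch-style

    # TF2 eager style: operations execute immediately, like ordinary Python.
    x = tf.constant(3.0)
    y = x * 2.0
    print(y)   # tf.Tensor(6.0, ...) -- no graph definition, no session

The eager version reads like plain Python, which is exactly the "just do it right now" intuition described above.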


Teece (T) Model

Can a PRODUCT or SERVICE be IMITATED?
  • Companies will WIN MARKET if they have SIMILAR ASSETS:
    These Assets Include:
    • DISTRIBUTION CHANNELS
    • SUPPLIER RELATIONSHIPS
    • CUSTOMER RELATIONSHIPS
    • MARKETING CAPABILITIES
    • MANUFACTURING CAPABILITIES
Machine Learning Companies with Similar Assets

At the highest level, Google and Amazon are locked in a duel to control the machine learning cloud. Hot on their heels is a plethora of entrepreneurs looking to develop the next 'killer app' for ML.

Early in the development of the automobile there were hundreds of companies vying to control the market. We are in a similar mode now, where hundreds of companies at all levels of the enterprise and value chain are competing to contribute AI and ML products. Some companies are creating 'code-free' approaches to AI programming to save time and include non-coding users and enterprises. Due to the growing ubiquity of open source, what it means to compete is changing. It is difficult to develop proprietary solutions when some teenager in their bedroom can duplicate the same capability from existing GitHub examples in a weekend. GitHub accomplishes instant distribution by connecting customer and supplier in a peer relationship. Google provides the marketing by indexing GitHub so customers and suppliers can be connected through common interest. Manufacturing is done by the programmers. The number of middlemen is dropping to zero. It is worth noting that Microsoft owns GitHub, but they do not appear (currently) to be flexing any of their muscle to control access to customers or suppliers. If they did, GitHub would immediately cease to exist, as programmers would pull their code and migrate it to some other open-source portal such as Bitbucket.


Christensen I (C1) Model

C1 is a RESOURCES, PROCESSES and VALUES Model (RPV). RPV determines what a company can and cannot do. Definitions:
  • RESOURCES:
    • People
    • Money
    • Equipment
  • PROCESSES:
    • PROCEDURES of getting things done
  • VALUES:
    • PRINCIPLES that determine HOW DECISIONS are made.
RPV Vary in the ease with which they may be changed:
  • RESOURCES - Easy to Change
  • PROCESSES - Moderately Difficult to Change
  • VALUES - Extremely Difficult to Change
Machine Learning RESOURCES are abundant on the web due to the open-source innovations mentioned above. The monetary cost (MONEY) of developing ML solutions is quite low: there is excellent training available for free on YouTube and the web, and for very low cost through organizations like Udemy, which are giving the university system a serious run for its money. They do so by being first to market; by the time a university course is offered, AI epochs have come and gone, as have the opportunities for invention and innovation.

Equipment is interesting. Users can develop on their own machines in a platform-independent way: Windows, Linux and macOS solutions all look and run the same when environments such as Python and Jupyter Notebooks are employed for development. There is one caveat: machine learning models are extremely compute-intensive to train on large datasets. There are now straightforward PROCESSES for taking models that were developed on PCs and pushing them into the cloud. Training jobs can be outsourced to Google Cloud or Amazon Web Services, where users can rent custom hardware for a fee, including the latest Nvidia graphics processing units (GPUs), which are especially useful for training and testing ML codes. This leaves an opportunity for university, institutional and enterprise participation: subsidizing the cost of training large models that individuals may not be able to afford. Thus there is a potential syzygy between the individual virtuoso ML programmer and the larger organization.

On VALUES, organizations like OpenAI are attempting to institute a set of values and ethics to keep AI development from ending in catastrophe. One group recently declined to release a story-synthesizing AI for fear it could be used to create fake news.

Christensen II (C2) Model

C2 distinguishes between SUSTAINING and DISRUPTIVE INNOVATIONS. Definitions: 
  • SUSTAINING INNOVATION:
    • INCREMENTAL IMPROVEMENTS to PRODUCT or SERVICE
  • DISRUPTIVE INNOVATION:
    • ADDRESS a NEW NEED
C2 INNOVATIONS affect EXISTING and NEW CUSTOMER bases:
  • EXISTING CUSTOMERS:
    • who demand better PERFORMANCE and are willing to PAY for it.
  • NEW CUSTOMERS:
    •  yet to experience PERFORMANCE in new area.
As seen in the developmental timelines above, ML developed slowly and non-spectacularly as SUSTAINING INNOVATION until back-propagation of neuron weights (along with Generative Adversarial Networks) was automated and distributed in packages like TensorFlow, PyTorch, and Caffe 1 and 2. The result has been a DISRUPTIVE spike in software development, piggybacking on the wider availability of high-performance GPU hardware both for rent and for sale. These software and hardware spikes are driving each other, although the collapse of cryptocurrencies like Bitcoin in the last year has impacted the GPU industry significantly.

Christensen III (C3) Model: Value Chain Evolution

C3 distinguishes between FULLY INTEGRATED and SPECIALIZED companies.

Definitions: 
  • FULLY INTEGRATED COMPANY:
    • Performs all COMPONENTS of a SYSTEM production in-house
  • SPECIALIZED COMPANY:
    • Produces one COMPONENT of a SYSTEM

C3 trade-offs are in INTEGRATION vs SPECIALIZATION:
  • FULLY INTEGRATED COMPANY:
    • One stop shop e.g.
      • Boom Box
      • Integrate the best one can do
  • SPECIALIZED COMPANY:
    •  Chooses partners to create a TOTAL SOLUTION e.g.
      • Component Stereo
      • Outsource solutions others do better
C3 further specifies that there is a hierarchy of consumer priorities when it comes to the purchasing or adoption of new products. According to a slide from D. Berleant's university course, 'Information, Computing and the Future', those priorities are the ones addressed under 'Customer Choice Priority' below.


Fully Integrated vs. Specialized

In the case of Machine Learning, if one had to choose a one-stop shop it would be Google first, Amazon second, Microsoft third and Apple fourth, with Facebook as an also-ran. If one had to choose a specialized company it would be one of those that curate or contribute specific libraries or hardware, such as Anaconda for Python libraries (thus the name) and Nvidia for GPU hardware. The explainability problem in AI has also led to the emergence of small companies, often from university incubators, that address that single problem: how to get an ML code to explain, perhaps to a court, what it has done and why it made the choice that it made. It is all sounding a bit free-will and non-deterministic, isn't it? Since ML is being used to hire (and possibly fire) people, explainability is an important unsolved problem that companies like Kyndi are addressing. Another specialized area is hyperparameter optimization: parameters such as the learning rate, the choice of memory cell, and the topology of the neurons can all be adjusted to produce better performance. Small companies like SigOpt and university research groups are addressing these more specialized concerns, while the big guns drive overall progress in the field. A toy illustration of hyperparameter search appears below.
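The simplest version of hyperparameter search is a random search over the learning rate. In the sketch below, a made-up quadratic stand-in plays the role of validation loss; a real service like SigOpt replaces both the random sampling and the stand-in with much smarter Bayesian machinery:

    import random

    # Toy stand-in for "train a model and report validation loss at this
    # learning rate". A real search would train and evaluate a model here.
    def fake_validation_loss(lr):
        return (lr - 0.01) ** 2 + random.gauss(0, 1e-6)

    best_lr, best_loss = None, float("inf")
    for _ in range(50):
        lr = 10 ** random.uniform(-4, 0)   # sample log-uniformly in [1e-4, 1]
        loss = fake_validation_loss(lr)
        if loss < best_loss:
            best_lr, best_loss = lr, loss

    print(f"best learning rate ~ {best_lr:.4f}")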

Customer Choice Priority

Addressing these priorities in reverse order is easier:
  • Price: All the ML development tools are free,
    while training large ML models in the cloud costs money.
  • Customization: ML Software is extremely malleable,
    liquid and extensively customizable.
  • Usability: The advent of Jupyter Notebooks has made modular chunks of machine learning as easy to trade as baseball cards.
  • Reliability: Machine Learning models provide results that are fuzzy, often varying by several percent between training runs. This differs from the cold, hard, high-precision determinism of traditional numerical analysis (see the sketch after this list).
  • Functionality: With CNN's, RNN's, and Reinforcement Learning, functionality is high.
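That run-to-run fuzziness is easy to reproduce. The toy sketch below fits the same one-parameter model several times with different random initializations and update orders; the final fit hovers near the true value but is never identical, for the same underlying reason that large networks' accuracies wander between training runs:

    import random

    # Noisy data from a known rule, y ~ 3x.
    data = [(x, 3.0 * x + random.gauss(0, 0.5)) for x in range(1, 21)]

    for run in range(5):
        w = random.uniform(-1, 1)               # random initialization
        for _ in range(500):                    # plain stochastic gradient descent
            x, y = random.choice(data)
            w -= 0.001 * 2 * (w * x - y) * x    # gradient of (w*x - y)**2
        print(f"run {run}: w = {w:.3f}")        # near 3.0, never identical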


Footnote to Innovation Models 

Mutation Generates Knowledge via a Random Search

In class, we discovered through the application of these theories of innovation that nature uses mutation as a search algorithm for generating knowledge from a very primitive, first-principles level. Successful versions survive, and the knowledge of each successful architecture is preserved in its DNA. This is an amazing result with far-reaching ramifications; a minimal sketch of mutation-as-search closes this footnote. One important thing to know is whether systems are imbued with some base-level architecture, which is then tailored to the environment by mutation, or whether there is no base-level architecture at all, just things that work and therefore exist, and things that don't work and therefore don't exist. This would be the "there is no spoon" option. As we wind back the clock to before life existed, we get to planetary accretion and stellar evolution, and the same question applies in that context. As we back up further and further towards the Big Bang, we have to ask at what point the rules that determine how things interact were articulated. As my son points out in an existential gasp, "McDonald's logos came out of the Big Bang", which is fairly terrifying if you think about it. Orthogonally, we might wonder whether we are living in an oscillating universe, which generates a conundrum that appears in my answer to a Facebook question posed by Deepak Gupta.


In that Facebook answer I am basically arguing that there is a copy of our universe, identical to ours, in which time runs in the opposite direction. Maybe that's where my Nobel Prize will come from.
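Closing the loop on mutation as a search algorithm: the simplest possible version is the (1+1) evolution strategy, which is just copy, mutate, keep the fitter variant. Here is a minimal sketch with a toy fitness function (matching a target string stands in for "working in the environment"):

    import random

    TARGET = "innovation"
    ALPHABET = "abcdefghijklmnopqrstuvwxyz"

    def fitness(genome):
        # Toy fitness: count of positions matching the target.
        return sum(a == b for a, b in zip(genome, TARGET))

    genome = [random.choice(ALPHABET) for _ in TARGET]

    for _ in range(3000):
        mutant = genome[:]
        mutant[random.randrange(len(mutant))] = random.choice(ALPHABET)  # point mutation
        if fitness(mutant) >= fitness(genome):   # selection
            genome = mutant                      # the survivor carries the knowledge

    print("".join(genome))   # almost always "innovation" after enough generations

Random variation plus selection is the whole algorithm; the "knowledge" accumulates in the surviving genome, exactly as this footnote argues.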

3. Grad students:
Read 20 pages in the book you have obtained. Explain what you agree with, disagree with, learned from it, and how your views agree or disagree with the reviewers of the book that you are analyzing.

I have been informed this week that UALR does not consider me a graduate student, despite my having two master's degrees. Nonetheless I soldier on in the hope that this outrage will be corrected. Therefore I continue my detailed review of "The Human Race to the Future", a single curated document that is here. In the session for this homework I review Chapter Eleven, "The AI Singularity", and Chapter Twelve, "Deconstructing Nuclear Nonproliferation". These topics complement each other, as the lessons learned from one can be applied to the other.
