Read Latex

Sunday, January 20, 2019

Computing and the Future 8 - Thoughts on Quantum Computing


Richard Feynman once said, "...nobody understands quantum mechanics" and "If you think you understand quantum mechanics, you don't understand quantum mechanics." So let me be clear, I don't understand quantum mechanics. But I am taking some stabs at how to compute with it and have collected a few constructs along the way. We will start with this overview by Dr. Shohini Ghose, then we will do some backstory, code something up, and suggest some exciting things to look into.

Quantum Computing Explained in 10 Minutes

If you haven't seen this, stop whatever you are doing right now and watch it unless you are doing brain surgery or flying a 747. Click on the caption below:


- Via Ted Library

The Illuminating Facebook Question

As a matter of practice I posted the video link on my Facebook. After a time Seiichi Kirikami, a geometer and mechanical engineer from Ibaraki, Japan, asked an honest, simple, and incredibly stimulating question:



My edited response was as follows:

I think of it in terms of interval arithmetic, a chunk of the number line. Using the notation [x1, x2] for closed intervals that include their endpoints, we have:



[0,1] + [0,1] ~= [0,2]

I place the ~ character to capture the fact that if entangled addends could be unentangled, then we could invert the operation to find out what the addends were AFTER the operation was complete. This is impossible with conventional addition, where the addends are eventually discarded, producing closure in TIME, which is interesting. Recall that traditional closure means that if we add any two integers, we are guaranteed to get another integer we can represent in the space (barring finite-state-machine overflow concerns we need not worry about here). Of the four basic operations +, -, *, /, three are closed, but division by zero is not closed, since it has two possible values that could not differ by a greater amount. Those two values are +∞ and -∞, depending on whether the denominator of our division problem is approaching zero from the left or the right, switching infinitely fast (like an entangled particle's fate, faster than light?) as we cross the event horizon from positive to negative x or vice versa.
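As a minimal sketch (ordinary classical interval arithmetic, nothing quantum about it yet), the addition above can be coded as:

def interval_add(a, b):
    # Closed intervals represented as (low, high) tuples.
    return (a[0] + b[0], a[1] + b[1])

print(interval_add((0, 1), (0, 1)))   # (0, 2), i.e. [0,1] + [0,1] = [0,2]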


In her lecture, Dr. Ghose is doing a quantum Boolean operation, which does not have the citizenship of addition in its space. She is doing quantum AND (∩), OR (∪), or NOT (¬). This involves:

  • Two quantities in the binary AND(a,b) case
  • Two quantities in the binary OR (a,b) case, and
  • One quantity in the unary NOT(a) case.

In traditional computing we create addition using the notion of a carry.


Answering Seiichi's Question

Note to the reader: the word "right" is spelled "rite" below so that it has the same number of letters as "left". Apologies to the orthographically sensitive.

If you look at the definition of a qubit on Wikipedia you get a Bloch sphere, where the value of the qubit in its superposition state is representable by two parameters that we can think of as a latitude and a longitude.





Until the qubit is "read" or "measured" and its wave function collapses, the values of latitude and longitude can be anywhere on the Bloch sphere. The symbol |ψ> is in ket notation, where the ψ suggests the wave function. When you look at Ghose's video she uses a color swatch where the "color" can range between pure yellow, pure blue, or some mixture. In my Facebook answer I started by using an interval to show that the state of the qubit could take on a continuous range of values between zero and one, stand-ins for yellow and blue, but the {lat, lon} vector is what we should really be using. For now I think of this {lat, lon} vector as just a complex number A + Bi, or {A, B} if you prefer. The alert reader will notice there is an issue as to how we reconcile the interval arithmetic version [leftX, riteX] with the {A, B} vector version. It would look something like {[leftA, riteA], [leftB, riteB]}, but we know that this apparent four degrees of freedom collapses to two, as explained in the wiki, leaving us with a single complex number with a real and an imaginary component. These can also be represented in ket notation as |Z> = A + Bi for the single-qubit case. For multiple qubits we have:



- Great Quantum Mechanic Notation Video

where each ai, bi, is itself a complex number.
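For reference, here is the standard textbook form of these statements in ket notation (nothing here is specific to the video; the angles θ and φ play the roles of the latitude and longitude discussed above):

\[ |\psi\rangle \;=\; \alpha|0\rangle + \beta|1\rangle \;=\; \cos\tfrac{\theta}{2}\,|0\rangle + e^{i\phi}\sin\tfrac{\theta}{2}\,|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1 \]

and for two qubits the state expands to four complex amplitudes:

\[ |\psi\rangle \;=\; a_{00}|00\rangle + a_{01}|01\rangle + a_{10}|10\rangle + a_{11}|11\rangle . \]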

In the quantum world of wave functions the value of a qubit is a complex number, which allows for the mysteries of entanglement. This is just like the mystery that two complex numbers can be added together to get a real number with no imaginary component, or an imaginary number with no real component. The latter can be thought of as what happens when waves in a ripple tank interfere constructively or destructively.







Adding Two Integers Bitwise

Instead of going straight for the quantum jugular, let's review:

"How do we add two integers using bitwise operators?"


This takes us to a discussion on StackOverflow that answers the question for imperative coding styles. Two solutions are provided; the recursive version is the most elegant, so we code it up in our online tool that lets the reader run the code. The operator names are changed to reflect our intention, where the leading q stands for quantum. We hope we can morph this code into a more realistic complex case. Run this code by clicking on the caption below:
- Run this code
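For readers who would rather see it inline, here is a minimal sketch of that recursive bitwise adder. The q-prefixed helper names are illustrative placeholders only, classical operators renamed to hint at the quantum gates we hope to morph them into, not a real quantum API:

def qAND(a, b):
    return a & b          # classical AND standing in for a quantum AND

def qXOR(a, b):
    return a ^ b          # classical XOR (sum without carry)

def qSHIFT(a):
    return a << 1         # move the carry one bit position to the left

def add(a, b):
    """Add two non-negative integers using only bitwise operators."""
    if b == 0:
        return a                      # nothing left to carry
    partial_sum = qXOR(a, b)          # add each bit, ignoring carries
    carry = qSHIFT(qAND(a, b))        # carries go to the next bit position
    return add(partial_sum, carry)    # repeat until no carry remains

print(add(1, 1))    # 2
print(add(20, 22))  # 42

The recursion stops when no carry remains, which mirrors the ripple of carries in a hardware adder.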

Towards a More Complete Answer

A more complete answer can be obtained by taking Seiichi's question literally and asking:

"How do we add 1 + 1 on a quantum computer?"

Again StackOverflow has a well developed answer to this question.


Implementing an Adder on the IBM Quantum Computer


Your assignment is to embed [0,1]+[0,1] on the IBM quantum computer
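One way to attempt this is with a quantum half adder. Below is a minimal sketch assuming the open-source Qiskit SDK that IBM provides for its machines; the gate choices follow the standard half-adder construction (CNOTs for the sum bit, a Toffoli for the carry), and the code is an illustration rather than anything from the posts above:

# Quantum half adder sketch: q0 and q1 hold the addends,
# q2 receives the sum bit (XOR), q3 receives the carry bit (AND).
from qiskit import QuantumCircuit

qc = QuantumCircuit(4, 2)

# Encode the addends 1 + 1 by flipping q0 and q1 from |0> to |1>.
qc.x(0)
qc.x(1)

qc.ccx(0, 1, 3)   # carry = a AND b  (Toffoli gate)
qc.cx(0, 2)       # sum  = a XOR b   (two CNOTs onto q2)
qc.cx(1, 2)

qc.measure(2, 0)  # read the sum bit
qc.measure(3, 1)  # read the carry bit
print(qc.draw())

Run on hardware or a simulator, measuring should give sum = 0 and carry = 1, i.e. 1 + 1 = 10 in binary.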

Later develop the following:


  • think big, not small
  • the antenna issue and microwave circulators
  • Nicolas Gisin entangled photon experiment
  • lithium niobate downconversion
  • triplet fusion upconversion - summary here,  full article here.
  • the half adder
  • the full adder
  • von Neumann architectures
  • dsp architectures
  • quantum computing architectures
  • Gyros are weird, cognitive impediments to progress
The last item in the list led to my proposal for a Warm Quantum Computer.

The year is 1943 for quantum computing; what happens next?




Saturday, January 19, 2019

Computing and the Future 3 - Algorithms and Prediction


Underfitting and Overfitting the Future


An example of a Python Jupyter Notebook running in the MyBinder environment is "Fit - Polynomial Under and Overfitting in Machine Learning", written by the author. Because the source code is present in the notebook, it can be adapted to search for polynomial laws that predict trends such as those modeled by Berleant et al. in "Moore's Law, Wright's Law and the Countdown to Exponential Space". MyBinder enables web publication of Python Jupyter Notebooks. There is a little overhead (yes, waiting) to run these remotely, but it is not punitive compared to the utility obtained. The user can develop in a high-performance personal environment for speed, privacy and convenience, then deploy the result in a public setting for review and the general edification of the many.

MyBinder Deployed Notebooks


Taylor Series Expansion for Future Prediction

The Taylor Series approximation of a function is a mathematical way of forecasting the future behavior of a spatial or temporal curve using a polynomial and derivatives of the function. With each additional term, the approximation improves, and the expansion is best in a local neighborhood of the function. Here is an animated example I did for John Conway. In this example we are trying to approximate a cyclic future, represented by the red sine wave with polynomials of ever higher degree, starting with a green line, a blue cubic, a purple quintic and so forth. The more terms we take, the better our forecast is, but our horizon of foresight remains limited. Notice that if we had chosen to represent our future using cyclics like sines and cosines, our predictions could be perfect, provided we sampled at the Nyquist frequency or better.
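A small sketch of the same idea in code: approximating sin(x) with Taylor polynomials of increasing degree. The sample points are arbitrary, chosen only to show the error growing as we "forecast" farther from the expansion point at x = 0:

import math

def taylor_sin(x, n_terms):
    """Partial Taylor series of sin(x) about 0 using n_terms odd-power terms."""
    return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1)
               for k in range(n_terms))

for x in (0.5, 2.0, 5.0):
    print(f"x = {x}: sin = {math.sin(x):+.4f}, "
          f"degree 1: {taylor_sin(x, 1):+.4f}, "
          f"degree 5: {taylor_sin(x, 3):+.4f}, "
          f"degree 9: {taylor_sin(x, 5):+.4f}")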




The Taylor series introduces the notion of a "basis function". The world looks like the sum of the primitive functions that one uses to model it. In the above example we are trying to model or approximate a periodic function, the sine wave, with a polynomial, which is not periodic. This phenomenon appears in machine learning as well. If you do not have a given feature (aka basis function) in the training set, that feature will not be properly recognized in the test set.

Identifying Fallacies in Machine Learning


Consider training an AI to recognize different breeds of dogs. Then show that same AI a cat. It will find the dog that is closest in "dog space" to the given cat, but it still won't be a cat. If the basis function one chooses does not mimic the property you want to detect or approximate, it is unlikely to be successful. This behavior can be seen in the TensorFlow Neural Network Playground mentioned previously. This is an important principle that helps us to cut through the deceitful glamour, false mystique and unrealistic expectations of machine learning. It is so fundamental we could place it in the list, "Fallacies of Machine Learning", filed under the header, "If all you have is a hammer, everything looks like a nail". See "Thirty-Two Common Fallacies Explained", written by the author.

In conclusion, we discover that "the basis function should have behaviors/ingredients such that their combination can approximate the complexity of the behavior of the composite system", be they cats or sine waves. Basis functions occur in the finite-element analysis of structures, which asks "Does the bridge collapse?". They appear in computational geometry as rational B-Splines, rational because regular B-Splines cannot represent a circle. Periodic functions appear in Fourier analysis, such as the wildly successful Discrete Fourier Transform (DFT), audio graphic equalizers, and Wavelet Transforms, a class of discrete, discontinuous and periodic basis functions. The richness of the basis functions we choose strictly limits our accuracy in prediction, temporally (the future) or spatially (the space we operate in).

Genetic Algorithms

Predicting the future can be made to look like an optimization problem where we search a space of parallel possibilities for the most likely candidate possibility.
Sometimes the space that we want to search for an optimal solution is too large to enumerate and evaluate every candidate collection of parameters. In this circumstance we can use grid sampling methods, either regularly or adaptively spaced, refining our estimates by sampling those regions which vary the most.


We can use Monte Carlo random methods and we can use Genetic Algorithms to "breed" ever better possible solutions. For sufficiently complex problems, none of these methods are guaranteed to produce the optimal solution. When sampling with grids, we are not guaranteed that our grid points will be on the optimal set although we do have guidelines, like the Nyquist sampling theorem that says if we sample at twice the rate of the highest frequency of a periodic waveform, then we can reproduce that waveform with arbitrary accuracy. If we sample at a coarser resolution than the highest frequency then we get unwanted "aliases" of our original signal. 

An example of spatial aliasing is the "jaggies" of a computer display:


An example of temporal (time) aliasing are propeller spin reversals when filmed with a motion picture camera:


But sometimes the future we are trying to predict is not periodic and "history is not repeating itself". But forget all that. I mentioned all this to tell you about genetic algorithms. These are explained here by Marek Obitko who developed one of the clearest platforms for demonstrating them. Unfortunately Java Applets no longer work in popular browsers due to the discontinuance of NPAPI support. A world of producers and consumers demonstration is here.
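Since the applets no longer run, here is a minimal sketch of a genetic algorithm in the spirit of Obitko's demos: breed bit-strings toward a target pattern. The target, population size and mutation rate are arbitrary illustrative choices, not tuned values:

import random

TARGET = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]          # the "optimum" we are searching for

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    cut = random.randrange(1, len(a))             # single-point crossover
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break                                     # perfect match found
    parents = population[:10]                     # keep the fitter half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(10)]
    population = parents + children

population.sort(key=fitness, reverse=True)
print(f"best genome {population[0]} after {generation} generations")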




Approximation and Refinement of Prediction


Sometimes we want to make an initial guess, and then refine this guess. An intuitive model for this is Chaikin's algorithm:


In this case we have some expected approximation of the future, represented by a control polygon with few vertices. As we refine the polygon by recursively chopping off corners, we end up with a smooth curve or surface.
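A small sketch of Chaikin corner-cutting, assuming a simple open control polygon of (x, y) tuples; each pass replaces every edge with new points at the 1/4 and 3/4 positions, so the polygon converges toward a smooth curve:

def chaikin(points, iterations=3):
    for _ in range(iterations):
        refined = []
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            refined.append((0.75 * x0 + 0.25 * x1, 0.75 * y0 + 0.25 * y1))  # 1/4 point
            refined.append((0.25 * x0 + 0.75 * x1, 0.25 * y0 + 0.75 * y1))  # 3/4 point
        points = refined
    return points

control_polygon = [(0, 0), (1, 2), (3, 3), (4, 0)]   # a coarse initial guess
print(chaikin(control_polygon, iterations=2))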

Iterated Systems

These are my favorite, so much so that I've written a book on them. They are truly "future-based" equations that only assume that the future evolves from the past according to some set of steps. Because of their similarity to fractals I like the chaotic nature of the representation. If we want to model a chaotic future we need a chaotic underlying basis function.



When the coefficients of the iterated system have a component of "noise" or randomness we can simulate an envelope of possible futures. Take for example the prediction of the future of the landing spot of an arrow. Random factors such as the wind, temperature, humidity, precipitation and gravitational constant (which varies with elevation) can all affect the final outcome. My final project may draw from this area.
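Here is a minimal sketch of that envelope-of-futures idea using a noisy iterated map; the logistic map and the noise level are stand-ins of my own choosing for the arrow's wind, temperature and so forth:

import random

def one_future(x0=0.2, r=3.7, steps=30, noise=0.01):
    """Iterate a logistic map whose coefficient jitters a little each step."""
    x = x0
    for _ in range(steps):
        r_noisy = r + random.gauss(0, noise)   # wind, temperature, humidity...
        x = r_noisy * x * (1 - x)
    return x

futures = [one_future() for _ in range(1000)]
print(f"envelope of outcomes: min={min(futures):.3f}, max={max(futures):.3f}")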

Virtualizations

There are some virtualizations that are so effective they have become working principles in science and engineering. An example of one of these is the principle of virtual work, which is used to derive strain energy relationships in structural mechanics to enable the prediction of whether bridges will collapse at some point in the future. The amazing thing about the principle of virtual work is that a load is put on the bridge and then the displacements of various points on the bridge are imagined, even though the bridge is not really moving at all. The degree of this virtual displacement is used to calculate the strain in each element of the bridge or structure. If in any member element the strain exceeds the strength of the material, that member fails, and the collapse can be predicted and avoided by strengthening just the weak member.

Another timely virtualization is the complex numbers that occur in wave functions, such as in acoustics and quantum computing. (Imagine a sound machine that simulates quantum superposition!) When two complex waves meet, they each contain imaginary components that do not physically exist. Yet if they combine additively or multiplicatively they can produce real outcomes. This is spooky and interesting.

Other virtualizations include:


All of which can be used in computer simulations of complex systems like the stock market, terrorism and cancer.

Robert H. Cannon Jr. of Caltech, in his 1967 book "Dynamics of Physical Systems", discusses the convolution integral in control theory, which incorporates the past state of the system to continuously impact the current and future states. This book, written at the height of the Apollo era, was amazingly ahead of its time in codifying control system analysis techniques using Laplace transforms, a complex number transform similar to the Fourier transform. Kalman filtering can be applied to complex systems where the state of the system is changing rapidly.

The Stop Button Problem

The Stop Button Problem in Artificial General Intelligence (AGI) is a fascinating study in what happens when what the Robot wants is different from what the Person wants. The video, The Stop Button Problem,  by Rob Miles describes the problem in detail.

The I, Robot problems mentioned in the ICF course of April 2013, based on Isaac Asimov's "Three Laws of Robotics", discuss this as well.

Miles proposes a solution to this problem using Cooperative Inverse Reinforcement Learning.



The take home message is, "Make sure that humans and robots want the same thing or there will be trouble."

Sentiment Analysis

Clive Humby, a UK mathematician, has said that "Data is the new oil". Andrew Ng, a leading ML researcher, makes the statement that "AI is like electricity", compounding this information-as-{utility, power, energy} metaphor. I have used the phrase "Hot and Cold Running Knowledge" to describe the situation we currently find ourselves in.


- From DreamGrow


Social networks like Twitter, Facebook, and Instagram are fertile fields for harvesting user sentiments. User sentiment affects purchasing decisions, voting decisions, and market prices for commodities such as stocks, bonds and futures.

Machine Learning is being increasingly used to scrape these social networks to determine sentiments in a kind of superpredictive Delphi method.
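As a toy sketch of the simplest ancestor of such systems, here is a lexicon-based sentiment scorer; the word lists and example posts are invented for illustration, since real systems learn these weights from data:

POSITIVE = {"love", "great", "up", "buy", "win"}
NEGATIVE = {"hate", "terrible", "down", "sell", "crash"}

def sentiment(post):
    # Count positive words minus negative words; sign gives the overall mood.
    words = post.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

posts = ["I love this stock, buy buy buy", "terrible earnings, market going down"]
for p in posts:
    print(f"{sentiment(p):+d}  {p}")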


Handy's Law in Geotagging

Mentioned in the course notes for ICF January 2014, "Handy's Law" states, "pixels/dollar doubles yearly". Consider Nokia's new five camera design.



The question in my mind: is this a tip of the hat to superfluous excess like the tail-fin race of the fifties, a tech/fashion bubble of yesteryear, or does it represent a true increase in utility?

Computational Medicine

This is a separate essay.



Friday, January 11, 2019

Computing and the Future 2 - Computational Medicine


Introduction


I am developing this line of reasoning for a course I plan to take: Information Computing and the Future.

Computational Medicine is an area that is socially, politically and technically fascinating.

It is socially fascinating because, with the advent of successful machine learning, it holds the promise to democratize access to medical care.

It is politically fascinating because there are entrenched interests making large amounts of money from the status quo. These interests include health insurance companies, hospitals and care providers. The term "care providers" is a catch-all for doctors, generalists and specialists, nurses (RNs and LPNs), and support staff such as radiation technicians, respiratory therapists, physical therapists, lab technicians and so forth.

It is technically fascinating because everyone is in need of competent health care and to the extent that some portions of diagnosis and treatment can be automated, more people can receive timely and effective treatment.

I will focus on the computational aspects since they hold much promise for progress and are more tractable than the social and political areas.

Computational Medicine in the Small

The turn of the millennium produced the first nearly complete sequence of the human genome, which is computationally a base-four digital Turing tape whose length is two instances of 3.234 giga base pairs, one from each parent. These codes are replicated 37 trillion times in the 250+ cell types of the body. There has been substantial recent progress in gene sequencing, but a gap remains between the code of the genes (genotype) and the invisible circuits that regulate gene expression (phenotype).

Understanding the underlying genetic components of disease will continue to be a great step forward in guiding accurate diagnosis and treatment.

Computational Medicine in the Large

Machine learning has been applied with great effect to cancer detection at the macroscopic level of breast cancer radiology inspection and skin cancer (melanoma) screening. For a season a schism emerged between those who are domain experts in the biological sciences and those who are so trained in the computational sciences. As Machine Learning continues to outpace expensive cancer specialists, there may be a "circling of the wagons" by those who have held an economic monopoly in this diagnostic area. They can become conflicted in their duty to heal the patient and amass large profits for themselves and their institutions.

Computational Medicine in the Huge

We can zoom our computational lens out to include populations of people, taking an "epidemiological" point of view to the national or worldwide level.

On January 2, 2019, a list of the drugs approved by the FDA in 2018 was released. Some of these drugs are "orphan drugs", that is drugs that treat conditions that are relatively rare in the world population. There is less economic incentive to manufacture and research these drugs than those for more common conditions such as cancer, HIV, cystic fibrosis, malaria and river-blindness. However the emergent theme in most of these new drugs is their astronomical pricing, making them unavailable to those, who in many cases, most need them. Here are just 3 out of 59 entries from the list above, two of which cost more than $250,000 wholesale per year:
One of the drugs in this list - Galafold, for fatty buildup disease - costs $315,000 per year, yet could be synthesized by someone with high school lab skill and a chemistry set!

This pharmacological injustice can lead to the social bifurcation of "haves and have-nots" - preconditions that foment unrest, conflict and sometimes all-out war.

But here I want to focus on a pattern of computational interaction that has a more positive end, and that could ultimately democratize access to diagnosis and treatment - all facilitated by information processing in the future.

Daisy Wheel Diagnostics

Preamble

What follows below is more autobiographical than I want or like. I am attempting to reconstruct a line of thinking, a chain of reasoning that led to my current perspective, and enlightens what will come next. Apologies for the first-person perspective.

Introduction

When a family member of mine developed cancer I felt that it was important to understand this complex disease from a comprehensive point of view. Typically we are conditioned to look at "single-factor" causes of diseases that are in fact multifactorial in nature. The first thing I did was to start trying to draw cause-and-effect graphs between the genes that are implicated in cancer, since visualization of that which cannot be seen has been a source of breakthroughs in clinical medicine, with radiology, the microscope and clinical lab spectroscopy.

Some Connections to the Her2Neu Receptor Gene
After that I took genetics, biochemistry and molecular biology and wrote a summary treatise on the various factors that enable cancer to develop and treatment approaches.

Four Categories of Carcinogenesis

In broad strokes, here is a pattern of inquiry that I have developed over time, out of habit, first from seven years working in clinical laboratories, and later with five family members who have had cancer, or died of it, and two who have had mental illness. I have drawn blood and taken EKGs on thousands of people. I have run hematology, clinical chemistry, and bacteriology tests on these same people and produced reports that were provided to attending physicians and that determined treatment. I have attended appointments with surgical oncologists, radiation oncologists, and hematological oncologists (the mnemonic in cancer is "slash-burn-poison" for surgery, radiotherapy and chemotherapy respectively). That is the cancer front.

Mental illness is more of a black box with respect to the clinical laboratory because we do not as of yet have a way of sampling the concentration of intrasynaptic neurotransmitter levels along the neural tracts in the living brain. Nonetheless behavior and thought-patterns themselves can be diagnostic, which creates huge opportunities for machine learning diagnostics.

To challenge the black-box definition: Let me conclude this introduction with an observation, useful in the definition and treatment of mental illness:

"If a patient is successfully treated with a certain drug known to be specific for their symptoms, then in all likelihood, they have the disease associated with that set of symptoms." This is not circular, it is rather "substance-oriented".The constellation of drugs administered to a successfully treated patient, constitute a definition of their specific condition. The inference being made is that the successful suite of substances that restores neurotransmitter concentrations and brain chemistry to normal levels serves as a label for the condition from which the person suffers.

In computational terms, these treatments constitute an alternate name for their condition, to wit, "The condition successfully treated by XYZ in amounts xyz." So the patient has the condition XYZxyz. This makes sense if we consider the underlying biochemical nature of mental illness at the small molecule level as being that of chords of specific receptors being up and down regulated in certain patterns at certain times. There are larger aggregations of neural tract organization that are obviously also important, but my sense is that these are more significant in aphasia and specific disabilities that are separately discerned from conditions such as bipolar disorder, obsessive-compulsive disorder, schizophrenia and so forth. End of introduction.

Patterns of Human vs. Computational Inquiry

Over time I have developed some habits of inquiry, thanks to my mother, a medical technologist, and my dad, a software engineer, who taught me how to assess whether a given care provider "was good". Answering the question "but are they good?" was an unspoken goal that attempted to assess their competence, depth of education and instinctive ability to accurately diagnose and treat various illnesses. Whenever I am engaged with a medical care provider, I am trying to make an accurate estimation of their abilities, since in the end, it can spell life or death. That is a purely human activity.

Over the years, part of this process has become more computational in nature as I attempted to ask the most actionable set of questions during an appointment. These questions create an atmosphere of seriousness that most competent care providers enjoy. The advent of Google has amplified the ability to prepare to an extreme degree and can positively impact the continuing education of the care providers as well. Patient-provider education is a two-way, continuing and iterative process.

Design Patterns of Computationally-Assisted Inquiry:


  • Drug-driven
  • Disease-driven
  • Cell-driven
  • Test-driven
  • Physiology-driven
Let's define each case:

Drug-Driven (or pharmacology driven):

In this scheme, the name of a candidate drug typically prescribed for treating a condition is used to find:
  • Chemical structure, 
  • Indications for use
  • Contraindications
  • Mode of action
  • Side effects
  • Toxicity / L:D
Drug-driven strategies use variants in baseline structures to optimize treatments and minimize side effects. This optimization is a man-machine process.


The Physician's Desk Reference canonizes this information for commonly prescribed drugs. For example, digoxin, derived from the Foxglove plant is indicated for congestive heart failure and improves pumping efficiency, but its toxic level is very close to its therapeutic level making it somewhat risky to use.
There are intellectual property issues that emerge, but machine learning can address these by enabling knowledge utilization without distribution of the original content.

Imagine you are in a pharmacy where all known drugs are present. Given you, your genes, and your current state, what is the optimal combination of these substances and the dosing schedule that most benefit both short and long term health and quality of life?

Disease Driven (or illness driven):

In this approach:
  • A complaint is registered
  • A history is taken
  • Vital signs are measured
  • Patient's symptoms are recorded
  • Diagnostic tests are ordered and results recorded
These are combined in a complex composite record, represented as the patient's chart, which includes text, handwriting, and hand-annotated diagrams.
These records accumulate as a stack with each successive visit. eCharting has been taking hold for the past few years, but a remarkable amount of patient information lies fallow in discarded, obsolete patient records in cold storage. It is essential that organizations like Google digitize such patient charts before they are lost forever. This would involve scanning treatments and outcomes with the same scale and enthusiasm that the world's libraries have enjoyed in recovering old books. Thank you Google! This would create large training sets for machine learning and further codification of medical knowledge in a machine-learning context. HIPAA laws and frivolous copyright lawsuits have obstructed this Nobel-prize-worthy activity, perhaps due to concerns of litigation-sensitive record custodians.

Cell-Driven (or genetics driven):

Cell-Driven strategies include:
  • Cell population counting in hematology
  • Differential cell identification (diff)
  • Flow cytometry in cancer diagnosis
  • Genome sequencing

Cell population counting includes red cell, white cell, platelets and differential cell identification as part of the complete blood count (CBC and Diff). Differential cell identification can be done manually or by flow cytometry where white cells are sorted into types including lymphocytes, monocytes, neutrophils, eosinophils and basophils.

Modern and future approaches include characterizing the
  • Receptors (cell logic gates) expressed and their mutations
  • Cytokines (chemical signals) that are driving their differentiation
  • Metabolic products
Consider the gene-product-as-drug-treatment point of view. If we knew the exact amount of over- or under-expression of genes and their cognate gene products, these could be compensated for by a custom cocktail of appropriate pharmacological agents. Small molecule agents can be ingested via the digestive tract, while protein products must often be administered intravenously since protein structures are degraded on contact with the digestive tract's enzymes and low-pH acidic conditions.

Test-Driven (clinical lab and radiology driven):

Test-Driven strategies include:
  • Clinical Chemistry for Kidney, Liver, Heart and Organ System Function
  • Bacteriology for Infectious Disease
  • Histology for Determination of Abnormal Cell Types
  • Ultrasound in 2, 3 and 4 dimensions
  • X-Rays
  • PET scans (Positron-Emission Tomography)
  • MRI/fMRI ((functional) Magnetic Resonance Imaging)
  • CAT scans (Computed Axial Tomography)

Physiology Driven (or illness driven):

Higher levels of organization than just the cell can be players in a disease process. Organs operate in synchrony like the members of an orchestra, some autonomously and some with directed signaling. The sympathetic vs. parasympathetic nervous systems are an example of this. Endocrine disorders can manifest at relatively large distances from the source of the problem. Diabetes is a complex disease with multiple factors operating at multiple scales, including genetics and environment.

With these design patterns in place, to borrow a phrase from software engineering, we should think about how they might be combined into a diagnostic and treatment system that could reduce the cost of healthcare and help the most people possible.

Now recall the Daisy Wheel printers of the past. They achieved a speedup factor of two simply by not returning to the beginning of the line before printing the next one. Since the mechanical movement of the print head was the most expensive step, the even lines were printed left to right, while the odd lines were printed right to left, which required only the minimal computation of reversing the order of the characters to be printed on the odd lines. Does this metaphor fit with the application of knowledge derived from each of the medical processes described above?
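A tiny sketch of the daisy-wheel trick in code, purely to make the metaphor concrete (the line content is invented):

lines = ["FIRST LINE", "SECOND LINE", "THIRD LINE", "FOURTH LINE"]

for i, line in enumerate(lines):
    if i % 2 == 0:
        print(line)            # even lines: printed left to right
    else:
        print(line[::-1])      # odd lines: characters reversed for the return pass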










Friday, January 04, 2019

Computing and the Future HW 1 - Too Many Languages




An interesting course at the University of Arkansas, led by Dr. Jared Daniel Berleant, is Information, Computing and the Future. Homeworks are submitted via blog, and lectures dating back to 2012 are archived, which helps in getting oriented to the material.

There is an assertion in these archived notes that programmer productivity is increasing for various reasons. The exact phrase that captured my interest is "Increased Programmer Productivity", or IPP for short. I want to assert the existence of some mitigating factors to make the case that programmer productivity is decreased from what it might be due to the paradox of "too many choices", which manifests as:


  • too many languages
  • too many Interactive Development Environments
  • too many target architectures

The Tiobe language index lists 18 languages whose penetration exceeds 1%. Assuming there is an Interactive Development Environment for each language, we have 18 IDEs. This is a conservative estimate. Enumerating target architectures we again can conservatively estimate 18 from this StackOverflow list. In the worst case this leads the programmer to a minimum of 18³ "choices", or 5000+ things to be "good at". Obviously this is all somewhat contrived, since programmers specialize and there isn't a separate IDE for each language, but how do programmers reassure themselves they have specialized in the best language+IDE skillset? Even at 18², there are over 300 combinations, and learning a language and IDE takes some time, say a year, to master. If you don't have 300 years lying around, check out some favorable candidates below.

I am interested in the overhead that occurs when we create codes that are illustrative, informative, provable and self-explanatory. The Explainability Problem in AI is an example of this. Addressing this has become the law in the EU. Brexit anyone? A hot topic in machine learning research is hyperparameter optimization. Briefly, how does one choose the activation function (the neuron response), the learning rate, the optimal partition between training set and test set, etc. to create the fastest learning and best performing AI? Speaking of this:
To see hyperparameters and neural networks in action, visit the TensorFlow neural network playground. Academic excursion question: Bayesian Learning is known to be effective, but combinatorially expensive. Can it be shown that in the limit of sufficiently exploring the parameter space, that hyperparameter optimization is effectively just as intractable, because of the number of configurations that must be explored? 





Another thing I am interested in, and the main focus of this entry, is how we bring people into our train of thought, into our line of reasoning, as seamlessly as possible. We document our thoughts, communicate them, run simulations, and perform mathematical analyses to make statements about the future and the limitations that we have in predicting it¹. Some tools (and their requisite URLs) that assist in this process are:


  • Blogger for creating running Blogs in the Cloud
  • Scrimba for Teaching/Demonstrating Algorithms in Vue.js
  • JSFiddle for Teaching/Demonstrating Algorithms in JavaScript
  • Jupyter Notebooks for running Machine Learning in Python
  • MyBinder for hosting deployed Jupyter Notebooks

The links above can simply be run; for brevity, the discussion below enumerates some of the issues they raise or address.

Blogger

Blogger is an adequate blogging platform, enabling the inclusion of links, images, and videos such as the opening one above. The link provided above compares blogger.com to several competing platforms. It's not particularly futuristic per se, but it is better than paper or email. I would include an illustration, but this very page serves that purpose very well. It has been somewhat "future-proof" for the past few years anyway. It does the job, so no further commentary is necessary.

Scrimba

This system overcomes weaknesses in passive video tutorials. The architects of Scrimba wanted a system that would engage the student, let them make the rite of passage from student to contributor rapidly, and reduce the overhead of learning by enabling instant interaction with the code for the example at hand. The strong suit of Scrimba is currently its tutorial on Vue.js, which enables dynamic programming of graphical user interfaces via JavaScript in HTML pages. The Scrimba Vue "Hello World" example narrated by Per Harald Borgen is the perfect example of making the sale before the customer even realizes they've been sold. Per's intro leads to a more detailed exploration of Vue that actually does useful work, facilitated by Zaydek Michels-Gualtieri. Between these two tutorials one can learn the basics of Vue.js in a little over an hour. This makes the case FOR increased programmer productivity (IPP). However, the advent of yet another language and IDE adds yet another choice to the implementation mix, a mitigating factor AGAINST IPP.




Note 1: If programming were a crime, we could speak in terms of aggravating and mitigating (extenuating) circumstances that would enhance or reduce the programmer's guilt and corresponding sentence.

Note 2: According to my offspring, a powerful development trifecta is created when Vue.js development is combined with Google Firebase user authentication and Netlify, a platform for deploying applications.


JSFiddle

Byte-sized proofs of concept for functions, hacks, tips and tricks. JSFiddle simultaneously shows the HTML, CSS, and JavaScript in three separate INTERACTIVE windows, along with a main window where the HTML markup, CSS stylesheets and JavaScript code are executed in the browser to show the outcome. JSFiddle lives on the web, in the public cloud, so that anyone can contribute. Scrimba improves upon this concept by enabling the user to grab control of the session and the code at any time during the presentation, pausing the narrator, saving past states, and recording audio, so that the same user can eventually narrate their own content.




Jupyter Notebooks

"Writing code that tells a story." I include this because Python 3 programming in Jupyter has become the lingua franca of information exchange in the current machine learning (ML) revolution. The Tiobe index, cited above, has declared Python to be the language of the year for 2018! One can obtain both Jupyter, Python 3, and a host of useful libraries like numpy, scipy, and pandas from the Anaconda consortium, a useful one-stop shop. It is worth noting that these codes are evolving very rapidly, I have to check my pandas version as we speak. An important feature of Jupyter Notebooks is platform independence, they run on Windows, MacOS Unix, Ubuntu Linux, etc. Further this platform is not "owned" by a company, like the disastrous ownership of Java by Oracle, or that of C# by Microsoft.
The video link claims that Java is supported, but I find such support tenuous at best. Kotlin is supported at an alpha level, and since the terser Kotlin compiles into Java, indirect support is implied.

It is worth noting that Jupyter can run 100+ different kernels, or language environments. So the general idea of a working lab notebook that runs on a local server has really taken off and thus is the wave of the future. I like the relative permissiveness of the local sandbox, compared to Java development, and the fact that results can be reproduced by any other investigator with a Jupyter Notebook setup. I also like the "bite-sized" morsel of a Jupyter notebook that can focus on a single problem, in the context of a greater objective.







1 Who doesn't love a good footnote? I hope to amplify notions of the thermodynamics of trust, showstoppers and such in future entries to this chain.

Sunday, April 22, 2018

More AI, More ML: An Open Letter to Ancestry.com and 23andMe.com

That's it. You can stop reading now. Just do the title. How hard can it be?

Ancestry.com principally, and 23andMe.com to a lesser extent, let you use their genealogical services to assemble a family tree. I will focus on Ancestry here, but similar reasoning applies to 23andMe.com. There are two components to the family-tree-building process, the PAPER of existing records and the BIOLOGY of DNA samples, which both services analyze. However, there is a glaring problem when it comes to certifying the authenticity of family trees derived from historical documents, that is, PAPER. Do you trust the source? Can you read the document? Are the spelling changes plausible and if so how much? By using both DNA and PAPER one can cross-check one against the other to confirm authentic lineages and refute specious ones. But there must be quality control in both the PAPER and the DNA. Laboratory techniques for DNA handling use statistical quality control methods that are reliable; however, there is no equivalent quality control methodology for PAPER, which in large part has been converted to MICROFILM and digitized with varying levels of quality control in image processing. There are chain-of-custody issues when one submits a DNA sample to both services, and one should really submit multiple samples to be sure that the correct sample has been tested and labeled. There are also handling issues as samples make their way through the mail, postal and delivery systems. More or less, the latter issues are being addressed.

Ancestry.com currently requires you to chase hints in time and space to determine if you are related to a given candidate ancestor listed in a public record or another family tree. For large trees this can be extremely labor intensive, without guarantee that one has constructed a forensically certifiable result.


One error source is this: Ancestry lets you use others' family trees that are themselves mashups of information of dubious origin, and there is no rhyme or reason to confirming whether information in these other trees is accurate. In other words there is no quality control, no assurance that one is dealing in fact.

The addition of DNA helps one connect with living relatives and add ground truth to previously assembled trees. There are forensic methodologies that increase certainty, such as this: independent sources of information confirming the same fact. The more redundancy among independent records, the higher the certainty that the conclusions, the facts, are authentic.

The problem is, after ones' family tree gets to an interesting level of complexity, the number of hints grows exponentially and many of the 'hints' lead to completely specious assemblies of data.

The fix to this is to associate with each tree, and with each fact in each tree, a certainty that the fact in the tree is indeed true. For a given ancestral line, these certainties can be multiplied together to provide a composite value that indicates the reliability of the information. As a detail, certainty is a number between 0 and 1 inclusive. A 1 means certainty is complete (which never exists in the real world of statistics). A 0 means there is no certainty whatsoever. A certainty of 0.9 means that the fact has a 90% chance of being true. If we chain two facts together, each with certainty 0.9, we have a 0.81 (81%) certainty that both facts are true.
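As a small sketch of this chained-certainty idea, the composite certainty of an ancestral line is just the product of the per-fact certainties; the example values here are invented:

from functools import reduce

fact_certainties = [0.9, 0.9, 0.95, 0.8]   # one certainty per link in the lineage

# Multiply the certainties along the chain to get the composite reliability.
composite = reduce(lambda a, b: a * b, fact_certainties)
print(f"composite certainty: {composite:.3f}")   # ~0.616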

There are a host of microfilmed documents from all over the earth that have been read, digitized and collated by human beings, and many of these have been collected by Ancestry.com in what constitutes a controlling monopoly over historical ancestral information. The source of this control, this power, has its roots in Mormonism. This can be a good thing in that there is a single long-term historical and motivating organization or presence. It could be a bad thing if religious exclusion occurs.

The point of my open letter is this:

Recent advances in machine learning would enable PAPER documents to be parsed by machines, with a level of certainty associated with each fact gleaned from them. Previous entries in this blog summarize these advances in detail.

The Mormon Church and Ancestry.com have close affiliations. In Salt Lake City, Utah, both have excelled in using advanced computation to solve important problems.

The problem is that there is a financial conflict of interest at play. Many families have invested generations of time and thousands of hours of work in building family trees using manual and computational methods. They may not take kindly to having their work, especially closely-held beliefs or assumptions, questioned when those beliefs provide them with self-esteem or status in the community.

For people who have spent 30 years justifying that they are related to a Hindenburg or a Henry the 2nd like myself, this will be good news and bad news. It will be good news in that it will allow a more comprehensive family tree to be assembled, more RAPIDLY with less human error. It will be GOOD NEWS that it will allow a precise certainty to be associated with each fact in the tree. It will be bad news for those who have a need to be related to someone famous or historic and are not and have significant social capital in those claims.

I have a large tree of both famous and historic ancestors, including kings and martyrs. But I would gladly trade it off for a complete and accurate picture of who I am actually related to.

Mainly, I don't have time to chase the 31,000 hints that have popped up in my Ancestry.com Family Tree, especially when I know that machines can do it better. To that end, it is time to make more exhaustive and complete use of handwriting and document analysis using the burgeoning progress taking place in Artificial Intelligence and Machine Learning. The opportunity for true and factual historical insight could be spectacular.