
Saturday, January 19, 2019

Computing and the Future 3 - Algorithms and Prediction


Underfitting and Overfitting the Future


An example of a Python Jupyter Notebook running in the MyBinder environment is "Fit - Polynomial Under and Overfitting in Machine Learning", written by the author. Because the source code is present in the notebook, it can be adapted to search for polynomial laws that predict trends such as those modeled by Berleant et al. in "Moore's Law, Wright's Law and the Countdown to Exponential Space". MyBinder enables web publication of Python Jupyter Notebooks. There is a little overhead (yes, waiting) to run these remotely, but it is not punitive compared to the utility obtained. The user can develop in a high-performance personal environment for speed, privacy and convenience, then deploy the result in a public setting for review and the general edification of the many.
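The notebook mentioned above is the authoritative version; as a stand-alone illustration of the idea, here is a minimal sketch (using numpy and matplotlib, with made-up data) of how fitting polynomials of increasing degree under- and overfits a noisy trend:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical noisy trend: an exponential-ish decline plus noise.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 40)
y = np.exp(-0.3 * x) + 0.05 * rng.standard_normal(x.size)

plt.scatter(x, y, label="observed data")
for degree in (1, 4, 15):                  # underfit, reasonable, overfit
    coeffs = np.polyfit(x, y, degree)
    plt.plot(x, np.polyval(coeffs, x), label=f"degree {degree}")
plt.legend()
plt.show()
```

The degree-1 line underfits, degree 15 chases the noise, and extrapolating any of them far beyond the data is where prediction of the future gets risky.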

MyBinder Deployed Notebooks


Taylor Series Expansion for Future Prediction

The Taylor series approximation of a function is a mathematical way of forecasting the future behavior of a spatial or temporal curve using a polynomial and the derivatives of the function. With each additional term the approximation improves, and the expansion is best in a local neighborhood of the function. Here is an animated example I did for John Conway. In this example we are trying to approximate a cyclic future, represented by the red sine wave, with polynomials of ever higher degree, starting with a green line, then a blue cubic, a purple quintic and so forth. The more terms we take, the better our forecast is, but our horizon of foresight remains limited. Notice that if we had chosen to represent our future using cyclic functions like sines and cosines, our predictions could be perfect, provided we sampled at the Nyquist rate or better.
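As a minimal sketch of the same idea (not the animation referenced above), the Taylor polynomials of sin(x) about x = 0 can be evaluated directly:

```python
import math

def taylor_sin(x, n_terms):
    """Approximate sin(x) with the first n_terms of its Taylor series about 0."""
    total = 0.0
    for k in range(n_terms):
        total += (-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
    return total

for n in (1, 2, 3, 5):                       # line, cubic, quintic, ...
    print(n, taylor_sin(2.0, n), math.sin(2.0))
```

Near x = 0 even a few terms are excellent; far from the expansion point the polynomial diverges from the sine, which is exactly the limited "horizon of foresight".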




The Taylor series introduces the notion of a "basis function". The world looks like the sum of the primitive functions that one uses to model it. In the above example we are trying to model or approximate a periodic function - the sine wave - with a polynomial, which is not periodic. This phenomenon appears in machine learning as well: if a given feature (aka basis function) is not present in the training set, that feature will not be properly recognized in the test set.

Identifying Fallacies in Machine Learning


Consider training an AI to recognize different breeds of dogs. Then show that same AI a cat. It will find the dog that is closest in "dog space" to the given cat, but it still won't be a cat. If the basis functions one chooses do not mimic the property you want to detect or approximate, the model is unlikely to be successful. This behavior can be seen in the TensorFlow Neural Network Playground mentioned previously. This is an important principle that helps us cut through deceitful glamour, false mystique and unrealistic expectations of machine learning. It is so fundamental we could place it in the list "Fallacies of Machine Learning", filed under the header "If all you have is a hammer, everything looks like a nail". See "Thirty-Two Common Fallacies Explained", written by the author.

In conclusion, we discover that "the basis functions should have behaviors/ingredients such that their combination can approximate the complexity of the behavior of the composite system", be they cats or sine waves. Basis functions occur in finite-element analysis of structures, which asks "Does the bridge collapse?". They appear in computational geometry as rational B-Splines, rational because ordinary B-Splines cannot represent a circle exactly. Periodic basis functions appear in Fourier analysis, such as the wildly successful Discrete Fourier Transform (DFT) and audio graphic equalizers, while Wavelet Transforms use a class of discrete, localized basis functions. The richness of the basis functions we choose strictly limits our accuracy in prediction, temporally (the future) or spatially (the space we operate in).

Genetic Algorithms

Predicting the future can be made to look like an optimization problem where we search a space of parallel possibilities for the most likely candidate possibility.
Sometimes the space that we want to search for an optimal solution is too large to enumerate and evaluate every candidate collection of parameters. In this circumstance we can use grid sampling methods, either regularly or adaptively spaced, refining our estimates by sampling those regions which vary the most.


We can use Monte Carlo random methods, and we can use Genetic Algorithms to "breed" ever better candidate solutions. For sufficiently complex problems, none of these methods is guaranteed to produce the optimal solution. When sampling with grids, we are not guaranteed that our grid points will land on the optimal set, although we do have guidelines, like the Nyquist sampling theorem, which says that if we sample at twice the rate of the highest frequency of a periodic waveform, then we can reproduce that waveform with arbitrary accuracy. If we sample at a coarser resolution than the highest frequency requires, we get unwanted "aliases" of our original signal.
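A minimal sketch (using numpy, with an assumed 9 Hz tone) of how sampling below the Nyquist rate produces an alias:

```python
import numpy as np

f_signal = 9.0          # Hz, the true tone
f_sample = 10.0         # Hz, below the Nyquist rate of 18 Hz
t = np.arange(0, 2, 1.0 / f_sample)
samples = np.sin(2 * np.pi * f_signal * t)

# The same samples are produced by a 1 Hz alias (|9 - 10| = 1 Hz).
alias = np.sin(2 * np.pi * 1.0 * t)
print(np.allclose(samples, -alias))   # True: the 9 Hz tone masquerades as a 1 Hz tone
```

At sample times t = n/10 s, sin(2π·9t) equals -sin(2π·1t), so the 9 Hz wave is indistinguishable from a 1 Hz wave: an alias.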

An example of spatial aliasing is the "jaggies" of a computer display:


An example of temporal (time) aliasing are propeller spin reversals when filmed with a motion picture camera:


Sometimes, though, the future we are trying to predict is not periodic and "history is not repeating itself". But forget all that - I mentioned it to tell you about genetic algorithms. These are explained here by Marek Obitko, who developed one of the clearest platforms for demonstrating them. Unfortunately Java Applets no longer work in popular browsers due to the discontinuance of NPAPI support. A world of producers and consumers demonstration is here.
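Since the applets no longer run, here is a minimal, generic genetic algorithm sketch (my own toy example, not Obitko's): it "breeds" bit strings toward a target using selection, crossover and mutation.

```python
import random

TARGET = [1] * 20                      # the "optimal" individual we are searching for

def fitness(bits):
    return sum(b == t for b, t in zip(bits, TARGET))

def crossover(a, b):
    cut = random.randrange(1, len(a))  # single-point crossover
    return a[:cut] + b[cut:]

def mutate(bits, rate=0.02):
    return [1 - b if random.random() < rate else b for b in bits]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    best = population[0]
    if fitness(best) == len(TARGET):
        break
    parents = population[:10]          # simple truncation selection
    population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                  for _ in range(50)]
print(f"best individual at generation {generation}: {best}")
```

Nothing guarantees the optimum is found, which is exactly the caveat above, but on easy landscapes like this one the population converges quickly.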




Approximation and Refinement of Prediction


Sometimes we want to make an initial guess, and then refine this guess. An intuitive model for this is Chaikin's algorithm:


In this case we have some expected approximation of the future represented by a control polygon with a few vertices. As we refine the polygon by recursively chopping off corners, we end up with a smooth curve or surface.
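Here is a minimal sketch of Chaikin's corner-cutting step (my own illustration): each edge of the control polygon is replaced by two points at its 1/4 and 3/4 marks, and repeating the step converges toward a smooth curve.

```python
import numpy as np

def chaikin(points, iterations=3):
    """One pass of Chaikin corner cutting per iteration on an open polyline."""
    pts = np.asarray(points, dtype=float)
    for _ in range(iterations):
        q = 0.75 * pts[:-1] + 0.25 * pts[1:]           # point 1/4 of the way along each edge
        r = 0.25 * pts[:-1] + 0.75 * pts[1:]           # point 3/4 of the way along each edge
        pts = np.column_stack([q, r]).reshape(-1, 2)   # interleave q0, r0, q1, r1, ...
    return pts

polygon = [(0, 0), (1, 2), (3, 3), (4, 0)]
print(chaikin(polygon, iterations=4))   # many closely spaced points tracing a smooth curve
```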

Iterated Systems

These are my favorite, so much so that I've written a book on them. They are truly "future-based" equations that only assume the future evolves from the past according to some set of steps. Because of their similarity to fractals, I like the chaotic nature of the representation. If we want to model a chaotic future we need a chaotic underlying basis function.



When the coefficients of the iterated system have a component of "noise" or randomness, we can simulate an envelope of possible futures. Take for example predicting the future landing spot of an arrow. Random factors such as the wind, temperature, humidity, precipitation and gravitational acceleration (which varies with elevation) can all affect the final outcome. My final project may draw from this area.
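As a minimal sketch of that idea (a toy example of mine, not from the book mentioned above), iterating a simple update rule many times with slightly randomized coefficients yields an envelope of possible futures:

```python
import numpy as np

rng = np.random.default_rng(1)
n_futures, n_steps = 500, 50
x = np.ones(n_futures)                    # every possible future starts at the same state

trajectories = [x.copy()]
for _ in range(n_steps):
    growth = 1.01 + 0.02 * rng.standard_normal(n_futures)   # noisy coefficient
    x = growth * x                                           # the iterated step
    trajectories.append(x.copy())

final = np.array(trajectories)[-1]
print("median final state:", np.median(final))
print("5th-95th percentile envelope:", np.percentile(final, [5, 95]))
```

The spread between the percentiles is the "envelope of possible futures"; plotting all 500 trajectories makes the fan shape visible.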

Virtualizations

There are some virtualizations that are so effective they have become working principles in science and engineering. An example of one of these is the principle of virtual work, which is used to derive strain energy relationships in structural mechanics to enable the prediction of whether bridges will collapse at some point in the future. The amazing thing about the principle of virtual work is that a load is put on the bridge and then the displacements of various points on the bridge are imagined, even though the bridge is not really moving at all. The degree of this virtual displacement is used to calculate the strain in each element of the bridge or structure. If in any member element the strain exceeds the strength of the material, that member fails, and the collapse can be predicted and avoided by strengthening just the weak member.

Another timely virtualization is the use of complex numbers in wave functions, such as in acoustics and quantum computing. (Imagine a sound machine that simulates quantum superposition!) When two complex waves meet, each carries an imaginary component with no direct physical existence, yet when they combine additively or multiplicatively they can produce real, measurable outcomes. This is spooky and interesting.

Other virtualizations include:


All of which can be used in computer simulations of complex systems like the stock market, terrorism and cancer.

Robert H. Cannon Jr. of Caltech, in his 1967 book "Dynamics of Physical Systems", discusses the convolution integral in control theory, which incorporates the past state of the system to continuously influence the current and future states. This book, written at the height of the Apollo era, was amazingly ahead of its time in codifying control system analysis techniques using Laplace transforms, a complex-number transform similar to the Fourier transform. Kalman filtering can be applied to complex systems where the state of the system is changing rapidly.
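A minimal discrete sketch of the convolution idea (my own toy example, not from Cannon's book): the output at each time step is a weighted sum of current and past inputs through the system's impulse response.

```python
import numpy as np

impulse_response = np.array([0.5, 0.3, 0.15, 0.05])   # how strongly the past lingers
inputs = np.array([1.0, 0.0, 0.0, 2.0, 0.0, 0.0])     # inputs applied over time

# Each output sample mixes current and past inputs, weighted by the impulse response.
output = np.convolve(inputs, impulse_response)
print(output)
```

The echo of the input at step 0 is still visible several steps later, which is the sense in which the past "continuously impacts" the present and future.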

The Stop Button Problem

The Stop Button Problem in Artificial General Intelligence (AGI) is a fascinating study in what happens when what the Robot wants is different from what the Person wants. The video "The Stop Button Problem" by Rob Miles describes the problem in detail.

The "I, Robot" problems mentioned in the April 2013 ICF course, based on Isaac Asimov's "Three Laws of Robotics", discuss this tension as well.

Miles proposes a solution to this problem using Cooperative Inverse Reinforcement Learning.



The take home message is, "Make sure that humans and robots want the same thing or there will be trouble."

Sentiment Analysis

Clive Humby, a UK mathematician, has said that "data is the new oil". Andrew Ng, a leading ML researcher, makes the statement that "AI is like electricity", compounding this information-as-{utility, power, energy} metaphor. I have used the phrase "Hot and Cold Running Knowledge" to describe the situation we currently find ourselves in.


- From DreamGrow


Social networks like Twitter, Facebook and Instagram are fertile fields for harvesting user sentiment. User sentiment affects purchasing decisions, voting decisions, and market prices for instruments such as stocks, bonds and futures.

Machine learning is increasingly being used to scrape these social networks and determine sentiment, in a kind of superpredictive Delphi method.
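A minimal, toy sketch of lexicon-based sentiment scoring (real systems use trained models; the word lists here are invented for illustration):

```python
POSITIVE = {"love", "great", "excellent", "happy", "win"}
NEGATIVE = {"hate", "terrible", "awful", "sad", "lose"}

def sentiment(text):
    """Crude sentiment score: +1 per positive word, -1 per negative word."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

posts = ["I love this product, it is great",
         "terrible service, I hate waiting",
         "the market had a sad, awful day"]
for post in posts:
    print(sentiment(post), post)
```

Aggregated over millions of posts, even crude scores like this can move with elections and markets, which is why ML-driven scraping is so attractive.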


Handy's Law in Geotagging

Mentioned in the course notes for ICF January 2014, "Handy's Law" states, "pixels/dollar doubles yearly". Consider Nokia's new five camera design.



The question in my mind is this: is the five-camera design a tip of the hat to superfluous excess, like the tail-fin race of the fifties - a tech/fashion bubble of yesteryear - or does it represent a true increase in utility?

Computational Medicine

This is a separate essay.



Friday, January 11, 2019

Computing and the Future 2 - Computational Medicine


Introduction


I am developing this line of reasoning for a course I plan to take: Information Computing and the Future.

Computational Medicine is an area that is socially, politically and technically fascinating.

It is socially fascinating because, with the advent of successful machine learning, it holds the promise to democratize access to medical care.

It is politically fascinating because there are entrenched interests making large amounts of money from the status quo. These interests include health insurance companies, hospitals and care providers. The term "care providers" is a catch all for doctors, generalists and specialists, nurses, RN's and LPN's and support staff such as radiation technicians, respiratory therapists, physical therapists, lab technicians and so forth.

It is technically fascinating because everyone is in need of competent health care and to the extent that some portions of diagnosis and treatment can be automated, more people can receive timely and effective treatment.

I will focus on the computational aspects since they hold much promise for progress and are more tractable than the social and political areas.

Computational Medicine in the Small

The turn of the millennium produced the first nearly complete sequence of the human genome, which is, computationally, a base-four digital Turing tape whose length is two copies of 3.234 giga base pairs, one from each parent. These codes are replicated some 37 trillion times across the 250+ cell types of the body. There has been substantial recent progress in gene sequencing, but a gap remains between the code of the genes (genotype) and the invisible circuits that regulate gene expression and produce observable traits (phenotype).

Understanding the underlying genetic components of disease will continue to be a great step forward in guiding accurate diagnosis and treatment.

Computational Medicine in the Large

Machine learning has been applied with great effect to cancer detection at the macroscopic level, in breast cancer radiology inspection and skin cancer (melanoma) screening. For a season a schism emerged between those who are domain experts in the biological sciences and those trained in the computational sciences. As machine learning continues to outpace expensive cancer specialists, there may be a "circling of the wagons" by those who have held an economic monopoly in this diagnostic area. They can become conflicted between their duty to heal the patient and the urge to amass large profits for themselves and their institutions.

Computational Medicine in the Huge

We can zoom our computational lens out to include populations of people, taking an "epidemiological" point of view to the national or worldwide level.

On January 2, 2019, a list of the drugs approved by the FDA in 2018 was released. Some of these drugs are "orphan drugs", that is, drugs that treat conditions that are relatively rare in the world population. There is less economic incentive to manufacture and research these drugs than those for more common conditions such as cancer, HIV, cystic fibrosis, malaria and river blindness. However, the emergent theme across most of these new drugs is their astronomical pricing, making them unavailable to those who, in many cases, most need them. Here are just 3 of the 59 entries from the list above, two of which cost more than $250,000 wholesale per year:
One of the drugs in this list - Galafold, for the fatty-buildup disorder Fabry disease - costs $315,000 per year, yet could be synthesized by someone with high school lab skills and a chemistry set!

This pharmacological injustice can lead to a social bifurcation into "haves and have-nots" - preconditions that foment unrest, conflict and sometimes all-out war.

But here I want to focus on a pattern of computational interaction that has a more positive end, and that could ultimately democratize access to diagnosis and treatment - all facilitated by information processing in the future.

Daisy Wheel Diagnostics

Preamble

What follows below is more autobiographical than I want or like. I am attempting to reconstruct a line of thinking, a chain of reasoning, that led to my current perspective and illuminates what will come next. Apologies for the first-person perspective.

Introduction

When a family member of mine developed cancer, I felt it was important to understand this complex disease from a comprehensive point of view. Typically we are conditioned to look at "single-factor" causes of diseases that are in fact multifactorial in nature. The first thing I did was to start drawing cause-and-effect graphs between the genes that are implicated in cancer, since visualization of that which cannot be seen has been a source of breakthroughs in clinical medicine - with radiology, the microscope and clinical lab spectroscopy.

Some Connections to the Her2Neu Receptor Gene
After that I took genetics, biochemistry and molecular biology and wrote a summary treatise on the various factors that enable cancer to develop, and on treatment approaches.

Four Categories of Carcinogenesis

In broad strokes, here is a pattern of inquiry that I have developed over time, out of habit, first from seven years working in clinical laboratories, and later with five family members who have had cancer, or died of it, and two who have had mental illness. I have drawn blood from and taken EKGs of thousands of people. I have run hematology, clinical chemistry and bacteriology tests on these same people and produced reports that were provided to attending physicians and determined treatment. I have attended appointments with surgical oncologists, radiation oncologists and hematological oncologists (the mnemonic in cancer is "slash-burn-poison", for surgery, radiotherapy and chemotherapy respectively). That is the cancer front.

Mental illness is more of a black box with respect to the clinical laboratory because we do not yet have a way of sampling the concentration of intrasynaptic neurotransmitter levels along the neural tracts in the living brain. Nonetheless, behavior and thought patterns themselves can be diagnostic, which creates huge opportunities for machine learning diagnostics.

To challenge the black-box definition: Let me conclude this introduction with an observation, useful in the definition and treatment of mental illness:

"If a patient is successfully treated with a certain drug known to be specific for their symptoms, then in all likelihood they have the disease associated with that set of symptoms." This is not circular; it is rather "substance-oriented". The constellation of drugs administered to a successfully treated patient constitutes a definition of their specific condition. The inference being made is that the suite of substances that successfully restores neurotransmitter concentrations and brain chemistry to normal levels serves as a label for the condition from which the person suffers.

In computational terms, these treatments constitute an alternate name for the condition, to wit, "The condition successfully treated by XYZ in amounts xyz." So the patient has the condition XYZxyz. This makes sense if we consider the underlying biochemical nature of mental illness at the small-molecule level as chords of specific receptors being up- and down-regulated in certain patterns at certain times. There are larger aggregations of neural-tract organization that are obviously also important, but my sense is that these are more significant in aphasia and specific disabilities, which are discerned separately from conditions such as bipolar disorder, obsessive-compulsive disorder, schizophrenia and so forth. End of introduction.

Patterns of Human vs. Computational Inquiry

Over time I have developed some habits of inquiry, thanks to my mother, a medical technologist, and my dad, a software engineer, who taught me how to assess whether a given care provider "was good". Answering the question "but are they good?" was an unspoken goal that attempted to assess their competence, depth of education and instinctive ability to accurately diagnose and treat various illnesses. Whenever I am engaged with a medical care provider, I am trying to make an accurate estimation of their abilities, since in the end it can spell life or death. That is a purely human activity.

Over the years, part of this process has become more computational in nature as I attempted to ask the most actionable set of questions during an appointment. These questions create an atmosphere of seriousness that most competent care providers enjoy. The advent of Google has amplified the ability to prepare to an extreme degree and can positively impact the continuing education of the care providers as well. Patient-provider education is a two-way, continuing and iterative process.

Design Patterns of Computationally-Assisted Inquiry:


  • Drug-driven
  • Disease-driven
  • Cell-driven
  • Test-driven
  • Physiology-driven
Let's define each case:

Drug-Driven (or pharmacology driven):

In this scheme, the name of a candidate drug typically prescribed for treating a condition is used to find:
  • Chemical structure
  • Indications for use
  • Contraindications
  • Mode of action
  • Side effects
  • Toxicity / lethal dose (LD50)
Drug-driven strategies use variants of baseline structures to optimize treatments and minimize side effects. This optimization is a man-machine process.


The Physicians' Desk Reference canonizes this information for commonly prescribed drugs. For example, digoxin, derived from the foxglove plant, is indicated for congestive heart failure and improves pumping efficiency, but its toxic level is very close to its therapeutic level, making it somewhat risky to use.
There are intellectual rights issues that emerge, but machine learning can address these by enabling knowledge utilization without distribution of the original content.

Imagine you are in a pharmacy where all known drugs are present. Given you, your genes, and your current state, what is the optimal combination of these substances and the dosing schedule that most benefit both short and long term health and quality of life?

Disease Driven (or illness driven):

In this approach:
  • A complaint is registered
  • A history is taken
  • Vital signs are measured
  • Patient's symptoms are recorded
  • Diagnostic tests are ordered and results recorded
These are combined into a complex composite record, the patient's chart, which includes text, handwriting and hand-annotated diagrams.
These records accumulate as a stack with each successive visit. eCharting has been taking hold for the past few years, but a remarkable amount of patient information lies fallow in discarded, obsolete patient records in cold storage. It is essential that organizations like Google digitize such patient charts before they are lost forever. This would involve scanning treatments and outcomes with the same scale and enthusiasm that the world's libraries have enjoyed in recovering old books. Thank you, Google! This would create large training sets for machine learning and further the codification of medical knowledge in a machine-learning context. HIPAA laws and frivolous copyright lawsuits have obstructed this Nobel-prize-worthy activity, perhaps due to the concerns of litigation-sensitive record custodians.

Cell-Driven (or genetics driven):

Cell-Driven strategies include:
  • Cell population counting in hematology
  • Differential cell identification (diff)
  • Flow cytometry in cancer diagnosis
  • Genome sequencing

Cell population counting includes red cells, white cells and platelets, with differential cell identification performed as part of the complete blood count (CBC and diff). Differential cell identification can be done manually or by flow cytometry, where white cells are sorted into types including lymphocytes, monocytes, neutrophils, eosinophils and basophils.

Modern and future approaches include characterizing the
  • Receptors (cell logic gates) expressed and their mutations
  • Cytokines (chemical signals) that are driving their differentiation
  • Metabolic products
Consider the gene-product-as-drug-treatment point of view. If we knew the exact amount of over- or under-expression of genes and their cognate gene products, these could be compensated for by a custom cocktail of appropriate pharmacological agents. Small-molecule agents can be ingested via the digestive tract, while protein products must often be administered intravenously, since protein structures are degraded on contact with digestive-tract enzymes and low-pH acidic conditions.

Test-Driven (clinical lab and radiology driven):

Test-Driven strategies include:
  • Clinical Chemistry for Kidney, Liver, Heart and Organ System Function
  • Bacteriology for Infectious Disease
  • Histology for Determination of Abnormal Cell Types
  • Ultrasound in 2, 3 and 4 dimensions
  • X-Rays
  • PET scans (Positron-Emission Tomography)
  • MRI/fMRI (functional Magnetic Resonance Imaging)
  • CAT scans (Computed Axial Tomography)

Physiology Driven (or illness driven):

Higher levels of organization than the cell can be players in a disease process. Organs operate in synchrony like the members of an orchestra, some autonomously and some with directed signaling. The sympathetic and parasympathetic nervous systems are an example of this. Endocrine disorders can manifest at relatively large distances from the source of the problem. Diabetes is a complex disease with multiple factors operating at multiple scales, including genetics and environment.

With these design patterns in place, to borrow a phrase from software engineering, we should think about how they might be combined into a diagnostic and treatment system that could reduce the cost of healthcare and help the most people possible.

Now recall the daisy wheel printers of the past. They achieved a speedup factor of roughly two simply by not returning to the beginning of the next line to be printed. Since the mechanical movement of the print head was the most expensive step, the even lines were printed left to right, while the odd lines were printed right to left, which required only the minimal computation of reversing the order of the characters on the odd lines. Does this metaphor fit the application of knowledge derived from each of the medical processes described above?










Friday, January 04, 2019

Computing and the Future HW 1 - Too Many Languages




An interesting course at the University of Arkansas, led by Dr. Jared Daniel Berleant, is Information, Computing and the Future. Homework is submitted via blog, and lectures dating back to 2012 are archived, which helps in getting oriented to the material.

There is an assertion in these archived notes that programmer productivity is increasing for various reasons. The exact phrase that captured my interest is "Increased Programmer Productivity", or IPP for short. I want to assert the existence of some mitigating factors and make the case that programmer productivity is lower than it might be due to the paradox of "too many choices", which manifests as:


  • too many languages
  • too many Interactive Development Environments
  • too many target architectures

The Tiobe language index lists 18 languages whose penetration exceeds 1%. Assuming there is an Interactive Development Environment for each language, we have 18 IDEs. This is a conservative estimate. Enumerating target architectures, we can again conservatively estimate 18 from this StackOverflow list. In the worst case this leads the programmer to a minimum of 18³ "choices", or 5000+ things to be "good at". Obviously this is all somewhat contrived, since programmers specialize and there isn't a separate IDE for each language, but how do programmers reassure themselves that they have specialized in the best language+IDE skillset? At 18², there are over 300 combinations, and learning a language and IDE takes some time, say a year, to master. If you don't have 300 years lying around, check out some favorable candidates below.

I am interested in the overhead that occurs when we create codes that are illustrative, informative, provable and self-explanatory. The Explainability Problem in AI is an example of this. Addressing this has become the law in the EU. Brexit anyone? A hot topic in machine learning research is hyperparameter optimization. Briefly, how does one choose the activation function (the neuron response), the learning rate, the optimal partition between training set and test set, etc. to create the fastest learning and best performing AI? Speaking of this:
To see hyperparameters and neural networks in action, visit the TensorFlow neural network playground. Academic excursion question: Bayesian learning is known to be effective but combinatorially expensive. Can it be shown that, in the limit of sufficiently exploring the parameter space, hyperparameter optimization is effectively just as intractable because of the number of configurations that must be explored?
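As a minimal sketch of what hyperparameter search looks like in practice (a toy grid search over invented hyperparameter values, not the Playground's internals):

```python
import itertools

# Hypothetical hyperparameter grid.
learning_rates = [0.003, 0.03, 0.3]
activations = ["relu", "tanh", "sigmoid"]
hidden_layers = [1, 2, 4]

def validation_score(lr, activation, layers):
    """Stand-in for training a network and measuring validation accuracy."""
    # In reality this would train a model; here we fake a score for illustration.
    return 1.0 / (1.0 + abs(lr - 0.03)) + 0.1 * layers - 0.05 * (activation == "sigmoid")

best = max(itertools.product(learning_rates, activations, hidden_layers),
           key=lambda cfg: validation_score(*cfg))
print("best configuration:", best)
```

The combinatorics are the point: three values for each of three hyperparameters already means 27 training runs, and realistic grids explode far faster, which is what the Bayesian-learning question above is getting at.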





Another thing I am interested in, and the main focus of this entry, is how we bring people into our train of thought, into our line of reasoning, as seamlessly as possible. We document our thoughts, communicate them, run simulations, and perform mathematical analyses to make statements about the future and the limitations we have in predicting it.¹ Some tools (and their requisite URLs) that assist in this process are:


  • Blogger for creating running Blogs in the Cloud
  • Scrimba for Teaching/Demonstrating Algorithms in Vue.js
  • JSFiddle for Teaching/Demonstrating Algorithms in JavaScript
  • Jupyter Notebooks for running Machine Learning in Python
  • MyBinder for hosting deployed Jupyter Notebooks

For brevity, the links above can simply be followed and run; the discussion below enumerates some of the issues they raise or address.

Blogger

Blogger is an adequate blogging platform, enabling the inclusion of links, images, and videos such as the opening one above. The link provided above compares blogger.com to several competing platforms. It's not particularly futuristic per se, but it is better than paper or email. I would include an illustration, but this very page serves that purpose very well. It has been somewhat "future-proof" for the past few years anyway. It does the job, so no further commentary is necessary.

Scrimba

This system overcomes weaknesses in passive video tutorials. The architects of Scrimba wanted a system that would engage students, let them make the rite of passage from student to contributor rapidly, and reduce the overhead of learning by enabling instant interaction with the code of the example at hand. The strong suit of Scrimba is currently its tutorial on Vue.js, which enables dynamic programming of graphical user interfaces via JavaScript in HTML pages. The Scrimba Vue "Hello World" example narrated by Per Harald Borgen is the perfect example of making the sale before the customer even realizes they've been sold. Per's intro leads to a more detailed exploration of Vue that actually does useful work, facilitated by Zaydek Michels-Gualtieri. Between these two tutorials one can learn the basics of Vue.js in a little over an hour. This makes the case FOR increased programmer productivity (IPP). However, the advent of yet another language and IDE adds yet another choice to the implementation mix, a mitigating factor AGAINST IPP.




Note 1: If programming were a crime, we could speak in terms of aggravating and mitigating (extenuating) circumstances that would enhance or reduce the programmer's guilt and corresponding sentence.

Note 2: According to my offspring, a powerful development trifecta is created when Vue.js development is combined with Google Firebase user authentication and Netlify, a platform for deploying applications.


JSFiddle

Byte-sized proofs of concept for functions, hacks, tips and tricks. JSFiddle simultaneously shows the HTML, CSS and JavaScript in three separate INTERACTIVE windows, along with a main window where the HTML markup, CSS stylesheets and JavaScript code are executed in the browser to show the outcome. JSFiddle lives on the web, in the public cloud, so that anyone can contribute. Scrimba improves upon this concept by enabling users to grab control of the session and the code at any time during the presentation, pausing the narrator, saving past states and recording their own audio, so they can eventually narrate their own content.




Jupyter Notebooks

"Writing code that tells a story." I include this because Python 3 programming in Jupyter has become the lingua franca of information exchange in the current machine learning (ML) revolution. The Tiobe index, cited above, declared Python the language of the year for 2018! One can obtain Jupyter, Python 3, and a host of useful libraries like numpy, scipy and pandas from the Anaconda consortium, a useful one-stop shop. It is worth noting that these codes are evolving very rapidly; I have to check my pandas version as we speak. An important feature of Jupyter Notebooks is platform independence: they run on Windows, macOS, Ubuntu Linux, etc. Further, this platform is not "owned" by a company, like the disastrous ownership of Java by Oracle or that of C# by Microsoft.
The video link claims that Java is supported, but I find such support tenuous at best. Kotlin is supported at an alpha level, and since the terser Kotlin compiles to JVM bytecode, indirect support is implied.

It is worth noting that Jupyter can run 100+ different kernels, or language environments. So the general idea of a working lab notebook that runs on a local server has really taken off and thus is the wave of the future. I like the relative permissiveness of the local sandbox, compared to Java development, and the fact that results can be reproduced by any other investigator with a Jupyter Notebook setup. I also like the "bite-sized" morsel of a Jupyter notebook that can focus on a single problem, in the context of a greater objective.
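As a minimal illustration of the "check my pandas version" habit, a typical first cell in one of these notebooks might look like this (assuming numpy and pandas are installed, e.g. via Anaconda):

```python
# A typical opening cell: pin down which library versions produced the results.
import sys
import numpy as np
import pandas as pd

print("Python :", sys.version.split()[0])
print("numpy  :", np.__version__)
print("pandas :", pd.__version__)
```

Recording versions in the notebook itself is part of what makes results reproducible by any other investigator with a Jupyter setup.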







1 Who doesn't love a good footnote? I hope to amplify notions of the thermodynamics of trust, showstoppers and such in future entries to this chain.

Sunday, April 22, 2018

More AI, More ML: An Open Letter to Ancestry.com and 23andMe.com

That's it. You can stop reading now. Just do the title. How hard can it be?

Ancestry.com principally, and 23andMe.com to a lesser extent, let you use their genealogical services to assemble a family tree. I will focus on Ancestry here, but similar reasoning applies to 23andMe.com. There are two components to the family-tree-building process: the PAPER of existing records and the BIOLOGY of DNA samples, both of which these services analyze. However, there is a glaring problem when it comes to certifying the authenticity of family trees derived from historical documents, that is, PAPER. Do you trust the source? Can you read the document? Are the spelling changes plausible, and if so how much? By using both DNA and PAPER one can cross-check one against the other to confirm authentic lineages and refute specious ones. But there must be quality control in both the PAPER and the DNA. Laboratory techniques for DNA handling use statistical quality control methods that are reliable; however, there is no equivalent quality control methodology for PAPER, which in large part has been converted to MICROFILM and digitized with varying levels of quality control in image processing. There are chain-of-custody issues when one submits a DNA sample to either service, and one should really submit multiple samples to be sure that the correct sample has been tested and labeled. There are also handling issues as samples make their way through the mail, postal and delivery systems. More or less, the latter issues are being addressed.

Ancestry.com currently requires you to chase hints in time and space to determine if you are related to a given candidate ancestor listed in a public record or another family tree. For large trees this can be extremely labor intensive, without guarantee that one has constructed a forensically certifiable result.


One error source is this: Ancestry lets you use others' family trees that are themselves mashups of information of dubious origin, and there is no rhyme or reason to confirming whether information in these other trees is accurate. In other words, there is no quality control, no assurance that one is dealing in fact.

The addition of DNA helps one connect with living relatives and add ground truth to previously assembled trees. There are forensic methodologies that increase certainty, such as this: independent sources of information confirming the same fact. The more redundancy among independent records, the higher the certainty that the conclusions - the facts - are authentic.

The problem is, after one's family tree gets to an interesting level of complexity, the number of hints grows exponentially and many of the "hints" lead to completely specious assemblies of data.

The fix for this is to associate with each tree, and with each fact in each tree, a certainty that the fact is indeed true. For a given ancestral line, these certainties can be multiplied together to provide a composite value that indicates the reliability of the information. As a detail, certainty is a number between 0 and 1 inclusive. A 1 means certainty is complete (which never exists in the real world of statistics). A 0 means there is no certainty whatsoever. A certainty of 0.9 means that the fact has a 90% chance of being true. If we chain two facts together, each with a certainty of 0.9, we have a 0.81 (81%) certainty that both facts are true.
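A minimal sketch of this composite-certainty arithmetic (the facts and numbers are invented for illustration):

```python
from math import prod

# Hypothetical certainties for each link in one ancestral line.
line = {
    "birth record matches name":       0.95,
    "census record links generations": 0.90,
    "DNA match supports relationship": 0.99,
}

composite = prod(line.values())
print(f"composite certainty of the line: {composite:.3f}")   # about 0.846
```

Chaining multiplies the doubts: even three fairly strong facts leave the whole line at roughly 85% certainty, which is why long, weakly documented lines deserve skepticism.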

There is a host of microfilmed documents from all over the earth that have been read, digitized and collated by human beings, and many of these have been collected by Ancestry.com in what constitutes a controlling monopoly over historical ancestral information. The source of this control, this power, has its roots in Mormonism. This can be a good thing in that there is a single long-term historical and motivating organization or presence. It could be a bad thing if religious exclusion occurs.

The point of my open letter is this:

Recent advances in machine learning would enable PAPER documents to be parsed by machines, associating with each fact gleaned from them a level of certainty. Previous entries in this blog summarize these advances in detail.

The Mormon Church and Ancestry.com have close affiliations. In Salt Lake City, Utah, both have excelled in using advanced computation to solve important problems.

The problem is that there is a financial conflict of interest at play. Many families have invested generations of time and thousands of hours of work in building family trees using manual and computational methods. They may not take kindly to having their work, especially closely-held beliefs or assumptions questioned when those beliefs provide them with self-esteem or status in the community.

For people like myself, who have spent 30 years justifying that they are related to a Hindenburg or a Henry II, this will be good news and bad news. It will be good news in that it will allow a more comprehensive family tree to be assembled more RAPIDLY with less human error. It will be GOOD NEWS that it will allow a precise certainty to be associated with each fact in the tree. It will be bad news for those who have a need to be related to someone famous or historic, are not, and have significant social capital invested in those claims.

I have a large tree of both famous and historic ancestors, including kings and martyrs. But I would gladly trade it off for a complete and accurate picture of who I am actually related to.

Mainly, I don't have time to chase the 31,000 hints that have popped up in my Ancestry.com Family Tree, especially when I know that machines can do it better. To that end, it is time to make more exhaustive and complete use of handwriting and document analysis using the burgeoning progress taking place in Artificial Intelligence and Machine Learning. The opportunity for true and factual historical insight could be spectacular.


Sane Public Policy With a Gun Census







I observe. I think. I've thought. The pen is mightier than the AR-15. The AR-15 killing machine will rust and jam, especially if you shoot NATO ammunition. The pen endures forever. I polish this article every time there is a mass shooting. It is getting way too polished.

We are at a social Tipping Point. Malcolm Gladwell, in his book by the same name, makes the point of "opt-in" vs "opt-out" when it comes to states enrolling organ donors. States where DMV applicants must opt out of organ donation produce more organ donors than states where applicants must opt in, because people are lazy.

Lazy or not, under current law nearly anyone who is 18 or older can buy an assault rifle and a bump stock that approximates fully automatic fire. To be denied the right to purchase, there must be a glaring red flag in the rubber-stamp background check that kills the purchase. Many mass shooters are "first-time" offenders, therefore the response, by definition, is always too little, too late.

Conferring on someone the right to perform mass-execution MUST have a higher barrier to entry than the current one. The burden should be on the applicant to prove that they:
1) are of sound mind
2) the people in their household and circle of trust are of sound mind
3) their killing horsepower need is reasonable

There is a difference between weapons for self-protection and those for mass-execution. When the 2nd amendment was developed, weapons did not confer on any single individual the ability to perform mass-execution. Give me a second while I reload my musket.

There is very good math - and I am a mathematician - that shows that proportionality in war is a good idea. Mainly, it avoids mass extinctions. See Robert McNamara in "The Fog of War" for an excellent explanation of this. So let's do some simple math:

If everyone had in their garage a hydrogen bomb that they could detonate when they became angry, depressed, despondent or mad at the neighborhood association, one person could destroy a city, a state, or even a nation.

As a democracy, we have made the decision that individuals may not possess or carry nuclear weapons, because there is currently no scenario that would justify it. So now we have an immediate obligation to the constitution:

Imagine if every AR-15 owner had a hydrogen bomb, locked and loaded in their house, maybe under the bed or something. That would make for 3 million H-bombs. If that were the case, how often would we read about H-bomb explosions? The suicide rate is currently listed as 13 per 100,000 people; that works out to 390 incidents per year. Not so good for Earth Day festivities. The murder rate is 6 per 100,000 people. Maybe it is comforting that people kill themselves more often than others, showing how good people really are. Anyway, that works out to 180 H-bomb explosions per year, for a total of 570. Here we are assuming that AR-15/H-bomb owners commit suicide and murder at the same rate as the general population. Prepper be ready.
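For what it's worth, the arithmetic above is simple enough to check in a few lines (rates as quoted in the paragraph, ownership figure assumed):

```python
owners = 3_000_000            # assumed AR-15/H-bomb owners
suicide_rate = 13 / 100_000   # per person per year, as quoted
murder_rate = 6 / 100_000     # per person per year, as quoted

suicides = owners * suicide_rate    # 390
murders = owners * murder_rate      # 180
print(suicides, murders, suicides + murders)   # 390.0 180.0 570.0
```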

So we don't allow this and for good reason. Through the miracle of calculus, consider the following limiting arguments. How much killing horsepower should one individual be allowed to possess? I've already shown that a hydrogen bomb is too much. One can similarly reason that an atomic bomb is too much. We can continue to reason to more reasonable scales. Current law does not allow individuals to carry grenades, presumably because that is too much killing horsepower to confer on a single individual. Yet we allow uncredentialed people to purchase assault rifles willy-nilly. This is contradictory.

We can reason from below as well as from above - from too little as well as from too much. How much killing horsepower is too little for self-defense? If one is attacked by a single person, you need to have what they have, plus a little bit extra so that you win. If a single attacker can have a hydrogen bomb, then you need two hydrogen bombs, just to make sure. So now we have a paradox: whatever we allow someone else to have sets the floor for what we ourselves must be allowed to have. To fix that we must allow ourselves to have anything they could possibly have, plus a safety margin, and voilà - we are at the current impasse enabling mass shootings at schools, movies, concerts and work.

BUT a group of reasonable people could get together and say: here is how much killing horsepower we are going to confer on any single individual (this has tremendous ramifications for military leaders, but I digress). The killing horsepower that you are certified for depends on 1) the soundness of your mind and 2) the threats you reasonably expect to confront in your current daily life. This creates a table of permitted killing horsepower. A commander-in-chief I know of has the killing horsepower of the entire hydrogen-bomb arsenal of the United States. That is probably too much power to confer on one person who, in a moment of questionable judgement, could make a mistake that would destroy the world. Again I digress.

In short, a table of permitted killing horsepower is created and everyone gets a ranking. This table of permitted killing horsepower is created by proper research, debate and due process, while we mourn those lost in the meantime. In the future Artificial Intelligence tools will be used to screen applicants fairly, by quickly applying the wisdom of history and the hive. The model is that of a DMV for killing horsepower. Everybody hates the DMV, even so you can't drive a big truck without a special license and the lawyers come when they run you off the road. We know how to make DMV's. Apple could make them more user-friendly though. Part One Done.

The second part is an inventory, a complete common-sense gun census. An inventory must be made of the location and type of every gun and every round of ammunition. The manufacturers of weapons are required to keep records on how many are made. Many of these can then be tracked down by those records in combination with registration databases and store records.  The location of ALL guns and ammunition repositories must be part of the certification process. If someone tries to cache weapons John Wick style, his killing horsepower privileges are reduced or removed. If you use a gun, that use is going to be audited by the justice system, and the gun census is part of that process. A gun census doesn't take anyone's weapons away that shouldn't have them in the first place.

With a complete inventory, it will be possible to screen for those who are currently in possession of killing horsepower outside the realm of their daily need and their soundness of mind.

Any other approach to this problem will find us lamenting the murder of our children in schools, the murder of our friends at movie theaters, concerts and work. Schools, movies, concerts and work make life worthwhile. Possession of killing horsepower outside our need or ability to wield it makes life more tragic than it already is.

In conclusion: we must implement these two fixes now. If things don't improve, we can implement a sunset clause - a return to crazy town - and see how that works. Judging from current events, crazy town isn't working at all.



Friday, April 13, 2018

A Blazing Fast Introduction to Machine Learning

Introduction

In what follows I'm going to talk about Artificial Intelligence (AI), Machine Learning (ML) and Artificial Neural Networks (ANN's). More to the point, "What are they good for?", in practical terms. If you are wondering what you can use them for the answer is, "most everything". From light to radio, from radio to sound, from graphics to games, from design to medicine, the list goes on. Here are some working concepts.

Artificial neurons are an abstraction of biological neurons. The first thing we notice is that biological neurons use "many inputs to many outputs" connectivity. So in a mathematical sense they are not classic functions, because a function has only one output.
Anatomy of a Neuron
Biological Neuron

Artificial neurons have a "many inputs to one output" connectivity. So they are functions. Functions can have many inputs, provided they only have one output.


Artificial Neuron aka 'Perceptron'

This apparent shortcoming is remedied by connecting the output of a single artificial neuron to the inputs of as many other artificial neurons as we want. This happens in subsequent or "hidden" layers, restoring their power by forking or duplicating their outputs. It seems forgivable to think of biological neurons as working this way also, but improvements in the future may reexamine this assumption. The Perceptron link records its invention in 1958, when it was envisioned as a machine rather than a software entity. This pivoting between software and hardware continues as special-purpose processors are being developed to speed machine learning computations.


Neural Network with Four Layers
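A minimal sketch of a single artificial neuron (perceptron) as a function of many inputs to one output, with an invented weight vector for illustration:

```python
import numpy as np

def neuron(inputs, weights, bias):
    """Weighted sum of inputs plus bias, squashed by a sigmoid activation."""
    z = np.dot(weights, inputs) + bias
    return 1.0 / (1.0 + np.exp(-z))       # output clamped between 0 and 1

x = np.array([0.5, -1.2, 3.0])             # many inputs...
w = np.array([0.8, 0.1, -0.4])             # ...one weight per input
print(neuron(x, w, bias=0.2))              # ...one output
```

The single output can then be fanned out to as many neurons in the next layer as we like, which is the remedy described above.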


Another thing we notice about the artificial neuron is that the magnitude of its output is clamped to some maximum value. So if you are in space staring at the sun, your brain doesn't fry because of the neural output; it fries because you are standing next to the sun. How peaceful.

I would be remiss here if I didn't mention that until recently, programming languages implemented the concept of functions in a similar way, many inputs were allowed but only one output was returned per function call. 



Python, which has become the de facto language of AI, allows one to return many outputs from a procedure call, thus implementing many-to-many relations. This is extremely convenient, amplifying the expressive power of the language considerably.

There are many details one must attend to in programming neural nets. These include the number of layers, the interconnection topology, the learning rate and the activation function - which includes the S-shaped sigmoid function shown above at the tail of the artificial neuron. Activation functions come in many flavors. There are also cost or loss functions that enable us to evaluate how well a neuron is performing for the weights of each of its inputs. These cost functions come in linear, quadratic and logarithmic forms, the latter of which has the mystical name "Cross Entropy". Remember, if you want to know more about something you can always google it.
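A minimal sketch of a few of these ingredients (standard textbook formulas, written out with numpy for concreteness):

```python
import numpy as np

def sigmoid(z):                       # the S-shaped activation
    return 1.0 / (1.0 + np.exp(-z))

def relu(z):                          # another popular activation flavor
    return np.maximum(0.0, z)

def quadratic_loss(y_true, y_pred):   # mean squared error
    return np.mean((y_true - y_pred) ** 2)

def cross_entropy(y_true, y_pred):    # the "mystical" logarithmic loss
    eps = 1e-12                       # avoid log(0)
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))
```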

Great strides have been made in neural networks by adjusting the input weights using "Gradient Descent" algorithms, which attempt to find combinations of input weights that maximize the effectiveness of each neuron. A neuron has many inputs to consider - many things shouting at it simultaneously - and its job is to figure out whom to listen to, whom to ignore and by how much. These error corrections are "Back Propagated" using the chain rule from our dear friend calculus. This is repeated until the ensemble of neurons as a whole is functioning at its best as a group. The act of getting this to happen is called "Training the Neural Network". You can think of it as taking the neural network to school. So the bad news is, robots in the future will have to go to school. The good news is that once a single robot is trained, a whole fleet can be trained for the cost of a download. This is wonderful and scary, but I digress.
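A minimal sketch of gradient descent training for a single sigmoid neuron (a toy AND-gate dataset invented for illustration, not full backpropagation through many layers):

```python
import numpy as np

# Toy training set: the logical AND of two inputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)

rng = np.random.default_rng(0)
w, b = rng.standard_normal(2), 0.0
learning_rate = 0.5

for epoch in range(5000):
    z = X @ w + b
    pred = 1.0 / (1.0 + np.exp(-z))             # sigmoid activation
    error = pred - y                             # gradient of cross-entropy loss wrt z
    w -= learning_rate * X.T @ error / len(y)    # gradient descent step on the weights
    b -= learning_rate * error.mean()            # ...and on the bias

print(np.round(pred, 2))   # close to [0, 0, 0, 1] after training
```

Stacking many such neurons and propagating the errors backward layer by layer with the chain rule is what full backpropagation adds.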

TensorFlow Playground

Before we go any further, you must visit TensorFlow Playground. It is a magical place, and you will learn more in ten minutes spent there than doing almost anything else. If you feel uneasy, do what I do: just start pushing buttons willy-nilly until things start making sense. You will be surprised how fast they do, because your neurons are learning about their neurons, and it's peachy keen.


TensorFlow Playground

Types of Neural Networks

CNN - Convolutional Neural Networks

Convolutional Neural Networks are stacks of neurons that can classify spatial features in images. They are useful for recognition problems, such as handwriting recognition and translation. 


Typical CNN
CNN's can also be used to recognize objects in an image, such as those that occur in the CIFAR database. In this case the input to the CNN is an image and the output is a word such as "truck", "cat" or "airplane".


CIFAR Database
MNIST is a famous database of carefully curated handwritten digit samples used to train and subsequently recognize handwriting.

MNIST Training and Recognition
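A minimal sketch of a CNN for MNIST digit recognition, assuming TensorFlow/Keras is installed (illustrative only, not the exact networks behind the figures above):

```python
import numpy as np
from tensorflow import keras

# Load the MNIST digits and scale pixel values to [0, 1].
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train[..., np.newaxis] / 255.0
x_test = x_test[..., np.newaxis] / 255.0

model = keras.Sequential([
    keras.layers.Conv2D(16, 3, activation="relu", input_shape=(28, 28, 1)),
    keras.layers.MaxPooling2D(),
    keras.layers.Conv2D(32, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(10, activation="softmax"),   # one output per digit class
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1, validation_data=(x_test, y_test))
```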
CNN's can also be used to transfer the style of one image to another as in Google's Deep Dream generator.  


Deep Dream Generator

RNN - Recurrent Neural Networks

Just as Convolutional Neural Networks can be used to process and recognize images in novel ways, Recurrent Neural Networks can be used to process signals that vary over time. This can be used to predict prices or crop production, or even to make music. Recurrent Neural Networks use feedback, connecting their outputs back into their inputs in that deeply cosmic Jimi Hendrix sort of way. They can be unwound in time, and when this is done they take on the appearance of a digital filter.
Unwinding an RNN in Time
RNN's can be used to predict the next most likely word in a sentence. They can also continue patterns seen in periodic functions.


Predicting Periodic Functions with an RNN
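A minimal numpy sketch of the recurrence itself (invented weights, no training), showing how the hidden state carries the past forward as the network is unwound in time:

```python
import numpy as np

rng = np.random.default_rng(0)
W_x = rng.standard_normal((4, 1)) * 0.5    # input-to-hidden weights
W_h = rng.standard_normal((4, 4)) * 0.5    # hidden-to-hidden (feedback) weights
W_y = rng.standard_normal((1, 4)) * 0.5    # hidden-to-output weights

h = np.zeros((4, 1))                        # hidden state: the network's memory
signal = np.sin(np.linspace(0, 4 * np.pi, 50))

outputs = []
for x_t in signal:                          # unwinding the RNN one time step at a time
    h = np.tanh(W_x * x_t + W_h @ h)        # new state mixes current input and past state
    outputs.append((W_y @ h).item())
```

Training (omitted here) would adjust W_x, W_h and W_y by backpropagation through time so that the outputs continue the sine pattern.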


AE - AutoEncoders

Autoencoders are a distinctive topology in the world of neural networks because their output layer has as many units as their input layer. They are useful for unsupervised learning. They are designed to reproduce their input at the output layer. They can be used for principal component analysis (PCA)-style dimensionality reduction, a form of data compression.
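A minimal sketch of a dense autoencoder in Keras (assuming TensorFlow/Keras; 784-pixel inputs as for MNIST, squeezed through a 32-unit bottleneck):

```python
from tensorflow import keras

inputs = keras.Input(shape=(784,))
encoded = keras.layers.Dense(32, activation="relu")(inputs)       # compress to 32 numbers
decoded = keras.layers.Dense(784, activation="sigmoid")(encoded)  # reconstruct the input

autoencoder = keras.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
# autoencoder.fit(x, x, ...)  # the input is also the target - unsupervised learning
```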

RL - Reinforcement Learning

With Reinforcement Learning, a neural net is trained subject to rewards, both positive and negative, until the desired behavior is encoded in the net. Training can take a long time, but this technique is very useful for training robots to do adaptive tasks like walking and obstacle avoidance. This style of machine learning is one of the most intuitive and easiest to connect to.


Components of a Reinforcement Learning System
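Reinforcement learning does not even require a neural net for small problems; here is a minimal tabular Q-learning sketch (a toy 5-cell corridor world invented for illustration) that captures the reward-driven update at the heart of the technique:

```python
import random

N_STATES = 5                      # a tiny corridor: cells 0..4
ACTIONS = (-1, +1)                # step left or right
GOAL = N_STATES - 1               # the reward lives in the rightmost cell
alpha, gamma, epsilon = 0.1, 0.9, 0.1   # learning rate, discount, exploration

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def greedy(s):
    # Pick the action with the highest Q value, breaking ties randomly.
    return max(ACTIONS, key=lambda a: (q[(s, a)], random.random()))

for episode in range(500):
    s = 0
    while s != GOAL:
        a = random.choice(ACTIONS) if random.random() < epsilon else greedy(s)
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == GOAL else 0.0
        best_next = max(q[(s_next, b)] for b in ACTIONS)
        q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])   # Q-learning update
        s = s_next

print([greedy(s) for s in range(GOAL)])   # learned policy: move right (+1) everywhere
```

Training robots to walk uses the same reward-driven update, just with neural networks standing in for the Q table.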

GANS - Generative Adversarial Networks

GANs are useful for unsupervised learning, an echelon above routine categorization tasks. They typically have two parts, a Generator and a Discriminator. The Generator creates an output, often an image, and the Discriminator decides whether the image is plausible according to its training. In the MNIST example below, the gist of the program is, "Draw something that looks like a number". In an interesting limitation, the program does not know the value of the number, only that the image looks like a number. Of course it would be a quick trip to a trained CNN to get the number recognized.
GAN Instructed to "Draw Something That Looks Like A Number"

Conclusion

This short note details several approaches to, and applications of, machine learning. I hope you found it interesting. For more information, just follow the links above.