Read Latex

Friday, April 01, 2022

How to Get a Factor of Ten Speedup In Google Colab Jupyter Notebooks and Other Tricks

I don't work for Google. I'm just a Ph.D. student trying to get by. This evening at 1 AM,  I was working with an autoencoder example that took a long time to train. Autoencoders are cool, but I don't have all day to wait around.

I use Google Colab for Python Jupyter Notebooks because I don't have to install any software on my machine. This also saves gobs of time and disk space.

Google maintains a host of library versions, ensuring compatibility, which is an enormous convenience since machine learning changes by the day, even by the hour.

Google Colab is technically free, but $10 a month buys you access to GPUs, TPUs, and the promise that your job will actually finish. I figure it's my tithe to Google for all the good they do.



At the top of the notebook is the Runtime menu; go to the bottom item (yellow arrow):


When you run a notebook, you have the choice to use a bare CPU, a GPU, or a TPU, and in addition you can request extra RAM using the questionably named  'Runtime shape' menu:


The option dialogs look like this:



but don't use Standard.

I was curious, for my autoencoder experiments, which configuration of devices was the fastest. Intuition would say GPU and extra RAM, but I don't trust my intuition when I can just measure something and know for sure. Here is the data generated by logging all possible combinations. You have to restart and run all to make sure that runtime configuration changes stick.



To avoid analysis paralysis, these values are thrown into a Python dictionary, with a quick crunch to compute the mean, round the results, and sort them from fastest to slowest:
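Something like this minimal sketch does the crunch (the timing values below are placeholders for illustration, not the measured numbers from the table above):

from statistics import mean

# seconds per training run, three runs per configuration (placeholder values)
times = {
    "GPU, high RAM": [52, 54, 53],
    "GPU, standard": [60, 61, 62],
    "TPU, high RAM": [118, 120, 122],
    "TPU, standard": [590, 600, 610],
    "CPU, high RAM": [480, 490, 488],
    "CPU, standard": [505, 510, 500],
}

# mean of the three runs, rounded, sorted fastest to slowest
for seconds, config in sorted((round(mean(v)), k) for k, v in times.items()):
    print(f"{config:15s} {seconds:5d} s")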




Intuition turned out to be right, but there was a surprise. A TPU with a standard memory configuration was the slowest. This is a 'for-sure' since each case was run three times to account for process noise. I could have easily convinced myself that this was a reasonable choice and taken TEN times longer to get done.

So GPUs with extra RAM are the fastest by a factor of ten for my particular problem, which is a fairly run-of-the-mill machine learning task.

Other Tricks
1) Running Colab in a Chrome incognito window starts up much faster, for reasons I do not understand. The difference is significant. I don't have time to chase it down either, so I wrote the billing department at colab-billing@google.com. That is a lot faster than trying to get tech support, for reasons I completely sympathize with. But hey, I'm a paying customer.

2) To have Colab show cell execution times automatically, insert this code at the top of the notebook:
!pip install ipython-autotime
%load_ext autotime

Wednesday, March 30, 2022

A Note on Clarity in Machine Learning

We live in an internet of information and an internet of things.

 

The trifecta of childhood dreams, rockets, robots and radio has never seen a

brighter time. They are the three R's for the digital age.

 

Oz is giving the Tin Man something he didn't already have: an artificial brain.

 

Adding to the seismic shift we find ourselves in is the explosion of machine learning (ML) techniques. There are two kinds of detonation taking place.

 

The first is the increase in the number of fundamental algorithms.

 

The second is in permutations or variants of those algorithms.

 

Consider the garden of fundamental algorithms:

 

1. Linear Regression (Supervised ML)
2. Logistic Regression (Supervised)
3. Decision Tree (Supervised)
4. Random Forest – Ensemble of Decision Trees (Supervised)
5. Support Vector Machines – SVM (Supervised)
6. Naive Bayes (Supervised)
7. Gradient Boosting Algorithms – XGBoost (Supervised)
8. Convolutional Neural Network – CNN (Supervised)
9. Recurrent Neural Network – RNN (Supervised)
10. K-Nearest Neighbors – kNN (Supervised)
11. K-Means (Unsupervised)
12. Dimensionality Reduction: Principal Components Analysis – PCA (Unsupervised)
13. Generative Adversarial Networks – GANs (Both)
14. Reinforcement Learning – RL (Neither)
15. Attention Mechanisms that Prioritize Machine Learning Operations (Self-Supervised)

 

 

Consider the garden of variants for a specific ML device, the autoencoder:

 

1. Denoising Autoencoders
2. Sparse Autoencoders
3. Deep Autoencoders
4. Contractive Autoencoders
5. Undercomplete Autoencoders
6. Convolutional Autoencoders
7. Variational Autoencoders
8. Recurrent Autoencoders
9. SeqToSeq Autoencoders

 

For each kind of ML algorithm and variant, we require the following representations to understand them:

1. The underlying equations
2. The block diagram showing how the parts fit together
3. An outline of the code that computes them
4. An animation of the progression of the algorithm

 

For algorithm selection and cost estimation we would also like to know:

 

1. The time complexity
2. The space complexity
3. The loss function
4. The hyperparameters, learning rate, activation function, etc.

 

During the execution of an algorithm, it is useful to watch the loss function decrease, as this portrays the learning that is taking place.
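As a minimal, framework-free sketch of what that looks like, here is a toy gradient-descent loop that records the loss at each step so the decrease can be plotted (the function and learning rate are invented for illustration):

import matplotlib.pyplot as plt

# toy example: minimize f(w) = (w - 3)^2 by gradient descent and record the loss
w, learning_rate, losses = 0.0, 0.1, []
for step in range(50):
    losses.append((w - 3.0) ** 2)          # current loss
    w -= learning_rate * 2.0 * (w - 3.0)   # gradient step

plt.plot(losses)
plt.xlabel("step")
plt.ylabel("loss")
plt.title("Watching the loss decrease")
plt.show()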

 

Without these algorithm representations and the selection and cost-estimation criteria, we are just shooting in the dark. This is especially so when we are comparing our work with that of others.

 

So that is my note on clarity.

 

 

 

 

References

Different Types of Autoencoders

Feature Extraction by Sequence-to-sequence Autoencoder

Wednesday, March 09, 2022

Least Damaged World


As I write these words, lights blink in basements, garages, attics, and she-sheds, where millions of gaming and mining rigs crunch away on a single new problem: Least Damaged World.
Up till now these rigs were running Grand Theft Auto, Minecraft, PlayersUnknown, and hundreds of other sick titles happily warming the oceans.
But these machines and their highly skilled ops now run a sim that seeks to solve but one solicitation:
How do we save New York?
Why all that horsepower on such a short question? Because it’s not just New York. It’s London, Paris, Munich and everyone pop music. Let me talk about it [short on time]:
We’re 100 seconds to midnight and we can do anything we want with those few precious moments.
The thing we should do is answer the question, like so:
There is some order of nuclear exchanges, starting with the tactical one now aimed at Ramstein AFB, that kills the fewest people, and another set that kills the most people. Right now, no one on Earth knows the answer to that simple question. No brass, no fruit salad, no one. That question is identical to the first one. How do we save New York, first on the list?
But we could know it. It is knowable.
There is some best order of bartered exchanges, gained only through brute force simulation, that says what targets must be struck to save the Newest New Yorkers. This includes the lovely option of striking no one anywhere. This includes shooting the gun out of their cold dead hand. This includes shooting them in the face even, because no one holds a girlfriend by the neck but me.
Cut to the chase:
It unrolls like this. WormOfData Googles the nuclear inventories of all the card-carrying powers that be. DawnOfThunder puts them in the Unreal and Unity game engines. HiFionWiFi builds a Matrix quality sim good down to the dumpster decal. SereneDipity hacks a pure speed sim that trades away ray tracing in favor of getting the answer before 100 seconds are up.
ZipTie googles the fuzzed out regions to data-is-beautiful mine where the BIG ONES live, and which ones have their mouths open ready to barf up megadeath.
DateScroller divines the submarine inventory and Monte Carlos the favorite positions in an electrostatic LoveBoat episode of pissed off leaders in unhappy places.
SomebodySpecial forks a faster leaner version 12, built in as many days, outed to the world as open source.
You get the idea. Touch big red and watch the world end in a sim. Now change one fateful thing. Fewer people die. In a few million GPU hours you don't just have crypto coinage, you have saved New York, and a ton of all creatures great and small. Why? Because you and everybody else now know something that wasn't known before. What is the order, what is the optimal target list? Is the best thing to do nothing? What would have happened if we did nothing after 9/11? Would it be better than what we did? We don't know till the sims run. We just don't know. And second-order ignorance is an unhappy end, whether you're brass or fruit salad or no one like me.
What is the best thing [to do] after the First Strike?
That would make a nice name for Citizen SDI, but I’m sure it is already taken.
Is there even a world without New York? We tasted that one before…
- Van2022

Thursday, December 16, 2021

 

Personal Reflections on the Design of the Webb Space Telescope


L. Van Warren, MS CS, AE; PhD Candidate, CS




4.4% Bounce Cost

The Webb Space Telescope has an optical path consisting of a concave primary mirror, a convex secondary mirror, a concave tertiary mirror, a flat steering mirror and a final focal surface. A ray of light impinging on the primary mirror thus has 4 reflections before final instrument entry on bounce 5. The reflectance of each mirrored surface has been measured to be within a neighborhood of 98.5%. Multiplying the reflectance across the four bounces gives us a final effective signal strength of 94.1%.
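A quick sketch to check the arithmetic, using the 98.5% per-bounce reflectance quoted above:

reflectance = 0.985              # measured reflectance per mirrored surface
bounces = 4                      # primary, secondary, tertiary, steering mirror
throughput = reflectance ** bounces
print(f"effective signal strength: {throughput:.1%}")   # ~94.1%
print(f"bounce cost: {reflectance - throughput:.1%}")    # ~4.4%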

Contrast the current design with one that stations the instrument entry in place of the secondary mirror. This would enable an improvement of 98.5 - 94.1 = 4.4% in signal strength, corresponding to an equivalent effective surface area increase, or a corresponding weight reduction if the primary mirrors were scaled down. It would also result in a significant reduction in mechanical complexity and cost if the three intervening mirrors were eliminated, at some cost in versatility.

The convexity of the secondary mirror implies that the primary would also have to be reground if it were to immediately feed an instrument focal plane, which itself could be a difficult endeavor. So that is item one.

Solar Panels vs. RTG

The Voyager spacecraft have exhibited longevity that exceeds 44 years, due in part to the use of Radioisotope Thermoelectric Generators, whose performance is not dependent on solar distance. The planned orbit of Webb around the L2 Lagrangian point is a million miles beyond the Earth, about 1 percent farther from the Sun than the Earth itself, so the solar irradiance is like Earth's. I wonder if solar panels will have the longevity of RTGs. In any case, fuel to remain on station about L2 and to unload the reaction wheels would seem to be the factor limiting telescope lifetime, rather than the power source. I also wonder if an ion thruster could offset the need for expendable fuel, resulting in increased spacecraft lifetime.




Wednesday, June 24, 2020

Machine Learning is the New Timesharing or Give the Dog a Head




"We write about the past to discover the future"

Despite the ongoing tragedy of the pandemic, we continue to live in a period of remarkable technical advance. Fortunately our society has advanced to the point where even when confined to home, we can continue to innovate. If we let our minds run a little bit, I wonder what we can come up with. 

Any innovation these days is likely to involve both collaboration, and the most modern arrays of hardware that can be assembled. A certain proverb says, "Many hands make light the work". This is true for both processors and people. Both get viruses, but I digress.

For the sake of argument, a euphemism I use for stimulating discussion, let's assume someone has plunked down in front of us the fastest airplane money can buy. We immediately ask ourselves, "Where could we go, and more importantly, where should we go with it?"

John F. Kennedy said, “For of those to whom much is given much is required”, echoing the writings of Luke the Physician who said “For unto whomsoever much is given, of [them] shall be much required.”

The question is, "How can we bring the most benefit into the world, from the gifts we have been given?". Among these gifts is our ability to reason and communicate nearly instantly worldwide.

Three Observations Motivated by Personal History

1) Time Sharing is Back
I began my computing career with the help of Dr. Carl Sneed, an associate professor at the University of Missouri, one of five I attended over the years. I had signed up for an introductory computing course which was taught on the IBM 360 TSO mainframe in 1975.

IBM 360 with peripherals

Dr. Sneed was kind enough to walk me through the following process:
Dr. Carl Sneed, University of Missouri

a) Write one's Fortran IV program on 80-character paper.
IBM Coding Paper

b) Transfer each line on the paper to a punched card, using a punched card machine that announced each character with a kerchunk, like a sewing machine that has placed a stitch.
The IBM 029 Card Punch

c) Place the deck of cards on the card reader.
IBM System/360 Model 20 Card Reader

d) Press the button which made a Las Vegas card fanning sound as the deck was read.

Card Reader Panel Buttons

e) In those days, the size of one's deck was very much the status symbol, but I digress.
Card Deck


f) I specifically remember two programming assignments I had to get running:
      - The 3,4,5 triangle problem
     - The parabola problem

The parabola problem was the most important to me personally, having grown up in a family where such figures were important. The assignment did not ask for it, but I was compelled, even obsessed by the unassigned task of DRAWING the parabola whose roots were computed by the program. This drawing took place on a Calcomp plotter.
Calcomp 565 Plotter

Despite multiple attempts, I never succeeded in completing this task on the IBM 360, but the drive to do it never left me. It became a central focus of all my future computing and led me from aerospace engineering to computer science to computer graphics at the University of Utah.
It eventually would result in this, which you can click on if you like animations.



Various Animations

g) After the card deck was read, the next activity I clearly remember was the WAIT.

One had to wait to collect the printout that resulted from the execution of one's program to find out if it had functioned correctly, or even at all.
Wide form computer printout

Like the Tom Petty song, waiting was the hardest part. This would range from 5 minutes on a good day, to 30 minutes or even, "Pick it up tomorrow" on a busy day.

h) In those days, the priority with which one's jobs ran was very much a status symbol, but I digress.

i) On obtaining the tree-consuming fan-folded printout of nearly poster size proportion, one would deduce, usually in seconds, any shortcoming the program had, which would lead to a repetition of the steps above.

Now why do I present, in such excruciating detail, the above series of steps? Because if we skip over the personal computing revolution to the current state of machine learning we find we have arrived at the same place again.

Enter Machine Learning
Fast forward 45 years. Besides all the mish-mash of algorithm design and coding, machine learning (ML) consists of three principal steps:
    1) Training the neural network from data
    2) Testing the neural network on data
    3) Deploying the resulting inference engine for general use
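As a minimal sketch of these three steps, here is an example using scikit-learn purely for illustration (the dataset, model, and file name are stand-ins, not any particular production pipeline):

from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
import joblib

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = MLPClassifier(max_iter=500, random_state=0).fit(X_train, y_train)  # 1) train
print("test accuracy:", model.score(X_test, y_test))                       # 2) test
joblib.dump(model, "model.joblib")                                         # 3) deploy: save for serving

Even in this toy case, step 1 dominates the wall-clock time.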

The most time-consuming step by far is training the network. The Waiting problem has reappeared, since for most problems of current interest, training networks cannot realistically be done on a user's personal computer in a reasonable amount of time. So it has to be farmed out to a CPU, TPU, GPU, or APU in the cloud via Microsoft Azure, IBM Cloud, Google Cloud, Amazon Web Services, and the like. The machines that execute the jobs sit in racks and those racks sit in server farms.

They process our jobs and we wait and we pay. An example of a massively parallel job is GPT-3, a language inference engine that has 175 billion weights in its neural network and cost an estimated $12 million to train.

So to follow Dr. Sneed's kind example, how do we make machine learning as easy as possible to learn and execute? How can we minimize the number of steps, the administrative and emotional overhead necessary to appropriate ML into our computational lives? ML is already available on demand using services like Google Home Assistant, Microsoft Cortana, Apple Siri, and Amazon Echo. These enable positively useful C3PO-like conversations with machines, whose only lack is a robotic delivery mechanism.

C3PO - Ready to Answer Questions

Transforming the current generation of personal assistants into more robotically enabled ones would seem to be a natural direction for growth and development. At this writing, one can already purchase a robotic canine from Boston Dynamics for $75,000 USD. A Google Assistant to use for a head is three orders of magnitude less expensive, $29 USD at this writing. So there is one idea.


FrankenSpot = Spot + Google Assistant


So that would be one interesting project, although I personally would prefer a more anthropomorphic version since hands come in handy for robotic assistants.

Thursday, July 11, 2019

Teleportation on ARRL Field Day 2019, a Recapitulation.



Introduction

Given the precarious nature of conditions in ourselves and the world, we never know when the current song is our swansong. Nonetheless we forge ahead, looking for interesting moments of personal discovery. In ham radio everyone is good at something and all the people I know in it are much more qualified in the many subspecialties than myself. One thing I like to do is listen. My wife might dispute this, understanding though she is.

Background

Last year I attempted to monitor our Field Day 2018 communications using a Software Defined Radio that was deployed on site. My hope was that by visualizing ham band traffic, we could make more contacts. Making more contacts is the principal figure of merit for Field Day performance. There is no particular depth of relationship between transcontinental operators. Let me momentarily illustrate the situation. The conversation typically goes like this:

Operator A: CQ Field Day, CQ Field Day, this is operator K1AA.
Operator B: CQ Field Day, CQ Field Day, this is operator K2BB.

Operator A: Roger operator K?BB, My callsign is K1AA, the class is 1A and my section is AR.
Operator B: Roger operator K2??. Could you please repeat your callsign, class and section?

Operator A: Yes, they are K1AA, 1A, Alpha Romeo.
Operator B: Thank you. Please copy K2BB, 2B, and TX. Have a good field day.

Operator A: Roger K2BB. (static and whistling noises)

This terse back and forth is usually the minimum necessary for a two-way contact to be logged and recorded. Because of the pure noise and vagaries of analog single sideband transmission, it usually takes several attempts for each party to provide their callsign, section and class for the logs, data now collected with "logging software". But it remains a century-old fascination that we can communicate over long distances without wires.

Let me amplify one point. In amateur radio, Field Day activity is effectively zero relationship. There is no "Are you okay?" or "How are you doing?" There is no "Hey, what's the weather where you are?" No, this is purely distilled state information that has been completely sterilized of the contamination of social information. Contrast this with present-day social media, which is nothing but personal information, and sometimes a bit too much of it! Field Day is about Scalps without Faces. It is a Trophy Hunt for Hits, a Safari for Signs, a Contest where only the number of contacts matters, not what content was exchanged. The tacit contract is, "Well, we could have done that if we wanted to; the radios, after all, are working!" It is an Olympic effort where only numbers matter. As such there is a strange emptiness in the midst of all the chatter, like the weather itself.

Compensating for this angst is the coterie of operators who sit in a circle, away from their radios, eating, drinking and rag chewing about the news of the day, perhaps talking about rigs, antennas and the technology of radio. It is in these local circles that the deepest relationships are built.


Before Teleportation comes Strategy



There are two well-known strategies in Field Day communications. In the first, an operator sits on a frequency and lets people come to them. This is called, "Running". In the second the operator searches the bands for new contacts in a strategy called "Search and Pounce".

In prior times, this search for contacts was entirely "by ear". The operator would slowly scan a frequency band looking for people who happen to be transmitting at that moment AND who were also unoccupied. If they were busy in a conversation, a waiting line of those ready to pounce would form, adding to the cacophony of noise.

The advent of Software Defined Radio is changing this hit and miss style of brute force search into a more deterministic process, at least, that is my current working belief.

The waterfall displays of SDR (shown above) enable us, at a glance, to observe all the traffic that is in play, "by eye". We can then select potential contacts using criteria such as signal strength and operator consistency. But for operators who have spent years training their ears, change can be hard!

Besides showing all available traffic instantaneously, waterfall displays show how busy the band is, and also how good signal propagation conditions are. These conditions can vary significantly over the course of the day, and over longer time frames due to solar effects on the ionosphere which has its own "weather".

Limitations of Previous Attempts

In my efforts last year to implement SDR signal spotting, or "fish finding", four limitations were encountered.

  1. The SDR hardware required an additional computer and source of power.
  2. The SDR loop antenna characteristics did not match those of the main radios, which used an antenna called a G5RV. This meant the SDR operator and the contact operators were hearing and seeing completely different views of the signal space. Using a duplicate G5RV for the SDR was likely to fry the SDR receiver and possibly its host computer due to the strength of nearby transmissions.
  3. Even with a different antenna, transmission by any local Field Day operator would completely swamp the SDR input during the call. For those operating in digital modes, their local output obliterated all incoming signals about half the time, significantly limiting the usefulness of the signal spotter to less than a 50% duty cycle.
  4. Traditional operators like to search "by ear" rather than using the SDR to search "by eye". Initially operators would be enthusiastic, but under the grind of a long field day, they would revert to habits honed over long years of operating.

A New Approach Based on WebSDR

In an attempt to overcome these limitations, a different approach was taken this year. This attempt was greatly facilitated by Tim Lee and John Nordlund. Tim is a Marlboro man of amateur radio, appearing at the most helpful of moments. John is an encyclopedia of radio and electronics knowledge, which is dispensed on demand to those like me who were under the mistaken belief that they already knew something. Most of the time some unanticipated nuance is learned. I had initially planned on running the SDR from my home and remotely monitoring the signal landscape using an internet link. But Tim and John suggested I could use WebSDR, which would enable me to monitor our signals from several geographically distributed locations. It took me a while to parse their suggestion into a form I could use, but this approach proved most effective and possibly revolutionary. Further, it has significant headroom for future improvement. It could change amateur radio.

Using WebSDR as the fulcrum for this year's experiment I established five concurrent sessions with Software Defined Radios in:

  • Double Oak, Texas, operated by Larry Story, W5CQU
  • Washington DC Area operated by Mehmet Ozcan, NA5B
  • Dahlonega, GA, operated by Phil Heaton, NY4Q
  • Corinne, Utah, operated by Clinton Turner, KA7OEI
  • Half Moon Bay, California USA, operated by Craig McCartney, W6DRZ

The open source WebSDR software can be set up by anyone using inexpensive hardware and a web server. It can also be customized for specific locations and radio configurations. Here is a typical installation operating on the 40-meter band:




The Teleportation Part

During the course of operations, it was easy to tell if we were heard at the remote locations, and whether operators at those locations were audible to us. This moment of determination resulted in a weird cognitive experience that I can only describe as a sense of teleportation, similar to what has been described for binaural radios. I could experience the activity of two operators at two locations simultaneously, in a peculiar form of binocular vision/audition that is easier and more spectacular to experience than to explain. Tweaking the remote vs. local experience is a bit of art I have only begun to explore. It is like being two different people at the same instant, in two different places, hearing two other people and their respective renderings in the fog of noise and single sideband. One distinctly interesting aspect of this experience is that one can recognize a familiar voice even if it has been distorted in tone, timbre and frequency. This distortion comes from the signal being encoded, transmitted and decoded by an analog system. The simultaneity of this creates a peculiar cognitive dissonance, the source of the deja vu feelings of teleportation suggested previously.

Let me describe the hardware of this experience. While I, the teleportee, am observing the visual signals of the conversation on the SDR waterfall display, I am wearing headphones with one ear up, like a Cessna pilot who is monitoring an aviation radio simultaneously with conversations in the aircraft. So, in one ear I am hearing our local operator locally, and in the other ear I am hearing them remotely. Then, in complement, I hear the remote operator locally on the local operator's rig and the remote operator remotely on the software defined radio. All this while tuned to their 3 kHz-wide chunk of radio frequency. This whole sonic and visual arrangement pivots back and forth, where local and remote conversation halves are experienced from both the local and remote points of view simultaneously!

For our work this weekend, I was surprised by the low latency of the communication. Conversations quite nearly matched in time, but not in tone, timbre and frequency. It was interesting that despite the aural distortion, you could recognize a familiar operator. The low latency was especially surprising given the hack job of an internet connection I had cobbled together on short notice. Our ham club facilitator Dick Wallace made efforts to gain a proper wireless connection to a neighboring agency that was sponsoring the event. But due to various (and perfectly reasonable!) security restrictions, no wireless internet connections were available.

The search however did allow us the benefit of a half-hour of undeserved air-conditioning on a hot and muggy Arkansas day. In desperation I finally set up my iPhone as a personal hotspot, using Cellular Data to accomplish the web connection. This web connection enabled the five concurrent SDR sessions with few if any drop-outs of audio or waterfall display. As the afternoon turned to night, I imagined that I would return home to an unhappy wife holding a cell phone data bill of gargantuan proportions. Instead I was pleasantly surprised to find out that after all this DXing and teleportation I was still within the confines of my data plan allowance. My wife is very understanding of my experimentation and I would be remiss in not saying so.



Summary

In summary, this year's approach of multiple remote SDRs accessed via a web link solved and eliminated three of the four limitations encountered in the previous year's exploration.

It partially reduced the fourth limitation as well.

The fourth limitation was reduced to the point of near elimination by collocating the SDR fish-finder op and the SSB operators at the same table with our respective hardware. This eliminated the need for 2-meter coordination chatter on the handheld radios and allowed for the simultaneity of the audio that produced the teleportation experience. This also solved the see-versus-hear problem.

There is more potential for automation of SDR-enhanced signal spotting, also known as letting the machine log the big numbers. We might want to consider a local WebSDR installation.

Footnotes and Acknowledgements

There were many contributions made by others that made this Field Day 2019 a success. One that affected me personally was that Roger Fidler provided fabric floors for the tents that provided excellent insect abatement, specifically chiggers. The Carolina Windom antenna was spectacular in the way it was deployed with a portable mast and a flagpole. Roger provided hydration that prevented heat stroke and Tim Lee provided pizza that prevented collapse.

At one point we had a telecommunications wonderland in play. We had the operators and their operations on the 7 and 14 Megahertz bands (40 and 20 meters). We had handhelds at 146 Megahertz (2 meters) for tent-to-tent SDR-spotting communication. We had the 2 and 4 Gigahertz bands (15 and 7.5 centimeters) running the wireless internet part of the operation. All these were running simultaneously to accomplish our first signal spotting of this year.

Disclaimer

These comments reflect the experiences, viewpoints and opinions of the author only and not necessarily those of any agency or individual connected with the event. They are offered without warranty of merchantability or fitness for any purpose expressed or implied.

Sunday, July 07, 2019

A Fresh New Web Site in 96 lines of Python

It was time to rebuild my website to make it, "responsive", which is code for, "works on phone and desktop".

My site uses a simple structure of three buttons in sequential rows. Three is a good number for remembering and choosing things. It works well for both mobile and desktop applications.

I began forging ahead on this, by accident, after I saw that Adobe Dreamweaver supports bootstrap.

So I started the one hour course on bootstrap, but didn't finish it BECAUSE:

While I was watching scrimba (which is fantastic by the way, because you can stop the video and edit the code they are showing you in real time on the screen) I thought, "What if bootstrap isn't the best way to do this?"

So then I googled some reviews and found the scrimba course on CSS Grid, which is even more fantastic - for my purposes it was cleaner than bootstrap. Thank you Per Harald Borgen Co-founder of Scrimba! He is very clear.

In the meantime, I asked some computer experts, who are also my kids, whether I was headed in the right direction.

By the time they got back to me that same evening, I had already learned CSS Grid on Scrimba and was immediately starting to code up a prototype. I took the course in the listed 62 minutes, but was already producing useable code in 20 minutes. This of course blew my mind.


Then I found out I could transfer my scrimba notes into this JSFiddle and get a working version while viewing the HTML and CSS at the same time. Because CSS Grid is so powerful I didn't need any JavaScript in this case. JSFiddle enabled me to preview my hacks in real time. When I got that running I could just dump the prototype code into Dreamweaver and try it out. (Sad story: Dreamweaver tried to rewrite my code and broke it in the process so I told it not to do that by setting the "Rewrite my code" preferences to "Off"!)

As the complexity of my new site escalated I needed to generate more HTML automatically. The CSS didn't need to be changed at all. To generate said HTML I used Python running in a Jupyter Notebook. This decision was really an important one because I could have written a 'C', Java, or JavaScript program, or I could have written a Unix shell script, or a sed/awk job and so forth. Python in Jupyter was the right decision.

I was able to build a little hack previewer into Jupyter because the libraries for Python are so rich.



So in short the project went a little faster than I expected. My new site, https://wdv.com, is up and running if you want to try it. You can stop reading now unless you like details and statistics.

By way of complexity here are the word, line and character counts:
Old site HTML file: 134 words, 878 lines, 14468 characters
New site HTML file: 237 words, 465 lines, 8244 characters
New site CSS file:  120 words, 276 lines, 2533 characters

Thus the new site is quite a bit simpler if we're counting lines and characters. It loads faster too.

The Python Jupyter Notebook "make-web-site" generates the site from scratch using simple rules:

make-web-site does an external ls, and enumerates a list of directories that contain content.
It prints them in triples, and these can be compared to the current version of the site.



The enumerated list is used to build the index.html for that directory using a couple of Python functions:


These functions are then called to generate each row of buttons for the site:
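The original functions appeared as notebook screenshots; here is a rough, hypothetical sketch of what such helpers might look like (the names, the .jpg extension, and the details are my own guesses, not the original code):

import os

def content_dirs(path="."):
    # content folders start with an uppercase letter (see the assumptions below)
    return sorted(d for d in os.listdir(path)
                  if os.path.isdir(os.path.join(path, d)) and d[0].isupper())

def button(folder, image_dir="images"):
    # one linked button per folder, with a matching image in the images directory
    return (f'<a href="{folder}/index.html">'
            f'<img src="{image_dir}/{folder}.jpg" alt="{folder}"></a>')

def button_rows(folders, per_row=3):
    # group the buttons in triples, one div per row for CSS Grid to lay out
    rows = []
    for i in range(0, len(folders), per_row):
        cells = "\n  ".join(button(f) for f in folders[i:i + per_row])
        rows.append(f'<div class="row">\n  {cells}\n</div>')
    return "\n".join(rows)

print(button_rows(content_dirs()))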



The resulting html can be copied straight out of the notebook and dumped into Dreamweaver or any other tool. Then the HTML and CSS files are uploaded to the server.



The look and feel of the website uses CSS Grid and is contained in the file index.css.
Watch the scrimba to figure that out.

Assumptions:
1) That each folder is the location of content and contains an index file
2) That there is an images directory that contains an image file for each folder
3) We will assume that each content directory starts with an uppercase letter
4) We will assume that the images directory starts with a lowercase letter and is named, "images".

These assumptions can be violated by giving different arguments to the functions:

That's it!

Now this was just the index file of a thousand page website, so obviously there is more to do, but I was happy with this approach.

Saturday, May 11, 2019

Friction In Space: The Statistical Mechanics of a Certain Missing Quora Question

I spent a good chunk of this evening, around eight o'clock, answering a question on Quora. Quora is a source of a host of bizarre and interesting questions. After answering the question with the most clarity I could, I thought I posted it, but it was sucked into a vacuum, never to be found through any amount of indexing or searching.

The question was, "Is there friction in space?"

I really liked the question, so much that I am devoting a blog entry to it.

The first task is to reconstruct my original answer, which goes something like:

Yes, there is friction in space because space is not empty. There is about 1 hydrogen atom per cubic centimeter of space, more or less.

When a hydrogen atom collides with an object there is an exchange of kinetic energy that obeys the rules of conservation of momentum. Part of this energy is radiated away as heat, as photons whose wavelength, frequency, or color is distributed according to the energy associated with the collision. Part of this energy results in a change in direction, however slight, of the two parties involved in the collision. The outcome of the collision can be represented as a pair of vectors, each with a direction and magnitude.

There is also a 'relative wind' effect. The ensemble of hydrogen atoms that the object collides with has an average velocity that can be moving along with the object (running, in nautical terms), against the object (upwind), crosswind, or some combination. This relative wind will affect the spectrum of the photons radiated by the travel of the object, red-shifting it when the object is running, blue-shifting it upwind, or some combination for the crosswind.

When an object reenters the atmosphere the density of the gases it encounters increases rapidly and heat is dissipated as the, 'searing heat of reentry'.

If the object was initially launched from Earth, it must dissipate, as heat, the kinetic energy associated with putting it in orbit in the first place. Fire on the way out, fire on the way back.

Curiously, only about a tenth of the energy of placing an object in orbit has to do with achieving the height of its orbit. The other nine-tenths is expended in achieving the velocity of its orbit. This is why, if you look at the exhaust plume of a rocket, it gently curves. Early in the launch it is desirable to reduce aerodynamic drag by going straight up. Later in flight the rocket has to tilt towards the horizon to achieve orbital velocity parallel to the surface of the Earth at its orbital altitude.
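A rough back-of-the-envelope check of that tenth, per kilogram of spacecraft, using approximate low-Earth-orbit numbers:

g = 9.81          # m/s^2, surface gravity, treated as constant for a rough estimate
h = 400_000       # m, a typical low-Earth-orbit altitude
v = 7_800         # m/s, a typical low-Earth-orbit speed

potential = g * h            # J/kg needed to climb to altitude
kinetic = 0.5 * v ** 2       # J/kg needed to reach orbital speed
print(f"altitude's share of the energy: {potential / (potential + kinetic):.0%}")  # roughly a tenth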


There is a term, "Max Q", meaning "maximum dynamic pressure" on the vehicle, that you may have heard in connection with rocket launches. As a rocket ascends vertically, two things are happening: the atmosphere is rapidly getting thinner, that is, the density is decreasing rapidly, and the speed of the rocket with respect to that atmosphere is increasing rapidly.

There comes a point at which the product of these two quantities reaches its maximum and the structural forces on the rocket are at their maximum. This is the Max Q point. Q stands for dynamic pressure, which is half the product of the density and the square of the velocity.
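In symbols, q = ½ ρ v². Here is a toy sketch that pairs a simple exponential-atmosphere density model with a made-up speed profile, just to show how the product rises and then falls (both models are illustrative, not flight data):

import math

def density(h):                        # kg/m^3, simple exponential atmosphere approximation
    return 1.225 * math.exp(-h / 8500.0)

def speed(h):                          # m/s, invented ascent profile for illustration only
    return 50.0 + 5.0 * math.sqrt(h)

q_max, h_max = 0.0, 0
for h in range(0, 60_000, 500):        # altitude in meters
    q = 0.5 * density(h) * speed(h) ** 2    # dynamic pressure q = 1/2 * rho * v^2
    if q > q_max:
        q_max, h_max = q, h

print(f"Max Q of about {q_max / 1000:.0f} kPa near {h_max / 1000:.1f} km in this toy model")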

The aft end of the rocket has a characteristic spectrum or heat signature and the friction between the surface of the rocket and the atmosphere can heat it considerably.

Because of staging, much of the energy associated with delivering the spacecraft to orbit is lost to the stages that got it there. The spacecraft only retains the kinetic energy associated with its own mass, and the potential energy associated with its mass and altitude. Typically only 1 to 3% of the mass of the original rocket, the 'throw weight', makes it to orbit. But there is still a lot of energy that has to be radiated as heat when it returns, thus 'the searing heat of reentry'.

Because of aerodynamic drag, or the 'friction' of the original question, everything is destined to reenter after some period of time.

There is a rule in physics that any time a charged particle is accelerated, it is obligated to radiate, emitting photons whose energy corresponds to the acceleration it underwent.

This means that spacecraft flying through the ionosphere (and nearly all of them are since it extends to 1000 miles above the surface of the earth) are causing emissions of thermal radiation corresponding to the kinetic energy of the collisions or 'friction' occurring at their specific altitude and velocity.