
Wednesday, March 09, 2022

Least Damaged World


As I write these words, lights blink in basements, garages, attics, and she-sheds, where millions of gaming and mining rigs crunch away on a single new problem: Least Damaged World.
Up till now these rigs were running Grand Theft Auto, Minecraft, PlayersUnknown, and hundreds of other sick titles happily warming the oceans.
But these machines and their highly skilled ops now run a sim that seeks to solve but one solicitation:
How do we save New York?
Why all that horsepower on such a short question? Because it’s not just New York. It’s London, Paris, Munich and everyone pop music. Let me talk about it [short on time]:
We’re 100 seconds to midnight and we can do anything we want with those few precious moments.
The thing we should do is answer the question, like so:
There is some order of nuclear exchanges, starting with the tactical one now aimed at Ramstein Air Base, that kills the fewest people, and another order that kills the most. Right now, no one on Earth knows the answer to that simple question. No brass, no fruit salad, no one. That question is identical to the first one: How do we save New York, first on the list?
But we could know it. It is knowable.
There is some best order of bartered exchanges, gained only through brute force simulation, that says what targets must be struck to save the Newest New Yorkers. This includes the lovely option of striking no one anywhere. This includes shooting the gun out of their cold dead hand. This includes shooting them in the face, even, because no one holds a girlfriend by the neck but me.
Cut to the chase:
It unrolls like this. WormOfData Googles the nuclear inventories of all the card-carrying powers that be. DawnOfThunder puts them in the Unreal and Unity game engines. HiFionWiFi builds a Matrix quality sim good down to the dumpster decal. SereneDipity hacks a pure speed sim that trades away ray tracing in favor of getting the answer before 100 seconds are up.
ZipTie googles the fuzzed out regions to data-is-beautiful mine where the BIG ONES live, and which ones have their mouths open ready to barf up megadeath.
DateScroller divines the submarine inventory and Monte Carlos the favorite positions in an electrostatic LoveBoat episode of pissed off leaders in unhappy places.
SomebodySpecial forks a faster leaner version 12, built in as many days, outed to the world as open source.
You get the idea. Touch big red and watch the world end in a sim. Now change one fateful thing. Fewer people die. In a few million GPU hours you don't just have crypto coinage, you've saved New York, and a ton of all creatures great and small. Why? Because you and everybody else now know something that wasn't known before. What is the order, what is the optimal target list? Is the best thing to do nothing? What would have happened if we had done nothing after 9/11? Would it be better than what we did? We don't know till the sims run. We just don't know. And second order ignorance is an unhappy end, whether you're brass or fruit salad or no one like me.
What is best thing [to do] after the First Strike?
That would make a nice name for Citizen SDI, but I’m sure it is already taken.
Is there even a world without New York? We tasted that one before…
- Van2022

Thursday, December 16, 2021

 

Personal Reflections on the Design of the Webb Space Telescope


L. Van Warren, MS CS, AE; PhD Candidate, CS




4.4% Bounce Cost

The Webb Space Telescope has an optical path consisting of a concave primary mirror, a convex secondary mirror, a concave tertiary mirror, a flat steering mirror, and a final focal surface. A ray of light impinging on the primary mirror thus undergoes 4 reflections before final instrument entry on bounce 5. The reflectance of each mirrored surface has been measured to be in the neighborhood of 98.5%. Compounding that loss over the four bounces (0.985⁴) gives a final effective signal strength of 94.1%.
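The arithmetic is small enough to sketch, using the per-surface reflectance quoted above:

```python
reflectance = 0.985   # measured per-surface reflectance
bounces = 4           # primary, secondary, tertiary, steering mirror

signal = reflectance ** bounces      # ~0.941 effective signal strength
improvement = reflectance - signal   # ~0.044, the 4.4% "bounce cost"
```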

Contrast the current design with one that stations the instrument entry in place of the secondary mirror. This would enable an improvement of 98.5 - 94.1 = 4.4% in signal strength, corresponding to an equivalent effective surface area increase, or a corresponding weight reduction if the primary mirror were scaled down. It would also result in a significant reduction in mechanical complexity and cost if the three intervening mirrors were eliminated, at some cost in versatility.

The convexity of the secondary mirror implies that the primary would also have to be reground if it were to feed an instrument focal plane directly, which could itself be a difficult endeavor. So that is item one.

Solar Panels vs. RTG

The Voyager spacecraft have exhibited longevity exceeding 44 years, due in part to the use of Radioisotope Thermal Generators, whose performance is not dependent on solar distance. The planned orbit of Webb around the L2 Lagrangian point is a million miles from Earth, about 1 percent farther from the Sun than the Earth itself, so the solar irradiance is close to Earth's. I wonder if solar panels will have the longevity of RTGs. In any case, the fuel required to remain on station about L2 and to unload the reaction wheels would seem to be the factor limiting telescope lifetime, rather than the power source. I also wonder if an ion thruster could offset the need for expendable fuel, resulting in increased spacecraft lifetime.




Wednesday, June 24, 2020

Machine Learning is the New Timesharing or Give the Dog a Head




"We write about the past to discover the future"

Despite the ongoing tragedy of the pandemic, we continue to live in a period of remarkable technical advance. Fortunately our society has advanced to the point where even when confined to home, we can continue to innovate. If we let our minds run a little bit, I wonder what we can come up with. 

Any innovation these days is likely to involve both collaboration and the most modern arrays of hardware that can be assembled. A certain proverb says, "Many hands make light work". This is true for both processors and people. Both get viruses, but I digress.

For the sake of argument, a euphemism I use for stimulating discussion, let's assume someone has plunked down in front of us the fastest airplane money can buy. We immediately ask ourselves, "Where could we go, and more importantly, where should we go with it?"

John F. Kennedy said, “For of those to whom much is given much is required”, echoing the writings of Luke the Physician who said “For unto whomsoever much is given, of [them] shall be much required.”

The question is, "How can we bring the most benefit into the world, from the gifts we have been given?". Among these gifts is our ability to reason and communicate nearly instantly worldwide.

Three Observations Motivated by Personal History

1) Time Sharing is Back
I began my computing career with the help of Dr. Carl Sneed, an associate professor at the University of Missouri, one of five I attended over the years. I had signed up for an introductory computing course which was taught on the IBM 360 TSO mainframe in 1975.

IBM 360 with peripherals

Dr. Sneed was kind enough to walk me through the following process:
Dr. Carl Sneed, University of Missouri

a) Write one's Fortran IV program on 80-character paper.
IBM Coding Paper

b) Transfer each line on the paper to a punched card, using a punched card machine that announced each character with a kerchunk, like a sewing machine that has placed a stitch.
The IBM 029 Card Punch

c) Place the deck of cards on the card reader.
IBM System/360 Model 20 Card Reader

d) Press the button which made a Las Vegas card fanning sound as the deck was read.

Card Reader Panel Buttons

e) In those days, the size of one's deck was very much the status symbol, but I digress.
Card Deck


f) I specifically remember two programming assignments I had to get running:
      - The 3,4,5 triangle problem
     - The parabola problem

The parabola problem was the most important to me personally, having grown up in a family where such figures were important. The assignment did not ask for it, but I was compelled, even obsessed by the unassigned task of DRAWING the parabola whose roots were computed by the program. This drawing took place on a Calcomp plotter.
Calcomp 565 Plotter

Despite multiple attempts I never succeeded in completing this task on the IBM 360, but the drive to do it never left me. It became a central focus of all my future computing, and led me from aerospace engineering to computer science to computer graphics at the University of Utah.
It eventually would result in this, which you can click on if you like animations.



Various Animations

g) After the card deck was read, the next activity I clearly remember was the WAIT.

One had to wait to collect the printout that resulted from the execution of your program to find out if it had functioned correctly, or even at all.
Wide form computer printout

Like the Tom Petty song, waiting was the hardest part. This would range from 5 minutes on a good day, to 30 minutes or even, "Pick it up tomorrow" on a busy day.

h) In those days, the priority with which one's jobs ran was very much a status symbol, but I digress.

i) On obtaining the tree-consuming fan-folded printout of nearly poster size proportion, one would deduce, usually in seconds, any shortcoming the program had, which would lead to a repetition of the steps above.

Now why do I present, in such excruciating detail, the above series of steps? Because if we skip over the personal computing revolution to the current state of machine learning we find we have arrived at the same place again.

Enter Machine Learning
Fast forward 45 years. Besides all the mish-mash of algorithm design and coding, machine learning (ML) consists of three principal steps:
    1) Training the neural network from data
    2) Testing the neural network on data
    3) Deploying the resulting inference engine for general use

The most time-consuming step by far is training the network. The Waiting problem has reappeared, since for most problems of current interest, training networks cannot realistically be done on a user's personal computer in a reasonable amount of time. So it has to be farmed out to a CPU, TPU, GPU, or APU in the cloud via Microsoft Azure, IBM Cloud, Google Cloud, Amazon Web Services, and the like. The machines that execute the jobs sit in racks, and those racks sit in server farms.

They process our jobs and we wait and we pay. An example of a massively parallel job is GPT-3, a language inference engine that has 175 billion weights in its neural network and cost an estimated $12 million to train.

So to follow Dr. Sneed's kind example, how do we make machine learning as easy as possible to learn and execute? How can we minimize the number of steps, the administrative and emotional overhead necessary to appropriate ML into our computational lives? ML is already available on demand using services like Google Home Assistant, Microsoft Cortana, Apple Siri, and Amazon Echo. These enable positively useful C3PO-like conversations with machines, whose only lack is a robotic delivery mechanism.

C3PO - Ready to Answer Questions

Transforming the current generation of personal assistants into more robotically enabled ones would seem to be a natural direction for growth and development. At this writing, one can already purchase a robotic canine from Boston Dynamics for $75,000 USD. A Google Assistant to use for a head is three orders of magnitude less expensive, $29 USD at this writing. So there is one idea.


FrankenSpot = Spot + Google Assistant


So that would be one interesting project, although I personally would prefer a more anthropomorphic version since hands come in handy for robotic assistants.

Thursday, July 11, 2019

Teleportation on ARRL Field Day 2019, a Recapitulation.



Introduction

Given the precarious nature of conditions in ourselves and the world, we never know when the current song is our swansong. Nonetheless we forge ahead, looking for interesting moments of personal discovery. In ham radio everyone is good at something and all the people I know in it are much more qualified in the many subspecialties than myself. One thing I like to do is listen. My wife might dispute this, understanding though she is.

Background

Last year I attempted to monitor our Field Day 2018 communications using a Software Defined Radio that was deployed on site. My hope was that by visualizing ham band traffic, we could make more contacts. Making more contacts is the principal figure of merit for Field Day performance. There is no particular depth of relationship between transcontinental operators. Let me momentarily illustrate the situation. The conversation typically goes like this:

Operator A: CQ Field Day, CQ Field Day, this is operator K1AA.
Operator B: CQ Field Day, CQ Field Day, this is operator K2BB.

Operator A: Roger operator K?BB, My callsign is K1AA, the class is 1A and my section is AR.
Operator B: Roger operator K2??. Could you please repeat your callsign, class and section?

Operator A: Yes, they are K1AA, 1A, Alpha Romeo.
Operator B: Thank you. Please copy K2BB, 2B, and TX. Have a good field day.

Operator A: Roger K2BB. (static and whistling noises)

This terse back and forth is usually the minimum necessary for a two-way contact to be logged and recorded. Because of the pure noise and vagaries of analog single sideband transmission, it usually takes several attempts for each party to provide their callsign, section and class for the logs, data now collected with "logging software". But it remains a century-old fascination that we can communicate over long distances without wires.

Let me amplify one point. In amateur radio, Field Day activity is effectively zero relationship. There is no, "Are you okay?" or "How are you doing?" There is no, "Hey, what's the weather where you are?" No, this is purely distilled state information, completely sterilized of the contamination of social information. Contrast this with present day social media, which is nothing but personal information, and sometimes a bit too much of it! Field Day is about Scalps without Faces. It is a Trophy Hunt for Hits, a Safari for Signs, a Contest where only the number of contacts matters, not what content was exchanged. The tacit contract is, "Well, we could have done that if we wanted to; the radios, after all, are working!" It is an Olympic effort where only numbers matter. As such there is a strange emptiness in the midst of all the chatter, like the weather itself.

Compensating for this angst are the coterie of operators who sit in a circle, away from their radios, eating, drinking and rag chewing about the news of the day, perhaps talking about rigs, antennas and the technology of radio. It is in these local circles that the deepest relationships are built.


Before Teleportation comes Strategy



There are two well-known strategies in Field Day communications. In the first, an operator sits on a frequency and lets people come to them. This is called, "Running". In the second the operator searches the bands for new contacts in a strategy called "Search and Pounce".

In prior times, this search for contacts was entirely "by ear". The operator would slowly scan a frequency band looking for people who happen to be transmitting at that moment AND who were also unoccupied. If they were busy in a conversation, a waiting line of those ready to pounce would form, adding to the cacophony of noise.

The advent of Software Defined Radio is changing this hit and miss style of brute force search into a more deterministic process, at least, that is my current working belief.

The waterfall displays of SDR (shown above) enable us, at a glance, to observe all the traffic in play, "by eye". We can then select potential contacts using criteria such as signal strength and operator consistency. But for operators who have spent years training their ears, change can be hard!

Besides showing all available traffic instantaneously, waterfall displays show how busy the band is, and also how good signal propagation conditions are. These conditions can vary significantly over the course of the day, and over longer time frames due to solar effects on the ionosphere which has its own "weather".

Limitations of Previous Attempts

In my efforts last year to implement SDR Signal Spotting or "fish finding" four limitations were encountered.

  1. The SDR hardware required an additional computer and source of power.
  2. The SDR loop antenna characteristics did not match those of the main radios, which used an antenna called a G5RV. This meant the SDR operator and the contact operators were hearing and seeing completely different views of the signal space. Using a duplicate G5RV for the SDR was likely to fry the SDR receiver, and possibly its host computer, due to the strength of nearby transmissions.
  3. Even with a different antenna, transmission by any local Field Day operator would completely swamp the SDR input during the call. For those operating in digital modes, their local output obliterated all incoming signals about half the time, significantly limiting the usefulness of the signal spotter to less than a 50% duty cycle.
  4. Traditional operators like to search "by ear" rather than using the SDR to search "by eye". Initially operators would be enthusiastic, but under the grind of a long field day, they would revert to habits honed over long years of operating.

A New Approach Based on WebSDR

In an attempt to overcome these limitations a different approach was taken this year. This attempt was greatly facilitated by Tim Lee and John Nordlund. Tim is a Marlboro man of amateur radio appearing at the most helpful of moments. John is an encyclopedia of radio and electronics knowledge, which is dispensed on demand to those like me who were under the mistaken belief that they already knew something. Most of the time some unanticipated nuance is learned. I had initially planned on running the SDR from my home, and remotely monitoring the signal landscape using an internet link. But Tim and John had suggested I could use WebSDR, which would enable me to monitor our signals from several geographically distributed locations. It took me a while to parse their suggestion into a form I could use, but this approach proved most effective and possibly revolutionary. Further, it has significant headroom for future improvement. It could change amateur radio.

Using WebSDR as the fulcrum for this year's experiment I established five concurrent sessions with Software Defined Radios in:

  • Double Oak, Texas, operated by Larry Story, W5CQU
  • Washington DC Area operated by Mehmet Ozcan, NA5B
  • Dahlonega, GA, operated by Phil Heaton, NY4Q
  • Corinne, Utah, operated by Clinton Turner, KA7OEI
  • Half Moon Bay, California USA, operated by Craig McCartney, W6DRZ

The open source WebSDR software can be set up by anyone using inexpensive hardware and a web server. It can also be customized for specific locations and radio configurations. Here is a typical installation operating on the 40-meter band:




The Teleportation Part

During the course of operations, it was easy to tell if we were heard at the remote locations, and whether operators at those locations were audible by us. This moment of determination resulted in a weird cognitive experience that I can only describe as a sense of teleportation, similar to that which has been described with binaural radios. I could experience the activity of two operators at two locations simultaneously in a peculiar form of binocular vision/audition that is easier and more spectacular to experience than to explain. Tweaking the remote vs. local experience is a bit of art I have only begun to explore. It is like being two different people at the same instant, in two different places, hearing two other people and their respective renderings in the fog of noise and single sideband. One distinctly interesting aspect of this experience is that one can recognize a familiar voice even if it has been distorted in tone, timbre and frequency. This distortion comes from the signal being encoded, transmitted and decoded by an analog system. The simultaneity of this creates a peculiar cognitive dissonance, the source of the deja vu feelings of teleportation suggested previously.

Let me describe the hardware of this experience. While I, the teleportee, am observing the visual signals of the conversation on the SDR waterfall display, I am wearing headphones with one ear up, like a Cessna pilot who is monitoring an aviation radio simultaneously with conversations in the aircraft. So, in one ear I am hearing our local operator locally, and in the other ear I am hearing them remotely. Then, in complement, I hear the remote operator locally on the local operator's rig and the remote operator remotely on the software defined radio. All this while tuned to their 3 kHz wide chunk of radio frequency. This whole sonic and visual arrangement pivots back and forth, where local and remote conversation halves are experienced from both the local and remote points of view simultaneously!

For our work this weekend, I was surprised by the low-latency of the communication. Conversations quite nearly matched in time, but not in tone, timbre and frequency. It was interesting that despite the aural distortion, you could recognize a familiar operator. The low-latency was especially surprising given the hack job of an internet connection I had cobbled together on short notice. Our ham club facilitator Dick Wallace made efforts to gain a proper wireless connection to a neighboring agency who was sponsoring the event. But due to various (and perfectly reasonable!) security restrictions, no wireless internet connections were available.

The search however did allow us the benefit of a half-hour of undeserved air-conditioning on a hot and muggy Arkansas day. In desperation I finally set up my iPhone as a personal hotspot, using Cellular Data to accomplish the web connection. This web connection enabled the five concurrent SDR sessions with few if any drop-outs of audio or waterfall display. As the afternoon turned to night, I imagined that I would return home to an unhappy wife holding a cell phone data bill of gargantuan proportions. Instead I was pleasantly surprised to find out that after all this DXing and teleportation I was still within the confines of my data plan allowance. My wife is very understanding of my experimentation and I would be remiss in not saying so.



Summary

In summary, this year's approach of multiple remote SDRs accessed via a web link eliminated three of the four limitations encountered in the previous year's exploration.

The fourth limitation was nearly eliminated as well, by collocating the SDR fish-finder operator and the SSB operators at the same table with our respective hardware. This eliminated the need for 2-meter coordination chatter on the handheld radios and allowed for the simultaneity of audio that produced the teleportation experience. It also solved the see-versus-hear problem.

There is more potential for automation of SDR-enhanced signal spotting, also known as "let the machine log the big numbers". We might want to consider a local WebSDR installation.

Footnotes and Acknowledgements

There were many contributions made by others that made this Field Day 2019 a success. One that affected me personally was that Roger Fidler provided fabric floors for the tents that provided excellent insect abatement, specifically chiggers. The Carolina Windom antenna was spectacular in the way it was deployed with a portable mast and a flagpole. Roger provided hydration that prevented heat stroke and Tim Lee provided pizza that prevented collapse.

At one point we had a telecommunications wonderland in play. We had the operators and their operations on the 7 and 14 Megahertz bands (40 and 20 meters). We had handhelds at 146 Megahertz (2 meters) for tent-to-tent SDR-spotting communication. We had WiFi at the 2.4 and 5 Gigahertz bands (roughly 12 and 6 centimeters) running the wireless internet part of the operation. All of these were running simultaneously to accomplish our first signal spotting of the year.

Disclaimer

These comments reflect the experiences, viewpoints and opinions of the author only and not necessarily those of any agency or individual connected with the event. They are offered without warranty of merchantability or fitness for any purpose expressed or implied.

Sunday, July 07, 2019

A Fresh New Web Site in 96 lines of Python

It was time to rebuild my website to make it, "responsive", which is code for, "works on phone and desktop".

My site uses a simple structure of three buttons in sequential rows. Three is a good number for remembering and choosing things. It works well for both mobile and desktop applications.

I began forging ahead on this, by accident, after I saw that Adobe Dreamweaver supports bootstrap.

So I started the one hour course on bootstrap, but didn't finish it BECAUSE:

While I was watching scrimba (which is fantastic by the way, because you can stop the video and edit the code they are showing you in real time on the screen) I thought, "What if bootstrap isn't the best way to do this?"

So then I googled some reviews and found the scrimba course on CSS Grid, which is even more fantastic - for my purposes it was cleaner than bootstrap. Thank you Per Harald Borgen Co-founder of Scrimba! He is very clear.

In the meantime, I asked some computer experts, who are also my kids, whether I was headed in the right direction.

By the time they got back to me that same evening, I had already learned CSS Grid on Scrimba and was immediately starting to code up a prototype. I took the course in the listed 62 minutes, but was already producing useable code in 20 minutes. This of course blew my mind.


Then I found out I could transfer my scrimba notes into this JSFiddle and get a working version while viewing the HTML and CSS at the same time. Because CSS Grid is so powerful I didn't need any JavaScript in this case. JSFiddle enabled me to preview my hacks in real time. When I got that running I could just dump the prototype code into Dreamweaver and try it out. (Sad story: Dreamweaver tried to rewrite my code and broke it in the process so I told it not to do that by setting the "Rewrite my code" preferences to "Off"!)

As the complexity of my new site escalated I needed to generate more HTML automatically. The CSS didn't need to be changed at all. To generate said HTML I used Python running in a Jupyter Notebook. This decision was really an important one because I could have written a 'C', Java, or JavaScript program, or I could have written a Unix shell script, or a sed/awk job and so forth. Python in Jupyter was the right decision.

I was able to build a little hack previewer into Jupyter because the libraries for Python are so rich.



So in short the project went a little faster than I expected. My new site, https://wdv.com, is up and running if you want to try it. You can stop reading now unless you like details and statistics.

By way of complexity here are the word, line and character counts:
Old site HTML file: 134 words, 878 lines, 14468 characters
New site HTML file: 237 words, 465 lines, 8244 characters
New site CSS file: 120 words, 276 lines, 2533 characters

Thus the new site is quite a bit simpler if we're counting lines and characters. It loads faster too.

The Python Jupyter Notebook "make-web-site" generates the site from scratch using simple rules:

make-web-site does an external ls, and enumerates a list of directories that contain content.
It prints them in triples, and these can be compared to the current version of the site.



The enumerated list is used to build the index.html for that directory using a couple of Python functions:
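The original notebook screenshots aren't reproduced here, so the following is a hypothetical reconstruction of what such functions might look like, following the conventions described in this post (content directories start with an uppercase letter, an "images" folder holds one image per folder, buttons come in rows of three). The names and HTML details are my own illustrative assumptions:

```python
import os

def content_dirs(root="."):
    """Enumerate content directories: by convention, names starting uppercase."""
    return sorted(d for d in os.listdir(root)
                  if os.path.isdir(os.path.join(root, d)) and d[:1].isupper())

def button_row(triple, images_dir="images"):
    """Emit one row of up to three image-button links for the CSS Grid layout."""
    cells = [f'  <a href="{name}/index.html">'
             f'<img src="{images_dir}/{name}.jpg" alt="{name}"></a>'
             for name in triple]
    return '<div class="row">\n' + "\n".join(cells) + '\n</div>'

def make_index_body(root="."):
    """Group the content directories into triples and emit a row per triple."""
    dirs = content_dirs(root)
    return "\n".join(button_row(dirs[i:i + 3]) for i in range(0, len(dirs), 3))
```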


These functions are then called to generate each row of buttons for the site:



The resulting html can be copied straight out of the notebook and dumped into Dreamweaver or any other tool. Then the HTML and CSS files are uploaded to the server.



The look and feel of the website uses CSS Grid and is contained in the file index.css.
Watch the scrimba to figure that out.

Assumptions:
1) That each folder is the location of content and contains an index file
2) That there is an images directory that contains an image file for each folder
3) We will assume that each content directory starts with an uppercase letter
4) We will assume that the images directory starts with a lowercase letter and is named, "images".

These assumptions can be violated by giving different arguments to the functions:

That's it!

Now this was just the index file of a thousand page website, so obviously there is more to do, but I was happy with this approach.

Saturday, May 11, 2019

Friction In Space: The Statistical Mechanics of a Certain Missing Quora Question

I spent a good chunk of this evening, around eight o'clock, answering a question on Quora. Quora is a source of a host of bizarre and interesting questions. After answering the question with the most clarity I could, I thought I had posted it, but it was sucked into a vacuum, never to be found through any amount of indexing or searching.

The question was, "Is there friction in space?"

I really liked the question, so much that I am devoting a blog entry to it.

The first task is to reconstruct my original answer, which goes something like:

Yes, there is friction in space because space is not empty. There is about 1 hydrogen atom per cubic centimeter of space, more or less.

When a hydrogen atom collides with an object there is an exchange of kinetic energy that obeys the rules of conservation of momentum. Part of this energy is radiated away as heat, as photons whose wavelength, frequency, or color is distributed according to the energy associated with the collision. Part of this energy results in a change in direction, however slight, of the two parties involved in the collision. The outcome of the collision can be represented as a pair of vectors, each with a direction and magnitude.
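To get a sense of how gentle this friction is, here is a back-of-envelope sketch using the one-atom-per-cubic-centimeter figure above. The speed and frontal area are illustrative numbers of my own choosing, and the free-molecular-flow drag estimate is a rough simplification:

```python
M_H = 1.67e-27   # mass of a hydrogen atom, kg
N = 1.0e6        # 1 atom per cubic centimeter, expressed per cubic meter
RHO = M_H * N    # mass density of the medium, kg/m^3

v = 7800.0       # m/s, an illustrative orbital-class speed
area = 10.0      # m^2, an illustrative frontal area

# Free-molecular flow: the object sweeps up rho * v * area kilograms per
# second, each carrying momentum m*v, for a drag force of rho * v^2 * area.
force = RHO * v ** 2 * area   # newtons -- about a piconewton
```

Near Earth the residual atmosphere is many orders of magnitude denser than this, which is why low orbits decay on human timescales.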

There is also a 'relative wind' effect. The ensemble of hydrogen atoms that the object collides with has an average velocity that can be moving along with the object (running, in nautical terms), against the object (upwind), crosswind, or some combination. This relative wind will affect the spectrum of the photons radiated by the travel of the object, red-shifting it when the object is running, blue-shifting it upwind, or some combination for the crosswind.

When an object reenters the atmosphere the density of the gases it encounters increases rapidly and heat is dissipated as the 'searing heat of reentry'.

If the object was initially launched from earth, it must dissipate, as heat, the kinetic energy associated with putting it in orbit in the first place. Fire on the way out, fire on the way back.

Curiously, only about a tenth of the energy of placing an object in orbit has to do with achieving the height of its orbit. The other nine-tenths is expended in achieving the velocity of its orbit. This is why, if you look at the exhaust plume of a rocket, it gently curves. Early in the launch it is desirable to reduce aerodynamic drag by going straight up. Later in flight the rocket has to tilt toward the horizon to achieve orbital velocity parallel to the surface of the earth at its orbital altitude.
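A back-of-envelope check of that tenth/nine-tenths split, using rough low-Earth-orbit numbers of my own choosing and a crude flat-gravity estimate of potential energy:

```python
G_SURF = 9.81          # surface gravity, m/s^2 (treated as constant; rough for LEO)
V_ORBIT = 7800.0       # m/s, approximate low-Earth-orbit speed
ALTITUDE = 400_000.0   # m, roughly ISS altitude

ke = 0.5 * V_ORBIT ** 2        # specific kinetic energy, J/kg
pe = G_SURF * ALTITUDE         # specific potential energy, J/kg
ke_fraction = ke / (ke + pe)   # ~0.89: most of the energy is velocity, not height
```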


There is a term, "Max Q", short for "Maximum Dynamic Pressure", that you may have heard in connection with rocket launches. As a rocket ascends vertically two things are happening: the atmosphere is rapidly getting thinner, that is, the density is decreasing rapidly, and the speed of the rocket with respect to that atmosphere is increasing rapidly.

There comes a point at which the product of these two quantities reaches its maximum and the structural forces on the rocket are at their greatest. This is the Max Q point. Q stands for dynamic pressure, which is half the product of the density and the velocity squared: q = ½ρv².
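The competition between thinning air and growing speed can be sketched numerically. Everything here is a toy assumption of mine (an exponential atmosphere and a constant 20 m/s² vertical acceleration), not real launch data:

```python
import math

# Sketch of the Max Q idea: dynamic pressure q = 0.5 * rho * v**2.
# Density falls exponentially with altitude while velocity grows,
# so their product peaks partway up the ascent.
RHO0 = 1.225      # sea-level air density, kg/m^3
H_SCALE = 8500.0  # atmospheric scale height, m
ACCEL = 20.0      # assumed constant net vertical acceleration, m/s^2

def dynamic_pressure(t):
    v = ACCEL * t                        # velocity at time t
    h = 0.5 * ACCEL * t**2               # altitude at time t
    rho = RHO0 * math.exp(-h / H_SCALE)  # exponential atmosphere
    return 0.5 * rho * v**2              # q = 1/2 rho v^2

# Scan the first 120 s of flight for the peak
times = [t / 10 for t in range(1, 1201)]
t_max = max(times, key=dynamic_pressure)
print(f"Max Q ~ {dynamic_pressure(t_max) / 1000:.0f} kPa at t = {t_max:.0f} s, "
      f"altitude {0.5 * ACCEL * t_max**2 / 1000:.1f} km")
```

With these toy numbers the peak lands around 29 seconds into flight, at about one scale height of altitude, which illustrates the shape of the effect even though the profile is invented.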

The aft end of the rocket has a characteristic spectrum or heat signature, and the friction between the surface of the rocket and the atmosphere can heat it considerably.

Because of staging, much of the energy associated with delivering the spacecraft to orbit is lost to the stages that got it there. The spacecraft retains only the kinetic energy associated with its own mass, and the potential energy associated with its mass and altitude. Typically only 1 to 3% of the mass of the original rocket, the 'throw weight', makes it to orbit. But there is still a lot of energy that has to be radiated as heat when it returns, thus the 'searing heat of reentry'.

Because of aerodynamic drag, or the 'friction' of the original question, everything is destined to reenter after some period of time.

There is a rule in physics that any time a charged particle is accelerated, it is obligated to emit a photon corresponding to the energy that caused the acceleration.

This means that spacecraft flying through the ionosphere (and nearly all of them are, since it extends to 1000 miles above the surface of the Earth) are causing emissions of thermal radiation corresponding to the kinetic energy of the collisions, or 'friction', occurring at their specific altitude and velocity.



Thursday, April 25, 2019

Computing and the Future HW 12 - Final Presentation and Book Review

1. Create a first version of your presentation. The presentation will be 20 minutes. 

Done! The complete video is on YouTube.




2. Grad students only: continue with the book you obtained. Read the next 20 pages. State what page numbers you have read and provide a reminder of the title of the book. Then, discuss those 20 pages. Explain what you agree with, disagree with, and how your views compare with those of other reviewers on Amazon or elsewhere.

Reviewed This Week: 
  • Chapter 24 - Sic Transit Humanitas - The Transcent of Man
  • Chapter 25 - Floating Prairies of the Seas
The comparison to other Amazon reviewers was done in the last section of this previous homework. I ranked this book highly myself.

I have moved the detailed answer to my ongoing review of the book, "The Human Race to the Future", a single curated document that is here. It is very detailed, with over 400 remarks, many with references and some with illustrations.

Tuesday, April 16, 2019

Computing and the Future HW 11 - Intelligent Life

1) Do you think we will find intelligent life in the universe? Why or why not?

There are really two questions embedded in this one. The first is, "Is there intelligent life?", the second is, "Will we find it?". I think the answer to the first one is maybe, and the answer to the second one is maybe. 

Quickly sketching - Let's say that the word "maybe", in the absence of better information, means "a 50 percent chance" at each node in the decision tree. My reason for using the 50 percent figure was revealed in a previous assignment where I discovered the perils of false precision. Anyway, running the calculation, this means there is a 1 in 4 chance that the answer to both questions is "Yes". There is also a 1 in 4 chance that life is out there but we don't find it. There is a 1 in 2 chance that the answer to both questions is "No", since if it doesn't exist, we can't find it! There is also an imaginary component to this, where it doesn't exist, but we claim to find it. There is a whole cottage industry devoted to this.
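The little decision tree above can be written out explicitly. The 50 percent figures are, as stated, stand-ins for "maybe", not estimates:

```python
# Two-node decision tree: does intelligent life exist, and if so,
# do we find it? Finding is impossible when it doesn't exist.
p_exists = 0.5            # "maybe" -> 50 percent
p_found_if_exists = 0.5   # "maybe" -> 50 percent

p_exists_and_found = p_exists * p_found_if_exists    # both "Yes"
p_exists_not_found = p_exists * (1 - p_found_if_exists)  # out there, missed
p_not_exists = 1 - p_exists                          # both "No"

print(p_exists_and_found, p_exists_not_found, p_not_exists)
# Outcomes: 1 in 4, 1 in 4, and 1 in 2, matching the text.
```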




Diving Deeper we could talk about the Drake equation:


Image Credit - Universe Today


or more recently the advent of exoplanet discovery, first by the Kepler spacecraft:




and now by Tess:




With 4,023 exoplanets discovered so far, it is clear that exoplanets are abundant; we simply couldn't see them with the land-based telescopes of the past. Most exoplanets are not in a habitable zone that would support "life as we know it". But we have discovered several candidate planets that are Earthlike, meaning their mass and their distance from their parent star support liquid water. Given that there are billions of such stars, each with multiple planets, it becomes more likely than not that the answer to the first question is "Yes". But due to the enormous stellar distances, the second question may remain unanswered for some time to come. We also have to consider that any life that may have existed and tried to communicate with us in the past may no longer exist. This suggests the follow-up question: "Could intelligent life have existed and now be extinct?" This is because we can never see the stars as they are now, only as they were when their light left on its journey to Earth. If you haven't tried Galaxy Zoo Citizen Science, I highly recommend it.

2) Suppose you had a coupon for a free robot. The catch is it can only do one thing. But you can get a robot that will do whatever one thing you like, just not anything else. What would you want your robot to do, and why?

There is some "wiggle room" in this question depending on what "one thing" means. Consider a Roomba. The "one thing" it does is vacuum the floor, but it executes many actions in order to accomplish that one thing: it docks with its power station, it translates in x and y, it rotates, it goes from room to room, it returns to its dock, and it alerts the owner when something is wrong. Recently someone called the police about an intruder, and it turned out to be a Roomba. So a Roomba can scare people as well.


Image Credit - iRobot


I have a robot called a "Ring Doorbell". It does one thing. It watches my door, 24/7/365. It is my favorite robot because it does that job extremely well, taking video of all comers and placing that video in the cloud. This prevents anyone from stealing it; should they try anyway, it bricks itself and calls home when an installation is attempted. It has inspired a new family pastime called "who came to the door today and what did they do?", a constant source of entertainment with all the draw of a nature program. It was also fairly inexpensive, as robots go:


For me, a robot is any motorized mechanical device that executes a stored program in any form. By this definition, some of the devices in my house are robots and some are not. A dishwasher, clothes washer, dryer, and microwave all execute stored programs. My refrigerator does not, but its ice maker does. Recently, washers and dryers have become smarter and more autonomous; I can diagnose my dryer from my smartphone. If we lost any single robot in our house, the washer would be the most serious loss.

But the spirit of the question looks more to the future, to a robot I do not yet have. The robot I would really want is a sentry robot. It would screen whether a person was friend or foe and deal with them accordingly. It would summon help if there was danger and fix a beverage if things were okay. It would perform the role of a benevolent security guard at the gate, blocking those whose intentions are harmful and greeting those whose intentions are good. It would use machine learning and expression analysis, including facial expressions, voice tone and pattern analysis, and movement patterns such as gait, to develop an impression of the intentions of visitors.


The SGR-A1 - Image Credit

Of course, if I had one of these for my home, I would also want one for my car. It would ride along in an unoccupied passenger seat, taking stock of the traffic and the people in the vehicles around me. If someone came up playing deafening bass tones, it would octave-multiply the tones into the pain region and transmit them back to the source. If the source turned down the music, the robot would instantly turn down the transmission. So if such a driver turned down their bass and rolled down the window to ask, "What is that sound?", I could just say, "What sound?". I would also want my sentry robot to recommend evasive maneuvers to avoid hazardous conditions and annunciate the presence of bicyclists and pedestrians as a redundant safety check. I would call it "Back-Seat-Sentry". It would also have an off switch.

3) Imagine a robotic future. Would it be possible in such a future for labor to be free? For example, suppose there was a law prohibiting anyone from being paid to do work. Could the human race survive in the face of such a law?

Absolutely. Revenge is a dish best served cold. Work is a task best performed by robots. The appliance singularity has already happened and it is fantastic.

No, I'm wrong. For robotic labor in the future to be free, you would have to consent to listen to an advertisement dictated to your robot, or it would not be recharged. After a while, piles of discarded Freebots would accumulate and fill the dumpsters and landfills due to the "Amazon Effect". Hackers would overcome the limitation of forced advertising to build a new race of Hacked Freebots. This would cause the Freebot corporation to go out of business and the Freebot to become extinct... or would it?


Image Credit - The Telegraph

The human race has done a good job surviving all kinds of strange laws, so there is no reason to think that anything would change on that front.

4) Comment on the movie Transcendent Man. What do you agree with, disagree with, what do you look forward to, are apprehensive about, etc.

I liked it a lot. I made several notes, the gist of which is included below. I had some impressions before I watched the movie and some after, so I want to contrast those as well.

Before
  • My sense of "The Singularity" that we have been hearing about is that it is like the Hubbert theory of Peak Oil: a catastrophe of the future that seems like it always will be one. With any of the current claims of a technical singularity, there are always moderating, mitigating factors, as I wrote about in my first Computing and the Future assignment. In that assignment I argue against a programmer-productivity singularity on the basis of "too many choices". I will say that in my lifetime the appearance of computing has been a singularity, and one that I like very much. My relationship with computers is now so long and so deep that I cannot imagine life without them. They are an extension of my brain, my body, and my persona.
  • Kurzweil started Singularity University. I wanted to know if this was going to be more like Amway, or more like SIGGRAPH. His claim that, "The Singularity is Near" does feel a bit culty, doesn't it? In class this led us to identify the Hype Cycle - a lifecycle for the appearance of new technologies. Some are fads, some persist.

After
  • In class, as part of the post-discussion, we identified a 'clone vs. original' principle that emerges from the fact that we can eventually reverse engineer any biological process we want, given enough time and resources. So say that it is possible for us to completely duplicate the functionality of our body. Even if we do that, that does not make this clone, the second instance of us, the same as the first instance of us. It is not clear how consciousness would be uploaded even if the perfect clone existed, although a hint appeared in the movie. Kevin Warwick in the UK implanted a device in his arm that produced sensations in his brain from actions in the real world. He then used that device to move, and feel the actions of, a remote hand a world away. So he demonstrated that some aspects of volition, of motor effect and sensation, CAN be reproduced in the second instance of ourselves. The question is, in the limit, can all our volition and sensation be thus reproduced, and more importantly, is that sum equal to the total of who we are? This is deep.
  • Kurzweil articulates the Law of Accelerating Returns which states that the current generation of automation is used to create the next generation and this has a tremendously compounding effect on accelerating technical development. He speaks of, "A billionfold increase in computation in 40 years." That seems singular to me.
  • Watching this 2011 movie, I noticed how old some of the computers and technology looked, even though the movie is only 9 years old. It reminded me of the fact that humans keep improving things along a narrow track until a paradigm change forces them to do something else. An example of this was the advances in Yankee clipper ships: they kept getting bigger and faster, until the steam engine was invented. Paradigm change.
  • Kurzweil identifies the GNR core as Genetics, Nanotechnology, and Robotics. This reminded me of a friend who went to work for nanotech messiah Eric Drexler. Nanotech was not ready for prime time, and appears to follow a linear rather than an exponential growth law. This spells disaster for those who were hoping for more. Machine learning may change that, especially when combined with robotics, but that could be my own personal bias showing. Machine learning does seem to benefit more from Moore's Law; the question is, will its deployment into the real 4D world reflect that?
  • Kurzweil's fight with death caused me to write:

    "If you don't accept it you are doomed to fight an unwinnable fight"

  • This made me feel sad for Kurzweil, in that he is wasting time that could be used to do things more similar to those in which he has enjoyed great success. He seems to have fallen into the same trap as time-traveling physicist Ron Mallett, and for identical reasons. Of course these 'traps' can lead to incredible technical progress, but they can also be a source of great personal disappointment. But what if Dr. Mallett succeeded - would he tell anyone, or would it be too dangerous to do so?

5) Create at least half of a first draft of your presentation. For example, you could create some slides.

I have created a half first draft using PowerPoint. Many people belittle or criticize the use of PowerPoint but I find it an excellent storyboarding tool for designing and guiding a lecture in a visual format. Almost any kind of media can be included. I suppose any tool can be used to create a boring lecture, or contrariwise an interesting lecture. For me a well prepared presentation prompts my favorite activity, free-associating and brainstorming over an idea, project or topic.

I have finished construction of both vTMS™ units and tested them against their respective power supplies. I have nearly finished the vBrain™ simulation unit, except for the eyes which are drying while I write this. This process has been fun, but far more labor-intensive than I imagined when I thought it up.




6) (Grad students only) continue with the book you obtained. Read the next 20 pages. State the book title, author, and page numbers you have read. Then, discuss those pages. Explain what you agree with, disagree with, and how your views compare with those of other reviewers on Amazon or elsewhere.


Reviewed This Week: 
  • Chapter 22 - New Plant Paradigms
  • Chapter 23 - Asteroid Apocalypse

I have moved this answer to my ongoing review of the book, "The Human Race to the Future", a single curated document that is here.

Now this week I have a slight disclaimer: my suggestions are offered without merchantability or fitness for any purpose, expressed or implied. I found the chapter on new plant paradigms very engaging, to the point that I would immediately start to engineer them in my head. This would immediately lead me to some kind of difficulty or glitch that might appear in accomplishing the goal. This may cause my comments to imply "that would be difficult", when what I'm really saying is, "This is where I would get stuck."