CTA Train Tracker – Part 2

“Sweet dreams and flying machines in pieces on the ground”

Taylor, J. (1970). Fire and Rain [Lyrics]

In my previous post related to this project (CTA Realtime Train Tracker), I explored the Chicago Transit Authority’s REST API from a software standpoint — what data is available, how to access it, and how to display it. In this post, I present my efforts to build an illuminated map of the CTA rail system showing the actual locations of trains in real time.

Design Goals

Small Overall Dimensions and Light Weight: 17×22 inches at most.
Geographically Suggestive: the layout should at least hint at the system’s real geography (the original table compared a good and a bad example map for several of these goals).
Stop Labels: each stop should be labeled with its official station name.
Bi-directional Traffic Flow: there should be something that shows the direction each train is moving.
Color Coded: the CTA Train Lines are called Red, Blue, Green, Orange, Pink, Purple, Yellow and Brown. The lights on a Line should reflect the Line’s name.
Minimal Light Bleed: the glow from a light should illuminate only one stop.
Visible When Lights Off: the map needs to be printed onto the board so that it’s recognizable even if the lights are turned off.
Lights Between Stops: there should be a light between each pair of stops, to indicate that a train has left its previous station but has not yet arrived at the next one.
Inline Loop: the Loop should be drawn inline, not in a separate call-out.
Easy Wiring: the layout should lend itself to straightforward wiring.

Construction

While researching this build, I came across a map of the CTA Rail System on the Chicago Tribune web site (the good example referenced in the Design Goals above). Every stop was labeled, the Loop was not in a separate call-out, and the placement of the Lines strongly suggests the real geography. The straight lines with minimal corners lend themselves to the use of NeoPixel strips, which are easy to wire, easy to program, and can show any color.

NeoPixel strips come in several densities. I found a set with 150 pixels per meter – about 6.6mm between pixels. A single strip that runs from Linden in Wilmette to 95th St in Chicago would need roughly 44 pixels for the named stops plus one pixel between each pair of stops, about 87 pixels in all. At that spacing, the map works out to roughly 22 inches tall.

The strips are 4mm wide, which would also allow two strips to be placed side-by-side: one for the “Southbound” trains and another for the “Northbound” trains. It would be clear which direction a train is moving by noting which strip its LED is on — just like cars on a road.

Multiple strips would be needed — roughly one for each of the eight Lines. Some stops are serviced by multiple Lines, so a given pixel might illuminate as red, brown, purple, etc. depending on which train was passing through. The strips can be cut, repositioned appropriately and rewired.

Programming

An Adafruit RP2040 Scorpio can drive 8 LED strips from a single control board running CircuitPython. The Scorpio would require a FeatherWing add-on to give it Wi-Fi capability.

The REST API has two key pieces of information to place a train — the ID of the station the train is traveling toward and whether arrival at that station is imminent (<60 seconds out) or not. So, for each running train, look up the station ID in a table/dictionary that contains the strip and pixel number for that station. If arrival is imminent, light that pixel with the color of the train’s Line. If arrival is not imminent, light the pixel one less than the number in the table.

This simple rule assumes the “Southbound” strip joins back up with the “Northbound” strip at the “southern” end forming one continuous strip of consecutively numbered pixels. The rule breaks down at the intersection of two (or more) Lines that share the same tracks. Luckily, the number of exceptions is manageable.
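Here’s a minimal sketch of that rule in Python. The station IDs, pixel numbers, and colors below are placeholders of my own, not real CTA values, and strips stands in for a list of NeoPixel objects, one per physical strip.

# A minimal sketch of the lookup rule. The table entries are placeholders.
STATION_PIXELS = {
    30001: (0, 86),   # placeholder station ID -> (strip index, pixel index)
    30002: (0, 84),
    # ... one entry per station
}

LINE_COLORS = {
    "red": (255, 0, 0),
    "brn": (139, 69, 19),
    # ... one entry per Line
}

def light_train(strips, station_id, line, approaching):
    """Light the pixel for a train heading toward station_id."""
    strip_idx, pixel_idx = STATION_PIXELS[station_id]
    if not approaching:
        # Arrival not imminent: light the in-between pixel just before the station.
        pixel_idx -= 1
    strips[strip_idx][pixel_idx] = LINE_COLORS[line]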

Display

With the electronics decided, the next step is the look of the map. I envision a thin wooden board, laser etched with station names and drilled with holes for the LEDs to shine through. The LED strips will be attached to the back of the board, and the holes will keep the light focused. A layer of diffusing material could be added to soften the lights. Colored lines can be painted on the board for each train Line to satisfy the “lights off” goal, and an appropriate logo would be etched into the board as well.

Laser etching

The Space’s laser cutter is conveniently set up with Inkscape, which reads SVG files. SVG files are just plain text formatted as XML; I could type one by hand in Notepad and have it cut on the laser.

My first experiment was to write a program that creates an SVG to cut labels and holes on scrap matte board. I iterated over all the stations and used their lat/lons to place and label a hole for each. For the labels, I went with a single-stroke font so I could get a crisp, tiny vector etch rather than a smudgy raster burn. Inkscape 1.2 has an extension called Hershey Text which creates labels using Hershey Fonts, a collection of vector fonts developed c. 1967 by Dr. Allen Vincent Hershey at the Naval Weapons Laboratory, originally designed to be rendered as strokes on early cathode ray tube displays. While the Space’s laser cutter laptop doesn’t have an up-to-date copy of Inkscape, it will accept SVGs containing Hershey Fonts created elsewhere and render them on the laser.
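Here’s a sketch of the idea. The station list and the projection constants are made-up placeholders, and the plain <text> labels are stand-ins for the single-stroke Hershey strokes that Inkscape’s Hershey Text extension produces.

# Sketch of the SVG-writing experiment; station data and constants are illustrative only.
stations = [
    ("Linden", 42.064, -87.691),   # placeholder coordinates
    ("Noyes", 42.058, -87.683),
]

def project(lat, lon, scale=400.0, lat0=42.08, lon0=-87.95):
    """Crude flat projection from degrees to millimetres on the board."""
    x = (lon - lon0) * scale
    y = (lat0 - lat) * scale      # SVG's y axis grows downward
    return x, y

with open("cta_map.svg", "w") as svg:
    # 17 x 22 inch board expressed in millimetres
    svg.write('<svg xmlns="http://www.w3.org/2000/svg" '
              'width="432mm" height="559mm" viewBox="0 0 432 559">\n')
    for name, lat, lon in stations:
        x, y = project(lat, lon)
        # 5 mm hole for an LED to shine through
        svg.write(f'  <circle cx="{x:.1f}" cy="{y:.1f}" r="2.5" fill="none" stroke="red"/>\n')
        # plain-text label; the real labels were converted to Hershey strokes in Inkscape
        svg.write(f'  <text x="{x + 4:.1f}" y="{y:.1f}" font-size="3">{name}</text>\n')
    svg.write("</svg>\n")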

Here are the results of my first test at creating an SVG programmatically.

As you can see, the Loop is hopelessly over etched and the station labels are very close together. Even if I used the largest board the laser’s bed could accommodate, I would not be able to eliminate the overcrowding.

Here’s a sample of Hershey Fonts on real wood. I ran the vector etch next to the raster etch to show the contrasting results of the two methods. I think vector wins hands down. The slight charring on the vector etch can be mitigated with blue painter’s tape – although that would require a fair amount of weeding.

I’d convinced myself that I could, in fact, programmatically create an SVG file to draw the map. So I wrote a program that placed the stations of the Red Line in a straight line, spaced exactly to match the distance between LEDs on a strip, and ran that SVG file on the laser.

Equally Spaced Stations

I attached an LED strip to the back of this board and animated it with some real-time train data. I only had one LED strip in inventory, so I illuminated southbound trains in Red and northbound trains in Blue. This Timelapse video shows the result.

The Hard Truth

It was at this point that I took stock of what I’d achieved and what remained to do. I realized that the project was going to be very expensive to complete: LED strips cost a lot and I’d need quite a few of them. Even with all my successes so far, the success of the final build was not assured. I was also disappointed that the partial prototype was a bit boring to watch in real time; only in time lapse does it approach being interesting. It just didn’t seem worth it to proceed.

Alternative Designs

I could relax my design goals, e.g., one strip per Line instead of two and no in-between stops, but I think that would diminish the look of the project too much. Therefore, I’m mothballing the project.

Still, I would like to see the entire rail network in operation. I looked for an alternative way to display the data. I settled on a small device called the PyPortal. I’d written about the PyPortal in this post, “The Adafruit PyPortal and Modifying Adafruit Libraries“. I ported my code to the PyPortal but rather than light up NeoPixels, I drew a colored dot on the screen to represent the position of each train.

Basic Program Flow

Here’s the guts of the program:

  1. Query the CTA for the latitude/longitude of all trains
  2. Map the lat/lon into screen coordinates
  3. Clear the screen
  4. Turn on the pixels at the screen coordinates
  5. Rinse, repeat

Step 3 caused an annoying flicker each time the screen cleared.

I solved that problem by saving a list of all the pixels that were lit. During the next iteration, I compare the list of prior screen coordinates with the list of new screen coordinates. If a pixel was in the prior list but not in the current list, I turn it off. Then I iterate through the current pixels and turn them on; a pixel that is already lit simply stays lit — no harm, no foul. This is very fast and there is no flicker.
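In sketch form (the names are mine, not the actual code), the update looks something like this:

# Sketch of the flicker-free update; draw_dot/erase_dot stand in for the
# actual display calls, and a "pixel" here is an (x, y, color) tuple.
prior = set()    # everything that was lit during the previous iteration

def refresh(current, draw_dot, erase_dot):
    """current: set of pixels computed from this iteration's train positions."""
    global prior
    for pixel in prior - current:     # lit before, not needed now -> turn off
        erase_dot(pixel)
    for pixel in current - prior:     # newly needed -> turn on
        draw_dot(pixel)
    # pixels in both sets are left alone, so nothing blinks
    prior = current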

The Entire Network

Here is a time lapse of the PyPortal system running. The video is only 5 seconds long, but it loops continuously.

The video is short for two reasons — 1) you get the point from the short video and 2) after just a few iterations, the program crashes. Other tests of my code, running on macOS and a Raspberry Pi Zero, did not abort; they ran perfectly fine all through the night. After several weeks of very frustrating debugging, I’ve concluded there’s too much data for the PyPortal hardware and/or the CircuitPython libraries to process, and the libraries do a poor job of handling the resulting exceptions.

Stubborn Determination

I abandoned the PyPortal and made another attempt by attaching a tiny TFT display to a Raspberry Pi Zero.

(yeah, it really is that small.)

I rewrote the code (again) to draw onto that display. As I’d hoped, that program ran for many, many iterations without failing. The fact that the program runs continuously allowed me to experiment with tweaking the display.

One change I made was to the way I refreshed the screen at each iteration. Rather than clearing the pixels and drawing the updated positions, I just left all the prior iterations’ pixels turned on. After only a few iterations, the rail lines began to appear as thin lines. The end result is a glowy map of the entire CTA rail network — geographically accurate. (This is similar to what I did in Part 1 when I tracked individual trains and built a Google Earth map from the data.)

The downside of this is that once the lines are fully drawn, you can no longer discern the current locations of the trains.

To fix that, I decided that rather than turning pixels off or leaving them on, I would turn the prior pixels dark grey. This made it easier to see both the current train locations and the lines the trains were running on.

Taking it one step further, rather than grey for all lines, I set a train’s old pixel to a dimmed version of that color. Now all the trains (bright dots) are running on color coded lines (dim dots).
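A rough sketch of that rule, with a dimming factor that’s just a guess; prior and current map (x, y) screen coordinates to the train’s Line color:

def dim(color, factor=0.25):
    r, g, b = color
    return (int(r * factor), int(g * factor), int(b * factor))

def update(prior, current, draw_dot):
    for (x, y), color in prior.items():
        draw_dot(x, y, dim(color))    # old position becomes a dim, color-coded trail
    for (x, y), color in current.items():
        draw_dot(x, y, color)         # current position stays bright
    return current                    # becomes 'prior' on the next pass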

Finally, the TFT has two built-in buttons. I programmed them to toggle between two different versions of the data — one is the real-time train positions map, and the other is a bar chart showing how many trains are running on each line.

(Interesting aside: When photographed straight on, the camera sees the TFT colors as totally washed out and indistinguishable from one another. However, at a steep angle, the colors are saturated and vivid.)

End of the Line

While this TFT version of the map misses many of the design goals, I am at least able to see the entire CTA rail network in real time.

Many thanks to the members of Workshop 88 who shared their thoughts with me throughout the project. Without their involvement, this project might never have gotten as far as it did.

Final project

And with that… I’m done with this project (until I can find cheap, tightly spaced, LED strips).

CTA Realtime Train Tracker

I have spent many hours riding CTA elevated trains. Most notably from my home on the NW side of Chicago to Loyola University’s Downtown campus — Logan Square on the Blue Line to Washington, walk through the pedestrian tunnel to the Red Line, to Chicago Avenue. After graduation, I got an apartment in Evanston and took the Purple line from Linden in Wilmette to the Merchandise Mart where I worked.

So I took a great interest in an ad for this product:

It’s a map of the CTA lines with LEDs that light to indicate where the trains currently are — in real time. I wanted it. But at $300, it was out of reach. So, I’ll make one for myself.

This blog post is about my efforts to create a proof of concept in software before I consider a hardware solution.

The first step is to source the data. It turns out the CTA has a free, public REST API that reports where its trains are. There are multiple endpoints to the API. The most important for this project is ttpositions, which shows the location of every active train on a particular Line – Red, Blue, Orange, etc. A query for the Red Line looks like this:

http://lapi.transitchicago.com/api/1.0/ttpositions.aspx?rt=red

and it returns the following interesting values:

Train No.  Next Stop     Approaching?  Latitude  Longitude  Heading
811        Chicago Ave.  No            41.903    -87.631    89
813        Wilson        Yes           41.964    -41.964    344

A little explanation will help here. Each active train has a unique identifying number. When a train reaches a terminus, the number is recycled.

Next Stop shows the name of the station at which the train will make its next stop. Even if a train is sitting stopped at a station, it is the next station down the line that will be returned. A train might also be between two stops.

If the train is just about to pull into a station, Approaching is “yes”.

Latitude and Longitude are the map coordinates of the train.

Heading is the compass direction for the train’s motion.

It’s simple to take these lat/lon data, create a KML (Keyhole Markup Language) file, and feed that file into Google Earth Pro. KML files are XML. A KML file contains elements that specify colors, font size, icons, and the position and orientation of the viewpoint — the point in space that determines what you’re looking at.

The most critical element of the KML is a place marker for each train. The place marker looks like this:

<Placemark id="7">
  <name>605 Ridgeland 89°</name>
  <styleUrl>#g</styleUrl>
  <Point id="7">
    <coordinates>-87.79378,41.88699,300</coordinates>
  </Point>
</Placemark>

In this example, the train number, next station name, and compass heading are used to label each dot on the map. styleUrl references a previously defined style that specifies the formatting of the place marker. The coordinates element lists the longitude, latitude, and altitude.

My minimally viable product runs a query for each CTA Line, outputs the data to the KML file, and opens the file in Google Earth.

If this is done repeatedly every 15, 30 or 60 seconds, you’ll be able to watch each train move through the system.
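Stripped to its essentials, one iteration looks something like the sketch below. The API key is a placeholder, only the Red Line is shown, the JSON field names are my reading of the CTA’s documentation (treat them as assumptions), and the style definitions and the Google Earth launch are omitted.

import requests

API = "http://lapi.transitchicago.com/api/1.0/ttpositions.aspx"
KEY = "YOUR_CTA_API_KEY"   # placeholder -- the real API requires a key

def red_line_placemarks():
    """Return one KML <Placemark> string per active Red Line train."""
    resp = requests.get(API, params={"rt": "red", "key": KEY, "outputType": "JSON"})
    trains = resp.json()["ctatt"]["route"][0]["train"]   # assumes the documented JSON layout
    marks = []
    for t in trains:
        label = f'{t["rn"]} {t["nextStaNm"]} {t["heading"]}°'
        marks.append(
            f'<Placemark><name>{label}</name><styleUrl>#r</styleUrl>'
            f'<Point><coordinates>{t["lon"]},{t["lat"]},300</coordinates></Point></Placemark>'
        )
    return marks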

MACE, You’re Off the Rails!

A glaring hole in this version is the lack of context. We can’t see the elevated tracks.

The API can again help us with that. Another API endpoint is ttfollow, which gives details about a specific train — including its coordinates:

http://lapi.transitchicago.com/api/1.0/ttfollow.aspx?runnumber=811

My strategy to plot the train tracks was to find a train that was just leaving a terminus, repeatedly query that run number (train number), and collect lat/lon points until that train reached the opposite terminus. That set of points can then be coded into a KML file as a LineString, a polyline drawn from point to point.

Sounds simple, but that’s when my frustrations began to grow.

Challenges

Zeroth, the documentation for the API says that when you query a Line for all trains, it returns an array of train dictionaries. But if there is only one active train on the Line, you get a bare dictionary rather than a one-element array. And if no trains are running on the Line, the “trains” JSON key doesn’t even exist. This wasn’t a problem until I queried the Purple Line outside rush hour and the Yellow (Skokie Swift) Line. I found solutions, but they make the code kludgey.
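The fix boils down to normalizing whatever the API hands back before looping over it; a sketch:

def as_train_list(value):
    """Normalize the API's quirky train value into a plain list.
    value is whatever came back under the train key, or None if the key is absent.
    Missing -> [], single train (bare dict) -> [dict], normal case -> list."""
    if value is None:
        return []
    if isinstance(value, dict):
        return [value]
    return value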

First, it’s difficult to catch a train at a terminus. It’s not like the train sits at the O’Hare terminal, broadcasting its GPS and a station name of O’Hare. The best I could do was randomly catch a train soon after it left a terminus, moving toward its next stop – Rosemont. So I had to frequently query the Blue Line hoping to see a Rosemont train HEADING EAST (as opposed to a Rosemont train heading west from Cumberland!). Once I saw a train starting a run, I had to take its run number and begin the follow-up queries.

OK — annoying, but then, patience is a virtue.

Second, as I was monitoring the progress of a train, it got somewhere near the Belmont station when POOF — it dropped off the radar, never to return. I have no idea why that train disappeared, and I have seen this anomaly multiple times since.

OK — annoying, but…

Third, after I’d collected about 60 data points, I plotted them and saw a most bizarre path.

The train appeared to jump forward, then back, then forward again. I observed this behavior many times, and I didn’t believe this is what the train was actually doing. I concluded there is something wrong with the data being returned from the API. Letters to the API manager have gone unanswered.

I ended up developing a workflow where I could identify these spacetime anomalies and correct them. Looking at the illustration above, if you get rid of point #3 (which has the exact same coordinates as point #7), the data looks realistic and reasonable. I imported the data points into Excel and used conditional formatting to highlight duplicates:

Then I simply deleted the first duplicate (which unformats that dup’s twin) and moved on to the next “first” duplicate. Once no duplicates were present, I put the data points into the KML file to plot the line.
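The same clean-up could be scripted instead of done in Excel. Here’s a rough sketch that mirrors the workflow above by dropping the first occurrence of any repeated coordinate pair, keeping only the last:

def drop_repeats(points):
    """points: (lat, lon) samples in collection order.
    Keep only the last occurrence of each coordinate pair, which is the same
    as deleting the 'first' duplicate (point #3 in the example above)."""
    seen = set()
    cleaned = []
    for lat, lon in reversed(points):
        if (lat, lon) in seen:
            continue          # an earlier duplicate of a later point -> discard
        seen.add((lat, lon))
        cleaned.append((lat, lon))
    cleaned.reverse()         # restore chronological order
    return cleaned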

I went through this process for each of the CTA Lines.

The end result of all this is the following graphic:

When the train positions are overlayed onto the colored Lines, you get:

If you repeatedly plot the trains, you get a timelapse of the movement of the trains through the system.

https://youtu.be/C2WNu9fGgoc

This exercise shows the hardware version of this project is entirely possible. A prototype of the Purple Line seems very doable. It will need 9 LEDs, one for each stop, plus 8 more for between-station lights. I want a string of LEDs for Southbound trains and a second string for Northbound trains. That makes 17 LEDs per string, 34 in total. The processing flow would be:

Query the Purple Line and get its trains
For each train in AllTrains
   Look at the Next Stop and direction of the train
   If Approaching is Yes
      light up that station's LED
   Else (Approaching is No)
      light the between-station LED just prior to Next Stop
Rinse, repeat

There is still the issue of time warping, but I no longer care. I’ll just assume that if a train is reported at a given lon/lat, then that’s where it is.

Next Stop

The next step is to build a prototype using LED lights and migrate the software to a Raspberry Pi.

Using ChatGPT to Help Locate My Birthstar

I’m 68.56 years old today. My birthstar is that star which is 68.56 light-years away from Earth. The light produced by that star on my birth date is just reaching Earth now! Later, when I’m older, my birthstar will be different.

I wanted to write a program that would list my birthstar(s). I asked my astrophysicist friend where I could find an easily queried, public-access database that contains stellar distances. She suggested the Gaia Archive: https://gea.esac.esa.int/archive/

I soon learned that the Gaia Archive contains a table called gaiadr3.gaia_source. This table contains fields such as:

Field                   Meaning
source_id               Unique ID assigned to this star by the Gaia project
ra                      Barycentric right ascension α of the source in ICRS at the reference epoch ref_epoch (degrees)
dec                     Barycentric declination δ of the source in ICRS at the reference epoch ref_epoch (degrees)
ipd_gof_harmonic_phase  Phase of the variation of the IPD GoF (reduced chi-square) as a function of the position angle of the scan direction (degrees). Clear as mud.

I knew that right ascension and declination are important for locating stellar objects. (Who can forget the scene in Contact when Jodie Foster first hears alien signals from space and relays the coordinates of those signals to her team in the control room of the VLA radio telescope? https://youtu.be/5WSqMGJpAZk)

But I couldn’t find anything in the table called distance.

ChatGPT

ChatGPT is an artificial intelligence project that has made a lot of news recently. ChatGPT is currently available for free at https://chat.openai.com. I thought I’d use it to help me figure out if the Gaia Archive can tell me my birthstar.

When you launch a ChatGPT session, called a chat, you start with a simple text box. I started my chat with

me: how can I use the Gaia archive to give me a list of stars that are a specific distance away

(The responses from ChatGPT can be verbose, so I’ll redact some of them.)

ChatGPT: …you can perform a query using the Gaia archive’s query interface. The query would likely involve specifying a distance range and using the parallax measurement to calculate the distance to each star. …you can use “ADQL” (Archive Data Query Language) which is similar to SQL…

That sounds promising. I know SQL and I can use parallax to get the distance.

me: how do you compute distance from parallax

ChatGPT: Distance (in parsecs) = 1 / (parallax angle in arcseconds)

That’s a pretty easy formula. I’ll be able to program that into other applications related to astronomy. Now to get a list of stars and their parallax values.

me: write a Gaia archive adql query to list stars that are 69 light years away

I kid you not. I asked ChatGPT to write an ADQL query that I could just paste into the web page that ChatGPT referred to in the first answer. And ChatGPT did it.

ChatGPT: Here is an example of an ADQL query that can be used to list stars that are 69 light years away using the Gaia archive:

SELECT 
    source_id, ra, dec, parallax
FROM 
    gaia_dr3.gaia_source
WHERE 
    parallax >= (3.08567758e-05 - 0.00005) AND
    parallax <= (3.08567758e-05 + 0.00005)

I pasted the code into Gaia’s query interface and got syntax errors. What followed was some back and forth with ChatGPT regarding its bogus advice.

me: that query has unresolved identifiers
it: I apologize for any confusion, you are correct that the query I provided in my previous response would not work as written…
me: the query you gave me should use gaiadr3.gaia_source as the from parameter
it: I apologize for the mistake in my previous response

Once ChatGPT’s syntax errors were corrected, the query ran for 30 minutes and returned more than 3,000,000 stars that are (supposedly) between 60 and 70 light-years away. I kept narrowing the range and eventually got the results down to 37,000 stars. I didn’t know if that was the correct number, so I tried to determine the parallax measurement for a 69 light-year-distant star using paper and pencil.

It turns out that while parallax is normally measured in arcseconds, Gaia records milli-arcseconds. ChatGPT didn’t seem to know that. I eventually came up with the correct values to plug into a Gaia query.
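For the record, here’s the paper-and-pencil check written out as a few lines of Python, using the usual 3.2616 light-years per parsec:

LY_PER_PARSEC = 3.2616

distance_ly = 69.0
distance_pc = distance_ly / LY_PER_PARSEC      # ~21.16 parsecs
parallax_arcsec = 1.0 / distance_pc            # ~0.0473 arcseconds
parallax_mas = parallax_arcsec * 1000.0        # ~47.3 milliarcseconds, which is what Gaia stores

print(f"target parallax: {parallax_mas:.2f} mas")

That roughly 47 mas value, give or take a small window, is what actually belongs in the WHERE clause, a long way from the 3.08567758e-05 that ChatGPT originally suggested.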

REST API

The queries up to this point were very web-interactive with a lot of keyboard and mouse clicks. I wanted to use a command line tool. So I asked,

me: does the Gaia archive have a REST API

ChatGPT: Gaia archive does have a REST API… Currently, the main endpoint of the Gaia archive REST API is: https://gea.esac.esa.int/tap-server/tap

Reading the REST documentation, I found there’s a Python module that can be installed to make queries easy. I installed it and asked

me: write a python to query the Gaia archive that lists ra, dec and parallax for stars that are 69 light years distant and use the astroquery.gaia module

ChatGPT:

from astroquery.gaia import Gaia

# Define the query
query = """
SELECT ra, dec, parallax
FROM gaiadr2.gaia_source
WHERE 1 / parallax * 0.001 * 3.26 = 69
"""

# Execute the query
job = Gaia.launch_job_async(query)
result = job.get_results()

# Print the result
print(result)

AND IT WORKS! I threw some bells and whistles into the Python and can now get anyone’s birthstar by supplying a birthdate on the command line.

> p3 birthstar.py -h
birthstar.py [-b:][-n][-h]
-b birthdate yyyymmdd  default: 19540627
-n Northern Hemisphere default: False
-h print usage

> p3 birthstar.py -b 19570411 -n
The person was born 4/11/1957
They are 65.77 years old today
Looking for stars 65.76986301369857 light-years distant (+/- 7 light-days)
Target parallax is 49.5906
Only stars in the Northern hemisphere will be listed.
Here's the Gaia ADQL query

SELECT
    source_id, ra, dec, parallax
FROM 
    gaiadr3.gaia_source_lite
WHERE 
    parallax <= 49.60507969737007 AND 
    parallax >= 49.5761590662218 
    AND dec > 0
ORDER BY parallax DESC

Output saved to: 1674002905756O-result.csv

source_id	        ra         dec	   parallax
1858219151114880000  312.54398   29.38393  49.59437
3270079526697710000   56.77948    1.64475  49.58366
3914019231742220000  170.93122    8.56434  49.58089

Conclusion

ChatGPT was an absolute wonder to work with. It understood what I wanted. And despite the misinformation it gave me and its use of magic numbers in its code, I don’t think I could have finished this project as quickly as I did without consulting it.

The Adafruit PyPortal and Modifying Adafruit Libraries

Adafruit.com has a large selection of electronic components to buy. They also have a very active YouTube channel, a large number of Learning Guides, and software libraries to control all that hardware. Adafruit’s PyPortal dev board is a wi-fi enabled device with a small, color, touch-enabled screen. The PyPortal is programmed using CircuitPython and Adafruit has published several learning guides for it.

Example projects

Here are just a few of the PyPortal projects that I’ve downloaded and modified:


  1. Event Countdown Timer
    Modification: A vacation countdown timer that cycles through graphics representing my next four vacation destinations and the number of days until that vacation begins.

  2. ISS Tracker
    Modification: Lengthened the satellite’s trail to show the entire orbit just completed.

  3. Hurricane Tracker
    Modification: Added trails to show where the hurricanes have been, not just where they are now.

  4. Cleveland Museum of Art Display Frame
    Modification: wait for it…

These applications all have one thing in common: they all reach out to the Internet to grab data and then display that data on the screen. When multiple applications follow the same general pattern, a lot of effort can be saved by using a well-written software library — and Adafruit has written one for the PyPortal.

Example code

To get a feel for the PyPortal’s power and ease-of-use, let’s look at the Quote of the Day project. When you point your browser to https://www.adafruit.com/api/quotes.php, you’ll get back something like this:

[
 {
 "text": "Somewhere, something incredible is waiting to be known.", 
 "author": "Sharon Begley",
 }
]

This JSON response from the web site has two pieces of text that we’d like to display on the screen – the quote and the author’s name. To do that using the Adafruit PyPortal Library, the program must create a PyPortal object and initialize it with all the relevant information:

import time
from adafruit_pyportal import PyPortal

portal = PyPortal(
         url='https://www.adafruit.com/api/quotes.php',
         json_path=([0, "text"], 
                    [0, "author"]),
         text_position=((20, 120),  # screen location for quote 
                        ( 5, 210)), # screen location for author 
         text_color=(0xFFFFFF,      # quote text color 
                     0x8080FF),     # author text color 
         text_wrap=(35,             # characters to wrap for quote 
                     0),            # no wrap for author 
         text_maxlen=(180,          # max length for quote
                       30),         # max length for author
         text_font="/fonts/Arial-ItalicMT-17.bdf",
  )
while True:
     portal.fetch()
     time.sleep(60)

The loop refreshes the screen with a new quote every 60 seconds using the PyPortal’s fetch() method.

The PyPortal library does all the heavy lifting of connecting to your LAN, browsing to the site’s url, converting the returned JSON into a Python dictionary, and displaying the text on screen. Loads of applications follow this same fetch/display model — current weather data, current number of “likes” on my latest YouTube post(s), latest bitcoin price, local gas prices, current DuPage County covid case count, etc.

Displaying Images

The Cleveland Museum of Art Display Frame Learning Guide follows the same general pattern, but includes the download and display of an image as well as text. (All the previous examples use a static background image.) Let’s see how the PyPortal library handles dynamic images. Here’s the new code:

pyportal = PyPortal(
       json_path=["data", 0, "title"],
       text_position=(4, 231),
       text_color=0xFFFFFF,
       text_font="/fonts/OpenSans-9.bdf",                    
       
       image_json_path=["data", 0, "images", "web", "url"],
       image_dim_json_path=(["data",0,"images","web","width"],
                            ["data",0,"images","web","height"])
       image_resize=(320, 225),
       image_position=(0, 0),
 )

The text parameters are still there. So what’s different? For one thing, there is no URL. We’ll deal with that later. And there are new parameters to deal with the image:

  • image_json_path – points to the URL of the downloadable image
  • image_dim_json_path – points to the dimensions of the downloadable image
  • image_resize – fit the image into these dimensions
  • image_position – indicates where on the screen to place the resized image

Now, what about the URL for the Museum? In the Quote project, the URL never changed; each time you .fetch()’d that URL, a different quote was returned. But for the Museum project, you have to provide a different URL for each piece of art in the Museum’s collection. There are 31,954 pieces in the Museum collection, and the URL for the 54th piece would include an &skip=53 parameter. The main loop below will do that — starting with the 1st piece and sequentially displaying all the pieces.

skipcount=0
while True:
   pyportal.fetch(f'http://.....&skip={skipcount}')
   skipcount+=1
   time.sleep(60)

Art Institute of Chicago (ARTIC)

Having recently visited the Art Institute of Chicago, I got to thinking — can I modify the CMA project to display the art from the ARTIC? Does the ARTIC even have a website like the CMA’s? YES it does – https://www.artic.edu/open-access/public-api! So I started coding up the PyPortal parameters and ran into a BIG roadblock. The ARTIC database does not list the full path to a piece of art’s downloadable image; the URL must be constructed from several bits of data. Here’s a highly redacted JSON response for a specific piece of ARTIC art:

{
  "data": [
    {      
      "thumbnail": {
          "width": 3000,
          "height": 1502
      },
      "image_id": "cb34b0a8-bc51-d063-aab1-47c7debf3a7b",
      "title": "Ballet at the Paris Opéra"
    }
  ],
  "config": {
    "iiif_url": "https://www.artic.edu/iiif/2",
  }
}

The title of the artwork is straightforward.

The image’s URL can be computed with the following code:

["config","iiif_url"] + "/" + ["data",0,"image_id"] + "/full/!320,240/0/default.jpg"

I.e.,

https://www.artic.edu/iiif/2/cb34b0a8-bc51-d063-aab1-47c7debf3a7b/full/!320,240/0/default.jpg

but that doesn’t fit the PyPortal library’s assumption that the JSON will contain the downloadable image’s URL in a single key:value pair.

Open Source

Then at 3:00 AM, I had the idea of editing the PyPortal library’s open source code. So I downloaded the Adafruit_CircuitPython_PyPortal/adafruit_pyportal library from GitHub and started searching through the code. I found the right spot in the fetch() method and inserted a patch to build the image URL and add that URL back into the JSON under a new key named ‘artic_image_URL’. The PyPortal has no idea that this key is something I created and slipped in rather than something that came from the Museum.

I also had to add code to stuff the image’s dimensions into the JSON. I started with [“thumbnail”][“width”] and [“thumbnail”][“height”] from the ARTIC JSON. These are the dimensions of the master thumbnail image. But I requested !320,240 in the image URL, which means the ARTIC would send me an image that fits within a 320×240 bounding box. The actual downloaded image size might be 166×240 or 320×185, to preserve the aspect ratio. So I had to compute the dimensions that the PyPortal library was actually going to encounter.

Here’s the complete patch that I made to the Adafruit library:

            # Build the image URL
            PREFIX = json_out["config"]["iiif_url"] + "/"
            IMAGE = json_out["data"][0]["image_id"]
            OPTIONS = '/full/!320,240/0/default.jpg'
            json_out["artic_image_URL"] = PREFIX + IMAGE + OPTIONS
            
            # Compute the aspect of the Library's artwork
            image_Width = json_out['data'][0]['thumbnail']['width']
            image_Height = json_out['data'][0]['thumbnail']['height']
            aspect = image_Width / image_Height

            # Resize the artwork to fit in a 320 x 240 frame
            if aspect > 320.0/240.0:
                 thumb_width = 320
                 thumb_height = int(320 * image_Height / image_Width)
            elif aspect < 320.0/240.0:
                 thumb_height = 240
                 thumb_width = int(240 * image_Width / image_Height)
            else:
                 thumb_width = 320
                 thumb_height = 240
            json_out["thumb_width"] = thumb_width
            json_out["thumb_height"] =  thumb_height

And here is the PyPortal code that retrieves ARTIC images:

portal = PyPortal(image_json_path=["artic_image_URL"],
                  image_dim_json_path=(["thumbnail_width"],
                                       ["thumbnail_height"]),
                 ...)

To get the PyPortal device to run my patched code instead of the original, “compiled” library code, I simply had to replace the /lib/adafruit_circuitpython_pyportal/__init__.mpy file with my modified __init__.py. My .py file happily coexists alongside all the “compiled byte code” .mpy library files.

And this is what appears on the PyPortal:

ARTIC Splash Screen
Ballet at the Paris Opera
Yellow Dancers (In the Wings)

etc.

Downsides

My solution isn’t flawless. For some reason, landscape-oriented thumbnails are not getting resized correctly, but portrait-oriented thumbnails are. Secondly, my patch isn’t robust: if I try to run the Hurricane Tracker project while my patch is in play, the app will crash. I’ve created a GitHub issue asking that a JSON transform function be created to allow pre-processing of the JSON, and I’m waiting to see if anyone takes an interest. Visit https://github.com/adafruit/Adafruit_CircuitPython_PyPortal/issues/126 if you want to “Like and Subscribe” the issue.

How to make 360° VR photo panoramas, and you can too!

It all started when I wanted to make a “photo field” of the local Aurora Barnstormers airfield for the Real Flight RC flight simulator (interestingly, Real Flight internally converts the image into a cube map). Photo fields are 360° photo panoramas similar to old QuickTime VR photos. They are perfectly suited to the radio control simulator application because the pilot typically stands in one place while operating the aircraft or vehicle. A 360° photo panorama for this application is an equirectangular projection: a photo that is twice as wide as it is tall, where each column represents a longitude from 0° to 360° left to right, and each row represents a latitude from -90° to 90°, from looking straight down to looking straight up. The horizon sits in the middle of the picture, like the equator halfway between the bottom and top of a globe.

(The black area at the bottom is due to not including some source photos in this panorama to hide the tripod and my feet)

Note: Unfortunately this viewer does not work on mobile devices.

Capture

There are cameras that can take these pictures, but I do not own one of these cameras. I do own a smartphone and a tripod.

Actually, at first I thought I would try to 3D print and build an existing design for an automated solution like this one from Thingiverse

Clearly, the device worked for the creator but I was not so lucky. The print-in-place gears were all welded solid and the fitting parts didn’t fit. This happens.

I struggled for a long time before I realized that for as often as I would use the rig I could probably just take the photos using a tripod if I knew which photos to take and had a suitable guide.

The first thing I needed to do was determine how many photos to take and in which directions. For this I found panoplanner.

By entering the camera sensor data and focal length of my phone’s primary camera (this required a non-trivial amount of web searching), and adjusting acceptable overlap, I was able to generate a table like this.

More overlap is better when making panoramic images: it helps connect photos together and provides more data to compensate for the different projections of each image. So I increased the center row to 16 photos, kept the second rows at 8, and increased the high and low rows to 4 pictures each. All being powers of 2 simplified the process, as you will see.

First, I needed to create an elevation index with the 5 required elevations highlighted: 72°, 36°, 0°, -36°, -72°. To do this I cut a square piece of paper roughly the size of my tripod knob, folded it on the diagonal 4 successive times, cut the outer edge into a pie-slice shape to make the paper circular, and unfolded it. Now I had a circle of paper the size of my tripod knob with creases every 22.5° (360°/16). I darkened the markings and marked a zero point in red before re-folding the paper and cutting the point off to remove the center.

In a similar fashion I cut a thin strip of paper to go around the equatorial pivot of my tripod (the twisty part at the bottom of the pole), wrapped the paper around the bottom cylinder, and marked where the end of the paper overlapped itself. Then I folded the end of the strip to the marked line, then in half again, again, and again, creating 16 divisions, similar to how I created the circle in the previous step. I unfolded the paper and darkened each crease, again at 22.5° increments. (Sorry, no pictures of this process.)

I taped these to my tripod as pictured and added a piece of red electrical tape cut into a triangle to serve as an elevation indicator on the upper dial.

Finally, I added color-coded markings to the dials to indicate which angles are needed for each of the 5 elevations. The elevation dial is marked red at 0° when the tripod is level, blue at approximately ±36°, and green at ±65°.
Note: I had to use the shallower 65° rather than 72° because my tripod is limited. For my application, straight up and down are less important parts of the image, but those areas are still captured in the camera’s field of view (FOV).

I have a bracket on the top of the tripod that I strap my phone to when taking photos.

The procedure involves taking 40 photos:

  • 16 equatorial photos (red, black): level the camera and take a photo at every black mark on the lower band.
  • The next 16 photos are two sets of 8 photos, one set at each of ±36°:
    • Pitch the camera up until a blue mark points down: take 8 photos, one at every other black mark highlighted blue.
    • Pitch the camera down until the other blue mark points down: take 8 photos, one at every other black mark highlighted blue.
  • The final 8 photos are two sets of 4 photos, one set at each of ±72° (±65° in my case):
    • Pitch the camera up until a green mark points down: take 4 photos, one at every fourth black mark highlighted green.
    • Pitch the camera down until the other green mark points down: take 4 photos, one at every fourth black mark highlighted green.

The tripod guarantees the photos are all taken from the same position, the markings ensure they cover the entire 360° field of view. The process is easy and foolproof, allowing the 40 photos to be taken in rapid succession which I found can be important to minimize the movement of clouds and lighting changes during the capture.

Here are photos taken in the main Workshop 88 meeting area.

I’ve decorated a tripod and taken a pile of pictures, now what?

The absolute best 360 panorama software I tried was PTGui: it’s fast, effortless, and has the best stitcher for seamless panoramic photos. It also adds lots of watermarks if you use the free version; you have to pay for the Pro version to render un-watermarked panoramas.

The next best software I found is Hugin Panorama photo stitcher, which I regularly use now. It takes a little longer, the UI can be a little more cumbersome than PTGui, and the results aren’t always flawless, but it’s still really good and, best of all, FREE.
This is what it looks like when you first open it. We’ll use the simplified workflow in the Assistant tab. You can click 1. Load Images or just drag and drop your 40 (or more) photos onto the application.

When the pictures load it will probably look something like this.

Next click 2. Align.

The assistant will run for a long time. It analyzes each photo locating image features, then connects features between photos and finally optimizes the matches for stitching. Here are some snapshots of the assistant in action.

After alignment, your image may look something like this.

Notice the picture is upside down and the horizon line is a little wavy. You may also want to adjust the overall image to be right side up or pointing in a particular direction. The Move/Drag tab contains the tools needed for these manipulations.

First I want to make the panorama right side up. To do this I set the Pitch: to 180 and click Apply.

Then I want to face into the wood shop with the main entrance behind the viewer. To do this I make sure Drag mode: is set to normal and I left click and drag the image to the desired orientation.

Finally, I clicked Straighten to automatically fix up the horizon line.

At this point I’ll typically go straight to exporting the panorama, but if it’s not perfect you can refine the position and orientation of photos. To adjust individual or groups of images change the Drag mode: from normal to normal, individual. The interface from here on is clunky and I will only superficially cover it here.

The displayed images row will show all the images loaded. The visible images will appear with their number highlighted a blue color, and those not visible will appear grey. The check-marks above the image numbers in Drag mode: normal, individual indicate which images are selected for dragging. The All and None buttons will toggle visibility of all the photos. The image numbers may extend off to the right so it’s best to maximize the window so you can see the status of as many photos as possible and remember to scroll if needed to make sure unwanted photos are not selected (in this example we imported 40 photos, and only 0-27 are visible without scrolling).

You can set the Overview window Mode: to Panosphere (inside) to better preview your 360 photo sphere. What you see in both windows will be a collage of the images drawn in numerical order without blending. Your resulting panorama will look better, so it’s best not to obsess too much over tiny details until you have some experience predicting what the output might look like.

Hold Control and mouse over the image to color code and highlight images under the mouse cursor.

You can select images with Control Right Click, which will select all the images under the mouse cursor. You need to click the check-boxes to un-select them. Once an image is selected, you probably want to hide the images that overlap it so you can see how you are moving it. In this example you can see I’ve selected image 5 and am only showing images 4, 5, and 6 so I can manipulate photo 5.

Left button drag translates the image, Right button drag rotates the image around the center of the panorama. I’ve grossly moved and rotated image 5 as an example here.

These operations may not behave as you predict due to the panoramic projection but they are indeed translations and rotations.

Control Z, as always, is undo and is your best friend here, as is Control S for save.

Once you are satisfied with the photo layout, you are ready to create a panorama. Click Assistant to return to the assistant workflow and click 3. Create panorama… and you will see this window.

You will probably want to set a smaller width or height; the aspect ratio will be preserved. I typically use 8192×4096.

There are several options for LDR Format, I use JPEG, but TIFF and PNG are also available.

I generally leave the Output Exposure settings as they seem to be set by the program possibly based on the photo/panorama quality (I think).

I only check Keep intermediate images if I plan to iterate on exposure settings which is pretty much never.

Then click OK.

Here is the resulting panoramic image of the Workshop 88 main meeting space.

And here is what it looks like in 360 VR. Zoom in/out and click and drag to look around Workshop 88!

Note: Unfortunately this viewer does not work on mobile devices.

Click here to see a virtual tour of all of Workshop 88.

We’re using the WP Photo Sphere WordPress plugin to display these images on our site. It’s free and works well, but does not support mobile devices.

To use these photos in Real Flight, I just import the Raw Panoramic Image as you can see below. Then you can either immediately fly in that environment or construct a virtual airfield where you can model 3D obstacles in the photo environment, add virtual elements like a wind sock, etc.

This was a high level overview of my simple workflow to generate these images. Hugin Panorama photo stitcher has many more features allowing one to refine the process, be sure to review the documentation.

Now I can easily bring my tripod to any location and use my phone to take pictures that I load into Hugin Panorama photo stitcher to create 360° photo panoramas, which can be used for virtual tours or for virtual airfields in Real Flight.

And you can too!

Welcome to the Workshop 88 makerspace, a 360° photo-panoramic VR virtual tour.

Images captured Tuesday November 29, 2022

(Note: This viewer does not work on mobile devices)

Watch for a future post on how I made these, and you can too!

Roland JV-30 Garage Sale Find and Restoration

This Roland JV-30 16-part multitimbral synthesizer was purchased at a garage sale for $5.

No power supply was included; it requires 9V at 800mA (at least) on a center-negative barrel connector. To test the unit, a new barrel connector was used with a current-limited DC power supply set to 9V.

No life.
Yikes! Current clamped: 1.55A at 0.7V!

Time to look inside. Someone’s been in here before; half the screws are missing.

Nothing exploded, no smoke, passes the sniff test. Literally, no acrid burnt electronics smell.

Wait a minute! A blown and cracked SMD cap right behind the power plug!
It’s unmarked but looks like a bypass/filtering capacitor, so I replaced it with a similar 1µF SMD capacitor and retried…

Same result, the capacitor was a red herring.

Time to break out the FLIR infrared camera. If something is taking 1.5A it’s got to be getting hot.

Found a suspect component in the power supply section right by the power switch (lower right corner).

This reverse-polarity protection diode is shorted. I suspect lightning damage, a reversed-polarity hookup, or a power supply of the wrong voltage.

Success!

The JV-30 powered on at 9V with a nominal current draw under 500mA.

Used Roland JV-30s routinely sell for over $300 on eBay, and this one was had for $5 plus a capacitor, a diode, some know-how, and a little time.

Bonus Content

It was missing a slider so I whipped one up in OpenSCAD and 3D printed a couple in black PETG at 0.1mm layer height.

3D printing tip: When printing small parts it is often beneficial to print them in separated pairs or with other parts so the part has time to cool before the next layer.

They turned out great and fit perfectly.

Here is the OpenSCAD source code:

/* 
    Roland JV-30 volume slider cap
    by D. Scott Williamson 
    May 16, 2022
*/

$fn=32;
// dimensions (mm)
w1=7;        // width of the base footprint
w2=6.5;      // width at the top
l1=13.9;     // length of the base footprint
l2=13.4;     // length at the top
h1=7.16;     // overall height
h2=6.21;     // height where the stepped finger grip begins
steps=5;     // number of step increments in the finger grip
stepl1=2.8;  // length of the smallest (deepest) step cut
stepl2=10.68;// length of the widest (topmost) step cut
stepl=((stepl2-stepl1)/2)/steps;  // length increment per step, per side
steph=(h1-h2)/(steps+1);          // height of each step
rad=.25;     // corner rounding radius
holeh=5;     // depth of the + shaped hole in the bottom
holew1=2;    // width of the long bar of the + shaped hole
holel1=5;    // length of the long bar of the + shaped hole
holew2=3.9;  // width of the cross bar of the + shaped hole
holel2=2.2;  // length of the cross bar of the + shaped hole

// part
color([.2,.2,.2])
difference()
{
    // basic slider shape
    hull() 
    {
        // rounded footprint
        for(x=[-w1/2+rad,w1/2-rad]) for(y=[-l1/2+rad,l1/2-rad]) t([x,y,0]) cylinder(r=rad,h=.01);
        // rounded top corners
        for(x=[-w2/2+rad,w2/2-rad]) for(y=[-l2/2+rad,l2/2-rad]) t([x,y,h1-rad]) sphere(r=rad);
    }
    
    // steps
    for(i=[0:steps]) 
    {
        t([-w1/2,-stepl1/2-i*stepl, h2+i*steph]) cube([w1,stepl1+i*stepl*2,steph+.1]);
    }
    // + shaped hole in bottom
    cube([holew1,holel1,holeh*2],center=true);
    cube([holew2,holel2,holeh*2],center=true);
}

// Shortcut methods
module t(t) {translate(t) children();}
module tx(t) {translate([t,0,0]) children();}
module ty(t) {translate([0,t,0]) children();}
module tz(t) {translate([0,0,t]) children();}
module r(r) {rotate(r) children();}
module rx(r) {rotate([r,0,0]) children();}
module ry(r) {rotate([0,r,0]) children();}
module rz(r) {rotate([0,0,r]) children();}
module s(t) {scale(t) children();}
module sx(t) {scale([t,1,1]) children();}
module sy(t) {scale([1,t,1]) children();}
module sz(t) {scale([1,1,t]) children();}
module c(c) {color(c) children();}

Upcycled LED Illumination for 3D Printing Area

I was given some salvaged LED lights which became a worthwhile improvement to my 3D printing area.

This is what the 3D printing area looked like before the upcycled lighting. Since all of my prints are automatically video recorded, I either had to leave all the shop lights on or plug in a small lamp when printing in the evening. Results were less than ideal.

There were two types of lights: one unit, from illuminated Exit signs, was fully self-contained; the other, designer lamps, had separate power supplies. There were two of each, plus some additional power supplies.

The Exit sign units would have been easier to install but the light was too cool and had a harsh unnatural fluorescent look.

It took a little more work to learn about the designer lamps and verify I had the correct transformers for them.

I initially powered them from my DC bench power supply, discovering they take somewhere over 30 volts DC. Then I wired the lights to the transformer that appeared to match them and verified the output current did not exceed the 700mA rating on the transformer. Finally, I powered them up for a while to make sure neither the lights nor the transformer got too hot. Note the metal frame is not grounded at this point, so it is isolated from the CNC rail using a magazine.

The designer lamps have a nice warm light and the lights are clustered but spread over a wider area which helps soften shadows.

Here you see the space where I will install the lights. I decided to hang the irregularly shaped lights from the bottom of the top shelf using wire through holes drilled at the three narrow points on each light. I measured and centered the lights and transformers, then marked all the holes for drilling. I drilled each of the holes off the edge of the bench with a piece of scrap behind the particle-board shelf to minimize blow-out and to make sure I didn’t accidentally drill holes into my bench.

Since I had a quart of old white paint on the bench, I decided to paint the shelf to further brighten the area. I used florist’s wire to mount the lights and #8 1/2″ screws and washers to mount the power supplies.

I added some electrical reference to my shop decor. When repurposing used cords, be sure the wire is in good condition and rated for the current load of your application. Also be sure to identify and verify the hot and neutral wires: the neutral wire has the wider blade on the plug and ridges on the wire, while the hot lead has a narrower blade and smooth wire. I also learned that closing a switch housing with a vise is a bad idea. Fortunately I had a second switch.

Here is the final wiring and test. Note the addition of a green grounding wire screwed to each fixture. All the wiring is done with wire nuts and is secured with electrical tape. There is a wire strain relief mount nailed to the board so the potentially live wires are not hanging from the transformers.

I’m super happy with the results. Here is what the 3D printers look like from their cameras, and how well illuminated the 3D printing area is in the otherwise dark shop.

Watching the James Webb Space Telescope Cool

For those of you who like to watch grass grow or paint dry, here’s another option: watching the James Webb Space Telescope cool.

The JWST was successfully launched, deployed, and achieved stable orbit. But it’s not yet ready to perform science. It first has to cool down to a few degrees above absolute zero.

There are nine temperature sensors in total. Two spacecraft sensors on the sunny side of the craft:

  • Sunshield UPS Average Temperature (hot)
  • Spacecraft Equipment Panel Average Temperature (hot)

two on the shady side:

  • Primary Mirror Average Temperature (cold)
  • Instrument Radiator Temperature (cold)

and five on the science instruments panel:

  • MIRI Mid InfraRed Instrument
  • NIRCam Near InfraRed Camera
  • NIRSpec Near InfraRed Spectrometer
  • FGS/NIRISS Fine Guidance Sensor / Near InfraRed Imager and Slitless Spectrograph
  • FSM Fine Steering Mirror

NASA publishes the current temperature data at https://www.jwst.nasa.gov/content/webbLaunch/whereIsWebb.html?units=metric.

To watch how quickly the JWST is cooling, you need to see the historical temperatures. I’ve put together a dashboard to do just that. I wrote a Python app to scrape the JWST website and record the data in Adafruit IO, and I use Adafruit IO’s dashboard building blocks to display graphs and gauges for each sensor.

You can find that dashboard at https://io.adafruit.com/MrsMace/dashboards/jwst.
My Python code is at https://github.com/w8HAQRHkTx7r/jwst
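The repo above has the real code; in rough outline the scraper looks something like the sketch below. The parsing step and feed names are placeholders, since they depend on the page’s markup and my Adafruit IO setup.

import time
import requests
from Adafruit_IO import Client    # pip install adafruit-io

aio = Client("YOUR_AIO_USERNAME", "YOUR_AIO_KEY")   # placeholders
JWST_URL = "https://www.jwst.nasa.gov/content/webbLaunch/whereIsWebb.html?units=metric"

def read_temperatures(page_html):
    """Placeholder: extract the nine sensor readings from the page.
    The real parsing depends on the page's markup and isn't shown here."""
    return {}   # e.g. {"jwst-mirror-temp": -220.5, ...}

while True:
    html = requests.get(JWST_URL).text
    for feed_key, value in read_temperatures(html).items():
        aio.send_data(feed_key, value)    # one Adafruit IO feed per sensor
    time.sleep(15 * 60)                   # poll interval is a guess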