Revisiting the Art Institute of Chicago Project

I’d written previously about my project to display images of artworks owned by the Art Institute of Chicago (ARTIC). The challenge was that, unlike the project that inspired it, the ARTIC’s REST API does not return the full URL path to the image files. I solved it by patching the source code for the PyPortal library. It was clunky and stopped me from using that PyPortal for any other project.


Then, inspiration struck. I had an underutilized Raspberry Pi Zero W and decided to implement a web server on it that could take the response from the ARTIC, massage it, and forward it on to the PyPortal. So rather than talking directly to ARTIC, the PyPortal talks to the Raspi, which talks to ARTIC. I used Bottle to build the web server.

Enhance the original

To spice up this second version, I implemented a “Happy Birthday, <artist>” system. The PyPortal asks the Raspi for the URL of the nth image to be displayed. The Raspi looks up which artists have birthdays today and asks ARTIC if it owns any pieces by those artists. From that search results list, the Raspi constructs the URL to the nth image. It puts the image URL, the artist’s name, the artwork’s title, and the dimensions of the image into a dictionary structure and sends a JSON response back to the PyPortal. The PyPortal downloads the image and displays it on the screen.

New things to learn

For this project to work, the Raspi had to compose a query for ARTIC that returned only pieces by the desired artists. It would be poor form to display Pearson’s works on Beardson’s birthday. The prior project didn’t require such high precision. “/artworks/search/?q=impressionism” was adequate to get a lot of very beautiful images. For greater precision, I was going to have to learn complex queries using Elasticsearch’s Query DSL. The engineering staff at the ARTIC was very responsive in providing support, but I still struggled. I eventually found an online course on the complete Elasticsearch stack. Section 3 of the course was DSL. I signed up for a free trial and completed the training.

Learning DSL allowed me to develop the following process:

  1. PyPortal request to Raspi: Get the nth image’s URL
  2. Raspi table lookup: What artists were born today?
  3. Raspi request to ARTIC: Perform a query to get those artists’ ARTIC IDs
  4. Raspi to ARTIC: Perform a query to get the URL of the nth image by those artists
  5. Raspi to PyPortal: Massage ARTIC’s response and send the modified image URL to PyPortal
  6. PyPortal: Download and display the image
  7. Rinse/Repeat
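The Raspi side of that loop boils down to assembling a full image URL and trimming ARTIC’s response to just what the PyPortal needs. Here’s a minimal sketch — the function names are my own inventions, and the URL pattern follows ARTIC’s published IIIF image convention:

```python
# Sketch of the Raspi's "massage" step. Function names are mine;
# the URL pattern follows ARTIC's published IIIF image API convention.
IIIF_BASE = "https://www.artic.edu/iiif/2"

def build_image_url(image_id, width=843):
    """Assemble the full image URL that ARTIC's REST response omits."""
    return f"{IIIF_BASE}/{image_id}/full/{width},/0/default.jpg"

def make_pyportal_payload(artwork):
    """Reduce one ARTIC search hit to the fields the PyPortal displays."""
    return {
        "url": build_image_url(artwork["image_id"]),
        "artist": artwork["artist_title"],
        "title": artwork["title"],
    }
```

The PyPortal then only has to parse this tiny JSON payload instead of ARTIC’s full response.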

Here is what the Raspi uses to find artist IDs:

criteria = {
    "query": {
        "bool": {
            "should": [
                {"match_phrase": {"title":      {"query": artist_name, "slop": 1}}},
                {"match_phrase": {"alt_titles": {"query": artist_name, "slop": 1}}},
            ],  # should
            "minimum_should_match": 1,
        },  # bool
    },  # query
}  # criteria

The query takes the artist’s name and tries to find that name in the title field. It also looks for that name in the alt_titles field. The ‘slop’ of 1 specifies that ‘Louis Sullivan’ (the name in the birthday database) is a close enough match to ‘Louis H. Sullivan’ (the name used by the ARTIC).

The alt_titles field is necessary because the artist Maurice Quentin de La Tour has all of the following aliases:

      "Maurice Q. de Latour",
      "Maurice Quentin La Tour",
      "Maurice-Quentin de La Tour",
      "Maurice-Quentin de La Tour",
      "Maurice-Quentin de la Tour",
      "Maurice Quentin De La Tour",
      "Maurice Quentin Delatour",
      "Maurice Quentin de Latour",
      "Maurice Quentin de La Tour"

Misspellings are also a risk. Taddeo Zuccari is also known as Taddeo Zuccaro. Hopefully, the alt_titles will catch this. If it doesn’t, there is still a “fuzziness” factor that can be added to the query. Fuzziness can handle individual letter insertions, deletions and transpositions. The greater the fuzziness factor, the more errors it can tolerate — at the risk of including someone who should not be included. Combining fuzziness and slop makes for a very complex query and since alt_titles seems to be working, I only use 1 degree of slop and skip fuzziness.

If the Raspi is successful in finding artist_name, it extracts that artist’s unique ARTIC ID. I run the above query for each artist having a birthday today and put all the found IDs into a list.

I then loop over all the IDs and build something like the following query to find all the ARTIC artworks. The filter clauses ensure that the ARTIC has an image of the artwork and that there is a title and artist name associated with it. In the following example, I use IDs to find all artworks by two artists — Maurice LaTour and Caspar Friedrich.

criteria = {
    "query": {
        "bool": {
            "should": [
                {"match": {"artist_id": 34563}},
                {"match": {"artist_id": 34185}},
            ],
            "minimum_should_match": 1,
            "filter": [
                {"exists": {"field": "image_id"}},
                {"exists": {"field": "artist_title"}},
                {"exists": {"field": "title"}},
            ],
        },
    },
}
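Since the number of birthday artists varies from day to day, I’d build that query programmatically rather than hard-code it. A sketch (the helper name is my own):

```python
def build_artwork_query(artist_ids):
    """Build the artworks query for any number of birthday artists."""
    return {
        "query": {
            "bool": {
                # One "should" clause per artist; matching any one is enough.
                "should": [{"match": {"artist_id": i}} for i in artist_ids],
                "minimum_should_match": 1,
                # Only keep artworks with an image, an artist, and a title.
                "filter": [
                    {"exists": {"field": "image_id"}},
                    {"exists": {"field": "artist_title"}},
                    {"exists": {"field": "title"}},
                ],
            },
        },
    }
```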

Every time the PyPortal asks for the nth image, the Raspi sends back data from the nth image in this list.

While there are plenty of opportunities for refactoring and bulletproofing the code, I consider the project complete and a success. It is running continuously in my workshop — Happy Birthday, artists!

Final thoughts

The brains of the project were moved from the PyPortal into the Bottle web server. I can change the server and add any number of parallel query schemes — all the works in the Narcissa Niblack Thorne miniatures gallery, all artwork associated with the Caravaggio exhibit, images of cats, stained glass, architectural models, etc. What would you like to see?

Fireworks Deck Umbrella: Have a Safe and Happy Fourth of July

When I was first learning about microcontrollers and blinking lights, I’d seen a YouTube video by Dave Plummer, a retired system engineer at Microsoft (he wrote Task Manager). In that video, he creates what he called a Tiki Umbrella. It’s a deck umbrella tricked out with LED strips along the ribs that produce various animation effects.

I thought that would be a nice project to do for the 4th of July.

My deck umbrella has eight ribs — so, an LED strip for each rib. My original plan was to have a ninth strip running up the pole — to mimic the launch of the firework shell into the air before it explodes.

As with many of my projects, I procrastinated and ran short of time. July 4th was a hard stop date. I also lacked inventory on all the appropriate components. So the project that I implemented was not all that I envisioned.

Because the LED strips would be somewhat sheltered from the elements under the umbrella, I didn’t feel the need for IP65 or higher (IP stands for ingress protection). I settled on two rolls of Inland brand, 5-meter, WS2812B, 60 LEDs/meter strips. Given the length of the umbrella ribs, this works out to 76 pixels per rib.

For the microcontroller, I used an RP2040 Scorpio, running CircuitPython and the Adafruit LED Animation library. Here’s the wiring diagram.

Rather than a breadboard for an outdoor project, I decided to swallow my fear of soldering and used a permanent proto-board as the diagram suggests. That’s about 50 solder joints. Face your fear and push through it, MACE! Surprisingly, it was much easier than I’d expected. Flux and good Pb solder make a big difference.

I’d done all the soldering on my workbench and tested the animation software as I went — solder a strip, test the software, rinse, repeat.

Once all the strips were working, it was time to attach it all to the umbrella. The problem is that the whole circuit was one huge, heavy, awkward-to-handle octopus. The protective gel covering the LEDs loved to stick to everything it came in contact with — the table, other strips, the tags I’d attached to identify each strand, the umbrella ribs, etc. I would have liked to have 10 extra hands during install.

A much better approach would have been to build components, attach them to the umbrella, and then connect them together. Rather than solder the LEDs to the proto-board, a pair of JST 3-pin connectors with long leads would have done nicely. That way, I could have positioned the microcontroller on the umbrella, then positioned a strip on a rib, then connected the two.

This would also make repairs easier. Sadly, the solder on one of the strips failed during install and had to be repaired. It was next to impossible to bring a soldering iron up to the top of the umbrella. Luckily, I was able to take down the strip (it was no longer connected to the proto-board), solder on some STRANDED wire (for increased flexibility), add heat-shrink to the pad (for greater strain relief), and clip on a few Wago connectors. Then I just re-attached the strip to the rib and snapped the proto-board’s dangling wires into the Wagos. That’s the way I wish I’d done ALL the strips, but I didn’t have any Wagos and wasn’t sure if I could get them by the 4th.

The microcontroller and proto-board had their own installation challenges. I thought the pole and ribs were plastic. It turns out they’re metal. That’s a big short-circuit risk. I got around the problem by taping the proto-board to a piece of styrofoam to insulate it from the pole.

The placement of the microcontroller at the top of the pole makes programming changes an issue too! Here’s a picture of me having to put a music stand on top of the patio table in order to put the laptop within reach of the microcontroller USB cable. This is how I added additional effects post-install.

One final shortcoming of my install — you can’t close the umbrella in a storm. And I never animated the pole.

So…let’s just call this a prototype — better luck next holiday!

Enjoy the show!

P.S. The hot, humid weather of the 4th caused the adhesive on the back of the LED strips to fail. The strips started to hang straight down, putting a lot of stress on the solder connections. I ended up fixing this with clear packing tape. I’d lay a piece onto the LED side of the strip and then fold it around the edges until it grabbed the umbrella rib. So far, so good.

Color coded hex driver hack tool-tip

Organize your hex drivers using color

I use metric hex drivers to work on drones, 3D printers, CNC machines, and a lot of other projects. My hex drivers have cylindrical handles with the metric size printed on one side, so most of the time the sizes are not visible. Here are some random pictures of my desktop during projects with the drivers circled.


My drivers are the common 1.5mm, 2.0mm, 2.5mm, and 3.0mm sizes typically used for M2, M2.5, and M3 hardware. You can see that even when the drivers are oriented correctly, the straps in my RaceDayQuads drone field kit cover the dimensions.

Color Codes

I thought it would be great to color code them using the grooves in the handles. I already know the resistor color code standard and thought it would be natural to use the same colors to represent driver sizes.

I decided to use the color coding for the two-digit size but skip the third, exponent band since I don’t have 10mm+ drivers. I use a fixed “a.b” encoding where “a” is millimeters and “b” is tenths of a millimeter. For example, the 2.5mm driver gets a red stripe for 2 followed by a green stripe for 5.
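The mapping from size to colors is mechanical enough to express in a few lines of Python — a throwaway sketch, not part of any project code:

```python
# Standard resistor color code digits, 0 through 9.
RESISTOR_COLORS = ["black", "brown", "red", "orange", "yellow",
                   "green", "blue", "violet", "grey", "white"]

def driver_bands(size_mm):
    """Return the (whole-mm, tenth-mm) band colors for a driver size."""
    a = int(size_mm)               # "a": whole millimeters
    b = round(size_mm * 10) % 10   # "b": tenths of a millimeter
    return RESISTOR_COLORS[a], RESISTOR_COLORS[b]
```

So the 2.5mm driver gets (red, green) and the 3.0mm driver gets (orange, black).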

Colorful Rubber Bands

I was going to hand-paint the driver handles with enamel paint, but that would be arduous and potentially messy. Then it occurred to me that I could use rubber bands. There are a lot of colorful ponytail and craft rubber band kits on the internet, but it took a lot of searching to find a kit that had all of the resistor color code colors, which include a full rainbow plus black, grey, white, and brown. Then I found the Funtopia 1500 Pcs Small Elastic Hair Ties with Organizer Box Colorful for Girls, Kids, Hair Elastics for Thin or Thick Hair (24 Colors). For less than $10, I can label drivers for the rest of my natural life.

It did not disappoint.


Now I can clearly see which driver to select and get the right one every time even without my glasses.
(The unlabeled driver in my RaceDayQuads kit is a Phillips screwdriver)

It’s not what I made, it’s how I made it — Scripting Inkscape

Over the last couple of years, my wife and I have accumulated several small ceramic tiles (and other similarly shaped items). Rather than laying them flat, it’s good to prop them up on a stand. I made a stand by laser cutting pieces from 1/8″ wood scraps, gluing the pieces together, and painting the result. Here’s a finished stand propping up a tile mounted to a cork backing.

The stand is a rectangular piece that’s the same size as (or a little smaller than) the tile. I call this the PLATE. I decided to tilt the plate 15 degrees off vertical. Glued to the back of the plate are two triangles that hold the plate at 15 deg. These are the LEGS. The legs have to be sized large enough that the plate can’t fall over backwards — because, center of gravity — but not so large as to extend above the plate.

The dimensions of the legs are easy to calculate with a little trigonometry (thank you,
Fr. Edwin Karlovich, C.R.).

The hypotenuse, h, of the triangle is the same length as the plate’s height. A 15 deg. angle from vertical means a 75 deg. angle from horizontal. The bottom back of the leg is 90 deg. We know 3 facts about the triangle – 1 length and 2 angles. That’s enough for us to determine the lengths of the remaining two sides. The width (or “base”) of the triangle is

base = h × cos(75°)

and the height is

height = h × sin(75°)
Simple calculations for just about any calculator.
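As a quick sanity check of the trigonometry, here it is as a tiny standalone function (independent of the Inkscape script; the function name is my own):

```python
from math import cos, sin, radians

def leg_dimensions(plate_height, tilt_from_vertical_deg=15):
    """Base and height of the leg triangle for a given plate height."""
    # 15 deg from vertical is 75 deg from horizontal.
    angle = radians(90 - tilt_from_vertical_deg)
    return cos(angle) * plate_height, sin(angle) * plate_height
```

For the 97mm plate used in the script below, this gives a leg about 25.1mm wide and 93.7mm tall — safely shorter than the plate.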

The cut diagram shows how simple the geometry is – a rectangle for the plate, and a second rectangle split on a bias for the two legs. This is very easy to create in Inkscape.

But, I had several to do and more to do in the future. So I set about automating the process of drawing the parts.

Inkscape is extensible, and Scott Pakin wrote an extension called Simple Inkscape Scripting that allows you to write Python scripts to automate the drawing.

After installing the extension, here’s the script I wrote to draw a cut-sheet for my stands. It’s a .py file created with a simple text editor. It exists as a file outside of Inkscape.

# Set the dimensions of the plate
plateWidth = 119
plateHeight = 97

# Do some trig to calculate the leg dimensions
angleDegrees = 75.0
angleRadians = angleDegrees * 0.01745329  # degrees to radians
legBase = cos(angleRadians) * plateHeight
legHeight = sin(angleRadians) * plateHeight
strokeWidth = 0.15

# Draw the leg rectangle
legs = rect((0, 0), (legBase, legHeight), stroke='#ff0000',
            stroke_width=strokeWidth)

# Draw a line from one corner of the rectangle to the opposite corner
diagonal = path(['m', 0, 0, legBase, legHeight], stroke='#ff0000',
                stroke_linecap='butt', stroke_linejoin='miter',
                stroke_opacity=1)

# Group the rectangle and line into a single object and move it off to the side
group([legs, diagonal], transform='translate(%.5f,%.5f)' % (plateWidth + 10, 0))

# Draw the plate rectangle
backplate = rect((0, 0), (plateWidth, plateHeight),
                 stroke='#ff0000', stroke_width=strokeWidth)

To use this script, edit the plate width and height and save the file. Inside of Inkscape, click Extensions, Render, Simple Inkscape Scripting… to open the dialog.

Browse to (or type) the name of the Python script file and click the APPLY button.

Now just send it off to the laser.

Rinse and repeat for any other stands with other dimensions.

There are several dozen functions available within the scripting syntax to draw very complex images.

CTA Train Tracker – Part 2

“Sweet dreams and flying machines in pieces on the ground”

Taylor, J. (1970). Fire and Rain [Lyrics]

In my previous post related to this project (CTA Realtime Train Tracker), I explored the Chicago Transit Authority’s REST API from a software standpoint — what data is available, how do I access it, how can I display it. In this post, I present my efforts to build an illuminated map of the CTA rail system showing the actual locations of trains in real time.

Design Goals

  1. Small Overall Dimensions and Light Weight: 17×22 inches at most
  2. Geographically Suggestive: 👎🏻
  3. Stop Labels: Each stop should be labeled with its official station name.
  4. Bi-directional Traffic Flow: There should be something that shows the direction each train is moving.
  5. Color Coded: The CTA train Lines are called Red, Blue, Green, Orange, Pink, Purple, Yellow and Brown. The lights on a Line should reflect the Line’s name.
  6. Minimal Light Bleed: The glow from a light should only illuminate one stop.
  7. Visible When Lights Off: The map needs to be printed onto the board so that it’s recognizable even if the lights are turned off.
  8. Lights between Stops: There should be a light between each stop, to indicate that a train has left its previous station but has not yet arrived at its next station stop.
  9. Inline Loop
  10. Easy Wiring


While researching this build, I came across a map of the CTA Rail System on the Chicago Tribune web site (pictured above in the Design Goals table). Every stop was labeled, the Loop was not in a separate call-out, and the placement of the Lines strongly suggests a geographically accurate map. The straight lines with minimal corners lend themselves to the use of NeoPixel strips. NeoPixel strips are easy to wire, easy to program, and color coded.

NeoPixel strips come in several densities. I found a set with 150 pixels per meter – that’s 6.6mm spacing between them. A single strip that runs from Linden in Wilmette to 95th St in Chicago would require approximately 44 pixels for the named stops, plus one pixel between each pair of stops. That translates into a map that’s 22 inches tall.

The strips are 4mm wide, which would also allow for two strips to be placed side-by-side. One strip for the “Southbound” trains and another for the “Northbound” trains. It would be clear which direction a train is moving by noting which strip its LED is on — just like cars on a road.

Multiple strips would be needed — roughly one for each of the eight Lines. Some stops are serviced by multiple Lines, so a given pixel might illuminate as red, brown, purple, etc. depending on which train was passing through. The strips can be cut, repositioned appropriately and rewired.


With an Adafruit RP2040 Scorpio, 8 LED strips can be driven from a single control board running CircuitPython. The Scorpio would require a Feather add-on to give it Wi-Fi capability.

The REST API has two key pieces of information to place a train — the station ID which the train is traveling towards and whether arrival at that station is imminent (<60 seconds out) or not. So, for each train running, look up the station ID in a table/dictionary. This table will contain the strip and pixel number for that station. If arrival is imminent, light that pixel with the color of the train. If arrival is not imminent, light the pixel one less than the number in the table.
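That lookup rule is easy to sketch in Python. The table entries here are made-up examples, not real CTA station IDs or pixel positions:

```python
# Hypothetical lookup: station ID -> (strip index, pixel index).
# These entries are illustrative; real IDs and positions would differ.
STATION_PIXELS = {
    40001: (0, 12),
    40002: (0, 13),
}

def pixel_for_train(next_station_id, is_approaching):
    """Pick which LED to light for one train."""
    strip, pixel = STATION_PIXELS[next_station_id]
    if is_approaching:
        return strip, pixel    # arrival imminent: light the station pixel
    return strip, pixel - 1    # en route: light the in-between pixel before it
```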

This simple rule assumes the “Southbound” strip joins back up with the “Northbound” strip at the “southern” end forming one continuous strip of consecutively numbered pixels. The rule breaks down at the intersection of two (or more) Lines that share the same tracks. Luckily, the number of exceptions is manageable.


With the electronics decided, the next step was to address the look of the map. I envision a thin, wooden board that is laser etched with station names and with holes for the LEDs to shine through. The LED strips will be attached to the back of the board. The holes will keep the light focused. A layer of diffusing material could be placed over them to soften the lights. Colored lines can be painted on the board for each train Line to satisfy the “lights off” goal. An appropriate logo would be etched into the board as well.

Laser etching

The Space’s laser cutter is conveniently set up with Inkscape, which reads SVG files. SVG files are just plain text files formatted in XML. I could use Notepad to type an SVG file by hand and get that file cut on the laser.

My first experiment was to write a program to create an SVG to cut labels and holes on scrap matte board. I iterated over all the stations and used their lat/lons to place a hole and label it. For the labels, I went with a single-stroke font. This way, I could get a crisp, tiny vector etch of the labels rather than a smudgy raster burn. Inkscape 1.2 has an extension called Hershey Text which creates labels using Hershey Fonts — a collection of vector fonts developed c. 1967 by Dr. Allen Vincent Hershey at the Naval Weapons Laboratory, originally designed to be rendered as vectors on early cathode ray tube displays. While the Space’s laser cutter laptop doesn’t have an up-to-date copy of Inkscape, it will accept SVGs containing Hershey Fonts created elsewhere and render them on the laser.

Here are the results of my first test at creating an SVG programmatically.

As you can see, the Loop is hopelessly over-etched and the station labels are very close together. Even if I used the largest board the laser’s bed could accommodate, I would not be able to eliminate the overcrowding.

Here’s a sample of Hershey Fonts on real wood. I ran the vector etch next to the raster etch to show the contrasting results of the two methods. I think vector wins hands down. The slight charring on the vector etch can be mitigated with blue painter’s tape – although that would require a fair amount of weeding.

I’d convinced myself that I could, in fact, programmatically create an SVG file to draw the map. So I wrote a program that placed the stations of the Red Line in a straight line — spaced exactly to match the distance between LEDs on a strip — and printed that SVG file on the laser.

Equally Spaced Stations

I attached an LED strip to the back of this board and animated it with some real-time train data. I only had one LED strip in inventory, so I illuminated southbound trains in Red and northbound trains in Blue. This Timelapse video shows the result.

The Hard Truth

It was at this point that I took stock of what I’d achieved and what remained to do. I realized that the project was going to be very expensive to complete. LED strips cost a lot and I’d need quite a few of them. Even with all my successes so far, the success of the final project was not assured. I was also disappointed that the partial prototype was a bit boring to watch in real-time. Only in time lapse does it approach being interesting. It just doesn’t seem worth it to proceed.

Alternative Designs

I could relax my design goals, e.g. one strip per Line instead of two and no in-between stops. But I think that will diminish the look of the project too much. Therefore, I’m moth-balling the project.

Still, I would like to see the entire rail network in operation. I looked for an alternative way to display the data. I settled on a small device called the PyPortal. I’d written about the PyPortal in this post, “The Adafruit PyPortal and Modifying Adafruit Libraries”. I ported my code to the PyPortal, but rather than light up NeoPixels, I drew a colored dot on the screen to represent the position of each train.

Basic Program Flow

Here’s the guts of the program:

  1. Query the CTA for the latitude/longitude of all trains
  2. Map the lat/lon into screen coordinates
  3. Clear the screen
  4. Turn on the pixels at the screen coordinates
  5. Rinse, repeat

Step 3 caused an annoying flicker as the screen cleared during each iteration.

I solved that problem by saving a list of all the pixels that were lit up. During the next iteration, I’d compare the list of prior screen coordinates with the list of new screen coordinates. If a pixel was in the prior list but not in the current list, I’d turn the pixel off. Then I’d iterate through all the current pixels and turn them on. If a pixel was already on, it stays on — no harm, no foul. This is very fast and there is no flicker.
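In sketch form, with the display abstracted as a dictionary of pixel states (the real code draws to the PyPortal screen):

```python
OFF, ON = 0, 1

def refresh(display, prior, current):
    """Flicker-free update: only touch pixels that actually change state."""
    for p in prior - current:
        display[p] = OFF   # stale pixels from the last iteration go dark
    for p in current:
        display[p] = ON    # re-lighting an already-on pixel is harmless

# Two iterations: one train moves from (10, 20) to (50, 60).
display = {}
refresh(display, prior=set(), current={(10, 20), (30, 40)})
refresh(display, prior={(10, 20), (30, 40)}, current={(30, 40), (50, 60)})
```

Only the pixels that changed between iterations are redrawn, so the rest of the screen never blanks.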

The Entire Network

Here is a time lapse of the PyPortal system running. The video is only 5 seconds, but may loop continuously.

The video is short for two reasons — 1) you get the point from the short video and 2) after just a few iterations, the program crashes. Other tests of my code running on macOS and a Raspberry Pi Zero did not abort; they ran perfectly fine all through the night. After several weeks of very frustrating debugging, I’ve concluded there’s too much data for the PyPortal hardware and/or the CircuitPython libraries to process, and the libraries do a poor job of handling the exceptions.

Stubborn Determination

I abandoned the PyPortal and made another attempt by attaching a tiny, TFT display onto a Raspberry Pi Zero.

(yeah, it really is that small.)

I rewrote the code (again) to draw onto that display. As I’d hoped, that program ran for many, many iterations without failing. The fact that the program runs continuously allowed me to experiment with tweaking the display.

One change I made was to the way I refreshed the screen at each iteration. Rather than clearing the pixels and drawing the updated positions, I just left all the prior iterations’ pixels turned on. After only a few iterations, the rail lines began to appear as thin lines. The end result is a glowy map of the entire CTA rail network — geographically accurate. (This is similar to what I did in Part 1 when I tracked individual trains and built a Google Earth map from the data.)

The downside of this is that once the lines are fully drawn, you can no longer discern the current locations of the trains.

To fix that, rather than turn pixels off or leave them on, I decided to turn the prior pixels dark grey. This made it easier to see both the current train locations and the lines the trains were running on.

Taking it one step further, rather than grey for all lines, I set a train’s old pixel to a dimmed version of that color. Now all the trains (bright dots) are running on color coded lines (dim dots).

Finally, the TFT has two built-in buttons. I programmed them to toggle between two different versions of the data — one is the real-time train positions map, and the other is a bar chart showing how many trains are running on each line.

(Interesting aside: When photographed straight on, the camera sees the TFT colors as totally washed out and indistinguishable from one another. However, at a steep angle, the colors are saturated and vivid.)

End of the Line

While this TFT version of the map misses many of the design goals, I am at least able to see the entire CTA rail network in real time.

Many thanks to the members of Workshop 88 who shared their thoughts with me throughout the project. Without their involvement, this project might never have gotten as far as it did.

Final project

And with that… I’m done with this project (until I can find cheap, tightly spaced, LED strips).

CTA Realtime Train Tracker

I have spent many hours riding CTA elevated trains. Most notably from my home on the NW side of Chicago to Loyola University’s Downtown campus — Logan Square on the Blue Line to Washington, walk through the pedestrian tunnel to the Red Line, to Chicago Avenue. After graduation, I got an apartment in Evanston and took the Purple line from Linden in Wilmette to the Merchandise Mart where I worked.

So I took a great interest in an ad for this product:

It’s a map of the CTA lines with LEDs that light to indicate where the trains currently are — in real time. I wanted it. But at $300, it was out of reach. So, I’ll make one for myself.

This blog post is about my efforts to create a proof of concept in software before I consider a hardware solution.

The first step is to source the data. It turns out the CTA has a free, public REST API that reports where its trains are. There are multiple endpoints to the API. The most important for this project is ttpositions, which shows the location of every active train on a particular line – Red, Blue, Orange, etc.

and returns the following interesting values:

  Train No.   Next Stop      Approaching?   Latitude   Longitude   Heading
  811         Chicago Ave.   No             41.903     -87.631     89

A little explanation will help here. Each active train has a unique identifying number. When a train reaches a terminus, the number is recycled.

Next stop shows the station name at which the train will make its next stop. Even if a train is sitting stopped at some station, it will be the next station down the line that will be returned. A train might also be between two stops.

If the train is just about to pull into a station, Approaching is “yes”.

Latitude and Longitude are the map coordinates of the train.

Heading is the compass direction for the train’s motion.

It’s simple to take these lat/lon data and create a KML (Keyhole Markup Language) file and feed that file into Google Earth Pro. KML files are XML. A KML file will contain XML elements to specify colors, font size, icon, and the position and orientation of the viewpoint — the point in space that determines what you’re looking at, etc.

The most critical element of the KML is a place marker for each train. The place marker looks like this:

<Placemark id="7">
  <name>605 Ridgeland 89°</name>
  <styleUrl>#greenLine</styleUrl>
  <Point id="7">
    <coordinates>-87.793,41.887</coordinates>
  </Point>
</Placemark>
In this example, the train number, next station name and compass heading are used to label each dot on the map. styleUrl references a previously defined style that specifies the formatting of the place marker. The element coordinates specifies the lon, lat.

My minimally viable product runs a query for each CTA Line, outputs the data to the KML file, and opens the file in Google Earth.
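Generating the KML mostly amounts to string formatting. Here’s a sketch of the per-train piece — the style name is a placeholder for a style defined elsewhere in the file, and the sample values come from the table above:

```python
def train_placemark(train_no, next_stop, heading, lat, lon, style="#trainDot"):
    """Emit one KML Placemark for a train.

    'style' is a placeholder styleUrl that must reference a <Style>
    defined earlier in the same KML document.
    """
    # KML coordinates are longitude first, then latitude.
    return (
        f"<Placemark><name>{train_no} {next_stop} {heading}°</name>"
        f"<styleUrl>{style}</styleUrl>"
        f"<Point><coordinates>{lon},{lat}</coordinates></Point></Placemark>"
    )
```

One of these per active train, wrapped in the usual KML boilerplate, and Google Earth does the rest.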

If this is done repeatedly every 15, 30 or 60 seconds, you’ll be able to watch each train move through the system.

MACE, You’re Off the Rails!

A glaring hole in this version is the lack of context. We can’t see the elevated tracks.

The API can again help us with that. Another API endpoint is ttfollow, which gives details about a specific train — including its coordinates.

My strategy to plot the train tracks was to find a train that was just leaving a terminus, then repeatedly query that run number (train number) and collect lat/lon until the train reached the opposite terminus. That set of points can be coded into a KML file as a LineString drawn from point to point.

Sounds simple, but that’s when my frustrations began to grow.


Zeroth, the documentation for the API says that when you query a line for all trains, it returns an array of train dictionaries. But if there is only one active train on the line, rather than a one-element array, you get a bare dictionary. And if no trains are running on the line, the “trains” JSON key doesn’t even exist. This wasn’t a problem until I did the Purple Line during non-rush hour and the Yellow (Skokie Swift) Line. I found solutions, but it makes the code kludgy.
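My fix for that shape-shifting response was a small normalizer, sketched here. The helper is my own, and the key name follows the description above; the field names in the test are illustrative:

```python
def normalize_trains(route_payload):
    """Always return a list, whatever shape the 'trains' field takes."""
    trains = route_payload.get("trains")
    if trains is None:
        return []          # no trains running: the key is absent
    if isinstance(trains, dict):
        return [trains]    # exactly one train: a bare dictionary
    return trains          # two or more: the documented array
```

Every caller then loops over a plain list and never has to care which case came back.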

First, it’s difficult to catch a train at a terminus. It’s not like the train is sitting at the O’Hare terminal, broadcasting its GPS and a station name of O’Hare. The best I could do was randomly catch a train soon after it left a terminus, moving toward its next stop – Rosemont. So I had to frequently query the Blue Line hoping to see a Rosemont train HEADING EAST (as opposed to a Rosemont train heading west from Cumberland!). Once I saw a train starting a run, I had to take its number and begin the follow queries.

OK — annoying, but then, patience is a virtue.

Second, as I was monitoring the progress of a train, it got to somewhere near the Belmont station when POOF — it dropped off the radar, never to return. I have no idea why that train disappeared, and I have seen this anomaly multiple times since.

OK — annoying, but…

Third, after I’d collected about 60 data points, I plotted them and saw a most bizarre path.

The train appeared to jump forward then back then forward again. I observed this behavior many times and I didn’t believe this is what the train was actually doing. I concluded there is something wrong with the data being returned from the API. Letters to the API manager have gone unanswered.

I ended up developing a workflow to identify these spacetime anomalies and correct them. In the illustration above, if you get rid of point #3 (which has the exact same coordinates as point #7), the data looks realistic and reasonable. I imported the data points into Excel and used conditional formatting to highlight duplicates:

Then I simply deleted the first duplicate (which unformats that dup’s twin) and moved on to the next “first” duplicate. Once no duplicates remained, I put the data points into the KML file to plot the line.
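The same rule can be scripted instead of done by hand in Excel: for any coordinate that appears more than once, drop the earlier occurrence and keep the later one. A sketch:

```python
def drop_time_warps(points):
    # For each coordinate, remember the index of its LAST occurrence...
    last_seen = {p: i for i, p in enumerate(points)}
    # ...and keep only the points sitting at that last index. A bogus
    # jump-ahead point (like point #3 above) shares its coordinates with
    # the train's later, legitimate visit (point #7), so the early twin goes.
    return [p for i, p in enumerate(points) if last_seen[p] == i]

# Toy run: the (7, 7) at index 2 is the premature "time warp" point
run = [(1, 1), (2, 2), (7, 7), (3, 3), (4, 4), (5, 5), (6, 6), (7, 7)]
clean = drop_time_warps(run)
```

This mirrors the Excel workflow exactly: the first member of each duplicate pair is removed and the rest of the run is left in order.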

I went through this process for each of the CTA Lines.

The end result of all this is the following graphic:

When the train positions are overlaid onto the colored Lines, you get:

If you repeatedly plot the trains, you get a timelapse of the movement of the trains through the system.

This exercise shows the hardware version of this project is entirely possible. A prototype of the Purple Line seems very doable. It will need 9 LEDs, one for each stop, plus 8 more for between-station lights. I want a string of LEDs for Southbound trains and a second string for Northbound trains. That makes 34 LEDs total. The processing flow would be:

Query the Purple line and get its trains  
For each train in AllTrains
   Look at the Next Stop and direction of each train
   If Approaching is Yes, 
      light up the station
   If Approaching is No, 
      light the between-station-LED just prior to Next Stop
Rinse, Repeat

There is still the issue of time warping, but I no longer care. I’ll just assume that if a train is reported at a given lon/lat, then that’s where it is.
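That flow maps each train onto one of 17 LEDs per string (9 stations plus 8 gaps). A sketch of the mapping, with stations numbered 0 through 8 along the line (the numbering scheme and function name are mine, not from any library):

```python
STATIONS = 9                                   # Purple Line stops per direction
LEDS_PER_STRING = STATIONS + (STATIONS - 1)    # stations + between-station gaps = 17

def led_for_train(next_stop, approaching):
    """Pick which LED to light on one direction's string.

    LEDs alternate station, gap, station, ... so station n sits at
    index 2n and the gap just before it sits at index 2n - 1.
    """
    if approaching:
        return 2 * next_stop      # "Approaching is Yes": light the station itself
    return 2 * next_stop - 1      # otherwise light the gap just before Next Stop

idx = led_for_train(next_stop=3, approaching=False)   # gap just before station 3
```

The main loop would then query the line, normalize the trains, and call this mapping once per train for the appropriate direction's string.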

Next Stop

The next step is to build a prototype using LED lights and migrate the software to a Raspberry Pi.

Using ChatGPT to Help Locate My Birthstar

I’m 68.56 years old today. My birthstar is that star which is 68.56 light-years away from Earth. The light produced by that star on my birth date is just reaching Earth now! Later, when I’m older, my birthstar will be different.

I wanted to write a program that would list my birthstar(s). I asked my astrophysicist friend where I could find an easily queried, public-access database containing stellar distances. She suggested the Gaia Archive.

I soon learned that the Gaia Archive contains a table called gaiadr3.gaia_source. This table contains data such as

  • source_id – Unique ID assigned to this star by the Gaia project
  • ra – Barycentric right ascension 𝛼 of the source in ICRS at the reference epoch ref_epoch (degrees)
  • dec – Barycentric declination 𝛿 of the source in ICRS at the reference epoch ref_epoch (degrees)
  • ipd_gof_harmonic_phase – The phase of the variation of the IPD GoF (reduced chi-square) as a function of the position angle of the scan direction (degrees). Clear as mud.

I knew that right ascension and declination are important for locating stellar objects. (Who can forget the scene in Contact when Jodie Foster first hears alien signals from space and relays the coordinates of those signals to her team in the control room of the VLA radio telescope?)

But I couldn’t find anything in the table called distance.


ChatGPT is an artificial intelligence project that has made a lot of news recently, and it is currently available to use for free. I thought I’d use it to help me figure out whether the Gaia Archive can tell me my birthstar.

When you launch a ChatGPT session, called a chat, you start with a simple text box. I started my chat with

me: how can I use the Gaia archive to give me a list of stars that are a specific distance away

(The responses from ChatGPT can be verbose, so I’ll redact some of them.)

ChatGPT: …you can perform a query using the Gaia archive’s query interface. The query would likely involve specifying a distance range and using the parallax measurement to calculate the distance to each star. …you can use “ADQL” (Archive Data Query Language) which is similar to SQL…

That sounds promising. I know SQL and I can use parallax to get the distance.

me: how do you compute distance from parallax

ChatGPT: Distance (in parsecs) = 1 / (parallax angle in arcseconds)

That’s a pretty easy formula. I’ll be able to program that into other applications related to astronomy. Now to get a list of stars and their parallax values.

me: write a Gaia archive adql query to list stars that are 69 light years away

I kid you not. I asked ChatGPT to write an ADQL query that I could just paste into the web page that ChatGPT referred to in the first answer. And ChatGPT did it.

ChatGPT: Here is an example of an ADQL query that can be used to list stars that are 69 light years away using the Gaia archive:

    SELECT source_id, ra, dec, parallax
    FROM gaia_source
    WHERE parallax >= (3.08567758e-05 - 0.00005) AND
          parallax <= (3.08567758e-05 + 0.00005)

I pasted the code into Gaia’s query interface and got syntax errors. What followed was some back and forth with ChatGPT regarding its bogus advice.

me: that query has unresolved identifiers
it: I apologize for any confusion, you are correct that the query I provided in my previous response would not work as written…
me: the query you gave me should use gaiadr3.gaia_source as the from parameter
it: I apologize for the mistake in my previous response

Once ChatGPT’s syntax errors were corrected, the query ran for 30 minutes and returned more than 3,000,000 stars that are (supposedly) between 60 and 70 light years away. I kept narrowing down the range and eventually got the results down to 37,000 stars. I don’t know if that’s the correct number so I tried to determine the parallax measurement for a 69 light-year distant star using paper and pencil.

It turns out that while parallax is normally measured in arcseconds, Gaia records milli-arcseconds. ChatGPT didn’t seem to know that. I eventually came up with the correct values to plug into a Gaia query.
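With both corrections in hand (distance converted from light-years to parsecs, parallax expressed in milli-arcseconds), the target parallax is a one-liner. A sketch; the constant of 3.2616 light-years per parsec reproduces the 49.59 mas value in the sample run shown below:

```python
LY_PER_PARSEC = 3.261563777   # light-years in one parsec

def target_parallax_mas(distance_ly):
    """Parallax, in milli-arcseconds, of a star distance_ly light-years away.

    distance (pc) = 1 / parallax (arcsec), but Gaia records parallax in
    milli-arcseconds -- hence the factor of 1000.
    """
    distance_pc = distance_ly / LY_PER_PARSEC
    return 1000.0 / distance_pc

mas = target_parallax_mas(65.76986301369857)   # roughly 49.59 mas
```

Plugging this value (plus or minus a small tolerance) into the ADQL WHERE clause gives a query Gaia can actually satisfy.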


The queries up to this point were very web-interactive with a lot of keyboard and mouse clicks. I wanted to use a command line tool. So I asked,

me: does the Gaia archive have a REST API

ChatGPT: Gaia archive does have a REST API… Currently, the main endpoint of the Gaia archive REST API is…

Reading the REST documentation, I found a Python module that can be installed to make queries easy. I installed it and asked

me: write a python to query the Gaia archive that lists ra, dec and parallax for stars that are 69 light years distant and use the astroquery.gaia module


from astroquery.gaia import Gaia

# Define the query
query = """
SELECT ra, dec, parallax
FROM gaiadr2.gaia_source
WHERE 1 / parallax * 0.001 * 3.26 = 69
"""

# Execute the query
job = Gaia.launch_job_async(query)
result = job.get_results()

# Print the result
print(result)

AND IT WORKS! I threw some bells and whistles into the Python and can now get anyone’s birthstar by supplying a birthdate on the command line.

> p3 -h [-b:][-n][-h]
-b birthdate yyyymmdd  default: 19540627
-n Northern Hemisphere default: False
-h print usage

> p3 -b 19570411 -n
The person was born 4/11/1957
They are 65.77 years old today
Looking for stars 65.76986301369857 light-years distant (+/- 7 light-days)
Target parallax is 49.5906
Only stars in the Northern hemisphere will be listed.
Here's the Gaia ADQL query

    SELECT source_id, ra, dec, parallax
    FROM gaiadr3.gaia_source
    WHERE parallax <= 49.60507969737007
      AND parallax >= 49.5761590662218
      AND dec > 0
    ORDER BY parallax DESC

Output saved to: 1674002905756O-result.csv

source_id	        ra         dec	   parallax
1858219151114880000  312.54398   29.38393  49.59437
3270079526697710000   56.77948    1.64475  49.58366
3914019231742220000  170.93122    8.56434  49.58089


ChatGPT was an absolute wonder to work with. It understood what I wanted. And despite the misinformation it gave me and its use of magic numbers in its code, I don’t think I could have finished this project as quickly as I did without consulting it.

The Adafruit PyPortal and Modifying Adafruit Libraries

Adafruit has a large selection of electronic components to buy. They also have a very active YouTube channel, a large number of Learning Guides, and software libraries to control all that hardware. Adafruit’s PyPortal dev board is a wi-fi enabled device with a small, color, touch-enabled screen. The PyPortal is programmed using CircuitPython, and Adafruit has published several learning guides for it.

Example projects

Here are just a few of the PyPortal projects that I’ve downloaded and modified:

  1. Event Countdown Timer
    Modification: A vacation countdown timer that cycles through graphics representing my next four vacation destinations and the number of days until that vacation begins.

  2. ISS Tracker
    Modification: Lengthened the satellite’s trail to show the entire orbit just completed.

  3. Hurricane Tracker
    Modification: Added trails to show where the hurricanes have been, not just where they are now.

  4. Cleveland Museum of Art Display Frame
    Modification: wait for it…

These applications all have one thing in common. They all reach out to the Internet to grab data and then display that data on the screen. When multiple applications follow the same general pattern, then a lot of effort can be saved by using a well written software library — and Adafruit has written one for the PyPortal.

Example code

To get a feel for the PyPortal’s power and ease-of-use, let’s look at the Quote of the Day project. When you point your browser at the quote service’s URL, you’ll get back something like this:

 {
   "text": "Somewhere, something incredible is waiting to be known.",
   "author": "Sharon Begley"
 }

This JSON response from the web site has two pieces of text that we’d like to display on the screen – the quote and the author’s name. To do that using the Adafruit PyPortal Library, the program must create a PyPortal object and initialize it with all the relevant information:

portal = PyPortal(
         json_path=([0, "text"],
                    [0, "author"]),
         text_position=((20, 120),  # screen location for quote
                        ( 5, 210)), # screen location for author
         text_color=(0xFFFFFF,      # quote text color
                     0x8080FF),     # author text color
         text_wrap=(35,             # characters to wrap for quote
                     0),            # no wrap for author
         text_maxlen=(180,          # max length for quote
                       30),         # max length for author
)

while True:
    portal.fetch()    # grab and display a fresh quote
    time.sleep(60)    # (assumes an "import time" earlier in the script)

The loop refreshes the screen with a new quote every 60 seconds using the PyPortal’s fetch() method.

The PyPortal library does all the heavy lifting of connecting to your LAN, browsing to the site’s url, converting the returned JSON into a Python dictionary, and displaying the text on screen. Loads of applications follow this same fetch/display model — current weather data, current number of “likes” on my latest YouTube post(s), latest bitcoin price, local gas prices, current DuPage County covid case count, etc.
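Under the hood, a json_path is just a sequence of keys and list indices applied to the parsed JSON one step at a time. Conceptually (this is an illustration of the idea, not the library’s actual code):

```python
from functools import reduce

def walk(parsed, path):
    # Apply each key/index in the path to the parsed JSON in turn.
    return reduce(lambda node, step: node[step], path, parsed)

# The quote service returns a one-element list of quote objects
parsed = [{"text": "Somewhere, something incredible is waiting to be known.",
           "author": "Sharon Begley"}]
quote  = walk(parsed, [0, "text"])     # first element, then its "text" key
author = walk(parsed, [0, "author"])
```

That is why json_path=([0, "text"], [0, "author"]) pulls exactly the two strings out of the response shown earlier.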

Displaying Images

The Cleveland Museum of Art Display Frame Learning Guide follows the same general pattern, but includes the download and display of an image as well as text. (All the previous examples use a static background image.) Let’s see how the PyPortal library handles dynamic images. Here’s the new code:

pyportal = PyPortal(
       json_path=["data", 0, "title"],
       text_position=(4, 231),
       image_json_path=["data", 0, "images", "web", "url"],
       image_resize=(320, 225),
       image_position=(0, 0),
)

The text parameters are still there. So what’s different? For one thing, there is no URL. We’ll deal with that later. And there are new parameters to deal with the image:

  • image_json_path – points to the URL of the downloadable image
  • image_dim_json_path – points to the dimensions of the downloadable image
  • image_resize – fit the image into these dimensions
  • image_position – indicates where on the screen to place the resized image

Now, what about the URL for the Museum? In the Quote project, the URL never changed. Each time you .fetch()’d that URL, a different quote was returned. But for the Museum project, you have to provide a different URL for each piece of art in the Museum’s collection. There are 31954 pieces in the Museum collection. The URL for the 54th piece would include an &skip=53 parameter as part of the URL. The main loop below will do that — starting with the 1st piece and sequentially displaying all the pieces.

while True:
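A sketch of the skip-based paging that loop performs (the base URL below is a placeholder, not the CMA’s real endpoint, and the fetch call is hypothetical):

```python
TOTAL_PIECES = 31954

def piece_url(base_url, n):
    # URL for the nth piece (1-based): the 54th piece needs &skip=53.
    return f"{base_url}&skip={n - 1}"

url = piece_url("https://example.org/api/artworks?limit=1", 54)

# The loop itself would then be roughly:
# for n in range(1, TOTAL_PIECES + 1):
#     pyportal.fetch(piece_url(BASE_URL, n))   # hypothetical fetch usage
```

The only thing that changes from one iteration to the next is the skip value tacked onto the end of the URL.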

Art Institute of Chicago (ARTIC)

Having recently visited the Art Institute of Chicago, I got to thinking: can I modify the CMA project to display the art from the ARTIC? Does the ARTIC even have a website like the CMA? YES it does! So I started coding up the PyPortal parameters and ran into a BIG roadblock. The ARTIC database does not list the full path to a piece of art’s downloadable image. The URL must be constructed from several bits of data. Here’s a highly redacted JSON response for a specific piece of ARTIC art:

  {
    "data": [
      {
        "thumbnail": {
          "width": 3000,
          "height": 1502
        },
        "image_id": "cb34b0a8-bc51-d063-aab1-47c7debf3a7b",
        "title": "Ballet at the Paris Opéra"
      }
    ],
    "config": {
      "iiif_url": ""
    }
  }

The title of the artwork is straightforward.

The image’s URL can be computed with the following code:

    image_url = (json_out["config"]["iiif_url"] + "/"
                 + json_out["data"][0]["image_id"]
                 + "/full/!320,240/0/default.jpg")

but that doesn’t fit the PyPortal library’s assumption that the JSON will contain the downloadable image’s URL in a single key:value pair.

Open Source

Then at 3:00 AM, I had the idea of editing the PyPortal library itself. It’s Open Source, after all. So I downloaded the adafruit_circuitpython_pyportal/adafruit_pyportal library from GitHub and started searching through the code. I found the right spot in the fetch() method and inserted a patch to build the image URL and add it back into the JSON under a new key named ‘artic_image_URL’. The PyPortal has no idea that the artic_image_URL key is something I created and slipped in rather than something that came from the Museum.

I also had to add code to stuff the image’s dimensions into the JSON. I started with [“thumbnail”][“width”] and [“thumbnail”][“height”] from the ARTIC JSON. These are the dimensions of the master thumbnail image. But I requested !320,240 in the image URL, which means the ARTIC would send me an image that fits within a 320×240 bounding box. The actual downloaded image size might be 166×240 or 320×185, to preserve the aspect ratio. So I had to compute the dimensions that the PyPortal library was actually going to encounter.

Here’s the complete patch that I made to the Adafruit library:

            # Build the image URL
            PREFIX = json_out["config"]["iiif_url"] + "/"
            IMAGE = json_out["data"][0]["image_id"]
            OPTIONS = '/full/!320,240/0/default.jpg'
            json_out["artic_image_URL"] = PREFIX + IMAGE + OPTIONS
            # Compute the aspect of the Library's artwork
            image_Width = json_out['data'][0]['thumbnail']['width']
            image_Height = json_out['data'][0]['thumbnail']['height']
            aspect = image_Width / image_Height

            # Resize the artwork to fit in a 320 x 240 frame
            if aspect > 320.0/240.0:
                 thumb_width = 320
                 thumb_height = int(320 * image_Height / image_Width)
            elif aspect < 320.0/240.0:
                 thumb_height = 240
                 thumb_width = int(240 * image_Width / image_Height)
            else:
                 thumb_width = 320
                 thumb_height = 240
            json_out["thumb_width"] = thumb_width
            json_out["thumb_height"] =  thumb_height
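To sanity-check that arithmetic, here is the same fit-to-frame computation as a standalone function, applied to the Ballet at the Paris Opéra thumbnail (3000×1502): a landscape image that wide should come back as 320×160.

```python
def fit_320x240(width, height):
    # Scale (width, height) to fit a 320x240 frame, preserving aspect ratio.
    aspect = width / height
    if aspect > 320.0 / 240.0:            # wider than the frame: pin the width
        return 320, int(320 * height / width)
    if aspect < 320.0 / 240.0:            # taller than the frame: pin the height
        return int(240 * width / height), 240
    return 320, 240                       # exactly 4:3 already

dims = fit_320x240(3000, 1502)   # the Ballet thumbnail -> (320, 160)
```

These are the thumb_width/thumb_height values the patch stuffs back into the JSON for the library to consume.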

And here is the PyPortal code that retrieves ARTIC images:

portal = PyPortal(image_json_path=["artic_image_URL"], ...)

To get the PyPortal device to run my patched code instead of the original, “compiled” library code, I simply had to replace the /lib/adafruit_circuitpython_pyportal/__init__.mpy file with my modified __init__.py. My .py file happily coexists alongside all the “compiled byte code” .mpy library files.

And this is what appears on the PyPortal:

ARTIC Splash Screen
Ballet at the Paris Opera
Yellow Dancers (In the Wings)



My solution isn’t flawless. For some reason, landscape-oriented thumbnails are not getting resized correctly, but portrait-oriented thumbnails are. Secondly, my patch isn’t robust: if I try to run the Hurricane Tracker project while my patch is in play, the app will crash. I’ve created a Github issue asking that a JSON transform function be created to allow pre-processing of the JSON. I’m waiting to see if anyone takes an interest; feel free to “Like and Subscribe” the issue.

How to make 360° VR photo panoramas, and you can too!

It all started when I wanted to make a “photo field” of the local Aurora Barnstormers airfield for the Real Flight RC flight simulator (interestingly, Real Flight internally converts the image into a cube map). Photo fields are 360° photo panoramas similar to old QuickTime VR photos. They are perfectly suited to the radio control simulator application because the pilot typically stands in one place while operating the aircraft or vehicle. A 360° photo panorama for this application is an equirectangular projection: a photo twice as wide as it is tall, where each column represents a longitude from 0° to 360° (left to right) and each row represents a latitude from -90° to 90° (looking straight down to looking straight up). The horizon sits in the middle of the picture, just as the equator is halfway between the bottom and top of a globe.

(The black area at the bottom is due to not including some source photos in this panorama, to hide the tripod and my feet.)

Note: Unfortunately this viewer does not work on mobile devices.


There are cameras that can take these pictures, but I do not own one of these cameras. I do own a smartphone and a tripod.

Actually, at first I thought I would try to 3D print and build an existing design for an automated solution, like this one from Thingiverse.

Clearly, the device worked for the creator but I was not so lucky. The print-in-place gears were all welded solid and the fitting parts didn’t fit. This happens.

I struggled for a long time before I realized that for as often as I would use the rig I could probably just take the photos using a tripod if I knew which photos to take and had a suitable guide.

The first thing I needed to do was determine how many photos in which directions I would need. For this I found panoplanner.

By entering the camera sensor data and focal length of my phone’s primary camera (this required a non-trivial amount of web searching), and adjusting acceptable overlap, I was able to generate a table like this.

More overlap is better when making panoramic images: it helps connect photos together and provides more data to compensate for the different projections of each image. So I increased the center row to 16 photos, kept the second row at 8, and increased the high and low rows to 4 pictures each. All being powers of 2 simplified the process, as you will see.

First, I needed to create an elevation index with the 5 elevations needed highlighted: 72°, 36°, 0°, -36°, -72°. To do this I cut a square piece of paper roughly the size of my tripod knob, folded it on the diagonal 4 successive times, cut the outer edge into a pie-slice shape to make the paper circular, and unfolded it. Now I had a circle of paper the size of my tripod knob with creases every 22.5° (360°/16). I darkened the markings and marked a zero mark in red before re-folding the paper and cutting the point off to remove the center.

In a similar fashion, I cut a thin strip of paper to go around the equatorial pivot of my tripod (the twisty part at the bottom of the pole), wrapped the paper around the bottom cylinder, and marked where the end of the paper overlapped itself. Then I folded the end of the strip to the marked line, then in half again, and again, and again, creating 16 divisions, similar to how I created the circle in the previous step. I unfolded the paper and darkened each crease, again at 22.5° increments. (Sorry, no pictures of this process.)

I taped these to my tripod as pictured and added a piece of red electrical tape cut into a triangle to serve as an elevation indicator on the upper dial.

Finally, I added color-coded markings to the dials to indicate which angles are needed for each of the 5 elevations. The elevation dial is marked red at 0° when the tripod is level, blue at approximately ±36°, and green at ±65°.
Note: I had to use the shallower 65° rather than 72° because my tripod’s range is limited. For my application, straight up and down are less important parts of the image, but those areas are still captured in the camera’s field of view (FOV).

I have a bracket on the top of the tripod that I strap my phone to when taking photos.

The procedure involves taking 40 photos:

  • 16 equatorial photos (red, black): level the camera and take a photo at every black mark on the lower band.
  • The next 16 photos are two sets of 8, one set at each of ±36°:
    • Pitch the camera up until a blue mark points down: take 8 photos, one at every other black mark highlighted blue.
    • Pitch the camera down until the other blue mark points down: take 8 photos, one at every other black mark highlighted blue.
  • The final 8 photos are two sets of 4, one set at each of ±72° (±65° in my case):
    • Pitch the camera up until a green mark points down: take 4 photos, one at every fourth black mark highlighted green.
    • Pitch the camera down until the other green mark points down: take 4 photos, one at every fourth black mark highlighted green.

The tripod guarantees the photos are all taken from the same position, and the markings ensure they cover the entire 360° field of view. The process is easy and foolproof, allowing the 40 photos to be taken in rapid succession, which I found can be important to minimize the movement of clouds and lighting changes during the capture.

Here are photos taken in the main Workshop 88 meeting area.

I’ve decorated a tripod and taken a pile of pictures, now what?

The absolute best 360 panorama software I tried was PTGui: it’s fast, effortless, and has the best stitcher for seamless panoramic photos. It also puts lots of watermarks on your output if you use the free version; you have to pay for the Pro version to render un-watermarked panoramas.

The next best software I found is Hugin Panorama photo stitcher, which I regularly use now. It takes a little longer, the UI can be a little more cumbersome than PTGui, and the results aren’t always flawless, but it’s still really good and, best of all, FREE.
This is what it looks like when you first open it. We’ll use the simplified workflow in the Assistant tab. You can click 1. Load Images or just drag and drop your 40 (or more) photos onto the application.

When the pictures load it will probably look something like this.

Next click 2. Align.

The assistant will run for a long time. It analyzes each photo locating image features, then connects features between photos and finally optimizes the matches for stitching. Here are some snapshots of the assistant in action.

After alignment, your image may look something like this:

Notice the picture is upside down and the horizon line is a little wavy. You may also want to adjust the overall image to be right side up or pointing in a particular direction. The Move/Drag tab contains the tools needed for these manipulations.

First I want to make the panorama right side up. To do this I set the Pitch: to 180 and click Apply.

Then I want to face into the wood shop with the main entrance behind the viewer. To do this I make sure Drag mode: is set to normal and I left click and drag the image to the desired orientation.

Finally, I clicked Straighten to automatically fix up the horizon line.

At this point I’ll typically go straight to exporting the panorama, but if it’s not perfect you can refine the position and orientation of photos. To adjust individual or groups of images change the Drag mode: from normal to normal, individual. The interface from here on is clunky and I will only superficially cover it here.

The displayed images row will show all the images loaded. The visible images will appear with their number highlighted a blue color, and those not visible will appear grey. The check-marks above the image numbers in Drag mode: normal, individual indicate which images are selected for dragging. The All and None buttons will toggle visibility of all the photos. The image numbers may extend off to the right so it’s best to maximize the window so you can see the status of as many photos as possible and remember to scroll if needed to make sure unwanted photos are not selected (in this example we imported 40 photos, and only 0-27 are visible without scrolling).

You can set the Overview window Mode: to Panosphere (inside) to better preview your 360° photo sphere. What you see in both windows is a collage of the images drawn in numerical order without blending; your resulting panorama will look better, so it’s best not to obsess too much over tiny details until you have enough experience to predict what the output might look like.

Hold Control and mouse over the image to color code and highlight images under the mouse cursor.

You can select images with Control Right Click, which selects all the images under the mouse cursor. You need to click the check-boxes to un-select them. Once an image is selected, you probably want to hide the images that overlap it so you can see how you are moving it. In this example you can see I’ve selected image 5 and am only showing images 4, 5, and 6 so I can manipulate photo 5.

Left-button drag translates the image; right-button drag rotates the image around the center of the panorama. I’ve grossly moved and rotated image 5 as an example here.

These operations may not behave as you predict due to the panoramic projection but they are indeed translations and rotations.

Control-Z, as always, is undo and is your best friend here, as is Control-S for save.

Once you are satisfied with the photo layout, you are ready to create a panorama. Click Assistant to return to the assistant workflow and click 3. Create panorama…; then you will see this window.

You will probably want to set a smaller width or height; the aspect ratio will be preserved. I typically use 8192×4096.

There are several options for LDR Format, I use JPEG, but TIFF and PNG are also available.

I generally leave the Output Exposure settings as the program sets them, possibly based on the photo/panorama quality (I think).

I only check Keep intermediate images if I plan to iterate on exposure settings which is pretty much never.

Then click OK.

Here is the resulting panoramic image of the Workshop 88 main meeting space.

And here is what it looks like in 360 VR. Zoom in/out and click and drag to look around Workshop 88!

Note: Unfortunately this viewer does not work on mobile devices.

Click here to see a virtual tour of all of Workshop 88.

We’re using the WP Photo Sphere WordPress plugin to display these images on our site. It’s free and works well, but does not support mobile devices.

To use these photos in Real Flight, I just import the Raw Panoramic Image as you can see below. Then you can either immediately fly in that environment or construct a virtual airfield where you can model 3D obstacles in the photo environment, add virtual elements like a wind sock, etc.

This was a high-level overview of my simple workflow for generating these images. Hugin Panorama photo stitcher has many more features that let you refine the process; be sure to review the documentation.

Now I can easily bring my tripod to any location and use my phone to take pictures, which I load into Hugin Panorama photo stitcher to create 360° photo panoramas that can be used for virtual tours or virtual airfields in Real Flight.

And you can too!

Welcome to the Workshop 88 makerspace, a 360° photo-panoramic VR virtual tour.

Images captured Tuesday November 29, 2022

(Note: This viewer does not work on mobile devices)

Watch for a future post on how I made these, and you can too!