Hacking a toolchain to make Atari 8 bit YouTube Mandelbrot Zoom Videos

In a previous blog post I explained all about the color cycling Mandelbrot Set explorer I wrote in 10 lines of Atari BASIC for the BASIC 10 Liner contest (Second place!). While working on this project I thought it would be really cool to create a Mandelbrot Set zoom video rendered on the Atari, which led me on an interesting journey…

I wanted to make something like this deep Mandelbrot Set zoom video.

Or this video I found later that animates the color palette while zooming.

The Atari BASIC Mandelbrot Set renderer is not able to execute millions of calculations for every pixel; in fact it is configured to execute 81. At that depth the numerical precision of Atari BASIC’s floating point numbers appears to start breaking down, and with its 1.79MHz 6502 CPU, more iterations would start to take a very, very long time.
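
For context, the core escape-time calculation behind a renderer like this looks roughly like the sketch below. This is a generic Python illustration (not the 10-line Atari BASIC program), just to show what an 81-calculation-per-pixel budget means:

# Generic escape-time sketch in Python (not the Atari BASIC code), showing
# what a fixed 81-iteration budget per pixel means.
def escape_count(cx, cy, max_iter=81):
    """Iterate z -> z^2 + c and count how long |z| stays <= 2."""
    zx = zy = 0.0
    for i in range(max_iter):
        zx, zy = zx * zx - zy * zy + cx, 2 * zx * zy + cy
        if zx * zx + zy * zy > 4.0:
            return i            # escaped: the point is outside the set
    return max_iter             # never escaped at this budget: treat as inside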

The program can zoom in and out on 12 interesting preset locations on the complex plane of the Mandelbrot Set. I realized that if I captured a screen at each zoom size I could scale the bitmaps and build an image with very high resolution (small pixels) in the center, and then zoom in on it. I also thought that if I captured video of the color cycling I could scale and synchronize the videos to zoom in while color cycling, and by offsetting the color cycling by a fixed amount at each zoom level I could create synchronized, color coded, animated frames that better showed the zoom levels.

Having a concept or idea is one thing, but the project was more complicated than expected and the final implementation involved virtual machines, emulators, hundreds of lines of code, a convoluted still and video toolchain, and image and video editing tools – entirely with free and open source software. In the end it is more of a convoluted hack than a toolchain, but it worked and that’s what counts!

I won’t keep you in suspense forever

Here are 6 of the 24 resulting Atari BASIC Mandelbrot Set zoom videos; in the versions with frames, each colored frame is half or twice the size of the adjacent frames:

Mandelbrot Zoom Videos

Videos with colored frames

Here are complete YouTube playlists of all 24 of the Mandelbrot Set zoom videos: 12 locations rendered without and 12 with colored frames. The first 3 videos in each playlist are the ones above, so if you’ve already seen them you may want to skip ahead to the remaining 9 videos in each playlist.

I reached out to the fantastic Atari chiptunes artist Adam Sporka and he generously offered to share his music, made on a real Atari 800, for use in these videos. I think it is totally awesome that all graphics and audio were created on Atari 8 bit computers.

Here’s how I did it…

This blog isn’t necessarily a “how-to” or “follow along” article as I assume you don’t need to make color synchronized videos from Atari BASIC, but I detail the challenges, remedies (hacks), and free and open source tools used to solve the problems encountered along the way and generate the final videos. I hope the troubleshooting strategies, solutions, and tool details are interesting and helpful to you.

In the Atari800Win-PLus emulator I ran the program, visited each location, and zoomed all the way in and all the way out, each time taking ~30 minutes to render and then recording a ~20 second video. The 297 videos took most of a weekend to render and capture. I ran the emulator in 12 Windows XP virtual machines in VirtualBox on my (KDE Kubuntu) laptop, so I was able to keep the process moving. This animated .gif was captured from the desktop using Peek while I was doing these captures.

When capturing video, the emulator has a limited set of video codecs to choose from: Cinepak Codec by Radius, Microsoft Video, Microsoft RLE, and Intel IYUV. When testing, Cinepak did not seem to be compatible any more, so I decided to use the Intel IYUV format because it was compact and looked good. I did run into a horrifying problem: after all the video was captured I looked at it in VLC on the laptop and, bizarrely, the video was mirrored. This appears to be an issue with the Linux version of the playback codec, and to my relief it did not pose a problem in Windows.
(* WHEW! *)

I used folder sharing in VirtualBox to capture the videos from all the emulators running in the virtual machines into a single shared folder, and then I copied them all to a folder shared on my Windows machine (via Samba).

Each of the 12 locations had several videos with numbered filenames like “Thorny Lightning 0.avi”, “Thorny Lightning in 2.avi” or “Thorny Lightning out 3.avi”, where “in” or “out” plus a number indicated how many times it was zoomed in or out from the default zoom level named “0”.

Catastrophe!

I thought I could use video editing software to simply composite the Mandelbrot Set zoom, but I was wrong, very wrong. I thought I would manually set the in and out points in each of the 297 video clips in video editing software, align them on the timeline, set some zoom interpolation, and voila! … but when I tried a test sample I discovered two massive problems:

  1. The video zoom effect is not linear, it is exponential: the image doubles in size at regular time intervals (for example, if it doubles every 2 seconds, the scale after t seconds is 2^(t/2)). This is something that is much harder to do in video editing software, and is VERY hard to synchronize across several aligned, composited, color cycling videos.
  2. The color cycling was not synchronized between videos. Even though I could perfectly match the first and last frames of a color cycle sequence in each video and scale the videos on the timeline to the same length, the middles of the color cycling sequences were often out of sync, leading to flickering that ruined the quality of the video (and yet gave me the idea for the colored frame version of the videos).

Handling zoom rate (aka scripted scale and compositing hack)

The only way I could think of to handle the zoom issue was to programmatically composite the frames. This would require exporting the frames from the .avi files, identifying the synchronized frames from each of the source .avi files, and compositing them together so all of the images were scaled and registered to one another perfectly while zooming exponentially.

For processing I extracted all the frames of each .avi video into numbered bitmaps in a _frames folder using FFmpeg. To do so I used a hybrid manual/automated process, which is a fancy way of saying I kept editing a batch file until I got everything I needed. The batch file basically looked like this:

for /r %%i in ("..\Thorny Lightning*.avi") do (
ffmpeg -i "%%~pi%%~ni.avi" -filter:v "crop=320:192:10:24" "_frames\%%~ni %%04d.bmp"
)

This processes all the .avi files from a particular location, in this case “Thorny Lightning”, using a wildcard in a for loop. The script calls FFmpeg once on each .avi file that matches the wildcard, reads the .avi, crops the black overscan border from the images, and saves them as numbered bitmaps in the “_frames” folder. After processing all 297 videos I had 413,153 numbered .bmp files (70GB).

To composite the frames in an exponential zoom I wrote a Python script that uses ImageMagick to scale and composite the source .bmp frames into zooming video clip frames. The scaling ended up being simpler than expected: since the image grows geometrically with time, the scale at time t is 2^t, and since each video is half the size of the next largest I could simply divide the dimensions in half for each successive layer, all centered at the center of the image. For each output image the program generates command lines similar to this one to execute ImageMagick:

magick convert -size 320x192 -gravity center ( "_frames\Thorny Lightning out 15 0133.bmp" -sample 329x198 ) ( "_frames\Thorny Lightning out 14 0144.bmp" -sample 165x100 ) -composite ( "_frames\Thorny Lightning out 13 0120.bmp" -sample 83x51 ) -composite ( "_frames\Thorny Lightning out 12 0140.bmp" -sample 42x26 ) -composite ( "_frames\Thorny Lightning out 11 0109.bmp" -sample 22x14 ) -composite ( "_frames\Thorny Lightning out 10 0116.bmp" -sample 12x8 ) -composite ( "_frames\Thorny Lightning out 9 0130.bmp" -sample 7x5 ) -composite ( "_frames\Thorny Lightning out 8 0128.bmp" -sample 4x3 ) -composite ( "_frames\Thorny Lightning out 7 0121.bmp" -sample 3x2 ) -composite ( "_frames\Thorny Lightning out 6 0120.bmp" -sample 2x2 ) -composite ( "_frames\Thorny Lightning out 5 0129.bmp" -sample 2x2 ) -composite -crop 320x192+0+0 +repage "Thorny lightning\frame_ioi_00000005.bmp"

This line makes a 320×192 bitmap composited from 11 scaled source bitmaps. Finding the right settings to get ImageMagick to composite the videos the way I wanted, with cropping and point sampling (instead of anti-aliasing), was a challenge. It is a very powerful tool that can process images in a multitude of ways, often offering many ways to accomplish the same or similar results (ImageMagick reference).
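
The script itself is too long to include here, but the core idea can be sketched like this. It is a simplified sketch with assumed names (build_magick_command, layer_bmps, t), not the actual 560-line script: the outermost layer is scaled to 2^t of the frame size for the fractional zoom position t within one doubling, and every deeper layer is half the size of the one above it, composited at the center with point sampling.

# Simplified sketch (assumed names, not the actual script) of generating one
# per-frame ImageMagick command: the outermost layer is scaled by 2**t and
# every deeper layer is half the size of the previous one, gravity-centered.
import subprocess

FRAME_W, FRAME_H = 320, 192                      # cropped Atari frame size

def build_magick_command(layer_bmps, t, out_bmp):
    """layer_bmps: source frames ordered from outermost (largest) to innermost."""
    cmd = ["magick", "convert", "-size", f"{FRAME_W}x{FRAME_H}", "-gravity", "center"]
    w, h = FRAME_W * 2 ** t, FRAME_H * 2 ** t    # exponential zoom within one doubling
    for i, bmp in enumerate(layer_bmps):
        size = f"{max(2, round(w))}x{max(2, round(h))}"
        cmd += ["(", bmp, "-sample", size, ")"]  # -sample = point sampling, no smoothing
        if i > 0:
            cmd.append("-composite")             # flatten this layer onto the ones below
        w, h = w / 2, h / 2                      # the next layer is half the size
    cmd += ["-crop", f"{FRAME_W}x{FRAME_H}+0+0", "+repage", out_bmp]
    return cmd

# e.g. subprocess.run(build_magick_command(frames, 0.04, "frame_00000005.bmp"), check=True)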

Color cycling synchronization (aka manual image index hack)

The source color cycling videos were captured manually and contain more than a single clean, synchronized color cycle loop (extra frames at the beginning and end of each video), so I manually looked at the bitmaps to find the indices of the first and last frames of the color cycle and fed those into the program to create a test video. The resulting video finally zoomed perfectly but was still ruined by flickering due to the unsynchronized color cycling.

Digging a little deeper…

Atari BASIC is not frame synced, meaning it does its work continuously, without synchronization to the TV display signal. The Atari computer emulator, on the other hand, renders video at 60Hz, one frame for every NTSC field emulated.

I started analyzing the bitmap sequences in Python using Pillow (a fork of the Python Imaging Library, or PIL) for image processing. Initially I tried to count the number of colors in an image to decide when the color cycling was changing, but that was thwarted by the scan lines effect I had enabled in the emulator, which anti-aliased some of the lines and produced a lot more than the 9 colors I was expecting from Atari graphics mode 10 (I still think it looks cool). With more experimentation I noticed that successive video frames changed while the color cycling was copying one color register to the next and calculating a new color, but there were many duplicate frames while the rest of the Atari BASIC program executed its processing loop. I was also aware that at one point in the color cycling all of the onscreen colors would be gray scale (actually close to, but not perfectly, RGB gray).

The winning strategy (aka image processing frame sync hack)

I scan each bitmap sequence from a video from the beginning to find the first frame that is entirely gray scale, then scan from the back for the first frame of the matching gray sequence at the end; this defines the range of frames containing one full color cycle. Within that range I compare each frame with the next until I find a pair of duplicate frames, and store the first duplicate frame’s index. There are exactly 128 frames in a color cycling sequence: 8 brightness steps for each of 16 hues. Adding a multiple of 8 will synchronize brightness but offset hue; this is used for the colored frames effect. When the process is done it should have discovered the exact 128 frames representing one color cycle for that bitmap sequence (from the .avi) at that zoom level.
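
In Python with Pillow the detection logic looks roughly like the sketch below. It is a simplified illustration with assumed helper names and tolerances (is_grayscale, frames_identical, find_color_cycle), not the actual script:

# Rough sketch (assumed helpers and tolerances) of the frame sync detection:
# find the gray scale frames that bound one color cycle, then use the first
# duplicate-frame boundary inside that range.
from PIL import Image

def is_grayscale(path, tolerance=8):
    """True if every pixel's R, G, and B values are within `tolerance` of each
    other (the Atari 'gray' point is close to, but not exactly, RGB gray)."""
    pixels = Image.open(path).convert("RGB").getdata()
    return all(max(p) - min(p) <= tolerance for p in pixels)

def frames_identical(path_a, path_b):
    """True if two frames are pixel-for-pixel identical."""
    return (list(Image.open(path_a).convert("RGB").getdata()) ==
            list(Image.open(path_b).convert("RGB").getdata()))

def find_color_cycle(frame_paths):
    """Return the index of the first frame of one full 128-frame color cycle."""
    first_gray = next(i for i, p in enumerate(frame_paths) if is_grayscale(p))
    last_gray = next(i for i in reversed(range(len(frame_paths)))
                     if is_grayscale(frame_paths[i]))
    # Inside that range, the first duplicate frame marks where the BASIC program
    # was busy in its processing loop rather than updating color registers.
    for i in range(first_gray, last_gray):
        if frames_identical(frame_paths[i], frame_paths[i + 1]):
            return i                      # one cycle = frames i .. i + 127
    raise ValueError("no duplicate frame found in the gray-to-gray range")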

More complexity

This almost worked perfectly, except Atari BASIC is not perfect and I am not perfect. Atari BASIC would occasionally let an extra duplicate frame slip in, and I occasionally recorded two or three color cycles, resulting in 256 or 384 (or more) frames instead of the expected 128; either case messed things up.

More fixes (aka extra images Gimp hack)

The unwanted duplicate frames were pretty easy to find: usually duplicate frames were 9 or 10 frames apart, but when an extra one was inserted they would be only 4 to 6 frames apart. When the program detects anything other than 128 frames in a cycle it dumps all the duplicate frame offsets and the number of frames between them. I manually tracked down the extra frames and defaced them in Gimp so they were no longer duplicates. I re-recorded the videos I had screwed up (deleting the bitmaps I had made and extracting new ones with FFmpeg), and was back in business.
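
A sketch of the kind of diagnostic dump that makes the stray frames easy to spot (a hypothetical helper, not the exact code):

# Hypothetical diagnostic sketch: print the gap between consecutive duplicate
# frame offsets. Normal duplicates land 9-10 frames apart; a gap of only 4-6
# frames points at an extra duplicate frame that needs to be defaced in Gimp.
def report_duplicate_gaps(duplicate_offsets):
    """duplicate_offsets: frame indices where a frame equals its successor."""
    for prev, curr in zip(duplicate_offsets, duplicate_offsets[1:]):
        gap = curr - prev
        flag = "  <-- suspicious: likely an extra duplicate" if gap < 7 else ""
        print(f"frame {curr:5d}  gap {gap:2d}{flag}")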

What now?

By now Adam had agreed to share his music, and I had to figure out which songs I thought matched each video. To do that I created some more batch files, this time using FFmpeg to create .mp4 video files from the exported .bmp frames and the .mp3 music Adam provided. I used command lines like this to generate test videos with music:

ffmpeg -y -r 60 -i "Thorny Lightning\frame_ioi_%%08d.bmp" -i "music\Stack 'Em up Adam Sporka.mpeg" -c copy -map 0:v:0 -map 1:a:0 -r 60 -c:v libx264 -pix_fmt yuv420p -preset slow -crf 19 -vf scale=iw*4:ih*4 -sws_flags bitexact "_out\Thorny Lightning ioi x4.mp4"

This reads the bitmap sequence at 60fps along with the music Adam provided, maps input 0 (the bitmap sequence) to the video stream and input 1 (the music) to the audio stream, and outputs at 60fps using the H.264 codec with very little loss, at 4 times the resolution (rumor has it that uploading higher resolution video to YouTube increases the quality), with the bitexact flag telling it to resample the image rather than use a smoothing enlarging filter.

Here is a similar, simpler command line that generates only the video, with no sound.

ffmpeg -y -r 60 -i "Thorny Lightning\frame_ioi_%%08d.bmp" -r 60 -c:v libx264 -pix_fmt yuv420p -preset slow -crf 19 -vf scale=iw*4:ih*4 -sws_flags bitexact "_out\Thorny Lightning ioi q x4.mp4"

These were excellent tests for adjusting the color cycling and zoom rates and seeing which of Adam’s songs matched the 12 Mandelbrot Set locations, but the videos had no titles, credits, or transitions. They were previews; they lacked finished polish.

Are we there yet? (aka final composition)

I used HitFilm Express to edit the 12 video projects and render the 24 videos. I imported the bitmap sequences as clips (not the compressed preview .mp4s), created intro and outro titles, composited the video with the awesome music provided by Adam Sporka, and synced up all the fades. I exported two versions of each video, one with colored frames and one without, then uploaded them to YouTube, wrote the descriptions, added the end cards, and so on. You know the rest.

In conclusion

In the end I love the way the videos turned out. It’s amazing to think that everything you see and hear in these videos was created on Atari computer technology invented before Benoit Mandelbrot visualized the Mandelbrot Set at IBM on March 1st, 1981. The Atari BASIC program that rendered every frame was only 10 lines long, but it took a lot of creativity to hack together a free and open source toolchain involving VirtualBox running 12 Atari800Win-PLus emulators, half a dozen batch files, FFmpeg, ImageMagick, Gimp, 560 lines of Python using Pillow, and finally HitFilm Express to generate the final videos.

Stay creative, and support your creative communities!