Norton White Tile Principal Component Method

This method builds on the Norton White Tile Method (NWT), which is based on using the Titanium Dioxide (TiO2) in white paint products to create indelible markings on the surface of tiles. The principal component method removes the need for a paint product, providing a way to apply TiO2 directly to the tile for engraving.

The benefits of this method: once the TiO2 has been applied, the tile can go straight under the laser; there are significantly fewer fumes; and no harsh solvents are required to clean the residual TiO2 from the surface of the tile.

Materials Required: powdered TiO2, ethanol, a kitchen scale, an airbrush, and airtight containers that will not be degraded by ethanol.

Preparation Method

The primary ratio used was 1:3.5 TiO2 to ethanol. This ratio was chosen based on the suggestion in this thread: https://www.researchgate.net/post/I-have-TiO2-powder-and-want-to-dissolve-it-Which-solvent-can-play-the-role

I found that adding an additional splash or two of ethanol helped reduce clogging and splattering in the airbrush.
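If it helps with batch sizing, here is a trivial arithmetic sketch in Python; it assumes the 1:3.5 ratio is measured by weight (hence the kitchen scale), which you should verify against your own mix:

# Hypothetical batch-size helper; assumes the 1:3.5 TiO2:ethanol ratio is by weight.
RATIO = 3.5

def ethanol_grams(tio2_grams, extra_splash=0.0):
    """Grams of ethanol for a given weight of TiO2 powder."""
    return tio2_grams * RATIO + extra_splash

print(ethanol_grams(20))       # 70.0 g of ethanol for 20 g of TiO2
print(ethanol_grams(20, 5.0))  # 75.0 g with an extra splash to thin the mix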

To initially mix the TiO2 and ethanol, I poured them both into a wide-mouth mason jar and used a pestle to crush any clumps while stirring. The resulting mixture should have an opaque white hue with no visible granules and a very low viscosity.

To prepare the tile for application of the TiO2 solution, I wiped it off with a dry cloth.

Application

Once the solution has been mixed it is ready to be applied to the tile with the airbrush. If you have stored extra solution after using some, give the container a good shake before use to make sure any sediment is fully re-suspended.

I used this airbrush and applied three full hoppers of the TiO2 solution to the tile using the lowest pressure setting on my compressor (15 psi). *This should be done in a well ventilated area (see the note at the bottom about TiO2 toxicity).* Due to the high evaporation rate of ethanol, the spray dries almost immediately after landing on the tile. I held the airbrush ~6-8 inches (15-20 cm) from the surface of the tile, moving in sweeping or concentric motions and covering the edges thoroughly.

Be sure to take steps to reduce splatters, as an unfortunately placed splatter can ruin an engraving. Cleaning the nozzle regularly, running lower psi, and thinning the solution with more ethanol all help. Also make sure dried bits and flakes are mostly removed from the solution, as they will clog the airbrush; once the airbrush has developed a substantial crust of dried TiO2 it should be rinsed out for the same reason. I have found that keeping a paper towel or rag on hand and simply wiping the nozzle during short breaks from spraying significantly reduces splattering, as does wiping away any drips from the hopper before they fall off the airbrush.

Once the tile is coated it will have a noticeable matte look, and there may be some small visible granulations on the surface. This is expected and ok. The ethanol should be entirely evaporated, leaving only a very fine powder coating of TiO2 on the surface of the tile. The TiO2 should have a decent hold on the surface and not blow off without considerable effort. I have found that blowing off the tile with the airbrush on the lowest air pressure setting, with the hopper empty, helps further reduce the granulation. There should be no loose powder on the surface of the tile. The powder does wipe off very easily, however, so the tile should be handled carefully by the edges; a slight brush can take the coating off.

Laser Engraving

There are few to no differences between engraving with the NWT principal component method and the NWT traditional method. Be sure your drag chain does not brush the surface of the tile during engraving, as it will take the powder coating off and likely ruin the engraving. I found that my settings for the NWT traditional method worked well with the principal component method: running an Ortur “15w” (~4.5w) LM2 at 750mm/min and 80% power I was able to get very good results. *Note: vaporizing TiO2 with a laser may produce “Hazardous decomposition products: Carbon oxides (CO, CO2)” https://beta-static.fishersci.com/content/dam/fishersci/en_US/documents/programs/education/regulatory-documents/sds/chemicals/chemicals-t/S25818.pdf and you will want to have your ventilation system running just as you would for the NWT traditional method.*

This tile was run with two hoppers full and was the first to produce results comparable to spray paint. The image was chosen because it is a finely detailed vector and let me see how consistent the application was over the whole tile.

While the results with two hoppers full were good, I found that three is more consistent and produces a slightly darker, fuller black.

Limited raster engraving has been attempted; the results showed promise, and it will be the next stage of serious testing now that a solid, consistent method has been developed.

Once the engraving is finished, the tile can be taken to the sink and rinsed off under the tap with some light rubbing by hand or cloth. This should remove all of the excess TiO2, leaving you with a finished workpiece. If the back of the tile gets wet it should be left to air dry, as it is a porous surface and will absorb some water.

Attempted Application Methods

Several methods of applying TiO2 were attempted before the airbrush was settled on as the best. Most of them worked to varying degrees but had drawbacks strong enough to make them less than ideal.

Attempted Application Method #1: Paint Brush

This was the first method I tried. When the coating was right the results were quite good, but getting it consistent over the entire tile proved difficult. The image below is of a tile where the TiO2 solution was brushed on, and the odd gradients can be seen pretty clearly. The blacks are also not as dark as I would have liked them to be.

Attempted Application Method #2: Pouring

In order to get an even layer over the whole tile I tried taping the edges and pouring the solution onto the surface. This produced an ok result; however, it tended to promote granulation, which gave the final result a sort of fuzzy, unfocused look. In addition, in the areas of solid black there tended to be a “salt and pepper” effect where the larger granules seem to prevent the laser from hitting the surface of the tile. I attempted double pouring, hypothesizing that the layer might not have been thick enough to cover the surface adequately. This was wrong; it actually made the entire image worse.

This tile was run with a second application of the pouring method; clearly it did not turn out to the same quality as many of the other tiles.

Attempted Application Method #3: Hand Spray Bottle

This method tended to produce more consistent results than the paint brush and less fuzz than pouring, but it did not eliminate the fuzzy or salt-and-pepper effects entirely.

What's Next?

Since the release of the original version of this method I have received some really great input from the laser community at large, and I will continue to update and refine the method as further experimentation happens. At the forefront of the docket is finding good alternative application methods, particularly paint rollers and spin coating. A member of the LightBurn forum identified this article, which provides great in-depth information on spin coating TiO2 onto ceramic wall tile. https://www.worldscientific.com/doi/pdf/10.1142/S2010194513009951

I have also begun the process of testing raster engraving and will be providing updates as I start to get some consistent results one way or the other.

TiO2 Toxicity

It has been historically accepted that TiO2 is carcinogenic when inhaled, but some information brought to my attention on the LightBurn forums tends to contradict this notion. “The epidemiological investigations evaluated the mortality statistics at 11 European and 4 US TiO2 manufacturing plants. They concluded that there was no suggestion of any carcinogenic effect associated with workplace exposure to TiO2.” https://academic.oup.com/annweh/article/49/6/461/176940

While there may not be strong evidence at the moment that TiO2 is explicitly carcinogenic, that does not mean it is healthy to breathe. No fine particulate is good for the lungs, especially one riding on pure or nearly pure ethanol. I would therefore still strongly suggest treating it like any other chemical or mineral spray and taking steps to reduce exposure to the particulates created while working with this method.

Atari 8 Bit Mandelbrot Set Zoom Videos

In a previous blog post I explained all about the color cycling Mandelbrot Set explorer I wrote in 10 lines of Atari BASIC for the BASIC 10 Liner contest (wish me luck!). While working on this project I thought it would be really cool to create a Mandelbrot Set Zoom video rendered on the Atari, which led me on an interesting journey…

I wanted to make something like this deep Mandelbrot Set zoom video

Or this video I found later that animates the color palette while zooming.

The Atari BASIC Mandelbrot Set renderer is not able to execute millions of calculations for every pixel; in fact it is configured to execute at most 81 per pixel. At that depth the numerical precision of Atari BASIC's floating point numbers appears to start breaking down, and with its 1.79MHz 6502 CPU, more cycles would take a very, very long time.
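For reference, here is the escape-time iteration the renderer performs, sketched in Python with the 81-iteration cap; this is an illustration of the algorithm, not a transcription of the 10-line BASIC program:

def mandelbrot_iterations(c, max_iter=81):
    """Count iterations of z = z*z + c before |z| escapes past 2."""
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:   # escaped: c is outside the set
            return n
    return max_iter        # never escaped: assumed inside the set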

The program can view 12 interesting preset locations on the complex plane of the Mandelbrot Set and can zoom in and out by doubling and halving the scale of the screen. I realized that if I captured a screen at each size I could scale the bitmaps into an image with very high resolution (small pixels) in the center, and then zoom in on it. I also thought that if I captured video of the color cycling I could scale and synchronize the videos to zoom in while color cycling, and by offsetting the color cycling by a fixed amount at each zoom level I could create synchronized, color coded, animated frames that better showed the zoom levels.

I reached out to the fantastic Atari chiptunes artist Adam Sporka and he generously offered to share his music, which is made on a real Atari 800, for use in these videos. I think it is totally awesome that all graphics and audio were created on Atari 8 bit computers.

I won’t keep you in suspense forever

Here are 6 of the 24 resulting Atari BASIC Mandelbrot Set zoom videos. In the ones with frames, each colored frame is half or twice the size of the adjacent frames:

Mandelbrot Zoom Videos

Videos with colored frames

Here are complete YouTube playlists of all 24 of the Mandelbrot Set zoom videos: 12 locations rendered without and 12 with colored frames. The first 3 videos in each playlist are the ones above, so if you’ve already seen them you may want to skip ahead to the remaining 9 videos in each playlist.

Here’s how I did it…

In the Atari800Win-PLus emulator I ran the program, visited each location, and zoomed all the way in and all the way out, each time taking ~30 minutes to render and then recording a ~20 second video. The 297 videos took most of a weekend to render and capture. I ran the emulator in 12 Windows XP virtual machines in VirtualBox on my (KDE Kubuntu) laptop, so I was able to keep the process moving. This animated .gif was captured from the desktop using Peek while I was doing these captures.

When capturing video, the emulator has a limited set of video codecs to choose from: Cinepak Codec by Radius, Microsoft Video, Microsoft RLE, and Intel IYUV. When testing, Cinepak did not seem to be compatible anymore, and I decided to use the Intel IYUV format because it was compact and looked good. I did run into a horrifying problem: after all the video was captured I looked at it in VLC on the laptop, and bizarrely the video was mirrored. This appears to be an issue with the Linux version of the playback codec, and to my relief did not pose a problem in Windows.
(* WHEW! *)

I used folder sharing in VirtualBox to capture the videos from all the emulators running in the virtual machines into a single shared folder, and then I copied them all to a folder shared on my Windows machine (via Samba).

Each of the 12 locations had several videos with numbered filenames like “Thorny Lightning 0.avi”, “Thorny Lightning in 2.avi”, or “Thorny Lightning out 3.avi”, where “in” and “out” indicate the number of times the view was zoomed from the default zoom level named “0”.
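A parser for that naming convention might look like this (a hypothetical sketch, not the code I actually used):

import re

# Matches "<location> 0.avi", "<location> in 2.avi", "<location> out 3.avi"
PATTERN = re.compile(r"(?P<loc>.+?) (?:(?P<dir>in|out) )?(?P<level>\d+)\.avi$")

def zoom_level(filename):
    """Signed zoom level: 'in' is positive, 'out' negative, '0' the default."""
    m = PATTERN.match(filename)
    n = int(m.group("level"))
    return -n if m.group("dir") == "out" else n

print(zoom_level("Thorny Lightning out 3.avi"))  # -3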

Catastrophe!

I thought I could use video editing software to simply composite the Mandelbrot Set zoom, but I was wrong, very wrong. I thought I would manually set the in and out points in each of the 297 video clips, align them on the timeline, set some zoom interpolation, and voila! … but when I tried a test sample I discovered two massive problems:

  1. The video zoom effect is not linear, it is exponential: the image doubles in size at regular time intervals (see the sketch after this list). This is much harder to do in video editing software, and is VERY hard to synchronize across several aligned, composited, color cycling videos.
  2. The color cycling was not synchronized between videos. Even though I could perfectly match the first and last frames of a color cycle sequence in each video and scale the videos on the timeline to the same length, the middles of the color cycling sequences were often out of sync, leading to flickering that ruined the quality of the video (and yet gave me the idea for the colored frame version of the videos).
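To make problem 1 concrete, here is a minimal sketch of the exponential scale function; the pacing constant is an assumption for illustration, not the value I used:

FPS = 60
SECONDS_PER_DOUBLING = 2.0  # assumed pacing, for illustration only

def scale_at(frame):
    """Scale factor for a constant-rate zoom: it doubles every fixed interval."""
    t = frame / (FPS * SECONDS_PER_DOUBLING)  # elapsed time, measured in doublings
    return 2 ** t  # exponential, not linear: a linear ramp would appear to slow down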

Handling zoom

The only way I could think to handle the zoom issue was to programmatically composite the frames. This would require exporting the frames from the .avi files, identifying the synchronized frames from each source .avi, and compositing them together so all of the images were scaled and registered to one another perfectly while zooming exponentially.

For processing, I extracted all the frames of each .avi video into numbered bitmaps in a _frames folder using FFmpeg. I used a hybrid manual/automated process, which is a fancy way of saying I kept editing a batch file until I got everything I needed. The batch file basically looked like this:

for /r ".." %%i in ("Thorny Lightning*.avi") do (
  rem Extract every frame, cropping the black overscan border from each image
  ffmpeg -i "%%i" -filter:v "crop=320:192:10:24" "_frames\%%~ni %%04d.bmp"
)

This processes all the .avi files from a particular location, in this case “Thorny Lightning”, using a wildcard in a for loop. The script calls FFmpeg once for each .avi file that matches the wildcard; FFmpeg reads the .avi, crops the black overscan border from the images, and saves them as numbered bitmaps in the “_frames” folder. After processing all 297 videos I had 413,153 numbered .bmp files (70GB).
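For anyone not on Windows, the same extraction step can be sketched in Python (the crop values are taken from the batch file above):

import glob, os, subprocess

for avi in glob.glob(os.path.join("..", "Thorny Lightning*.avi")):
    name = os.path.splitext(os.path.basename(avi))[0]
    subprocess.run([
        "ffmpeg", "-i", avi,
        "-filter:v", "crop=320:192:10:24",            # trim the overscan border
        os.path.join("_frames", name + " %04d.bmp"),  # numbered bitmap sequence
    ], check=True)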

To composite the frames into an exponential zoom I wrote a Python script that uses ImageMagick to scale and composite the source .bmp frames into zooming video clip frames. The scaling ended up being simpler than expected: since the image grows geometrically with time, the scale based on time t is 2^t, and since each video is half the size of the next largest, I could halve the dimensions for each successive layer, all centered at the center of the image. For each output image the program generates command lines similar to this one to execute ImageMagick:

magick convert -size 320x192 -gravity center ( "_frames\Thorny Lightning out 15 0133.bmp" -sample 329x198 ) ( "_frames\Thorny Lightning out 14 0144.bmp" -sample 165x100 ) -composite ( "_frames\Thorny Lightning out 13 0120.bmp" -sample 83x51 ) -composite ( "_frames\Thorny Lightning out 12 0140.bmp" -sample 42x26 ) -composite ( "_frames\Thorny Lightning out 11 0109.bmp" -sample 22x14 ) -composite ( "_frames\Thorny Lightning out 10 0116.bmp" -sample 12x8 ) -composite ( "_frames\Thorny Lightning out 9 0130.bmp" -sample 7x5 ) -composite ( "_frames\Thorny Lightning out 8 0128.bmp" -sample 4x3 ) -composite ( "_frames\Thorny Lightning out 7 0121.bmp" -sample 3x2 ) -composite ( "_frames\Thorny Lightning out 6 0120.bmp" -sample 2x2 ) -composite ( "_frames\Thorny Lightning out 5 0129.bmp" -sample 2x2 ) -composite -crop 320x192+0+0 +repage "Thorny lightning\frame_ioi_00000005.bmp"

This line makes a 320×192 bitmap composited from 11 scaled source bitmaps. Finding the right settings to get ImageMagick to composite the videos the way I wanted, with cropping and point sampling (instead of anti-aliasing), was a challenge. It is a very powerful tool that can process images in a multitude of ways, often offering many paths to the same or similar results (ImageMagick reference).
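In rough outline, the command-line generation looks like this (a simplified sketch of the approach, not my actual script; the layer sizes follow the halving pattern in the example above):

import subprocess

W, H = 320, 192  # cropped Atari frame size

def composite(frames, t, out_path):
    """frames: source .bmp paths, outermost (largest) zoom level first.
    t: fractional zoom phase in [0, 1); the outer layer is scaled by
    2**t and each deeper layer is half the size of the previous one."""
    cmd = ["magick", "convert", "-size", f"{W}x{H}", "-gravity", "center"]
    for i, frame in enumerate(frames):
        w = max(2, round(W * 2 ** t / 2 ** i))
        h = max(2, round(H * 2 ** t / 2 ** i))
        cmd += ["(", frame, "-sample", f"{w}x{h}", ")"]  # point sampling, no anti-aliasing
        if i > 0:
            cmd.append("-composite")  # stack this layer onto the result so far
    cmd += ["-crop", f"{W}x{H}+0+0", "+repage", out_path]
    subprocess.run(cmd, check=True)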

The source color cycling videos were manually captured, so they contain more than a single clean synchronized color cycle loop (extra frames at the beginning and end of each video). I manually looked at the bitmaps to find the indices of the first and last frames of the color cycle and fed those to the program to create a test video. The resulting video finally zoomed perfectly but was still ruined by flickering due to the unsynchronized color cycling.

Digging a little deeper…

Atari BASIC is not frame synced, meaning it does its work continuously, without synchronization to the TV display signal. The Atari computer emulator, on the other hand, renders video at 60Hz, one frame for every NTSC field emulated.

I started analyzing the bitmap sequences in Python using Pillow (a fork of the Python Imaging Library, or PIL) for image processing. Initially I tried to detect the number of colors in an image to decide when the color cycling was changing, but that was vexed by the scan lines effect I had enabled in the emulator, which caused anti-aliasing of some of the lines and many more colors than the 9 Atari graphics mode 10 colors I was expecting (I still think it looks cool). While experimenting further I noticed that successive video frames changed while the color cycling was copying one register to the next and calculating a new color, but there were many duplicate frames while the rest of the Atari BASIC program executed its processing loop. Additionally, I was aware that at one point in the color cycling all of the onscreen colors would be gray scale (actually close to, but not perfectly, RGB gray).

The winning strategy

I scan each bitmap sequence from the beginning to find the first frame that is entirely gray scale, then scan from the end for the first frame of the matching gray sequence; this defines the range of frames for one full color cycle. Within that range I compare each frame with the next until I find a pair of duplicate frames, and store the first duplicate frame's index. There are exactly 128 frames in a color cycling sequence: 8 brightness cycles of 16 hues. Adding a multiple of 8 synchronizes brightness but offsets hue; this is used for the colored frames effect. When the process is done it should have discovered the exact 128 frames representing one color cycle for that bitmap sequence (from the .avi) at that zoom level.
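Sketched with Pillow, the detection logic looks roughly like this (assumed helper names; my actual script has more bookkeeping):

from PIL import Image

def is_grayscale(path, tol=8):
    """True if every pixel is (nearly) R == G == B; tol absorbs the
    'close but not perfectly RGB gray' Atari palette."""
    pixels = Image.open(path).convert("RGB").getdata()
    return all(max(p) - min(p) <= tol for p in pixels)

def frames_equal(a, b):
    """Pixel-exact comparison used to spot duplicate frames."""
    return (Image.open(a).convert("RGB").tobytes() ==
            Image.open(b).convert("RGB").tobytes())

def find_cycle(frames):
    """frames: ordered .bmp paths from one video. Returns the index
    range spanning one full color cycle."""
    start = next(i for i in range(len(frames)) if is_grayscale(frames[i]))
    end = next(i for i in reversed(range(len(frames))) if is_grayscale(frames[i]))
    while end > start and is_grayscale(frames[end - 1]):
        end -= 1  # back up to the first frame of the trailing gray run
    return start, end  # one clean cycle should span exactly 128 frames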

More complexity

This almost worked perfectly, except Atari BASIC is not perfect and I am not perfect. Atari BASIC would occasionally let an extra duplicate frame slip in, and I occasionally recorded two or three color cycles, resulting in 256 or 384 (or more) frames instead of the expected 128. Either case messed things up.

More fixes

The unwanted duplicate frames were pretty easy to find: usually duplicate frames were 9 or 10 frames apart, but when an extra is inserted they will be only 4 to 6 frames apart. When the program detects anything other than 128 frames in a cycle it dumps all the frame offsets and the number of frames between offsets (see the sketch below). I manually tracked down the extra frames and defaced them in Gimp so they were no longer duplicates. I re-recorded the videos I had messed up (deleting the old bitmaps and making new ones with FFmpeg), and was back in business.
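That sanity check can be sketched like this (hypothetical names, same idea):

def report_cycle(start, end, dup_offsets):
    """Flag any cycle that is not exactly 128 frames and dump the gaps
    between duplicate-frame offsets so the extra frame can be found."""
    length = end - start
    if length != 128:
        gaps = [b - a for a, b in zip(dup_offsets, dup_offsets[1:])]
        print(f"cycle is {length} frames, expected 128")
        print(f"duplicate offsets: {dup_offsets}")
        print(f"gaps between duplicates: {gaps}")  # extras show up as gaps of 4-6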

What now?

By now Adam had agreed to share his music, and I had to figure out which songs matched each video. To do that I created some more batch files, this time using FFmpeg to create .mp4 video files from the exported .bmp frames and the .mp3 music Adam provided. I used command lines like this to generate test videos with music:

ffmpeg -y -r 60 -i "Thorny Lightning\frame_ioi_%%08d.bmp" -i "music\Stack 'Em up Adam Sporka.mpeg" -c copy -map 0:v:0 -map 1:a:0 -r 60 -c:v libx264 -pix_fmt yuv420p -preset slow -crf 19 -vf scale=iw*4:ih*4 -sws_flags bitexact "_out\Thorny Lightning ioi x4.mp4"

This reads the bitmap sequence at 60fps along with the music Adam provided, mapping input 0 (the bitmap sequence) to video and input 1 (the music) to audio, at an output framerate of 60fps using the h264 codec with very low loss, at 4 times the resolution (rumor has it that uploading at a higher resolution gets better quality out of YouTube), with the bitexact flag telling the scaler to resample the image rather than use a smoothing enlarging filter.

Here is a similar, simpler command line that generates the video alone, with no sound:

ffmpeg -y -r 60 -i "Thorny Lightning\frame_ioi_%%08d.bmp" -r 60 -c:v libx264 -pix_fmt yuv420p -preset slow -crf 19 -vf scale=iw*4:ih*4 -sws_flags bitexact "_out\Thorny Lightning ioi q x4.mp4"

These were excellent tests for adjusting the color cycling and zoom rates and seeing which of Adam's songs matched the 12 Mandelbrot Set locations, but the videos had no titles, credits, or transitions. They were previews; they lacked the final polish.

Are we there yet?

I used HitFilm Express to edit the 12 video projects and render 24 videos. I imported the bitmap sequences as clips (not the compressed preview .mp4s), created intro and outro titles, composited the video with the awesome music provided by Adam Sporka, and synced up all the fades. I exported two versions of each video, one with colored frames and one without, then uploaded them to YouTube, made the descriptions and end cards, and so on. You know the rest.

In conclusion

In the end I love the way the videos (and this blog) turned out; it's amazing to think that everything you see and hear in these videos was created on Atari computers. The Atari BASIC program that rendered every frame was only 10 lines long, but it took VirtualBox running 12 Atari800Win-PLus emulators, half a dozen batch files, FFmpeg, ImageMagick, Gimp, 560 lines of Python using Pillow, and finally HitFilm Express to generate the final videos.

Stay creative, and support your creative communities!