An Introduction to Automated RTI

Let me preface this by saying:

1) Hi. I’m Kevin Falcetano and this is my first AISOS blog post. I am an undergraduate technician working for AISOS and have worked on the construction of our RTI equipment for almost two months.

2) This project was made far easier to complete because of Leszek Pawlowicz. His thorough documentation on the process of building an RTI dome and control system from consumer components as detailed on Hackaday was the reason for the successful and timely completion of AISOS’s very own RTI system. Another special thanks to the open software and materials from Cultural Heritage Imaging (CHI).

Okay, now that the introduction is out of the way, second things second.

What is RTI?

RTI stands for Reflectance Transformation Imaging. It is a method of digitizing/virtualizing the lighting characteristics of one face of an object by sampling multiple lighting angles from the same camera position, using point lights at known positions. The mathematical model involved produces a two-dimensional image that can be relit from virtually any lighting angle, so that all of the surface detail is preserved on a per-pixel basis. The basic idea comes from the fact that light reflects off a surface differently, and predictably, depending on the angle of that surface. A visualization of surface normals, the vectors perpendicular to the surface at any given point, is provided below (credit CHI).

The information available is represented by yellow vectors, and the information we wish to calculate, the surface normals, is in red. So, given that we know the math behind how light reflects from a surface (which we do), and we know each light’s angle with respect to a fixed camera, we’re very close to calculating the normal vectors of the surface. We’re only close because there are unknown constants involved, but if you’ve ever taken an algebra course, you can see there is a linear system that can be solved for those coefficients. We just need more data, and that data comes in the form of images lit from more angles. The extra angles also help account for areas of the object that light from a certain direction never reaches, because the object’s own geometry occludes it, and which would otherwise be incalculable. When all’s said and done, an RTI image is generated. Although it is two-dimensional, an RTI image can mimic the way the real object scatters light at a resolution that matches the source images. Below are three images as an example of how the lighting changes between each photo.
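
For the curious, the most common fitting model, the polynomial texture map (PTM) that shows up again in the processing post, approximates each pixel’s brightness with a simple polynomial in the light direction. The equation below is a simplified statement of that idea rather than the exact math our processing scripts run:

```latex
% Per-pixel PTM model: brightness L as a function of the projected light
% direction (l_u, l_v); a_0 ... a_5 are the six unknown coefficients.
\[
  L(u, v;\, l_u, l_v) \approx a_0 l_u^2 + a_1 l_v^2 + a_2 l_u l_v + a_3 l_u + a_4 l_v + a_5
\]
```

Every photo contributes one known light direction and one measured brightness per pixel, so with six or more well-spread lights the coefficients can be fit by least squares, and the surface normal estimate falls out of where that polynomial peaks.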

Why RTI?

The openly available documentation for RTI explains its benefits better than I probably could, but put most simply, it’s like having an image, but the image knows how that object could be lit. This means that details that could not be revealed in any one possible image are shown in full in an RTI image. One of the pitfalls of photogrammetry, for instance, is that it HATES reflective objects, since smooth surfaces and specular highlights make photogrammetry’s hallmark point tracking very difficult. RTI doesn’t care. RTI doesn’t need point overlap because the process asks that you eliminate many of the variables associated with photogrammetry, e.g. the camera and object do not move and the lighting angles are pre-calculated from a sphere of known geometry. The GIGAmacro can get up close, but it still produces images with single lighting angles, so, all else equal, less information per pixel. GIGAmacro, of course, has the advantage of being able to capture many camera positions very quickly, which results in many more pixels. RTI’s per-pixel information produces near-perfect normal maps of the surface, which are represented with a false color standard. As an added benefit, our automated version of the workflow is blazing fast. Like, from start to RTI image file takes as little as five minutes, fast.

For a simple example of how RTI lets you relight an object, take a look at this coin.  Click the lightbulb icon and then click and drag your mouse to move the lighting.  This same data can be processed in a multitude of ways to reveal other details.

How RTI?

There are many ways to record data for RTI, including some very manually intensive methods that, for the sake of expedience, are definitely out of the question for AISOS. We operate under the assumption that researchers who use our space may have limited experience in any of their desired techniques, so the easier we can make powerful data acquisition and analysis methods, the better. We decided to use Leszek’s method because it was both cost and time effective due to the way it can easily be automated. This method involves placing one white LED for each lighting position we want, up to 64 of them, inside an acrylic dome with a hole in the very top. The hole allows a camera to look vertically down through it at an object centered under the dome. The dome is painted black on the inside to minimize internal reflections, making the only light source for the object the single desired LED inside the dome. Each LED is lit for exactly one image, and the resulting data points may be processed and turned into an RTI image file.
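
In firmware terms, that whole capture pass boils down to a very small loop. Here is a minimal Arduino-style sketch of the idea; it is not the real firmware (that comes from Leszek’s Hackaday project, with our modifications described in the build post), and the timings and placeholder functions are made up for illustration:

```cpp
// A minimal Arduino-style sketch of the automated capture cycle, for
// illustration only. The timings and placeholder functions are assumptions;
// the real firmware comes from Leszek's Hackaday project.
const int NUM_LEDS = 64;             // 48 on the small dome
const unsigned long LIGHT_MS = 500;  // LED-on time (a potentiometer on the box sets this)
const unsigned long GAP_MS   = 250;  // pause so the camera can save each file

void lightLed(int index, bool on) {
  // Placeholder: the real build routes current through one LED of an 8x8
  // matrix using column MOSFETs and row LED drivers (see the build post).
  (void)index;
  (void)on;
}

void triggerShutter() {
  // Placeholder: the real build briefly shorts the camera's 2.5mm remote
  // shutter jack through a small transistor circuit.
}

void setup() {
  for (int led = 0; led < NUM_LEDS; led++) {
    lightLed(led, true);   // exactly one light source inside the dome
    triggerShutter();      // take the photo
    delay(LIGHT_MS);       // hold the light while the exposure happens
    lightLed(led, false);
    delay(GAP_MS);
  }
}

void loop() {}
```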

This way, every LED can be turned on individually to take a picture of each lighting angle without moving or changing anything, and done so with an automatic shutter. This means that after setup, the image capture process is completely automated. Ultimately, the goal for this build was: Place dome over object, move in camera on our boom arm, focus lens, press button, RTI happens. Observe this accomplished goal in picture form:

You may notice in the above image that there are two domes of different sizes. We built two not only to accommodate different sizes of objects, but also to be able to use certain lenses, normally macro ones, that need to be closer to an object to be in focus. This allows us to pick a preferred lens for an object and then decide which dome to use with it so that the desired magnification and lens distance can be retained in most situations.

How all of this was built is outlined in a separate blog post.

Building an Automated RTI Dome

The RTI Build Experience

This is the second post in a series, written by Kevin Falcetano. See the first post for an introduction.

Though time-intensive, the RTI build was relatively simple. It was so simple because the hard parts, i.e. circuit design, dome construction, control box construction, and programming, had already been done by Leszek Pawlowicz. This made the project more like a LEGO set than a massive undertaking.

I began with the control circuitry. The basic idea behind it is that each LED needed to be controlled individually, but it had to be a type of LED that was bright enough and produced consistent, quality white light. This, unfortunately, meant that individually addressable LEDs (the kind that can be controlled over a single data line with an Arduino library for the control protocol) just would not do. So this is where Leszek’s build comes in. It calls for an eight by eight matrix of LEDs to be controlled by a special series of circuits connected to an Arduino Mega.

This allows a total of 64 LEDs to be driven off of 16 digital pins, where the columns are the positive leads and the rows are the negative leads. The Arduino’s power supply cannot drive this on its own, and that is why the circuitry is much more complicated. What is required is essentially a set of transistor-like gates on the positive and negative ends of the matrix so the current can be directed through one specified LED at a time.
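
From the software side, the matrix makes selecting a light very simple: every LED sits at a unique row and column intersection, so turning it on means activating exactly one column pin and one row pin. The little test sketch below walks the whole matrix to illustrate the idea; the pin numbers are placeholders, and it assumes the driver stages are wired so that a HIGH output switches the corresponding gate on:

```cpp
// A small illustrative test sketch for the 8x8 matrix idea: 16 pins, 64 LEDs.
// Pin numbers are placeholders, and the HIGH-equals-on polarity is an
// assumption about how the driver stages are wired.
const int COL_PINS[8] = {22, 23, 24, 25, 26, 27, 28, 29}; // high-side MOSFETs (+ columns)
const int ROW_PINS[8] = {30, 31, 32, 33, 34, 35, 36, 37}; // CAT4101 enables (- rows)

void allOff() {
  for (int i = 0; i < 8; i++) {
    digitalWrite(COL_PINS[i], LOW);
    digitalWrite(ROW_PINS[i], LOW);
  }
}

// Light exactly one of the 64 LEDs by closing one "gate" on the positive side
// (a column) and one on the negative side (a row).
void lightLed(int index) {
  allOff();                                // make sure only one path can conduct
  digitalWrite(COL_PINS[index % 8], HIGH); // which vertical column of LEDs
  digitalWrite(ROW_PINS[index / 8], HIGH); // which concentric ring
}

void setup() {
  for (int i = 0; i < 8; i++) {
    pinMode(COL_PINS[i], OUTPUT);
    pinMode(ROW_PINS[i], OUTPUT);
  }
}

void loop() {
  for (int led = 0; led < 64; led++) {     // walk the whole matrix as a wiring test
    lightLed(led);
    delay(200);
  }
}
```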

One such set of gates is the high-side MOSFET circuit, built on an Arduino Mega shield:

An Arduino shield is a piece of hardware meant to be mounted on top of an Arduino microcontroller to interface with it. In this case, it was a blank board used to connect eight high-side MOSFET circuits to the digital pins that open the “gates” on the positive end of the matrix to the desired column.

These components all had to be hand soldered to the shield board, but after quite a long period of time, everything was in its place without much of a hitch. The positive end of each MOSFET (the source) connects in parallel to the positive end of the power supply. Each gate (actually called the gate) gets connected to a corresponding Arduino digital pin to be switched open as necessary, and finally, the negative end of each MOSFET (the drain) gets connected to the corresponding pin of the positive ethernet cable that leads to the LEDs in the dome. The resistors and transistors shown on the board exist to adapt the MOSFETs for high-side control. The other side of the shield connects the other eight Arduino digital pins to the row “gates” on the negative end, to make a complete circuit that controls each LED.

Those negative gates, controlled by CAT4101 LED drivers, were soldered onto a different board along with a beeper and USB control pins:

In this case, there is a normal resistor for baseline current limitation, which acts as a bottleneck for the maximum current through the LEDs. Alongside each resistor is a variable resistor to further reduce the current and vary/tune the intensity of each LED row. These CAT4s allow the negative side of the LED matrix to selectively connect to ground by way of the ground ethernet cable. Once all was soldered, it was a matter of mating this board, the Arduino Mega, and the MOSFET control shield.

It also helped to have a separate power distribution board, with positive and negative rails for the 5V Arduino power supply, and positive and negative rails for the 9V power supply that drives the LEDs.  After connecting the power leads up and stripping and adding pins to eight LEDs and two ethernet cables, it was time for the test: The moment of truth for my soldering and circuit building skills.

Aside from one cold solder joint that I had to fix, everything worked.

As you can probably see from the top left corner of the above image, there is a large pile of LEDs that all have wires soldered on, pins crimped, and heat shrink tubing applied to them. This was a very fun intermediate step that requires no further explanation but will now be mildly complained about. That level of repetition is not actually fun, but very necessary.

Speaking of, the next thing on the list was painting the LEDs black. Since the goal of good RTI is to eliminate as many environmental variables as possible, the inside of the LED dome must be as non-reflective as possible, so as to eliminate internal reflections that could accidentally cause multiple other, albeit dimmer, light sources within. This means even the area around the actual LED chips had to be painted black, with the only other exception being a small portion of the positive pad so as not to mix up the polarity when it came time to wire everything together.

Now that this bit was out of the way, it was dome prep time. The domes we got were clear, which helped with marking the outside for LED positions and then transferring the marks to the inside. I did this using some clever geometry math mostly outlined by Leszek in his documents. I, however, had the idea to melt the marks into the inside of the dome using the soldering iron, which caused a rather unpleasant smell to fill up the room that may or may not have been an indicator of toxic fumes. This step was necessary so that the marks showed up even with the paint over them. Next was just that: painting both the small and big dome using exactly one entire can of matte black spray paint.

Above is an image from the spray booth inside Regis Center for Art, where I applied multiple coats of the paint in a highly impatient fashion. Everything turned out fine, since the drips in the small dome would be inconsequential to the build.

After everything was dry, it came time to mount the LEDs inside the big dome, since it was the first one we decided to have done to make sure everything was working as intended. Each LED was hot glued in place with the positive ends of all LEDs pointing in the counterclockwise direction.

The above photo shows the placement of the LEDs as well as part of the following step: the stripping and crimping of many chains of wires that connect up the LED matrix. Each LED has an attached male dupont pin head, whereas each chain of connectors has properly spaced female dupont pin heads. Every concentric circle of eight LEDs represents a negative row, and every vertical set of eight LEDs (offset to help with lighting coverage) represents a positive column.

With all of the rows and columns wired up, the open ends of each string of wires were to connect with the male dupont pins I crimped onto the ends of both the positive and negative ethernet cables’ individual wires. Every part of the control circuitry was made to correspond with matching pins 1 through 8 on the ethernet cables, such that it becomes incredibly easy to know which Arduino pin corresponds with which row and column number and which LED. The next step was then to clean up the wiring and turn the dome into a more self-contained piece.

Behind this piece of special matte tape (3M Chalkboard Tape), which has about as much stick as a thirty-year-old Post-It, are the connections to the ethernet cables, pushed down and hot glued in place. With the finishing touches in place on the dome, a successful test was run and it was time to contain the control circuitry in a cozy project box.

The Arduino, MOSFET shield, CAT4101 board, and power board were all hot glued in place after drilling the requisite holes for the bells and whistles. The two potentiometers were wired in and bolted on to control the LED on time and delay, as this can vary between different shooting scenarios. A reset button, preview button, and action button are all shown above, and used to operate the reset, single light preview, and main action activation functions in the Arduino’s programming. This image was taken before I added the main 2.5mm jack used to snap the camera shutter and the switches that select the operation modes. At the right side of the box are the ethernet jacks for power to the dome (top is positive and bottom is negative), and the USB connection for cameras without a way of connecting to the 2.5mm shutter switch. Last to mention is the OLED screen that displays relevant information that helps with eliminating mistakes and debugging when repairs and/or changes must be made.

On the subject of the control box, I personally made changes to the construction in a few ways:

  • I accidentally busted a switch because the soldering iron was too hot, so I removed the sound on/off function which ended up being fine since I think I messed up the wiring of the beeper anyway.
  • I completely removed the USB shutter, servo shutter, and IR/Bluetooth shutter options because we only really need a 2.5mm auto shutter and a manual mode.
  • I added a small transistor circuit to short the 2.5mm jack to ground using an Arduino digital output in order to actually make the 2.5mm auto shutter function work (a rough sketch of this follows the list).
  • I added a three way switch to reduce the number of switches, but increase the functions possible.
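
The shutter change deserves a small illustration. Below is a rough sketch of the idea, with an assumed wiring: an Arduino pin drives an NPN transistor through a base resistor, and the transistor shorts the jack’s shutter contact to ground, which the camera reads as a button press. The pin number and pulse length are guesses, not the values used in the actual box:

```cpp
// Rough sketch of the 2.5mm shutter trigger idea (illustrative, not the real
// firmware). SHUTTER_PIN and the pulse length are assumptions.
const int SHUTTER_PIN = 2;

void snapShutter() {
  digitalWrite(SHUTTER_PIN, HIGH); // transistor conducts: jack shorted, shutter "pressed"
  delay(150);                      // hold long enough for the camera to register the press
  digitalWrite(SHUTTER_PIN, LOW);  // release
}

void setup() {
  pinMode(SHUTTER_PIN, OUTPUT);
  digitalWrite(SHUTTER_PIN, LOW);
  snapShutter();                   // fire once as a quick test
}

void loop() {}
```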

This is the front of the final form of the box:

After the box came wiring up the small dome. The steps were the same as for the big one, just scaled down, so I’ll spare you the boring details other than the fact that this one only contains 48 LEDs, with eight columns and six rows.

With everything complete, I also had to change some of the Arduino code, which Leszek provided with his Hackaday project. The changes to the code are outlined below:

  • Removed sound function and USB and IR/Bluetooth shutter functions
  • Added dome mode switch, so either dome can be used without reprogramming the control box (the idea is sketched after this list)
  • Added a new white balance preview function for calibrating the camera because the old one was tedious
  • Added new display functions to show new modes above
  • Added a new function to snap 2.5mm shutter
  • Added a function to skip intro screen because of impatience and changed the intro screen to display relevant version info of the new software build.
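
As a small illustration of the dome mode switch, the sketch below shows roughly how one digital input can decide whether the capture loop steps through 64 lights (big dome) or 48 (small dome). The pin, its active level, and the variable names are assumptions rather than what the actual firmware uses:

```cpp
// Illustrative dome mode switch, not the real firmware. DOME_SWITCH_PIN and
// its polarity are assumptions for the example.
const int DOME_SWITCH_PIN = 3;
int numLeds = 64;                  // big dome: 8 rows x 8 columns

void setup() {
  pinMode(DOME_SWITCH_PIN, INPUT_PULLUP);
  // Closed switch = small dome: only 6 of the 8 rows are populated (48 LEDs),
  // so the capture loop simply stops two rows early.
  if (digitalRead(DOME_SWITCH_PIN) == LOW) {
    numLeds = 48;
  }
}

void loop() {}
```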

Oh, and one last-minute addition before this goes up: I added a remote preview button so that the exposure settings and focus can be adjusted from the computer without getting up to turn the preview function on and off, making the setup process a bit less tedious. This feature is just a wired button that plugs into the front of the control box and is completely optional, since I kept the original button in place.

The final post is a conclusion and thoughts on the lessons learned from this build.

Building an Automatic RTI Dome: Wrapping It Up

This is the third post in a series, written by Kevin Falcetano. See the first post for an introduction and the second post for the build process.

The RTI project was not without its share of problems, mistakes, and general annoyances, but it was ultimately both interesting and rewarding to go through. I’ve separated my concluding remarks into three distinct semi-chronological sections under which the trials, tribulations, and triumphs of this project are detailed.

Building

The construction of this project required many time-intensive repeated steps, but the hardest part, the planning, had already been worked out down to a detailed parts list and the very placement of the circuit components. Once again, a great many thanks to Leszek Pawlowicz for all of this work. In all of the soldering, stripping, crimping, and wiring, I made only a few easily correctable mistakes. That success is attributed to those thorough instructions and guiding images.

A few lessons were learned, still:

  • Helping hands are wonderful. Use them.
  • Lead-free solder may or may not kill you more slowly, but it has a higher melting point. What that really means is it requires a delicate balance between “hot enough to heavily oxidize your soldering tip” and “too cool to flow the solder quickly.” Throw the balance too close to one side or the other and you get the same result: the heat spreads further over the extended heating period (by way of either too low a temperature or restricted conductivity through an oxide layer too thick to be affected by the flux core), melting plastics easily and possibly damaging sensitive components. I killed two switches and probably a piezo speaker this way. Be careful or use leaded solder.
  • Paint drips are fine if all you need is coverage.
  • 3M chalkboard tape may be matte black, which is wonderful for the inside of our domes, but it doesn’t stick. At all.
  • Hot glue is your friend. It’s not pretty or perfect but it’s accessible. What use is a build if you can’t repair it?
  • If one LED isn’t working in your matrix of LEDs, it is, logically, the LED that’s the problem. And by the LED being the problem I mean you are the problem. You put it in backwards but were too proud to check. The D in LED stands for diode – one way.
  • Be sure to have access to a Dremel when trying to put large square holes in a hobby box.  Or have low standards.
  • When taking on a wiring project of this caliber, have an auto stripper handy. I may have lost my mind if I hadn’t used one.
  • Leave yourself some slack in the wires in case you break a dupont pin.
  • Don’t break dupont pins.

Calibrating and Processing

The calibration process was initially a bit of a headache, but I still must be grateful to Leszek and Cultural Heritage Imaging for their open software, without which more frustration would have ensued.

CHI provides a Java applet that generates lp files. The file extension stands for light positions (or light points) and is pretty self-explanatory: an lp file is a plain text file listing the angle of the light for each lighting position, in order from the first to the last picture taken. This is done automatically with some clever math and dodgy code. CHI has told us they have a more stable version in the works, but it’s not quite done yet. Anyway, calibration is done using a black sphere (in this case also provided by CHI) that reflects each LED in the dome as a bright point. The sphere gets centered under the dome and a pass is run taking a photo for each light angle. The rotational orientation of the dome relative to the camera is marked so it can be reproduced for future use. Because the light positions stay fixed so long as the dome is rotated to the correct position over the object, the lp file may be reused.
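
For reference, an lp file is nothing exotic. Roughly speaking, it is a line with the number of images followed by one line per image containing the filename and a normalized x y z light direction. The snippet below is a made-up four-light example, not output from our dome:

```
4
capture_01.jpg  0.8192  0.0000  0.5736
capture_02.jpg  0.0000  0.8192  0.5736
capture_03.jpg -0.8192  0.0000  0.5736
capture_04.jpg  0.0000 -0.8192  0.5736
```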

The program uses an edge detection algorithm to figure out the radius, and in turn the center, of the sphere. After that, it uses the known geometry of the sphere and a highlight detection algorithm to find the reflection point of each LED in each picture, and then uses that point’s position relative to the center and radius of the sphere to calculate the angle of each light. For H-RTI, this is supposed to be done every time with a reference sphere next to every subject, but as I said before, we can reuse the lp because everything is fixed. This applet is glitchy and cumbersome, but it works when treated properly. It will run out of memory if we use full resolution photos from the Canon 5D, but downscaling them does not introduce any significant error into the calibration process. It also has very strict guidelines for filenames and a clunky default user interface, the indicators of purely functional software. It’s useful, but tedious.
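
For anyone curious about the geometry, the idea is the standard highlight-on-a-sphere calculation, stated here in simplified form. It assumes the camera is far enough away to treat the view direction as straight down, which is an approximation of what the applet actually computes:

```latex
% Highlight at offset (dx, dy) from the sphere center in the image, sphere
% radius r in pixels, view direction approximated as v = (0, 0, 1):
\[
  \mathbf{n} = \left( \tfrac{dx}{r},\; \tfrac{dy}{r},\; \sqrt{1 - \tfrac{dx^2 + dy^2}{r^2}} \right),
  \qquad
  \mathbf{l} = 2\,(\mathbf{n} \cdot \mathbf{v})\,\mathbf{n} - \mathbf{v}
\]
```

In words: the highlight’s position on the sphere gives the surface normal at that spot, and mirror-reflecting the view direction about that normal gives the direction the light must have come from, which is exactly what ends up in the lp file.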

Leszek has his own software that strips the lp file of its association with the calibration sphere image files and stores it for future use as a general lps file. This can then be used in his program to create a new lp for each RTI run associated with the new image files, and then it and the photos get run through various command line scripts for different RTI and PTM (polynomial texture map) processing methods. This ensures that, after initial calibration, subsequent workflow is very fast and efficient, with minimal fiddling required.

Customizing

Of course, there are some things specific to our use cases that a third party couldn’t anticipate, so some changes were made. The biggest change in functionality, but arguably the easiest change to make programming-wise, was the addition of a mode switch to toggle which dome is in use. This allows the automatic, manual, and preview functions of the control box to change depending on which dome is being used. There are more details on how I changed the hardware and code in the build blog post, but all in all it was relatively painless. Leszek’s code is very clean and manageable, so I was able to easily navigate and change what I needed (which was quite a bit). It worked perfectly from the beginning, but not for exactly what we needed it for.

Final Thoughts

My rough estimate puts a first-time build of this project from start to finish at around 80 hours. It was fun overall, and not terribly frustrating save a few parts. We are currently thinking through how to make a custom shield PCB to optimize the build process of this RTI dome’s control unit, since there may be demand for a pre-built rig from others at the University. It would be very difficult to streamline the dome building process considering it is such a custom endeavor. I could see a template being made for mounting the LEDs and placing them in the domes, possibly with ribbon cables, but it still seems pretty far off.

Although there are certain significant limitations, RTI appears, in our testing, to be a very versatile tool, especially when automated in the way we have it. We are working with researchers from across campus to explore the possibilities, and the speed and precision of these domes are a huge benefit to that work. In the future, we hope to be able to combine RTI and photogrammetry to produce models with surface detail we have yet to come close to being able to capture.

If you would like to take advantage of our ever-expanding imaging resources, or would simply like to chat with us about what we’re doing and how we’re doing it, feel free to contact aisos@umn.edu.
