Virtual MISLS

AISOS has been supporting Professor Kat Hayes on the production of “Virtual MISLS”, an iOS app that allows users to explore a historic building at Fort Snelling (Minnesota). It is designed for Google Cardboard and other mobile virtual reality headsets.

During World War II, the Military Intelligence Service Language School (MISLS) at Fort Snelling trained soldiers in the Japanese language to aid the war effort. Many of the buildings used for the MISLS are still standing, but are not open to the public. Because physical preservation and renovation are costly and take years to accomplish, researchers at the University of Minnesota have used a technique called photogrammetry to create a three-dimensional virtual reality (VR) model of one of the classroom buildings (Building 103) from Fort Snelling’s Upper Post. This app lets you experience the space of Building 103, where you can navigate through a guided virtual museum exhibit that uses text panels, photos, and audio clips of oral history interviews with Japanese American MISLS veterans.

Virtual reality is a powerful tool for public memory projects, as it creates an immersive and interactive experience that allows users to imagine what it might have been like to be a soldier and student at the language school. The app is a work in progress that seeks to provide alternative modes of access to spaces and histories that are under-represented at Historic Fort Snelling.

To use the app, you’ll need some sort of Google Cardboard viewer. We like the DSCVR viewer, but any viewer will work fine.  If you have a “plus” size iPhone, make sure your viewer supports that size.

The app begins with some tutorial information.  If you have further questions, please get in touch.

Automated Slide Capture Rig

Chances are pretty good that if you crack open a dusty storage closet at any large University, you’ll find an old slide projector, or an unlabeled box of 35mm slides.  For decades, University lectures and research work relied heavily on slides, created from photographs taken on reversal film or charts and diagrams transferred to slides for presentations.

Since 2006, the Digital Content Library within the College of Liberal Arts has been scanning slides collected by our faculty as part of their research and instruction.  Hundreds of thousands of slides have been scanned and cataloged, creating a unique collection of material from around the world and across the decades.  Each of these slides was scanned one at a time, on a scanner like the Nikon CoolScan.  These scanners produce a quality image, but they’re slow and require an operator to load each slide.  With hundreds of thousands more slides in the “someday” pile, scanning one at a time meant a lot of slides would probably never get scanned.

Inspired by some commercial products and a hacker spirit, we decided to build our own automated slide capture unit, using a classic Kodak Ektagraphic carousel projector, combined with a Canon 5D Mark III digital camera.

The basic theory of operation is to leverage the automated loading and advancing capabilities of the projector, and not much else.  The lens is removed, and the illumination bulb is replaced with a lower output (and much cooler) LED.  The camera, with a macro lens, focuses on the slide sitting inside the projector. A small Arduino controller manages the whole apparatus, advancing the projector and triggering the camera.

Because this was a hobby project, and not a core part of our jobs, it evolved very gradually over the fall of 2017.  We started by placing a large LED light panel behind the projector, with the lamp drawer removed, to get a sense of the image quality.  Some basic comparisons against our Nikon scanners hinted that the quality was as good as or better than the dedicated scanners.

Because we don’t run a full-time slide scanning facility, we wanted to keep our scanning station relatively portable.  For that reason, it was important that the projector stay self contained, rather than relying on an external light source. For longevity and consistency, we wanted an LED.  We knew we needed very even illumination, which meant we would likely be using diffusion.  That meant starting with a high output light source.  If you’re looking for a high output LED, you’ve got a couple of options.  You can get a fancy branded LED with a quality driver, or you can get a slightly sketchy LED directly from China, via eBay or Alibaba.  We went with the latter.  As a bonus, it came with a basic LED driver and heatsink.


With the LED in hand and firmly mounted to its heatsink, we started figuring out how to mount it in the projector.  We decided to mount the LED in the same place as the original bulb, utilizing the existing removable light tray.  This means the light mechanism can be moved to another projector easily.  The heatsink was ever-so-slightly larger than the original lamp, so a few pieces of the mechanism had to be cut away.  For testing, the heatsink was mounted with some zip ties.

The LED runs on DC power.  Rather than trying to pack a transformer into the projector (which is exclusively AC), we simply ran the power cables out through a slot in the housing.

With the LED in place, we did some more testing to see about achieving even lighting.  Our friends in the TV studios gave us some diffusion material, so we were able to stack sheets of diffusion until we could no longer perceive any hot spots on our test slides.

Once we had the LED in place, we were pretty confident the project would work out.  The remaining bits, involving some basic fabrication and automation, were things we’d tackled with past projects like our automated photogrammetry rig and our RTI domes.  We repurposed a spare Arduino Uno to control everything.  Because the projector uses a 24 volt AC signal (!!) to control the slides, we couldn’t get away with a simple transistor control. Since we were only building one of these, we decided to buy a premade Arduino Relay shield from Evil Mad Scientist.
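For the curious, the control logic is about as simple as Arduino code gets.  The sketch below is an illustrative reconstruction rather than our exact production firmware (the pin numbers and timings are placeholders), but it captures the whole job: pulse one relay channel to advance the carousel, pulse another to fire the camera, and repeat.

// Illustrative slide-capture loop. Pin assignments and delays are placeholders,
// not the values from our production rig.
const int PROJECTOR_RELAY_PIN = 4;   // relay across the projector's 24 VAC advance line
const int CAMERA_RELAY_PIN    = 7;   // relay (or optocoupler) across the camera's shutter contacts
const int SLIDES_PER_CAROUSEL = 80;

void setup() {
  pinMode(PROJECTOR_RELAY_PIN, OUTPUT);
  pinMode(CAMERA_RELAY_PIN, OUTPUT);
}

void loop() {
  for (int slide = 0; slide < SLIDES_PER_CAROUSEL; slide++) {
    digitalWrite(PROJECTOR_RELAY_PIN, HIGH);   // close the relay to advance the carousel
    delay(250);
    digitalWrite(PROJECTOR_RELAY_PIN, LOW);
    delay(1000);                               // wait for the slide to drop and settle

    digitalWrite(CAMERA_RELAY_PIN, HIGH);      // close the shutter-release contacts
    delay(100);
    digitalWrite(CAMERA_RELAY_PIN, LOW);
    delay(750);                                // give the camera time to write the file
  }
  while (true) {}                              // carousel done; sit here until reset
}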

A little bit of code and some gator clips let us confirm that everything worked as intended.  All that was left was the cleanup and assembly.  We designed some basic mounts in Tinkercad and printed them on our Makerbot Replicator Mini.  The entire electronics package was small enough to fit into the remote storage compartment on the side of the projector.  Because the door is easily removable, it’s still a snap to move the entire setup to another projector.


At this point, the rig is assembled and in production.  We use SmartShooter 3 on a computer, attached to the camera via USB, to store the images as they’re acquired.  We use BatchCrop to crop, clean, and straighten the images.

We can image slides at the rate of roughly 3 minutes for an 80-slide carousel.  In fact, it takes far longer to load and unload the carousel than it does to perform the capture.

[youtube https://www.youtube.com/watch?v=8Z7GMDpcPGc]

Because we used a lot of repurposed parts, we don’t have a complete Bill-of-Materials cost of the rig, but it could be replicated for well under $100 (assuming you already own the camera and projector).  We could probably put another unit together in an afternoon.

We’re really excited to have done this project, and we think it’s a great representation of the LATIS Labs ethos: we saw a problem that needed solving, iterated in small steps, and ultimately put together a workable solution.  Our next build is a little more complicated and a bit more expensive – if you’ve got a few thousand dollars lying around, get in touch!

 

 

Building a Photogrammetry Turntable

One of our goals with AISOS is to make complicated imaging tasks as easy and repeatable as possible.  We want to be able to rapidly produce high quality products, and we want the process to be accessible to folks with a minimal amount of training.

One of the ways we’ve done that for photogrammetric imaging is by building an automated turntable capture setup.  Conceptually, this is a pretty straightforward solution.  A small turntable rotates an object a fixed number of degrees, then triggers a camera to take a photo.  That process is repeated until the object has made a full 360 degree rotation.  Then the camera can be adjusted to a different angle, and the process can be repeated.

As much as we like doing cool hardware hacking, we also don’t want to suffer from “not invented here” syndrome.  We investigated a variety of options for off-the-shelf solutions in this space.  There are a handful of very high end products, which also handle all the camera movement.  These are amazing, but they’re both expensive (nearly $100,000) and, more crucially, massive.  None of those solutions would physically work in our space.

There are also some smaller standalone turntable options that we explored.  However, they’re all essentially small volume homemade products, and rely on proprietary software.  We were concerned about being stuck with an expensive (still thousands of dollars) product of questionable quality.

We then began to look at building our own.  We’re not the first ones to have this idea, and fortunately there are a variety of great build plans out there.  Our favorite was the Spin project from MIT.  Spin is an automated turntable setup designed for photogrammetry with an iPhone. We knew we’d need to modify the setup to work with our camera, but the fundamentals of Spin are excellent.

For our turntable, we completely replicated the physical design of the Spin, using their laser cutter templates and their 3d printed gear.  We used the same stepper motor as well, in order to utilize their mount.  Where we differed was in the electronics.

In this post, we’ll outline the basics of our design and share our Arduino code.  We don’t currently have a full wiring schematic (Sparkfun doesn’t have a fritzing diagram for our chosen stepper driver, and none of us know Eagle – get in touch if you want to help).

Our design is based around a SparkFun AutoDriver. The AutoDriver is relatively expensive for a stepper driver, but it’s really easy to work with, and is a little more resilient to being abused.  Our implementation is actually based on the “getting started with the AutoDriver” guide published by SparkFun.  We use a SparkFun RedBoard (an Arduino-compatible board) as our controller, along with a protoshield for making reliable connections.

The additions we’ve made to the basic SparkFun design include the camera trigger control and the ability to adjust the degrees of rotation per interval.  We’re working with a Canon digital SLR, which can be triggered via a simple contact-closure trigger.  The Canon camera trigger uses three wires – one for focus, one for firing the photo, and a ground.  Connecting either of the first two to ground is all you need to do to trigger an action.  To control that from an Arduino, you just need a transistor attached to a digital pin.  Driving the pin high switches the transistor on, which closes the circuit and triggers the camera.

To control the number of degrees per interval, we added a simple rotary selector.  A rotary selector is basically just a bunch of different switches – only one can be on at a time.  We use five of the analog pins on our Arduino (set to operate as digital pins) to read the value of the switch.
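Putting those pieces together, the firmware is a short loop: read the selector, shoot, rotate, and repeat until the table has come all the way around.  The sketch below is a simplified stand-in for our actual code; the pin numbers and the degree values on the selector are placeholders, the wiring assumes the selector’s common is tied to ground (so the active position reads LOW), and the stepper move is hidden behind a stub where the real build talks to the AutoDriver over SPI.

// Simplified turntable sketch. Pins, selector values, and timings are placeholders;
// stepDegrees() is a stub standing in for our AutoDriver (SPI) motion code.
const int CAMERA_PIN = 9;                              // transistor across the Canon shutter contacts
const int SELECTOR_PINS[5]    = {A0, A1, A2, A3, A4};  // rotary selector, one pin per position
const int SELECTOR_DEGREES[5] = {5, 10, 15, 20, 30};   // degrees per interval for each position

void stepDegrees(int degrees) {
  // In the real build this issues a move command to the SparkFun AutoDriver
  // and waits for the driver's busy flag to clear.
}

int readSelector() {
  // The selector common is wired to ground, so the active position reads LOW.
  for (int i = 0; i < 5; i++) {
    if (digitalRead(SELECTOR_PINS[i]) == LOW) return SELECTOR_DEGREES[i];
  }
  return 10;  // fallback if nothing reads active
}

void triggerCamera() {
  digitalWrite(CAMERA_PIN, HIGH);  // closing the contact fires the shot
  delay(200);
  digitalWrite(CAMERA_PIN, LOW);
  delay(1500);                     // let the exposure and file write finish
}

void setup() {
  pinMode(CAMERA_PIN, OUTPUT);
  for (int i = 0; i < 5; i++) pinMode(SELECTOR_PINS[i], INPUT_PULLUP);
}

void loop() {
  int interval = readSelector();
  for (int angle = 0; angle < 360; angle += interval) {
    triggerCamera();
    stepDegrees(interval);
  }
  while (true) {}  // one full revolution captured; wait for reset
}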

 

So far, we’ve been very happy with the build.  We’ve taken many thousands of photos with it, and it hasn’t missed a beat.  We expect that over time, the 3d-printed gear will wear down and need to be replaced.  Beyond that, we expect it to have a lengthy service life.

This is an abbreviated build blog.  We’ll endeavor to provide complete wiring diagrams for any future builds.  For now, just get in touch if you’re interested in learning more about our build.

An Introduction to Automated RTI

Let me preface this by saying:

1) Hi. I’m Kevin Falcetano and this is my first AISOS blog post. I am an undergraduate technician working for AISOS and have worked on the construction of our RTI equipment for almost two months.

2) This project was made far easier to complete because of Leszek Pawlowicz. His thorough documentation on the process of building an RTI dome and control system from consumer components as detailed on Hackaday was the reason for the successful and timely completion of AISOS’s very own RTI system. Another special thanks to the open software and materials from Cultural Heritage Imaging (CHI).

Okay, now that the introduction is out of the way, second things second.

What is RTI?

RTI stands for Reflectance Transformation Imaging. It is a method of digitizing/virtualizing the lighting characteristics of one face of an object by sampling multiple lighting angles from a fixed camera position, using known point light positions. The mathematical model involved produces a two-dimensional image that can be relit from virtually any lighting angle, so that all of the surface detail is preserved on a per-pixel basis. The basic idea comes from the fact that if you have a surface, light reflects off of it differently and predictably depending on the angle of said surface. A visualization of surface normals, the vectors perpendicular to the surface at any given place, is provided below (credit CHI).

The information available is represented by yellow vectors, and the information we wish to calculate, the surface normals, is in red. So, given that we know the math behind how light can reflect from a surface (which we do), and we know the light path angle with respect to a fixed camera, we’re very close to calculating the normal vectors of the surface. We’re close because there are constants involved that are unknown, but if you’ve ever taken an algebra course, there is clearly a linear system that can be used to solve for those coefficients. We just need more data. That data comes in the form of images of lighting from more angles. This also helps to account for areas of an object where light from a certain angle may not hit due to occlusion by the object’s geometry, and is therefore incalculable. When all’s said and done, an RTI image is generated. Although it is two dimensional, an RTI image will be able to mimic the way the real object scatters light to a resolution that matches the source images. Below are three images as an example of how the lighting changes between each photo.
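To make the “linear system” hand-wave a little more concrete: the classic fitting model used for RTI is the polynomial texture map (PTM) of Malzbender et al., which models each pixel’s brightness as a biquadratic function of the light direction projected onto the image plane. (The exact coefficient ordering below is the textbook version; other fitters exist.)

I_k \approx a_0\, l_{u,k}^2 + a_1\, l_{v,k}^2 + a_2\, l_{u,k} l_{v,k} + a_3\, l_{u,k} + a_4\, l_{v,k} + a_5

Here I_k is the pixel’s intensity in photo k and (l_{u,k}, l_{v,k}) is that photo’s known light direction. Stacking one row per light position gives an overdetermined linear system L\,\mathbf{a} = \mathbf{I}, solved per pixel in the least-squares sense:

\mathbf{a} = (L^{\mathsf T} L)^{-1} L^{\mathsf T}\, \mathbf{I}

Six unknowns per pixel is why a handful of lights is the bare minimum, and why dozens of lights make the fit much more robust.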

 

Why RTI?

The openly available documentation for RTI explains its benefits better than I probably could, but put most simply, it’s like having an image, but the image knows how that object could be lit. This means that details that could not be revealed in any one possible image are shown in full in an RTI image. One of the pitfalls of photogrammetry, for instance, is that it HATES reflective objects, since smooth surfaces and specular highlights make photogrammetry’s hallmark point tracking very difficult. RTI doesn’t care. RTI doesn’t need point overlap because the process asks that you eliminate many of the variables associated with photogrammetry, e.g. the camera and object do not move and the lighting angles are pre-calculated from a sphere of known geometry. The GIGAmacro can get up close, but it still produces images with single lighting angles, so, all else equal, less information per pixel. GIGAmacro, of course, has the advantage of being able to capture many camera positions very quickly, which results in many more pixels. RTI’s per-pixel information produces near perfect normal maps of the surface, which are represented with a false color standard. As an added benefit, our automated version of the workflow is blazing fast. Like, from start to RTI image file takes as little as five minutes, fast.

For a simple example of how RTI lets you relight an object, take a look at this coin.  Click the lightbulb icon and then click and drag your mouse to move the lighting.  This same data can be processed in a multitude of ways to reveal other details.

How RTI?

There are many ways to record data for RTI, including some very manually intensive methods that, for the sake of expedience, are definitely out of the question for AISOS. We operate under the assumption that researchers who use our space may have limited experience in any of their desired techniques, so the easier we can make powerful data acquisition and analysis methods, the better. We decided to use Leszek’s method because it was both cost and time effective due to the way it can easily be automated. This method involves putting as many white LEDs as we want data points (up to 64) inside an acrylic dome with a hole in the very top. The hole is there to allow a camera to look vertically down through it at an object centered under the dome. The dome is painted black on the inside to ensure internal reflections are minimized, making the only light source for the object the single desired LED inside the dome. Each LED is lit in turn, one per image, and the resulting data points can then be processed and turned into an RTI image file.

This way, each LED can be turned on individually to capture its lighting angle without moving or changing anything, with the shutter fired automatically. This means that after setup, the image capture process is completely automated. Ultimately, the goal for this build was: Place dome over object, move in camera on our boom arm, focus lens, press button, RTI happens. Observe this accomplished goal in picture form:
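In firmware terms, “press button, RTI happens” boils down to a loop like the one sketched below. This isn’t Leszek’s actual code (his handles modes, displays, and much more); the pin number and timings are placeholders, and ledOn()/ledOff() are stubs for the matrix addressing described in the build post.

// Sketch of the automated capture sequence. Pin and delays are placeholders;
// ledOn()/ledOff() stand in for the 8x8 matrix addressing in the real firmware.
const int SHUTTER_PIN = 12;   // transistor that shorts the camera's 2.5 mm shutter jack to ground
const int NUM_LEDS    = 64;   // 48 on the small dome

void ledOn(int index)  { /* enable this LED's column gate and row driver */ }
void ledOff(int index) { /* turn everything back off */ }

void captureDome() {
  for (int i = 0; i < NUM_LEDS; i++) {
    ledOn(i);
    delay(100);                        // let the light and any vibration settle
    digitalWrite(SHUTTER_PIN, HIGH);   // snap the shutter
    delay(100);
    digitalWrite(SHUTTER_PIN, LOW);
    delay(1200);                       // exposure plus write time before the next light
    ledOff(i);
  }
}

void setup() {
  pinMode(SHUTTER_PIN, OUTPUT);
}

void loop() {
  // In the real firmware this waits for the action button; here we just run once.
  captureDome();
  while (true) {}
}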

You may notice in the above image that there are two domes of different sizes. We built two not only to accommodate different sizes of objects, but also to be able to use certain lenses, normally macro ones, that need to be closer to an object to be in focus. This allows us to pick a preferred lens for an object and then decide which dome to use with it so that the desired magnification and lens distance can be retained in most situations.

How all of this was built is outlined in a separate blog post.

Building an Automated RTI Dome

The RTI Build Experience

This is the second post in a series, written by  Kevin Falcetano.  See the first post for an introduction.

Though time-intensive, the RTI build was relatively simple. It was made so simple because the hard stuff (circuit design, dome construction, control box construction, and programming) had already been done by Leszek Pawlowicz. This made the project more like a LEGO set than a massive undertaking.

I began with the control circuitry. The basic idea behind it is that each LED needed to be controlled individually, but it had to be a type of LED that was bright enough and had a quality output of consistent white light. This, unfortunately, meant that individually addressable LEDs (which can be controlled with a single data cable and an Arduino library for the control protocol) just would not do. So this is where Leszek’s build comes in. It calls for an eight by eight matrix of LEDs to be controlled by a special series of circuits connected to an Arduino Mega.

This allows for a total of 64 LEDs to be driven off of 16 digital pins, where the columns are the positive lead and the rows are the negative lead. The Arduino’s power supply cannot drive this on its own, and that is why the circuitry is much more complicated. What is required is essentially a set of transistor-like gates on the positive and negative ends of the matrix so the current can be directed through one specified LED at a time.

One such set of gates is the highside MOSFET circuit, built on an Arduino Mega Shield:

An Arduino shield is a piece of hardware meant to be mounted on top of an Arduino microcontroller unit to interface with it. In this case, this was a blank board used to connect eight highside MOSFET circuits to the digital pins that would open the “gates” on the MOSFETs on the positive end to the desired column.

These components all had to be hand soldered to the shield board, but after quite a long period of time, everything was in its place without much of a hitch. The positive end of each MOSFET (the source) goes in parallel to the positive end of the power supply. Each gate (actually called the gate) gets connected to a corresponding Arduino digital pin to be switched open as necessary, and finally, the negative ends of each MOSFET (the drain) get connected to corresponding pins of the positive ethernet cable that leads to the LEDs in the dome. The resistors and transistors shown on the board exist to adapt the MOSFETs for highside control. The other side of the shield connects the other eight Arduino digital pins to the row “gates” on the negative end, making a complete circuit that controls each LED.
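To make the row/column idea concrete, here is roughly what addressing a single LED looks like from the Arduino’s point of view. The pin numbers are placeholders (the real assignments are whatever Leszek’s design specifies), and the polarity assumes active-high control of both the column gates and the row drivers, so check it against the actual circuit before copying anything.

// Illustrative 8x8 matrix addressing. Pin numbers and polarity are placeholders.
const int COL_PINS[8] = {22, 23, 24, 25, 26, 27, 28, 29};  // high-side MOSFET gates (positive columns)
const int ROW_PINS[8] = {30, 31, 32, 33, 34, 35, 36, 37};  // CAT4101 enables (negative rows)

void allOff() {
  for (int i = 0; i < 8; i++) {
    digitalWrite(COL_PINS[i], LOW);
    digitalWrite(ROW_PINS[i], LOW);
  }
}

// Light a single LED (0-63) by enabling its column's gate and its row's driver.
void lightLed(int index) {
  allOff();
  digitalWrite(COL_PINS[index % 8], HIGH);
  digitalWrite(ROW_PINS[index / 8], HIGH);
}

void setup() {
  for (int i = 0; i < 8; i++) {
    pinMode(COL_PINS[i], OUTPUT);
    pinMode(ROW_PINS[i], OUTPUT);
  }
  allOff();
}

void loop() {
  for (int i = 0; i < 64; i++) {  // walk the whole matrix as a quick lamp test
    lightLed(i);
    delay(200);
  }
  allOff();
  delay(1000);
}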

Those negative gates, controlled by CAT4101 LED drivers, were soldered onto a different board along with a beeper and USB control pins:

In this case, there is a normal resistor for baseline current limitation, which acts as a bottleneck for the maximum current through the LEDs. Alongside each resistor is a variable resistor to further reduce the current and vary/tune the intensity of each LED row. These CAT4101s allow the negative side of the LED matrix to selectively connect to ground by way of the ground ethernet cable. Once all was soldered, it was a matter of mating this board, the Arduino Mega, and the MOSFET control shield.

It also helped to have a separate power distribution board, with positive and negative rails for the 5V Arduino power supply, and positive and negative rails for the 9V power supply that drives the LEDs.  After connecting the power leads up and stripping and adding pins to eight LEDs and two ethernet cables, it was time for the test: The moment of truth for my soldering and circuit building skills.

Aside from one cold solder joint that I had to fix, everything worked.

As you can probably see from the top left corner of the above image, there is a large pile of LEDs that all have wires soldered on, pins crimped, and heat shrink tubing applied to  them. This was a very fun intermediate step that requires no further explanation but will now be mildly complained about. That level of repetition is not actually fun, but very necessary.

Speaking of, the next thing on the list was painting the LEDs black. Since the goal of good RTI is to eliminate as many environmental variables as possible, the inside of the LED dome must be as non-reflective as possible, so as to eliminate internal reflections that could accidentally cause multiple other, albeit dimmer, light sources within. This means even the area around the actual LED chips had to be painted black, with the only other exception being a small portion of the positive pad so as not to mix up the polarity when it came time to wire everything together.

Now that this bit was out of the way, it was dome prep time. The domes we got were clear, which helped with marking the outside for LED positions and then transferring the marks to the inside. I did this using some clever geometry math mostly outlined by Leszek in his documents. I, however, had the idea to melt the marks into the inside of the dome using the soldering iron, which caused a rather unpleasant smell to fill up the room that may or may not have been an indicator of toxic fumes. This step was necessary so that the marks showed up even with the paint over them. Next was just that: painting both the small and big dome using exactly one entire can of matte black spray paint.

Above is an image from the spray booth inside Regis Center for Art, where I applied multiple coats of the paint in a highly impatient fashion. Everything turned out fine, since the drips in the small dome would be inconsequential to the build.

After everything was dry, it came time to mount the LEDs inside the big dome, since it was the first one we decided to have done to make sure everything was working as intended. Each LED was hot glued in place with the positive ends of all LEDs pointing in the counterclockwise direction.

The above photo shows the placement of the LEDs as well as part of the following step: the stripping and crimping of many chains of wires that connect up the LED matrix. Each LED has an attached male dupont pin head, whereas each chain of connectors has properly spaced female dupont pin heads. Every concentric circle of eight LEDs represents a negative row, and every vertical set of eight LEDs (offset to help with lighting coverage) represents a positive column.

With all of the rows and columns wired up, the open ends of each string of wires were to connect with the male dupont pins I crimped onto the ends of both the positive and negative ethernet cables’ individual wires. Every part of the control circuitry was made to correspond with matching pins 1 through 8 on the ethernet cables, such that it becomes incredibly easy to know which Arduino pin corresponds with which row and column number and which LED. The next step was then to clean up the wiring and turn the dome into a more self-contained piece.

Behind this piece of special matte tape (3M Chalkboard Tape) that has about as much stick as a thirty-year-old Post-It, are the connections to the ethernet cables pushed down and hot glued in place. With the finishing touches in place on the dome, a successful test was run and it was time to contain the control circuitry in a cozy project box.

The Arduino, MOSFET shield, CAT4101 board, and power board were all hot glued in place after drilling the requisite holes for the bells and whistles. The two potentiometers were wired in and bolted on to control the LED on time and delay, as this can vary between different shooting scenarios. A reset button, preview button, and action button are all shown above, and used to operate the reset, single light preview, and main action activation functions in the Arduino’s programming. This image was taken before I added the main 2.5mm jack used to snap the camera shutter and the switches that select the operation modes. At the right side of the box are the ethernet jacks for power to the dome (top is positive and bottom is negative), and the USB connection for cameras without a way of connecting to the 2.5mm shutter switch. Last to mention is the OLED screen that displays relevant information that helps with eliminating mistakes and debugging when repairs and/or changes must be made.

On the subject of the control box, I personally made changes to the construction in a few ways:

  • I accidentally busted a switch because the soldering iron was too hot, so I removed the sound on/off function which ended up being fine since I think I messed up the wiring of the beeper anyway.
  • I completely removed the USB shutter, servo shutter, and IR/Bluetooth shutter options because we only really need a 2.5mm auto shutter and a manual mode.
  • I added a small transistor circuit to short the 2.5mm jack to ground using an Arduino digital output in order to actually make the 2.5mm auto shutter function work.
  • I added a three-way switch to reduce the number of switches while increasing the number of available functions.

This is the front of the final form of the box:

After the box came wiring up the small dome. The steps were the same as for the big one, just scaled down, so I’ll spare you the boring details other than the fact that this one only contains 48 LEDs, with eight columns and six rows.

With everything complete, I also had to change some of the Arduino code, which was provided by Leszek with his Hackaday project. The changes to the code are outlined below:

  • Removed sound function and USB and IR/Bluetooth shutter functions
  • Added dome mode switch, so either dome can be used without reprogramming the control box
  • Added a new white balance preview function for calibrating the camera because the old one was tedious
  • Added new display functions to show new modes above
  • Added a new function to snap 2.5mm shutter
  • Added a function to skip intro screen because of impatience and changed the intro screen to display relevant version info of the new software build.

Oh, and one last minute addition before this goes up: I added a remote preview button so that the exposure settings and focus can be adjusted from the computer without getting up to turn the preview function on and off, making the setup process a bit less tedious. This feature is just a wired button that plugs into the control box at the front and is completely optional since I kept the original button in place.
The final post is a conclusion and thoughts on the lessons learned from this build.

Building an Automatic RTI Dome: Wrapping It Up

This is the third post in a series, written by  Kevin Falcetano.  See the first post for an introduction and the second post for the build process.

The RTI project was not without its share of problems, mistakes, and general annoyances, but it was ultimately both interesting and rewarding to go through. I’ve separated my concluding remarks into three distinct semi-chronological sections under which the trials, tribulations, and triumphs of this project are detailed.

Building

The construction of this project required many time-intensive repeated steps, but the hardest part, the planning, had already been worked out, down to a detailed parts list and the very placement of the circuit components. Once again, a great many thanks to Leszek Pawlowicz for all of this work. In all of the soldering, stripping, crimping, and wiring, I made only a few easily correctable mistakes. That success is attributed to those thorough instructions and guiding images.

A few lessons were learned, still:

  • Helping hands are wonderful. Use them.
  • Lead free solder may or may not kill you more slowly, but it has a higher melting point. What that really means is it requires a delicate balance between “hot enough to heavily oxidize your soldering tip” and “too cool to flow the solder quickly.” Throw the balance too close to one side or the other and you get the same result: the heat spreads further over the extended heating period (by way of either too low a temperature or restricted conductivity through an oxide layer too thick to be affected by the flux core), melting plastics easily and possibly damaging sensitive components. I killed two switches and probably a piezo speaker this way. Be careful or use leaded solder.
  • Paint drips are fine if all you need is coverage.
  • 3M chalkboard tape may be matte black, which is wonderful for the inside of our domes, but it doesn’t stick. At all.
  • Hot glue is your friend. It’s not pretty or perfect but it’s accessible. What use is a build if you can’t repair it?
  • If one LED isn’t working in your matrix of LEDs, it is, logically, the LED that’s the problem. And by the LED being the problem I mean you are the problem. You put it in backwards but were too proud to check. The D in LED stands for diode – one way.
  • Be sure to have access to a dremel when trying to put large square holes in a hobby box.  Or have low standards.
  • When taking on a wiring project of this caliber, have an auto stripper handy. I may have lost my mind if I hadn’t used one.
  • Leave yourself some slack in the wires in case you break a dupont pin.
  • Don’t break dupont pins.

Calibrating and Processing

The calibration process was initially a bit of a headache, but I still must be grateful to Leszek and Cultural Heritage Imaging for their open software, without which more frustration would have ensued.

CHI provides a Java applet that generates lp files. The file extension stands for light positions (or light points) and is pretty self explanatory: an lp file is a plain text file with the angles of the lights attributed to each of the lighting positions, in order of first to last picture taken. This is done automatically with some clever math and dodgy code. CHI has told us they have a more stable version in the works, but it’s not quite done yet. Anyway, calibration is done using a black sphere (in this case also provided by CHI) that reflects each LED in the dome as a bright point. The sphere gets centered under the dome and a pass is run taking a photo for each light angle.  The rotational orientation of the dome relative to the camera is marked so as to be consistent for future use. This works because the dome geometry is consistent so long as it is rotated to the correct position over the object, which means the lp file may be reused.

The program uses an edge detection algorithm to figure out the radius, and in turn the center, of the sphere. After that, it uses the known geometry of the sphere and a highlight detection algorithm to figure out where the reflection point of each LED is in each picture, and then uses its position relative to the center and radius of the sphere to calculate the angle of each light point. For H-RTI, this is supposed to be done every time with a reference sphere next to every subject, but as I said before, we can reuse the lp because everything is fixed. This applet is glitchy and cumbersome, but it works when treated properly. It will run out of memory if we use full resolution photos from the Canon 5D, but downscaling them does not introduce any significant error into the calibration process. It also has very strict guidelines for filenames and a clunky default user interface, the indicators of purely functional software. It’s useful, but tedious.
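For anyone curious about the math hiding in that step, the highlight-to-angle conversion is the standard mirror-reflection geometry (this is the textbook version; the applet’s exact implementation may differ). With the camera looking straight down, so the view direction is \mathbf{v} = (0, 0, 1), and a detected highlight at offset (dx, dy) from the sphere’s center, with the sphere’s radius r measured in pixels:

\mathbf{n} = \left( \frac{dx}{r},\ \frac{dy}{r},\ \sqrt{1 - \frac{dx^2 + dy^2}{r^2}} \right),
\qquad
\mathbf{l} = 2\,(\mathbf{n} \cdot \mathbf{v})\,\mathbf{n} - \mathbf{v}

Here \mathbf{n} is the sphere’s surface normal at the highlight and \mathbf{l} is the direction back toward the LED that produced it, which is what ends up in the lp file.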

Leszek has his own software that strips the lp file of its association with the calibration sphere image files and stores it for future use as a general lps file. This can then be used in his program to create a new lp for each RTI run associated with the new image files, and then it and the photos get run through various command line scripts for different RTI and PTM (polynomial texture map) processing methods. This ensures that, after initial calibration, subsequent workflow is very fast and efficient, with minimal fiddling required.

Customizing

Of course, there are some things specific to our use cases that a third party couldn’t anticipate, so some changes were made. The biggest change in functionality, but arguably the easiest change to make programming-wise, was the addition of a mode switch to toggle which dome is in use. This allows the automatic, manual, and preview functions of the control box to change depending on which dome is being used. There are more details on how I changed the hardware and code in the build blog post, but all in all it was relatively painless. Leszek’s code is very clean and manageable, so I was able to easily navigate and change what I needed (which was quite a bit). It worked perfectly from the beginning, but not for exactly what we needed it for.

Final Thoughts

My rough estimate puts a first-time build of this project from start to finish at around 80 hours. It was fun overall, and not terribly frustrating save a few parts. We are currently thinking through how to make a custom shield PCB to optimize the build process of this RTI dome’s control unit, since there may be demand for a pre-built rig from others at the University. It would be very difficult to streamline the dome building process considering it is such a custom endeavor. I could see a template being made for mounting the LEDs and placing them in the domes, possibly with ribbon cables, but it still seems pretty far off.

Although there are certain significant limitations, RTI appears, in our testing, to be a very versatile tool, especially when automated in the way we have it. We are working with researchers from across campus to explore the possibilities, and the speed and precision of these domes are a huge benefit to that work.  In the future, we hope to be able to combine RTI and photogrammetry to produce models with surface detail we have yet to come close to being able to capture.

If you would like to take advantage of our ever-expanding imaging resources, or would simply like to chat with us about what we’re doing and how we’re doing it, feel free to contact aisos@umn.edu.

[elevator width=640 height=480 includelink=”on” includesummary=”on” fileobjectid=”58d2c8577d58aee00df62672″ objectid=”58d2c8587d58aee00df62673″ sourceurl=”http://aisos.elevator.umn.edu/asset/viewAsset/58d2c8587d58aee00df62673″]

 

Photogrammetry with GIGAMacro images

One of the exciting possibilities in the AISOS space is the opportunity to combine technologies in new or novel ways. For example, combining RTI and photogrammetry may allow for 3d models with increased precision in surface details. Along these lines, we recently had the opportunity to do some work combining our GIGAMacro with our typical photogrammetry process.

This work was inspired by Dr. Hancher from our Department of English. He brought us some wooden printing blocks, which are covered in very, very fine surface carvings. His research interests include profiling the depth of the cuts. The marks are far too fine for our typical photogrammetry equipment. While they may be well-suited to RTI, the size of the blocks would mean that imaging with RTI would be very time consuming.

As we were pondering our options, one of our graduate researchers, Samantha Porter, pointed us to a paper she’d recently worked on which dealt with a similar situation.

By manually setting the GIGAMacro to image with a lot more overlap than is typical (we ran at a 66% overlap), and using a level of magnification which fully reveals the subtle surface details we were interested in, we were able to capture images well suited to photogrammetry. This process generates a substantial amount of data (a small wooden block consisted of more than 400 images), but it’s still manageable using our normal photogrammetry tools (Agisoft Photoscan).

After approximately 8 hours of processing, the results are impressive. Even the most subtle details are revealed in the mesh (the mesh seen below has been simplified for display in the browser, and has had its texture removed to better show the surface details). Because the high-overlap images can still be stitched using the traditional GIGAMacro toolchain, we can also generate high resolution 2d images for comparison.

We’re excited to continue to refine this technique, to increase the performance and the accuracy.

Diving in on Image Stitching

As we’ve previously discussed, the Gigamacro works by taking many (many) photos of an object, with slight offsets. All of those photos need to then be combined to give you a big, beautiful gigapixel image. That process is accomplished in two steps.

First, all of the images taken at different heights need to be combined into a single in-focus image per X-Y position. This is done with focus-stacking software, like Zerene Stacker or Helicon. After collapsing these “stacks,” all of the positions need to be stitched together into a single image.

On its surface, this might seem like a pretty simple task. After all, we’ve got a precisely aligned grid, with fixed camera settings. However, there are a number of factors that complicate this.

First off, nothing about this system is “perfect” in an absolute sense. Each lens has slightly different characteristics from side to side and top to bottom. No matter how hard we try, the flashes won’t be positioned in exactly the same place on each side, and likely won’t fire with exactly the same brightness. The object may move ever so slightly due to vibrations from the unit or the building. And, while very precise, the Gigamacro itself may not move precisely the same amount each time. Keep in mind that, at the scale we’re operating at (even with a fairly wide lens), each pixel of the image represents less than a micron. If we were to blindly stitch a grid of images, even a misalignment as small as one micron would be noticeable.

To solve this, the Gigamacro utilizes commercial panorama stitching software – primarily Autopano Giga. Stitching software works by identifying similarities between images, and then calculating the necessary warping and movement to align those images. For those interested in the technical aspects of this process, we recommend reading Automatic Panoramic Image Stitching using Invariant Features by Matthew Brown and David Lowe. In addition to matching photos precisely, these tools are able to blend images so that lighting and color differences are removed, using techniques like optimal seam finding and multi-band blending.

While this type of software works well in most cases, there are some limitations. All off-the-shelf stitching software currently on the market is intended for traditional panoramas – a photographer stands in one place and rotates, taking many overlapping photos. This means they assume a single nodal point – the camera doesn’t translate in space. The Gigamacro is doing the opposite – the camera doesn’t rotate, but instead translates over X and Y.

Because the software is assuming camera rotation, it automatically applies different types of distortion to attempt to make the image look “right.” In this case though, right is wrong. In addition, the software assumes we’re holding the camera by hand, and thus that the camera might wobble a bit. In reality, our camera isn’t rotating around the Z axis at all.

Typically, we fool the panorama software by telling it that we were using a very, very (impossibly) long zoom lens when taking the photos. This makes it think that each photo is an impossibly small slice of the panorama, and thus the distortion is imperceptible.

However, Dr. Griffin from our Department of Geography, Environment & Society presented us with some challenging wood core samples. These samples are very long, and very narrow. Even at a relatively high level of zoom, they can fit within a single frame along the Y axis. Essentially, we end up with a single, long row of images.

This arrangement presented a challenge to the commercial stitching software. With a single row of images, any incorrect rotation applied to one image will compound in the next image, increasing the error. In addition, the slight distortion from the software attempting to correct what it thinks is spherical distortion means the images end up slightly curved. We were getting results with wild shapes, none of them correlating to reality.

Through more fiddling, and with help from the Gigamacro team, we were able to establish a workflow that mostly solved the problem. By combining Autopano Giga with PTGui, another stitching tool, we were able to dial out the incorrect rotation values and get decently accurate-looking samples. However, the intention with these samples is to use them for very precise measurements, and we were unconvinced that we had removed enough error from the system.

As mentioned earlier, the problem appears, on its face, to be relatively simple. That got us to thinking – could we develop a custom stitching solution for a reduced problem set like this?

The challenging part of this problem is determining the overlap between images. As noted, it’s not exactly the same between images, so some form of pattern recognition is necessary. Fortunately, the open source OpenCV project implements many of the common pattern matching algorithms. Even more fortunately, many other people have implemented traditional image stitching applications using OpenCV, providing a good reference. The result is LinearStitch, a simple python image stitcher designed for a single horizontal row of images created with camera translation.

LinearStitch uses the SIFT algorithm to identify similarities between images, then uses those points to compute X and Y translation to match the images as closely as possible without adding any distortion. You might notice we’re translating on both X and Y. We haven’t yet fully identified why we get a slight (2-3 micron) Y-axis drift between some images. We’re attempting to identify the cause now.
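For a sense of how little machinery the core idea actually takes, here is a stripped-down sketch of the same approach. LinearStitch itself is written in Python; this version is sketched with the C++ OpenCV API instead (in OpenCV 3, SIFT lives in the contrib xfeatures2d module), and the filenames are placeholders. It finds SIFT matches between two neighboring tiles and takes the median match offset as a pure translation, with no rotation or lens “correction” applied.

// Minimal translation-only alignment sketch, in the spirit of LinearStitch.
// Filenames are placeholders; this only estimates the offset between two tiles.
#include <algorithm>
#include <iostream>
#include <vector>
#include <opencv2/opencv.hpp>
#include <opencv2/xfeatures2d.hpp>   // SIFT is in the contrib module in OpenCV 3

int main() {
  cv::Mat left  = cv::imread("tile_000.tif", cv::IMREAD_GRAYSCALE);
  cv::Mat right = cv::imread("tile_001.tif", cv::IMREAD_GRAYSCALE);
  if (left.empty() || right.empty()) return 1;

  // Detect keypoints and descriptors in both tiles.
  cv::Ptr<cv::Feature2D> sift = cv::xfeatures2d::SIFT::create();
  std::vector<cv::KeyPoint> kpL, kpR;
  cv::Mat desL, desR;
  sift->detectAndCompute(left, cv::noArray(), kpL, desL);
  sift->detectAndCompute(right, cv::noArray(), kpR, desR);

  // Brute-force matching with cross-checking to discard weak, one-sided matches.
  cv::BFMatcher matcher(cv::NORM_L2, true);
  std::vector<cv::DMatch> matches;
  matcher.match(desL, desR, matches);
  if (matches.empty()) return 1;

  // Each match votes for a (dx, dy) offset; the median is robust to outliers.
  std::vector<double> dxs, dys;
  for (const cv::DMatch& m : matches) {
    dxs.push_back(kpL[m.queryIdx].pt.x - kpR[m.trainIdx].pt.x);
    dys.push_back(kpL[m.queryIdx].pt.y - kpR[m.trainIdx].pt.y);
  }
  std::nth_element(dxs.begin(), dxs.begin() + dxs.size() / 2, dxs.end());
  std::nth_element(dys.begin(), dys.begin() + dys.size() / 2, dys.end());

  // The right tile gets pasted at this offset relative to the left tile,
  // with no rotation or distortion correction applied.
  std::cout << "offset: dx=" << dxs[dxs.size() / 2]
            << " dy=" << dys[dys.size() / 2] << std::endl;
  return 0;
}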

At this point, LinearStitch isn’t packaged up as a point-and-click install, but if you’re a bit familiar with python and installing dependencies, you should be able to get it running. It uses Python3 and OpenCV3. Many thanks to David Olsen for all his assistance on this project.

Getting the hang of it

We’ve had the Gigamacro and the photogrammetry capture station up and running for about a week now. While both are relatively straightforward technologies, it’s clear that both benefit from a lot of artistry to get the most out of them.

The Gigamacro is conceptually very straightforward. It’s just a camera that takes many close-up photos of an object, and then combines all those photos into a single high resolution output. The camera uses a fixed lens (rather than a zoom), and we have a variety of lenses with different levels of magnification. Two of the challenging parts of using the Gigamacro are focus and lighting. At very close distances, with the types of lenses we’re using, the “depth of field” is very narrow. Depth of field is something we’re familiar with from traditional photography – when you take a picture of a person, and the background is nicely blurred, that’s due to the depth of field. If everything in the photo is in focus, we say that it has a very wide depth of field.

With the Gigamacro, and using a high magnification lens, the depth of field is typically on the order of a few hundredths of a millimeter. The Gigamacro solves this by taking many photos at different distances from the object. Each of these photos can then be combined (“depth stacked”) into a single photo in which everything is in focus. Even on a surface that appears perfectly flat, we’re finding that a few different heights are necessary. With a more organic object, it’s not unusual to need to capture 40 or 50 different heights. Not only does this greatly increase the number of photos needed (we’re currently working on a butterfly consisting of 24,000 photos) but it increases the post-processing time necessary. All of this means we’re greatly rewarded for carefully positioning and preparing objects to minimize height variation when possible.
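The arithmetic behind those photo counts is simple: the total is the number of X-Y tile positions times the number of focus heights. For instance (the tile count here is purely illustrative, not the actual scan plan), a 600-tile capture at 40 heights works out to

\text{total frames} = N_{\text{tiles}} \times N_{\text{heights}} = 600 \times 40 = 24{,}000

which is how a single object can balloon into tens of thousands of exposures.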

Another issue is light. The Gigamacro has two adjustable flashes, so there’s plenty of light available. At very close distances though, it’s important not to cast shadows or overexpose areas. We’re starting to get a better understanding of how to adjust the lighting to deliver quality results without losing detail, but we’re still learning. In addition, the package we have includes some filters for cross-polarizing light, which in theory will allow us to capture very shiny surfaces without reflections.

So, what’s it all look like? We’re still working on building a sample gallery, but below is one example. This is a small fish fossil, captured with a 100mm lens. The original object is approximately 3 inches across. In terms of the Gigamacro, this is a very low resolution image – only approximately 400 megapixels. We’re finding that this technology is much more impressive when you’ve seen the physical object in person. Only then do you realize the scale of the resolution. We’ll be sharing more samples as we get more experience. Just click to zoom. And zoom. And zoom.

Photogrammetry

Photogrammetry is the other main technology we’re working with at this phase. Our new photogrammetry turntable is working well, and we’re continuing to explore different lighting setups and backdrop options for the space.

Because we’ve got a processing workstation near the photogrammetry station, we’re able to stream photos directly from the camera to the computer. There’s no need to manually transfer files. This, combined with the automation of the turntable, means we can do a basic photogrammetry pass on an object very quickly, then make adjustments and try again.

Below is one of our first finished objects, a small plastic dinosaur. This object is far from perfect – in particular, the tail needs additional cleanup, and there’s some “noise” around the base. This is a combination of two passes, capturing the top and the bottom of the dinosaur. It’s made of 159 individual photos, all processed with Agisoft Photoscan Pro. We’re excited to compare Photoscan with Autodesk Remake, but a recent Windows update broke Remake. Hopefully they’ll get that fixed soon.

First Light

We made good progress in the AISOS space today. The photogrammetry (and later, RTI) camera position is in place. We were lucky to inherit a nice copy stand that was already in the space, which makes camera positioning really easy. We were even able to capture our first object, albeit very roughly (the lighting needs plenty of adjustment).


Later in the day, we got the call that our GigaMacro had arrived. Anything that comes in a massive wooden crate is bound to be exciting.


We don’t have our final work surface for the GigaMacro yet, but we did some initial assembly and testing. Everything seems to be working as expected. It’s definitely going to have a learning curve as we get familiar with all the variables that can be controlled on the GigaMacro. For now, you can watch it wiggle.