Reverse Engineering a LumeCube

One of our upcoming hardware projects (more on this later) requires a very bright, controllable light source. In the past, we’ve just used bare LEDs with our own cooling, power, control, etc. Not really understanding how electricity works, though, we always find it a hassle to get a reliable setup. For this project, we instead decided to give a LumeCube a try. They’re far from cheap, but they’re very bright and should hopefully offer rock-solid reliability.

LumeCube has an iOS app which allows for control over bluetooth. Although it also has a USB port, that’s only used for charging. We wanted to be able to do basic brightness control from within the Python application that will run the rest of the hardware. It doesn’t appear anyone has gone to the trouble of reverse engineering the LumeCube before, so we figured we’d give it a go. There were just two challenges:

  1. We’ve never reverse engineered a Bluetooth communications protocol.
  2. We don’t really understand how Bluetooth works.

Sticking with our philosophy of obtaining minimum-viable-knowledge, we started Googling “how to reverse engineer a bluetooth device” and ended up on this Medium post by Uri Shaked. Uri pointed us towards the “nRF Connect” app, which allows you to scan for Bluetooth devices and enumerate all of their characteristics (look at me using the lingo).

Assuming that LumeCube wasn’t going to go out of their way to secure the Bluetooth connection, we figured brightness control would be pretty straightforward. It was just a matter of tracking down the characteristic ID and the structure of the data. To do that, we began by listing all of the characteristics in nRF Connect. Then we would disconnect, launch the official app, adjust the brightness, flip back to nRF Connect, rescan the characteristics, and see what had changed.

The relevant characteristic popped out very quickly. The third byte of 33826a4d-486a-11e4-a545-022807469bf0 varied from 0x00 to 0x64, or 0-100. Writing new values back to that characteristic confirmed that hunch – huzzah!
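In hindsight, that diffing could also be scripted rather than eyeballed. Below is a rough sketch using Bleak (the same Python library we use for control below) that connects and dumps every readable characteristic; running it before and after changing the brightness in the official app would surface the changed byte. The address is the one our Mac assigned to the LumeCube; yours will differ.

```python
import asyncio
from bleak import BleakClient

ADDRESS = "819083F8-A230-4F61-8F94-CB69FF63D340"  # our LumeCube; yours will differ

async def dump_characteristics(address):
    async with BleakClient(address) as client:
        # On older Bleak versions you may need: await client.get_services()
        for service in client.services:
            for char in service.characteristics:
                value = None
                if "read" in char.properties:
                    try:
                        value = bytes(await client.read_gatt_char(char))
                    except Exception as err:
                        value = f"<read failed: {err}>"
                print(service.uuid, char.uuid, char.properties, value)

asyncio.run(dump_characteristics(ADDRESS))
```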

Once we’d identified the characteristic, it was just a matter of implementing some controls in Python. For that we used Bleak, which offers good documentation and cross-platform support. Below is a quick example that sets the brightness to 100 (0x64).

import asyncio
import logging
from bleak import BleakClient

# Address of our particular LumeCube (on macOS, Bleak uses a device UUID
# rather than a MAC address) and the brightness characteristic found above.
address = "819083F8-A230-4F61-8F94-CB69FF63D340"
LIGHT_UUID = "33826a4d-486a-11e4-a545-022807469bf0"
# logging.basicConfig(level=logging.DEBUG)  # uncomment for verbose Bleak logging

async def run(address):
    async with BleakClient(address) as client:
        # Only the third byte matters: 0x64 = 100% brightness.
        await client.write_gatt_char(LIGHT_UUID, bytearray(b"\xfc\xa1\x64\x00"), True)

loop = asyncio.get_event_loop()
loop.run_until_complete(run(address))
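From there it’s easy to wrap this in a small helper that accepts any brightness level, assuming (as our diffing suggested) that only the third byte of the payload changes and the framing bytes stay constant:

```python
import asyncio
from bleak import BleakClient

LIGHT_UUID = "33826a4d-486a-11e4-a545-022807469bf0"

async def set_brightness(address, percent):
    """Set the light to a brightness between 0 and 100."""
    level = max(0, min(100, int(percent)))
    async with BleakClient(address) as client:
        # Assumption: the 0xfc 0xa1 ... 0x00 framing is fixed and only the
        # third byte carries the brightness.
        await client.write_gatt_char(LIGHT_UUID, bytearray([0xFC, 0xA1, level, 0x00]), True)

asyncio.run(set_brightness("819083F8-A230-4F61-8F94-CB69FF63D340", 50))
```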

Hacking Framework MacOS versions

We recently got bitten by a version of the bug mentioned in my last post here: codesign uses sha256 hashes instead of sha1, causing crashes on launch on MacOS 10.10.5. In this case, however, the framework was a third-party binary, which we couldn’t just recompile. Instead, we needed to hack the MacOS version embedded in the binary to trick codesign. To check a framework’s minimum MacOS version, you can use otool:

otool -l <framework> | grep -A 3 LC_VERSION_MIN_MACOSX

In this case, it was reporting 10.12, and we need 10.10. The nice thing about values like this is that they follow a very specific layout in the binary, since the loader needs to find them. The Mach-O loader header (mach-o/loader.h) is pretty readable and tells us where to look. LC_VERSION_MIN_MACOSX has a command value of 0x24 and this structure:

struct version_min_command {
    uint32_t    cmd;        /* LC_VERSION_MIN_MACOSX or LC_VERSION_MIN_IPHONEOS */
    uint32_t    cmdsize;    /* sizeof(struct min_version_command) */
    uint32_t    version;    /* X.Y.Z is encoded in nibbles xxxx.yy.zz */
    uint32_t    sdk;        /* X.Y.Z is encoded in nibbles xxxx.yy.zz */
};


Pop open your hex editor and search for the first 0x24 (the cmd value, 0x00000024, stored little-endian as 24 00 00 00). The first “12” (0x0C) we find after it is the MacOS version, and the second is the SDK version. So we just change our 0x0C to a 0x0A and save. Then we can run the otool command again to confirm the versions. Now, codesign will apply both sha1 and sha256 hashes when we build. Hoorah!
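The same patch can be scripted if you have a pile of binaries to fix. Here is a rough Python sketch that walks the load commands rather than pattern-matching bytes; it assumes a thin (non-fat), little-endian 64-bit Mach-O, and the framework path at the bottom is just a placeholder.

```python
import struct

LC_VERSION_MIN_MACOSX = 0x24
MH_MAGIC_64 = 0xFEEDFACF  # little-endian 64-bit Mach-O magic

def patch_min_version(path, version=(10, 10, 0)):
    with open(path, "rb") as f:
        data = bytearray(f.read())

    magic, = struct.unpack_from("<I", data, 0)
    if magic != MH_MAGIC_64:
        raise ValueError("not a thin little-endian 64-bit Mach-O (fat binaries need extra handling)")

    # mach_header_64: magic, cputype, cpusubtype, filetype, ncmds, sizeofcmds, flags, reserved
    ncmds, = struct.unpack_from("<I", data, 16)

    offset = 32  # load commands start right after the 32-byte header
    for _ in range(ncmds):
        cmd, cmdsize = struct.unpack_from("<II", data, offset)
        if cmd == LC_VERSION_MIN_MACOSX:
            x, y, z = version
            encoded = (x << 16) | (y << 8) | z  # xxxx.yy.zz nibble encoding
            struct.pack_into("<I", data, offset + 8, encoded)  # the 'version' field
            break
        offset += cmdsize

    with open(path, "wb") as f:
        f.write(data)

# Hypothetical path; point this at the actual framework binary.
patch_min_version("Example.framework/Versions/A/Example")
```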

Virtual MISLS

AISOS has been supporting Professor Kat Hayes on the production of “Virtual MISLS”, an iOS app that allows users to explore a historic building at Fort Snelling (Minnesota). It is designed for Google Cardboard and other mobile virtual reality headsets.

During World War II, the Military Intelligence Service Language School (MISLS) at Fort Snelling trained soldiers in the Japanese language to aid the war effort. Many of the buildings used for the MISLS are still standing, but are not open to the public. Because physical preservation and renovation are costly and take years to accomplish, researchers at the University of Minnesota have used a technique called photogrammetry to create a three-dimensional virtual reality (VR) model of one of the classroom buildings (Building 103) from Fort Snelling’s Upper Post. This app lets you experience the space of Building 103, where you can navigate through a guided virtual museum exhibit that uses text panels, photos, and audio clips of oral history interviews with Japanese American MISLS veterans.

Virtual reality is a powerful tool for public memory projects as it creates an immersive and interactive experience that allows users to imagine what it might have been like to be a soldier and student at the language school. This current app is a work in progress that seeks to provide alternative modes of access into spaces and histories that are under-represented at Historic Fort Snelling.

To use the app, you’ll need some sort of Google Cardboard viewer. We like the DSCVR viewer, but any viewer will work fine.  If you have a “plus” size iPhone, make sure your viewer supports that size.

The app begins with some tutorial information.  If you have further questions, please get in touch.

The Architectural Landscape of the Achaemenid Empire: Mapping the Photographs of Dr. Matthew P. Canepa

By Johnathan W. Hardy, DCL Graduate Assistant

Screenshot of story map with the tomb of Cyrus in the background

https://z.umn.edu/AchaemenidEmpire

This mapping project was designed first and foremost to showcase the vast collections of the Digital Content Library. The content held in the DCL relating to the ancient Iranian world ranges from original site plans and photographs from the age of excavation in the early 20th century to contemporary documentation and artefacts. The DCL holds over 300 individual works relating to the Achaemenid empire, with an additional 500+ works documenting the Seleucid, Arsacid, and Sasanian empires. In order to showcase this invaluable collection, each section begins with a link to a search on the DCL for pertinent images relating to the site. To better illustrate the content that the viewer is engaging with, specific photographs from Dr. Canepa’s collection are highlighted in the map with links to the full work on the DCL.

The Achaemenid Ka’ba-ye Zardosht at Naqsh-e Rustam

A view of the Ka’ba-ye Zardosht from a break in the rock face of Naqsh-e Rustam. In the distance is the Sasanian (224-650 CE) city of Staxr.

The vision of this mapping project was to provide the interested public and students with information that fills the gap between a Wikipedia article and a scholarly journal. The information provided is not overly complicated or full of specialist vocabulary, but offers a filtered look at the latest debates in Achaemenid art and architecture. Each site is given a full, linked bibliography, which gives the user the ability to delve deeper into the literature if they wish to know more. I wanted a seamless user experience that could guide an inexperienced user through the material while allowing a broad degree of flexibility in how they interact with the maps. Each tab within the map series can stand on its own, without the user having to follow a set chronological or spatial timeline.

screenshot of a story map of Persepolis

The total project consists of 42 different maps all integrated into the final product. Linking such disparate material into a single, useable product was made incredibly easy through the use of Esri’s story map options, and all maps were designed and implemented through ArcGIS Online.

I am of course thankful to LATIS and the Digital Content Library for allowing me to work on this special project, as well as Dr. Canepa for allowing his incredible photographs to be digitized.

Automated Slide Capture Rig

Chances are pretty good that if you crack open a dusty storage closet at any large University, you’ll find an old slide projector or an unlabeled box of 35mm slides.  For decades, University lectures and research work relied heavily on slides, created from photographs taken on reversal film or charts and diagrams transferred to slides for presentations.

Since 2006, the Digital Content Library within the College of Liberal Arts has been scanning slides collected by our faculty as part of their research and instruction.  Hundreds of thousands of slides have been scanned and cataloged, creating a unique collection of material from around the world and across the decades.  Each of these slides was scanned one at a time, on a scanner like the Nikon CoolScan.  These produce a quality image, but are slow and require an operator to load each slide.  With hundreds of thousands more slides in the “someday” pile, one-at-a-time meant a lot of slides would probably never get scanned.

Inspired by some commercial products and a hacker spirit, we decided to build our own automated slide capture unit, using a classic Kodak Ektagraphic carousel projector, combined with a Canon 5D Mark III digital camera.

The basic theory of operation is to leverage the automated loading and advancing capabilities of the projector, and not much else.  The lens is removed, and the illumination bulb is replaced with a lower output (and much cooler) LED.  The camera, with a macro lens, focuses on the slide sitting inside the projector. A small Arduino controller manages the whole apparatus, advancing the projector and triggering the camera.

Because this was a hobby project, and not a core part of our jobs, it evolved very gradually over the fall of 2017.   We started by placing a large LED light panel behind the projector, with the lamp drawer removed, to get a sense of the image quality.  Some basic comparisons against our Nikon scanners hinted that the quality was as good as or better than the dedicated scanners.

Because we don’t run a full-time slide scanning facility, we wanted to keep our scanning station relatively portable.  For that reason, it was important that the projector stay self-contained, rather than relying on an external light source. For longevity and consistency, we wanted an LED.  We knew we needed very even illumination, which meant we would likely be using diffusion.  That meant starting with a high output light source.  If you’re looking for a high output LED, you’ve got a couple of options.  You can get a fancy branded LED with a quality driver, or you can get a slightly sketchy LED directly from China, via eBay or Alibaba.  We went with the latter.  As a bonus, it came with a basic LED driver and heatsink.


With the LED in hand and firmly mounted to its heatsink, we started figuring out how to mount it in the projector.  We decided to mount the LED in the same place as the original bulb, utilizing the existing removable light tray.  This means the light mechanism can be moved to another projector easily.  The heatsink was ever-so-slightly larger than the original lamp, so a few pieces of the mechanism had to be cut away.  For testing, the heatsink was mounted with some zip ties.

The LED runs on DC power.  Rather than trying to pack a transformer into the projector (which is exclusively AC), we simply ran the power cables out through a slot in the housing.

With the LED in place, we did some more testing to see about achieving even lighting.  Our friends in the TV studios gave us some diffusion material, so we were able to stack sheets of diffusion until we could no longer perceive any hot spots on our test slides.

Once we had the LED in place, we were pretty confident the project would work out.  The remaining bits, involving some basic fabrication and automation, were things we’d tackled with past projects like our automated photogrammetry rig and our RTI domes.  We repurposed a spare Arduino Uno to control everything.  Because the projector uses a 24 volt AC signal (!!) to control the slides, we couldn’t get away with a simple transistor control. Since we were only building one of these, we decided to buy a premade Arduino Relay shield from Evil Mad Scientist.

A little bit of code and some gator clips let us confirm that everything worked as intended.  All that was left was the cleanup and assembly.  We designed some basic mounts in Tinkercad and printed them on our Makerbot Replicator Mini.  The entire electronics package was small enough to fit into the remote storage compartment on the side of the projector.  Because the door is easily removable, it’s still a snap to move the entire setup to another projector.


At this point, the rig is assembled and in production.  We use SmartShooter 3 on a computer, attached to the camera via USB, to store the images as they’re acquired.  We use BatchCrop to crop, clean, and straighten the images.

We can image slides at the rate of roughly 3 minutes for an 80 slide carousel.   In fact, it takes far longer to load and unload the carousel than it does to perform the capture.

Because we used a lot of repurposed parts, we don’t have a complete Bill-of-Materials cost of the rig, but it could be replicated for well under $100 (assuming you already own the camera and projector).  We could probably put another unit together in an afternoon.

We’re really excited to have done this project, and we think it’s a great representation of the LATIS Labs ethos: we saw a problem that needed solving, iterated in small steps, and ultimately put together a workable solution.  Our next build is a little more complicated and a bit more expensive – if you’ve got a few thousand dollars lying around, get in touch!

 

 

Building a Photogrammetry Turntable

One of our goals with AISOS is to make complicated imaging tasks as easy and repeatable as possible.  We want to be able to rapidly produce high quality products, and we want the process to be accessible to folks with a minimal amount of training.

One of the ways we’ve done that for photogrammetric imaging is by building an automated turntable capture setup.  Conceptually, this is a pretty straightforward solution.  A small turntable rotates an object a fixed number of degrees, then triggers a camera to take a photo.  That process is repeated until the object has made a full 360 degree rotation.  Then the camera can be adjusted to a different angle, and the process can be repeated.

As much as we like doing cool hardware hacking, we also don’t want to suffer from “not invented here” syndrome.  We investigated a variety of options for off-the-shelf solutions in this space.  There are a handful of very high end products, which also handle all the camera movement.  These are amazing, but they’re both expensive (nearly $100,000) and, more crucially, massive.  None of those solutions would physically work in our space.

There are also some smaller standalone turntable options that we explored.  However, they’re all essentially small volume homemade products, and rely on proprietary software.  We were concerned about being stuck with an expensive (still thousands of dollars) product of questionable quality.

We then began to look at building our own.  We’re not the first ones to have this idea, and fortunately there are a variety of great build plans out there.  Our favorite was the Spin project from MIT.  Spin is an automated turntable setup designed for photogrammetry with an iPhone. We knew we’d need to modify the setup to work with our camera, but the fundamentals of Spin are excellent.

For our turntable, we completely replicated the physical design of the Spin, using their laser cutter templates and their 3d printed gear.  We used the same stepper motor as well, in order to utilize their mount.  Where we differed was in the electronics.

In this post, we’ll outline the basics of our design and share our Arduino code.  We don’t currently have a full wiring schematic (Sparkfun doesn’t have a fritzing diagram for our chosen stepper driver, and none of us know Eagle – get in touch if you want to help).

Our design is based around a Sparkfun Autodriver. The autodriver is relatively expensive for a stepper driver, but it’s really easy to work with, and is a little more resilient to being abused.  Our implementation is actually based on the “getting started with the autodriver” document published by Sparkfun.  We use an Arduino Redboard as our controller, along with a protoshield for making reliable connections.

The additions we’ve made to the basic Sparkfun diagram include the camera trigger control and the ability to adjust the degrees of rotation per interval.  We’re working with a Canon digital SLR, which can be triggered via a simple contact-closure trigger.  The Canon camera trigger uses three wires – one for focus, one for firing the photo, and a ground.  Connecting either of the first two to ground is all you need to do to trigger an action.  To control that from an Arduino, you just need to use a transistor attached to a digital pin on the Arduino.  Turning on the pin closes the transistor and triggers the camera.

To control the number of degrees per interval, we added a simple rotary selector.  A rotary selector is basically just a bunch of different switches – only one can be on at a time.  We use five of the analog pins on our Arduino (set to operate as digital pins) to read the value of the switch.

 

So far, we’ve been very happy with the build.  We’ve taken many thousands of photos with it, and it hasn’t missed a beat.  We expect that over time, the 3d-printed gear will wear down and need to be replaced.  Beyond that, we expect it to have a lengthy service life.

This is an abbreviated build blog.  We’ll endeavor to provide complete wiring diagrams for any future builds.  For now, just get in touch if you’re interested in learning more about our build.

Creating Exhibits in Omeka

By Nathan Weaver Olson, DCL Graduate Assistant

GETTING AN OMEKA ACCOUNT

The first step in creating an Omeka exhibit is to set up an Omeka account. Currently, the best way for individuals connected with the University of Minnesota to do this is to contact DASH Domains and open an account through them. Check out the DASH website to see if this is an option for you. If you have access to a web server, you can actually download Omeka directly. Alternatively, Omeka will host your collection at Omeka.net. Omeka.net’s basic plan will give you 500 MB of free storage, although more robust storage options are available for an annual fee.

ADDING ITEMS

Once you have Omeka up and running, the first step in building an Omeka exhibit is to add the digital objects or “items” you wish to include in your exhibit to your Omeka account. Do this by clicking “Add an Item”, the green button at the top of the screen. Once you have a new item, you need to add the necessary metadata (i.e. data about the data), which you can do by clicking through the various tabs entitled “Dublin Core” metadata, “Item Type Metadata”, “Files”, “Tags”, and “Map”. For the “Mud-Brick Mosques of Mali” exhibit I added some twenty-three items to our account.

ADDING AN EXHIBIT

Once you have added all of your items to your Omeka account, you can add a new exhibit to your account by clicking on the “Exhibits” tab and then clicking “Add an Exhibit”. The look of an individual exhibit is largely controlled by the exhibit theme. Several themes come pre-loaded in Omeka, but these are just a fraction of the themes available. In fact, users can design themes of their own. In my case, I wanted to use a theme that would allow me to insert my own background image and logo. I settled on the “BigStuff” theme and then added it to our Omeka account using the DASH Domains File Manager.

THEME CONFIGURATION

When you create a new Omeka exhibit, the first page available to you is the “Edit Exhibit” page, where you select and configure the exhibit theme, design the exhibit’s main page, and add new exhibit pages to the site.  In the image below, my theme, “BigStuff”, is clearly visible in the drop down list.

Themes can also be further configured to fit your particular aesthetic and presentation interests. “Big Stuff” allows me to insert my own background image, logo, and header images. In my case, these were images that I designed in Photoshop before uploading them to Omeka.

ADDING PAGES

The content of my exhibit is stored and organized using different pages, accessed through the “Edit Exhibit” link. Each new page includes title and “slug” fields, but users are then able to add one or more blocks of content layouts entitled “File with Text”, “Gallery”, “Text”, “File”, “Geolocation Map”, “Neatline”, and “Neatline Time”.

With the exception of the “Text” layout, all of these page options allow you to pull the “Items” you created into the exhibit. When you add an item, you can also add a caption below the image. I chose to make every caption a link back to the original image in DCL Elevator. My principal goal was to showcase images from John Archer’s collection, but I was also able to locate individual mosques in space as well using Omeka’s “Geolocation Map” layout option.

PAGE ORDER

Finally, when you are on the “Edit Exhibit” page, it is easy to rearrange page order and even nest some pages below others. In my case, I have five principal pages: “Mosque Design Elements”, “Regions and Styles”, “Image Gallery”, “Further Reading”, and “About the Photographer”.  Yet several of these pages contain sub-pages, and sub-sub-pages, as a tool for organizing content.

Those are the basics of exhibit building in Omeka. Now that you know the basics, it’s time to get an Omeka account of your own, decide on a narrative you would like to represent as an online exhibit, and get to work adding items to your account.

The exhibit used as an example in this tutorial, “Mali’s Mud-Brick Mosques”, is just one of a number of Omeka exhibits that we have been working to create at the DCL in recent months. We should be rolling them out on our website soon. But you can find Omeka-powered exhibits and websites all over the Internet. Check out Omeka.org’s exhibit showcase for more ideas.

John Archer and Mali’s Mud-Brick Mosques: Exhibit Building with Elevator and Omeka

By Nathan Weaver Olson, DCL Graduate Assistant

Picture a vast interior space, dark and cool, its edges hidden by a forest of columns. The only visible light is that which trickles in through an ornate window screen. You are inside one of Mali’s monumental mosques, a sacred space, and the walls, columns, and even the lofty minaret towers are likely not stone but molded earth, mud bricks plastered with a layer of mud and rice hulls. It is a vulnerable structure in a region of intense heat and seasonal torrential rains, forever dependent upon an army of skilled workers to maintain its elegant and massive form. If cared for, it will last a century or more. If neglected, it will quickly fall to ruin.

In the mid 1990s, Minnesota Professor John Archer, now Professor Emeritus, visited the country of Mali with his camera and an eye for timing and image composition. He took hundreds of images of Mali’s people as well as its natural and built environments. The majority of this wonderful collection is currently housed at LATIS’ Digital Content Lab, where we are slowly adding Archer’s images to the Digital Content Library, which currently contains over 300,000 objects. So far, we have added nearly three hundred and fifty images from John Archer’s trip to Mali to the DCL and organized them into thirty-seven different “works”, each with between one and thirty-nine attached images or “views”. Among the works already available through the DCL is a collection of images of mud-brick mosques ranging from the country’s oldest, to its most iconic and monumental, to more humble examples. It is a unique collection, and now, thanks to Omeka, we have been able to create an online exhibit using many of these images to teach our users about vernacular architectural traditions in Mali while also introducing them to our object database, called DCL-Elevator.

Searching the DCL

One of our goals at the DCL is to make our collections widely available to scholars and students, not only those at the University of Minnesota, but also those working outside of the U. While many items in our collection require the user to possess an X500 to receive access, quite a few of the objects in the DCL are part of our “Open Collections”, objects available to anyone who visits the DCL’s website. Users can currently view Archer’s Mali collection on the DCL by performing an advanced search in Elevator, our database tool, and sorting by collection and keyword.

Elevator includes comprehensive and relatively intuitive finding aids, but here at the DCL we are also looking at additional tools to better familiarize users with the site and its extensive contents. Lately we have begun to do this by building exhibits using Omeka.

Omeka Exhibits

Omeka is an open-source web-publishing platform that is oriented towards users from disciplines within the Humanities. Students, professors, librarians, and archivists can all use Omeka to develop and display scholarly collections. In the case of the DCL, Omeka allows us to create exhibits that highlight our collections by focusing the user’s attention on a limited number of objects from the DCL and then sending them into Elevator to find the materials themselves. In practical terms, this has meant festooning our exhibits with links that transport the user to specific images within the DCL Elevator collection. In the exhibit featured in this post, “Mali’s Mud-Brick Mosques”, I put a “Find it in the DCL” link under nearly every image I added to the exhibit, as well as a link to an exhibit that Ginny Larson created for the photographer, John Archer.

The structure of an online exhibit, which is essentially a narrative, presents the user with a familiar set of tools for viewing the collection and making sense of its contents. Instead of searching through thousands of images, an online exhibit introduces the user to a finite collection that allows them to approach the more generous holdings of the DCL Elevator database from the vantage point of a specific theme. Because our Omeka exhibits serve to not only showcase the DCL Elevator Collection, but to also extend its pedagogical value for our users, I added a “Further Reading” page to the exhibit as well.

These are all things that anyone reading this post, and especially anyone connected with the University of Minnesota, can do as well. While there are free versions of Omeka available, they have a very limited online storage capacity. But U of M users are able to acquire a more substantial Omeka account through Dash Domains. The DCL has its own domain account through DASH, and through it we have the capacity to access a number of content management applications, including Omeka. To learn more about how to create your own Omeka exhibit, click here for detailed instructions.

“Mali’s Mud-Brick Mosques” is just one of a number of Omeka exhibits that we have been working to create at the DCL in recent months. We should be rolling them out soon. But in the meantime, check out the DCL’s collection and let us know if there are other themes that you would like us to explore as online exhibits.

An Introduction to Automated RTI

Let me preface this by saying:

1) Hi. I’m Kevin Falcetano and this is my first AISOS blog post. I am an undergraduate technician working for AISOS and have worked on the construction of our RTI equipment for almost two months.

2) This project was made far easier to complete because of Leszek Pawlowicz. His thorough documentation on the process of building an RTI dome and control system from consumer components as detailed on Hackaday was the reason for the successful and timely completion of AISOS’s very own RTI system. Another special thanks to the open software and materials from Cultural Heritage Imaging (CHI).

Okay, now that the introduction is out of the way, second things second.

What is RTI?

RTI stands for Reflectance Transformation Imaging. It is a method of digitizing/virtualizing the lighting characteristics of one face of an object by sampling multiple lighting angles from the same camera position over the object with known point light positions. The mathematical model involved produces a two dimensional image that can be relit from virtually any lighting angle, so that all of the surface detail is preserved on a per-pixel basis. The basic idea comes from the fact that if you have a surface, light reflects off of it differently and predictably depending on the angle of said surface. A visualization of surface normals, the vectors perpendicular to the surface at any given place, is provided below (credit CHI).

The information available is represented by yellow vectors, and the information we wish to calculate, the surface normals, is in red. Given that we know the math behind how light reflects from a surface (which we do), and we know the light path angle with respect to a fixed camera, we’re very close to calculating the normal vectors of the surface. Only close, because there are unknown constants involved; but if you’ve ever taken an algebra course, you can see there’s a linear system that can be solved for those coefficients. We just need more data, and that data comes in the form of images lit from more angles. More angles also help account for areas of the object that light from a certain angle can’t reach because of occlusion by the object’s own geometry, and which are therefore incalculable from that image alone. When all’s said and done, an RTI image is generated. Although it is two-dimensional, an RTI image can mimic the way the real object scatters light at a resolution that matches the source images. Below are three images as an example of how the lighting changes between each photo.
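To make that linear system concrete, here is a rough Python/numpy sketch of a simplified Lambertian fit (real RTI fitters use a richer per-pixel model, but the shape of the problem is the same): given a stack of images and the known unit light directions, a least-squares solve at every pixel recovers a scaled normal whose length is the albedo.

```python
import numpy as np

def estimate_normals(images, lights):
    """Simplified Lambertian fit. images: (num_lights, H, W) grayscale stack;
    lights: (num_lights, 3) array of unit light-direction vectors (known from the dome)."""
    num, h, w = images.shape
    intensities = images.reshape(num, -1).astype(float)        # (num_lights, H*W)
    # Solve lights @ G ~= intensities for G = albedo * normal, one column per pixel.
    G, *_ = np.linalg.lstsq(lights, intensities, rcond=None)   # G is (3, H*W)
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.maximum(albedo, 1e-8)                     # unit surface normals
    return normals.T.reshape(h, w, 3), albedo.reshape(h, w)
```

In practice the fitting is handled by the RTI software; the point here is just that more light angles give a better-conditioned solve at every pixel.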

 

Why RTI?

The openly available documentation for RTI explains its benefits better than I probably could, but put most simply, it’s like having an image, but the image knows how that object could be lit. This means that details that could not be revealed in any one possible image are shown in full in an RTI image. One of the pitfalls of photogrammetry, for instance, is that it HATES reflective objects, since smooth surfaces and specular highlights make photogrammetry’s hallmark point tracking very difficult. RTI doesn’t care. RTI doesn’t need point overlap because the process asks that you eliminate many of the variables associated with photogrammetry, e.g. the camera and object do not move and the lighting angles are pre-calculated from a sphere of known geometry. The GIGAmacro can get up close, but it still produces images with single lighting angles, so, all else equal, less information per pixel. GIGAmacro, of course, has the advantage of being able to capture many camera positions very quickly, which results in many more pixels. RTI’s per-pixel information produces near-perfect normal maps of the surface, which are represented with a false-color standard. As an added benefit, our automated version of the workflow is blazing fast. Like, from start to RTI image file takes as little as five minutes, fast.

For a simple example of how RTI lets you relight an object, take a look at this coin.  Click the lightbulb icon and then click and drag your mouse to move the lighting.  This same data can be processed in a multitude of ways to reveal other details.

How RTI?

There are many ways to record data for RTI, including some very manually intensive methods that, for the sake of expedience, are definitely out of the question for AISOS. We operate under the assumption that researchers who use our space may have limited experience in any of their desired techniques, so the easier we can make powerful data acquisition and analysis methods, the better. We decided to use Leszek’s method because it was both cost and time effective, thanks to how easily it can be automated. This method involves putting as many white LEDs as we want data points, up to 64, inside an acrylic dome with a hole in the very top. The hole allows a camera to look vertically down through it at an object centered under the dome. The dome is painted black on the inside to minimize internal reflections, making the only light source for the object the single desired LED inside the dome. Each LED is lit in turn, one per image, and the resulting data points can then be processed and turned into an RTI image file.

This way, every LED can be turned on individually to take a picture of each lighting angle without moving or changing anything, and done so with an automatic shutter. This means that after setup, the image capture process is completely automated. Ultimately, the goal for this build was: Place dome over object, move in camera on our boom arm, focus lens, press button, RTI happens. Observe this accomplished goal in picture form:

You may notice in the above image that there are two domes of different sizes. We built two not only to accommodate different sizes of objects, but also to be able to use certain lenses, normally macro ones, that need to be closer to an object to be in focus. This allows us to pick a preferred lens for an object and then decide which dome to use with it so that the desired magnification and lens distance can be retained in most situations.

How all of this was built is outlined in a separate blog post.

Building an Automated RTI Dome

The RTI Build Experience

This is the second post in a series, written by  Kevin Falcetano.  See the first post for an introduction.

Though time-intensive, the RTI build was relatively simple. It was made so simple because the hard stuff (circuit design, dome construction, control box construction, and programming) was already done by Leszek Pawlowicz. This made the project more like a LEGO set than a massive undertaking.

I began with the control circuitry. The basic idea behind it is that it was necessary to control each LED individually, but it had to be a type of LED that was bright enough and had a quality output of consistent white light. This, unfortunately, meant that individually addressable LEDs (which can be controlled with a single data cable and an Arduino library for the control protocol) just would not do. So this is where Leszek’s build comes in. It calls for an eight by eight matrix of LEDs to be controlled by a special series of circuits connected to an Arduino Mega.

This allows for a total of 64 LEDs to be driven off of 16 digital pins, where the columns are the positive lead and the rows are the negative lead. The Arduino’s power supply cannot drive this on its own, which is why the circuitry is much more complicated. What is required is essentially the use of transistor-like gates on the positive and negative ends of the matrix so the current can be directed through one specified LED at a time.

One such set of gates is the highside MOSFET circuit, built on an Arduino Mega Shield:

An Arduino shield is a piece of hardware meant to be mounted on top of an Arduino microcontroller unit to interface with it. In this case, this was a blank board used to connect eight highside MOSFET circuits to the digital pins that would open the “gates” on the MOSFETs on the positive end to the desired column.

These components all had to be hand soldered to the shield board, but after quite a long period of time, everything was in its place without much of a hitch. The positive end of each MOSFET (the source) goes in parallel to the positive end of the power supply. Each gate (actually called the gate) gets connected to a corresponding Arduino digital pin to be switched open as necessary, and finally, the negative ends of each MOSFET (the drain) get connected to corresponding pins of the positive ethernet cable that leads to the LEDs in the dome. The resistors and transistors shown on the board exist to convert the MOSFET for highside control. The other side of the shield connects the other eight Arduino digital pins to the row “gates” on the negative end, to make a complete circuit that controls each LED.

Those negative gates, controlled by CAT4101 LED drivers, were soldered onto a different board along with a beeper and USB control pins:

In this case, there is a normal resistor for baseline current limitation, which acts as a bottleneck for the maximum current through the LEDs. Alongside each resistor is a variable resistor to further reduce the current and vary/tune the intensity of each LED row. These CAT4s allow the negative side of the LED matrix to selectively connect to ground by way of the ground ethernet cable. Once all was soldered, it was a matter of mating this board, the Arduino Mega, and the MOSFET control shield.

It also helped to have a separate power distribution board, with positive and negative rails for the 5V Arduino power supply, and positive and negative rails for the 9V power supply that drives the LEDs.  After connecting the power leads up and stripping and adding pins to eight LEDs and two ethernet cables, it was time for the test: The moment of truth for my soldering and circuit building skills.

Aside from one cold solder joint that I had to fix, everything worked.

As you can probably see from the top left corner of the above image, there is a large pile of LEDs that all have wires soldered on, pins crimped, and heat shrink tubing applied to  them. This was a very fun intermediate step that requires no further explanation but will now be mildly complained about. That level of repetition is not actually fun, but very necessary.

Speaking of, the next thing on the list was painting the LEDs black. Since the goal of good RTI is to eliminate as many environmental variables as possible, the inside of the LED dome must be as non-reflective as possible, so as to eliminate internal reflections that could accidentally cause multiple other, albeit dimmer, light sources within. This means even the area around the actual LED chips had to be painted black, with the only other exception being a small portion of the positive pad so as not to mix up the polarity when it came time to wire everything together.

Now that this bit was out of the way, it was dome prep time. The domes we got were clear, which helped with marking the outside for LED positions and then transferring the marks to the inside. I did this using some clever geometry math mostly outlined by Leszek in his documents. I, however, had the idea to melt the marks into the inside of the dome using the soldering iron, which caused a rather unpleasant smell to fill up the room that may or may not have been an indicator of toxic fumes. This step was necessary so that the marks showed up even with the paint over them. Next was just that: painting both the small and big dome using exactly one entire can of matte black spray paint.

Above is an image from the spray booth inside Regis Center for Art, where I applied multiple coats of the paint in a highly impatient fashion. Everything turned out fine, since the drips in the small dome would be inconsequential to the build.

After everything was dry, it came time to mount the LEDs inside the big dome, since it was the first one we decided to have done to make sure everything was working as intended. Each LED was hot glued in place with the positive ends of all LEDs pointing in the counterclockwise direction.

The above photo shows the placement of the LEDs as well as part of the following step: the stripping and crimping of many chains of wires that connect up the LED matrix. Each LED has an attached male dupont pin head, whereas each chain of connectors has properly spaced female dupont pin heads. Every concentric circle of eight LEDs represents a negative row, and every vertical set of eight LEDs (offset to help with lighting coverage) represents a positive column.

With all of the rows and columns wired up, the open ends of each string of wires were to connect with the male dupont pins I crimped onto the ends of both the positive and negative ethernet cables’ individual wires. Every part of the control circuitry was made to correspond with matching pins 1 through 8 on the ethernet cables, such that it becomes incredibly easy to know which Arduino pin corresponds with which row and column number and which LED. The next step was then to clean up the wiring and turn the dome into a more self-contained piece.

Behind this piece of special matte tape (3M Chalkboard Tape) that has about as much stick as a thirty-year-old Post-It, are the connections to the ethernet cables pushed down and hot glued in place. With the finishing touches in place on the dome, a successful test was run and it was time to contain the control circuitry in a cozy project box.

The Arduino, MOSFET shield, CAT4101 board, and power board were all hot glued in place after drilling the requisite holes for the bells and whistles. The two potentiometers were wired in and bolted on to control the LED on time and delay, as this can vary between different shooting scenarios. A reset button, preview button, and action button are all shown above, and used to operate the reset, single light preview, and main action activation functions in the Arduino’s programming. This image was taken before I added the main 2.5mm jack used to snap the camera shutter and the switches that select the operation modes. At the right side of the box are the ethernet jacks for power to the dome (top is positive and bottom is negative), and the USB connection for cameras without a way of connecting to the 2.5mm shutter switch. Last to mention is the OLED screen that displays relevant information that helps with eliminating mistakes and debugging when repairs and/or changes must be made.

On the subject of the control box, I personally made changes to the construction in a few ways:

  • I accidentally busted a switch because the soldering iron was too hot, so I removed the sound on/off function which ended up being fine since I think I messed up the wiring of the beeper anyway.
  • I completely removed the USB shutter, servo shutter, and IR/Bluetooth shutter options because we only really need a 2.5mm auto shutter and a manual mode.
  • I added a small transistor circuit to short the 2.5mm jack to ground using an Arduino digital output in order to actually make the 2.5mm auto shutter function work.
  • I added a three way switch to reduce the number of switches, but increase the functions possible.

This is the front of the final form of the box:

After the box came wiring up the small dome. The steps were the same as for the big one, just scaled down, so I’ll spare you the boring details other than the fact that this one contains only 48 LEDs, with eight columns and six rows.

With everything complete, I also had to change some of the Arduino code, which was provided by Leszek with his Hackaday project. The changes to the code are outlined below:

  • Removed sound function and USB and IR/Bluetooth shutter functions
  • Added dome mode switch, so either dome can be used without reprogramming the control box
  • Added a new white balance preview function for calibrating the camera because the old one was tedious
  • Added new display functions to show new modes above
  • Added a new function to snap 2.5mm shutter
  • Added a function to skip intro screen because of impatience and changed the intro screen to display relevant version info of the new software build.

Oh, and one last minute addition before this goes up: I added a remote preview button so that the exposure settings and focus can be adjusted from the computer without getting up to turn the preview function on and off, making the setup process a bit less tedious. This feature is just a wired button that plugs into the control box at the front and is completely optional since I kept the original button in place.
The final post is a conclusion and thoughts on the lessons learned from this build.