Building an Automatic RTI Dome: Wrapping It Up

This is the third post in a series, written by Kevin Falcetano. See the first post for an introduction and the second post for the build process.

The RTI project was not without its share of problems, mistakes, and general annoyances, but it was ultimately both interesting and rewarding to go through. I’ve separated my concluding remarks into three distinct semi-chronological sections under which the trials, tribulations, and triumphs of this project are detailed.

Building

The construction of this project required many time-intensive repeated steps, but the hardest part, the planning, had already been worked out down to a detailed parts list and the exact placement of the circuit components. Once again, many thanks to Leszek Pawlowicz for all of this work. In all of the soldering, stripping, crimping, and wiring, I made only a few easily correctable mistakes. That success is owed to his thorough instructions and guiding images.

Still, a few lessons were learned:

  • Helping hands are wonderful. Use them.
  • Lead-free solder may or may not kill you more slowly, but it has a higher melting point. What that really means is that it requires a delicate balance between “hot enough to heavily oxidize your soldering tip” and “too cool to flow the solder quickly.” Tip the balance too far to either side and you get the same result: the heat spreads further over the extended heating period (by way of either too low a temperature or restricted conductivity through an oxide layer too thick to be affected by the flux core), melting plastics easily and possibly damaging sensitive components. I killed two switches and probably a piezo speaker this way. Be careful, or use leaded solder.
  • Paint drips are fine if all you need is coverage.
  • 3M chalkboard tape may be matte black, which is wonderful for the inside of our domes, but it doesn’t stick. At all.
  • Hot glue is your friend. It’s not pretty or perfect, but it’s accessible. What use is a build if you can’t repair it?
  • If one LED isn’t working in your matrix of LEDs, it is, logically, the LED that’s the problem. And by the LED being the problem I mean you are the problem. You put it in backwards but were too proud to check. The D in LED stands for diode – one way.
  • Be sure to have access to a Dremel when trying to put large square holes in a hobby box. Or have low standards.
  • When taking on a wiring project of this caliber, have an automatic wire stripper handy. I might have lost my mind if I hadn’t used one.
  • Leave yourself some slack in the wires in case you break a dupont pin.
  • Don’t break dupont pins.

Calibrating and Processing

The calibration process was initially a bit of a headache, but I still must be grateful to Leszek and Cultural Heritage Imaging for their open software, without which more frustration would have ensued.

CHI provides a Java applet that generates lp files. The file extension stands for light positions (or light points) and is pretty self-explanatory: an lp file is a plain text file listing the angle of the light for each lighting position, in order from first to last picture taken. This is done automatically with some clever math and dodgy code. CHI has told us they have a more stable version in the works, but it’s not quite done yet. Anyway, calibration is done using a black sphere (in this case also provided by CHI) that reflects each LED in the dome as a bright point. The sphere gets centered under the dome and a pass is run, taking a photo for each light angle. The rotational orientation of the dome relative to the camera is marked so as to be consistent for future use. Because the light positions are fixed, so long as the dome is rotated to the correct position over the object, the lp file may be reused.
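Since an lp file is just plain text, it is easy to inspect or parse yourself. Here is a minimal Python sketch of a reader; it assumes the common layout (an image count on the first line, then one filename and a normalized x y z light direction per line), so treat that as an assumption and check it against your own files:

```python
def read_lp(path):
    """Parse a CHI-style .lp file: a count on the first line, then
    one 'filename x y z' light-position entry per line."""
    with open(path) as f:
        count = int(f.readline().split()[0])
        entries = []
        for _ in range(count):
            name, x, y, z = f.readline().split()
            entries.append((name, float(x), float(y), float(z)))
    return entries
```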

The program uses an edge-detection algorithm to determine the radius and, in turn, the center of the sphere. After that, it uses the known geometry of the sphere and a highlight-detection algorithm to locate the reflection point of each LED in each picture, then uses that point’s position relative to the center and radius of the sphere to calculate the angle of each light. For H-RTI, this is supposed to be done every time with a reference sphere next to every subject, but as I said before, we can reuse the lp because everything is fixed. This applet is glitchy and cumbersome, but it works when treated properly. It will run out of memory if we use full-resolution photos from the Canon 5D, but downscaling them does not introduce any significant error into the calibration process. It also has very strict guidelines for filenames and a clunky default user interface, the hallmarks of purely functional software. It’s useful, but tedious.
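The underlying geometry can be sketched in a few lines. Assuming an orthographic camera looking straight down the z-axis (a simplification, not necessarily what the applet does internally), the sphere's surface normal under the highlight follows directly from the highlight's offset from the sphere's center, and the light direction is the view vector reflected about that normal:

```python
import math

def light_from_highlight(cx, cy, r, hx, hy):
    """Estimate a light direction from a specular highlight on a
    reflective sphere, assuming an orthographic camera along +z.
    (cx, cy, r): sphere center and radius in pixels;
    (hx, hy): highlight position in pixels."""
    nx = (hx - cx) / r
    ny = (hy - cy) / r
    # z component of the unit surface normal at the highlight
    nz = math.sqrt(max(0.0, 1.0 - nx * nx - ny * ny))
    # Reflect the view vector v = (0, 0, 1) about the normal:
    # L = 2(n.v)n - v, where n.v is just nz here.
    dot = nz
    return (2 * dot * nx, 2 * dot * ny, 2 * dot * nz - 1.0)
```

A highlight dead center on the sphere implies a light directly overhead; highlights nearer the rim imply lights tipped toward the horizon.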

Leszek has his own software that strips the lp file of its association with the calibration sphere image files and stores it for future use as a general lps file. This can then be used in his program to create a new lp for each RTI run associated with the new image files, and then it and the photos get run through various command line scripts for different RTI and PTM (polynomial texture map) processing methods. This ensures that, after initial calibration, subsequent workflow is very fast and efficient, with minimal fiddling required.
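That re-association step is conceptually simple: the stored light directions stay fixed, and only the image filenames change from run to run. A hypothetical sketch (not Leszek's actual code) of writing a fresh lp file for a new set of photos:

```python
def write_lp(path, filenames, directions):
    """Write a new .lp pairing fresh image filenames with stored
    light directions (the lights fire in the same fixed order)."""
    if len(filenames) != len(directions):
        raise ValueError("need one filename per light position")
    with open(path, "w") as f:
        f.write(f"{len(filenames)}\n")
        for name, (x, y, z) in zip(filenames, directions):
            f.write(f"{name} {x:.6f} {y:.6f} {z:.6f}\n")
```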


Of course, there are some things specific to our use cases that a third party couldn’t anticipate, so some changes were made. The biggest change in functionality, but arguably the easiest to make programming-wise, was the addition of a mode switch to toggle which dome is in use. This allows the automatic, manual, and preview functions of the control box to change depending on which dome is being used. There are more details on how I changed the hardware and code in the build blog post, but all in all it was relatively painless. Leszek’s code is very clean and manageable, so I was able to easily navigate and change what I needed (which was quite a bit). It worked perfectly from the beginning, just not for exactly what we needed it to do.

Final Thoughts

My rough estimate puts a first-time build of this project from start to finish at around 80 hours. It was fun overall, and not terribly frustrating save a few parts. We are currently thinking through how to make a custom shield PCB to optimize the build process of this RTI dome’s control unit, since there may be demand for a pre-built rig from others at the University. It would be very difficult to streamline the dome building process considering it is such a custom endeavor. I could see a template being made for mounting the LEDs and placing them in the domes, possibly with ribbon cables, but it still seems pretty far off.

Although there are certain significant limitations, RTI appears, in our testing, to be a very versatile tool, especially when automated the way we have it. We are working with researchers from across campus to explore the possibilities, and the speed and precision of these domes are a huge benefit to that work. In the future, we hope to combine RTI and photogrammetry to produce models with surface detail beyond anything we have been able to capture so far.

If you would like to take advantage of our ever-expanding imaging resources, or would simply like to chat with us about what we’re doing and how we’re doing it, feel free to contact


Gopherbaloo: An Action-Packed and Fun-Filled Day of History!

By Andy Wilhide, DCL Research Assistant

On a cold, wintery Saturday afternoon, I followed some unusual visitors into Wilson Library: teenagers. They streamed in—by themselves, in pairs, in groups, and in families—talking excitedly and carrying books and bags with them. What brought them here on a weekend afternoon? Gopherbaloo, an annual event sponsored by the University of Minnesota Libraries and the Minnesota Historical Society’s History Day program. The afternoon included power conferences with History Day experts, research sessions, project workshops, and presentations on archives. Oh, and pictures with the History Day Moose—I got one too! There were raffles where students could win large exhibit boards, t-shirts, and other History Day swag.

The real prize was the opportunity for students to get some feedback on their History Day projects and to discover new resources that could help make their projects stronger. That’s where I came in—I was there to show off the Digital Content Library and help students, parents and teachers explore this unique archive.

This year’s History Day theme is “Taking A Stand.” Several students stopped by my table to check out the DCL. With the topics they had chosen, it was a bit hit and miss, but we did find some interesting materials, including dance photographs from the Alvin Ailey American Dance Theater, photographs and documentaries about Susan B. Anthony, a documentary about the L.A. Race Riots in 1992, and various materials connected to Copernicus, Kepler and Galileo—the scientists behind the theory of heliocentrism.

In preparation for the event, I made media drawers of potential History Day topics. These drawers are available to anyone who is logged into the DCL. One media drawer is dedicated to this year’s theme of “Taking A Stand.” Using the keywords “demonstrations” and “protests,” I found records ranging from a Communist rally in New York City (1930) to the Soweto Uprising in South Africa (1976) to the Poor People’s Campaign in Washington D.C. (1968) to Tiananmen Square Protests in China (1989). Many of these images came from the Department of History and the Department of Art History. One set of materials caught my eye—a series of photographs of the Iranian Revolution in 1978, taken by William Beeman, a professor in Anthropology.

Most History Day students rely on Google to find their images and resources, but we did find some materials that were not easily available through a Google search. Students, parents and teachers left my table excited to explore more of the treasures in the DCL, but from the comfort of home. Herein lies a challenge in sharing the DCL with public audiences: while public viewers can see most of what is on the DCL, they cannot download any of the images. This can be a deterrent for History Day students who need those images for their projects, which may be an exhibit, a documentary, a website or a performance. This is our first year connecting the DCL to History Day. We’ve made guest accounts for users outside of the U of M to access the DCL. We’ll see how it goes and report back!

The History Day State Competition will be held on the University of Minnesota campus, April 29, 2017.

Thank you to Lynn Skupeko and Phil Dudas from Wilson Library and the Minnesota History Day staff for inviting us to be part of Gopherbaloo. We hope to come back next year!

@MNHistoryDay [Twitter]

The DCL is available to anyone with a U of M x500 account. If you know of someone who is not affiliated with the U of M but would like a guest account to access the Digital Content Library, please have them contact Denne Wesolowski,

Meet Joan Assistant: The Meeting Room Reservation Solution

By Chris Scherr and Rebecca Moss

When we changed the layout of Anderson 110 from a cubicle farm and help desk setup to an open, customizable space, we turned four of the former offices into reservable meeting rooms. We created Google calendars for each so groups could reserve them, but spontaneous uses of these spaces were difficult to accommodate. We needed a solution that allowed folks in And110 to see if the rooms were reserved or free, and let them add a meeting on the spot.

Joan Assistant offered us the flexibility we were looking for, at a modest cost, and without the need to pay for upgrading the wiring in the room. Here is a quick video overview of how Joan Assistant works. We hope that sharing our experience might help others on campus who are looking for a lightweight solution to scheduling spaces.

Once we had the Joan Assistant, we had to figure out a way to connect it to our wireless network, which has far more security protocols than your average wifi setup. We gave the task to Chris Scherr, LATIS system engineer, who worked with Network Telecommunication Services (NTS) to figure out how to get it securely connected and online. Once that was accomplished, we bought two additional units and installed them by the meeting rooms.

There are three versions of Joan Assistant, and the only version supported on campus is the Executive, as enterprise WPA2 is needed to authenticate to the wireless network.

The company also offers a limited-edition 9.7-inch screen, which I assume is enterprise WPA2 ready.


Any unit in CLA can use our on-premises server; we would just need to work with the owner of the room calendar to grant the proper permissions to the Joan server.

Other colleges:

Several items are needed (any IT staff is more than welcome to contact Chris directly –

From my experience setting this up, here is the order I would take for a new setup:

1) Obtain a sponsored account for wireless access for the devices (IT will not grant departmental accounts access to wireless).  Request here:

2) Get a departmental account to serve as the Google Calendar tie-in. This account can own the room calendars or be granted full access to existing room calendars. (

3)  Get an on-premises server. (

The simplest solution is to request a self-managed (SME) Linux VM from IT (1 core and 2 GB of RAM is adequate, although two network adapters are required). Ask them to download the .vmdk from the link above, as it is a preconfigured Ubuntu 14 VM. Once the VM was set up, I did the following configuration: limited SSH and port 8081 traffic via iptables, changed the default root and joan user passwords, and set IP addresses / DNS names / CNAMEs via servicegateway.

4)  Joan Configurator (

Download and configure each Joan device with the Joan Configurator available from the link above. You will need to specify the DNS name or IP address of the on-premises server and authenticate to wireless via the sponsored account created in step 1 using PEAP. Once configured, charged, and unplugged from USB, the unit should display an 8-10 digit code. If the unit is not responding, gently shaking it will wake it from standby.

5) Joan portal (

Several items need to be configured here:

User Settings

a) Hosting selection –  Make sure On-premises is selected

b) User settings – You may want to set additional contact emails for low battery warnings etc.

Calendar Settings

a)  Calendar – Connect to google calendar via the departmental account created in step 2

b)  Room Resources – Scan and add the room resources you want your Joan device(s) to be able to display/manipulate.

Device settings:

a)  Devices – Click Add Device; you will be prompted to enter the 8-10 digit code from step 4, along with a time zone and default calendar.

b)  Optional settings – Logo, office hours, and features can be changed or defined here.

6)  Monitoring:  You can get quick information about your device from the on-premises server web portal.  It can be accessed here:

Devices – This will show all devices configured to use your on-premises server. You can click on any device and get information about it. I would suggest renaming the devices to the room number or calendar name. Live View will confirm the server is sending the correct image to your unit(s). You can also check Charts to show battery levels over time, signal strength, disconnects, etc.

The e-ink screens require very little power, so the batteries last a month or more before needing a recharge. When batteries get low, the unit will send emails to whomever you designate. The power cords are standard ones, and it takes about 6-8 hours to charge a device.

The units connect to the wall with a magnet so they will not be secure in an open space without additional configurations. Here in And110, we have them mounted on the door signs already located outside each door. We customized the units so they have the U of M logo. We are still in the testing stage with these devices, but our experience so far has been very positive.  Please contact us if you would like to learn more – – or come see them in person in Anderson 110 on the West Bank.

The Stamp Project: Extruding Vector Graphics for 3D Printing Using Tinkercad

By Rachel Dallman

I have recently been experimenting with 3D printing using the ETC Lab’s MakerBot Replicator Mini, printing different open-source models I found on Thingiverse. I wanted to start printing my own models, but found traditional full-scale 3D modeling software like Blender and Autodesk Maya intimidating as a person with minimal modeling or coding experience. In my search for a user-friendly and intuitive modeling platform, I found Tinkercad – an extremely simplified browser-based program with built-in tutorials that allowed me to quickly and intuitively create models from my imagination. The best part of the program, for me, was the ability to import and extrude vector designs I had made in Illustrator.

Tinkercad’s easy-to-use interface

For my first project using this tool, I decided to make a stamp using a hexagonal graphic I had made for the ETC Lab previously.

My original graphic is colored, but in order to extrude the vectors the way I wanted to, I had to edit the graphic to be purely black and white, without overlaps, meaning I needed to remove the background fill color. I also had to mirror the image so that it would stamp the text in the correct orientation (I actually failed to do this on my first attempt, and ended up using the print as a magnet since it would stamp the text backwards). I’m using Illustrator to do all of this because that’s where I created the graphic, but any vector-based illustration software will work (Inkscape is a great open-source option!). You can also download any .SVG file from the internet (you can browse thousands at and either purchase the file or give credit to the artist). If you’re confused about what parts of your image need to be black, it’s helpful to imagine that all of the black areas you create will be covered in ink. Below is a picture of what my image looked like in Illustrator after I had edited it.

To do this, I started by selecting each individual part of my graphic and changing the fill and stroke color to black, and removed the fill from the surrounding hexagon. To reflect the image, I selected everything and clicked Object > Transform > Reflect. My stamp-ready file looks like this:

In order for Tinkercad to read the file, I had to export it in .SVG format by going to File > Export > Export As… and choosing .SVG in the drop-down menu. If you’re doing this in Illustrator, you’ll want to use the following export settings:

I then opened Tinkercad and imported my file. Much to my dismay, when I first brought the .SVG file into Tinkercad, it couldn’t recognize the file format, meaning I had to do some digging around online to figure out what was going on. I found that the problem was with the way Illustrator exports the .SVG file, and that adding a single line to the top of the file solves it. Illustrator exports the file as .SVG version 1.1, and Tinkercad can only read .SVG 1.0, so I had to manually declare the earlier version. I downloaded Atom, an open-source code editor, pasted the following line at the very beginning of the file, and saved it. This step might be irrelevant to you depending on the software you’re using, so be sure to attempt importing the file into Tinkercad before you change any of the code.

<?xml version="1.0"?>
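If you end up fixing many exported files, this one-line edit is easy to script. Here is a small Python sketch (the filename in the usage comment is hypothetical); it simply prepends the declaration when it is missing:

```python
def fix_svg_for_tinkercad(path):
    """Prepend the XML declaration to an exported .svg file
    if it is missing, per the workaround described above."""
    with open(path, encoding="utf-8") as f:
        text = f.read()
    if not text.lstrip().startswith("<?xml"):
        with open(path, "w", encoding="utf-8") as f:
            f.write('<?xml version="1.0"?>\n' + text)

# e.g. fix_svg_for_tinkercad("hexagon_stamp.svg")
```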

I then imported the updated file, ending up with this solid hexagon. This was not what I wanted, and I assumed that Tinkercad was simply filling in the outermost lines it detected from my vector file. Apparently, that limitation is part of the price of the program’s simplicity.

After I noticed that it was possible to manually create hexagons in Tinkercad, I decided to go back into Illustrator and delete the surrounding hexagon and then simply build it back in Tinkercad after my text had been imported. Depending on the complexity of your design, you may decide to do it like I did and build simple shapes directly in Tinkercad, or you may want to upload multiple separate .SVG files that you can then piece together. This is what my new vector file looked like after I imported it.

Next, I wanted to make the base of the stamp, plus a hexagonal ridge at the same height as my text that would stamp a border around it like my original vector file. To do this, I selected the hexagonal prism and dragged it onto the canvas. I then adjusted the size and position visually by clicking and dragging the vertices (hold Shift if you want to keep the shape proportionate) until it fit the way I wanted. I then duplicated the first hexagon twice by copying and pasting, scaled one of the copies to be slightly smaller than the other, and placed it directly on top of the larger one, until their difference was the border size I wanted. I then switched the mode of the smaller hexagon to “Hole” in the right-hand panel, so that it would be cut out of the larger one, leaving me with my hexagonal border. Next, I positioned the hollow hexagon directly on top of the base and extruded it to the same height as my letters so that it would stamp. For precise measurements like this, I chose to type the exact height I wanted into the right-hand panel. My final stamp model looked like this:

Then, I downloaded the model as an .STL file, and opened it in our MakerBot program and sized it for printing. Around three hours later, my print was ready and looked like this:


As you can probably tell, the stamp has already been inked. While my print turned out exactly the way I planned it to, I found that the PLA material was not great for actually stamping. On my first stamp attempt, I could only see a few lines and couldn’t make out the text at all.


I assumed that the stamping problems had something to do with the stamp’s ability to hold ink, and the stiffness of the plastic. I decided to sand the stamp to create more grit for holding ink, and tried placing the stamp face up with the paper on top of it instead, allowing the paper to get into the grooves of the stamp. This process worked a bit better, but still didn’t have the rich black stamp I was hoping for.


Because of this difficulty with actually stamping my print, in the end, I actually preferred my “mistake” print that I had done without mirroring the text, and turned it into a magnet!


This process can be applied to any project you can think of, and I found the ability to work in 2D and then extrude extremely helpful, as I feel more comfortable in 2D design programs. Tinkercad was simple and easy to use, but its simplicity meant I had to do a few workarounds to get the results I wanted. I’m still troubleshooting ways to make my stamp “stampable” and would appreciate any ideas you have! As always, feel free to come explore with us for free in the ETC Lab by attending Friday Open Hours from 10:00am to 4:00pm, or email for an appointment.


Painting in Virtual Reality: Exploring Google Tilt Brush

By Thaddeus Kaszuba-Dias

Currently in the ETC Lab we have been experimenting with the capabilities of Google Tilt Brush, a virtual reality (VR) program designed for the HTC Vive headset. This program is essentially a 3D sketchbook where you can create, draw, design, and experiment to your heart’s content in a virtual 3D space. It takes the idea of drawing on paper and adds a third dimension, with a very cool palette of brushes and effects to play with to make your pieces come to life.

Me playing around in Tilt Brush

Some of the questions we have been asking are: “What are the capabilities of this program in CLA? If there are multiple people working with this program, what is the vocabulary like? How does that vocabulary shift when creating a 2D to 3D piece?” These are things that we in the ETC Lab will be exploring more and more with the many diverse guests we have in our space!

As of now, Google Tilt Brush is a very interesting program for experimenting with 3D assets that others have developed in programs such as Maya, Google SketchUp, or Blender. Users can then add their own creative spins to these imports using the tools provided in Tilt Brush. The results can also be exported to be used in their respective platforms.

Screenshot of a drawing on top of an imported model in Tilt Brush

There is also a capacity for storytelling in Google Tilt Brush. The creative, and perhaps perfectionist, mindset thrives there. For those wishing to express a single emotion, tell a story, or express a thought visually, Google Tilt Brush seems to have just the right amount of tools and atmosphere for that kind of work to thrive. All pieces created in Tilt Brush can be exported as 3D models to be uploaded to galleries, websites, or blogs. It could well be a revolutionary tool for creatives on a budget. And don’t worry, it’s free of charge to use here in the ETC Lab!

Animated .GIF recorded in Tilt Brush

The Virtuality Continuum for Dummies

By Michael Major

Do you watch TV, surf the internet, or occasionally leave your residence and talk to other human beings? If you can answer yes to any of those, you have probably heard of virtual reality, augmented reality, or mixed reality. Perhaps you have seen one of Samsung’s Gear VR commercials, played Pokémon GO, taken a Microsoft HoloLens for a test drive, or watched a YouTube video about the mysterious Magic Leap. If you are anything like me, then your first experiences with these new realities left you with a lot of questions, like: What do the terms virtual reality, augmented reality, and mixed reality even mean? What are the differences between VR, AR, and MR? Google searches may bring you to confusing articles about the science that makes the blending of realities possible, which can be extremely overwhelming. So let’s break these concepts down into terms that we can all understand.

The first step to understanding the virtuality continuum is grasping the difference between the real environment (blue section labeled RE on the left in the diagram below) and a completely virtual environment (labeled VR on the right in the diagram below).


The real environment is the physical space that you are in. For example, I am sitting in a real chair, at my real desk, in a real room as I am writing this.

I can feel the keys on my keyboard, see my co-workers walk to the break room, hear them describe their weekends, smell the coffee that gets brewed, and taste the sandwich that I brought for lunch.

A completely virtual environment is a digital environment that a user enters by wearing a headset.

So let’s say I load up theBlu (a VR application that lets the user experience deep sea diving) and put on the HTC Vive headset and a good pair of noise cancelling headphones. I will no longer see or hear any of my co-workers, or anything else from the room that I am physically standing in. Instead, I will see and hear giant whales! I am also able to look around in all directions as though I am actually there in the ocean.

The next step to understanding the Virtuality Continuum is knowing the difference between augmented reality (teal section labeled AR in the diagram below) and augmented virtuality (green section labeled AV in the diagram below).


Augmented reality involves layering digital content on top of a live view of the physical space that you are in. A fun example of this is how Vespa, a company that sells motorized scooters, hired a company called 900lbs of Creative to create an augmented reality app that lets you customize your own scooter by holding your phone up to their ad in a magazine as if you were going to take a picture of it.

The app recognizes the blue pattern on the page of the magazine and then adds the 3D model of the scooter to the screen on top of that blue pattern. Without this app, the user would just be looking at a magazine sitting on a table, instead of being able to see both the magazine and a digital scooter that they can customize and even drive around on the table!

Augmented virtuality is when objects from the user’s real-world environment are added to the virtual environment that the user is experiencing. Let’s dive back into theBlu to explore an example of augmented virtuality. Imagine that I have added some sensors to the front of the Vive headset. These sensors have the ability to recognize my hands and track their movements. Now I can turn this completely virtual experience into an augmented virtuality experience in which I can see and use my hands inside the virtual environment.

Note: these sensors are not yet available for VR headsets (as of December 2016). However, Intel has a product called Intel RealSense Technology which allows cameras to sense depth and is used in some computer and mobile phone applications. But let’s imagine that I do have this kind of sensor for the Vive.

With this new technology, I could add cool features to theBlu such as the ability to pick up virtual seashells with my hands instead of using a controller to do so. Or I could swim around in the virtual ocean by doing a breaststroke instead of holding down a button on the controller. This would make my virtual experience much more immersive.

The last step in understanding the virtuality continuum is figuring out what people mean when they refer to mixed reality (green section labeled MR in the figure below).


By definition, the term mixed reality means “the merging of real and virtual worlds to produce new environments and visualizations where physical and digital objects co-exist and interact in real time.” So augmented reality and augmented virtuality are both technically under the mixed reality umbrella because they are mixing the real world with digital content in some way or another. However, a company called Magic Leap has recently started to refer to their experience as a mixed reality experience in an effort to set themselves apart from augmented reality and augmented virtuality technologies. When Magic Leap uses the term mixed reality, it is meant to describe a technology that makes it difficult for the user to discern between what is real and what is digital, as if everything that the user is experiencing is part of the real world. I must admit, if the videos that Magic Leap has released are accurate, then their technology really is in a league of its own. Take a look at their website and decide for yourself.

There you have it, the Virtuality Continuum. Now you will know what people are talking about when they refer to virtual reality, augmented reality, or anything in between.


Virtual Reality and Immersive Content for Education

By Jake Buffalo

Virtual reality devices and applications are becoming increasingly popular, especially as the years go on and the technologies required to deliver virtual reality experiences are becoming more affordable. So, if you think virtual reality is a thing of the future, think again! There is a lot of content available today and suitable devices for experiencing virtual reality are not hard to find. Although there is a huge focus on video games for these types of devices, there is a lot you can do with them to create an educational experience for users.

In the ETC Lab, we have the HTC Vive and the Google Cardboard accessible for CLA students and professors to try out! In this blog post, I will give you a brief overview of each and let you know what kind of educational purposes they can have. We have a great list of some different apps and experiences that we found to be of interest. Take a look here:


HTC Vive

The HTC Vive is a headset that brings you into the world of virtual reality through a variety of different apps available in the Steam VR store online. The Vive system also includes controllers that allow you to move around in the different scenes and perform different actions depending on which application you are using.

There are many different apps available for the Vive, ranging anywhere from artistic and cultural experiences to immersive games. For example, you may be put “inside” an audiobook through immersive storytelling and placed inside scenes that bring the narration to life. There are also apps that put you inside the paintings of famous artists like Vincent Van Gogh or allow you to walk through a museum with different historical exhibits. The options really are endless and can be applied to a vast array of CLA majors and your specific interests!


Google Cardboard

The Google Cardboard is a handheld viewer that holds your smartphone; you look through its lenses at the smartphone screen to view immersive virtual content. Originally, the device was only available in actual paper cardboard, but viewers are now made in all types of materials and designs, with plenty of sturdier options for a better experience. One cool thing you can do with the Cardboard is watch 360° videos on YouTube from different places around the world. You can tour historic locations you have always wanted to experience, like Buckingham Palace, or even watch Broadway’s “The Lion King” from the perspective of the performers. There are also more artistically based experiences, such as “Dreams of Dali” or a trip into Bosch’s “Garden of Earthly Delights,” that may be relevant for different art majors at the U as well.

In addition, you can download apps on your iPhone or Android phone and use them with the Google Cardboard. There are many apps available relating to art, culture, science, journalism, and more! For example, The New York Times offers news stories presented as virtual reality narratives for the Cardboard. You can even experience the life of a neuron wandering around the human brain. Some apps also offer augmented reality, which overlays visual content on top of your real-life surroundings using the camera on your phone.

Overall, virtual reality is not just for “nerds”: there are programs available for everyone, and we continue to find more possibilities every day. So don’t forget to check out the list of apps at the top of the page to help you get started with virtual reality, and get in touch or come to our Friday 10am-4pm open hours if you want to try out some of these cool technologies!

Code Signing Gotcha on macOS 10.12 Sierra

Most Mac developers have, at one time or another, struggled with an issue related to code signing. Code signing is the process by which a cryptographic “signature” is embedded into an application, allowing the operating system to confirm that the application hasn’t been tampered with. This is a powerful tool for preventing forgery and hacking attempts. It can be pretty complicated to get it all right though.

We recently ran into an issue in which a new test build of EditReady worked fine on our development machines (running macOS 10.12 Sierra) and on the oldest version of the operating system we support for EditReady (Mac OS X 10.8.5), but wasn’t working properly on Mac OS X 10.10. That seemed pretty strange: it worked on versions of the operating system both older and newer than 10.10, so we would expect it to work there as well.

The issue was related to code signing: the operating system was reporting an error with one of the libraries that EditReady uses. Libraries are chunks of code designed to be reusable across applications. It’s important that they be code signed as well, since the code inside them gets executed. Normally, when an application is exported from Xcode, all of the libraries inside it are signed, and everything appeared correct here: Apple’s diagnostic tools like codesign and spctl reported no problems.

The library that was failing was one that we had recently recompiled. When we compared the old version of the library with the new one, the only difference we saw was in the types of cryptographic hashes applied. The old version of the library was signed with both the sha1 and sha256 algorithms, whereas the new version was signed only with sha256.
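This kind of comparison can be made with Apple’s codesign tool. A minimal sketch, assuming a hypothetical library path; on recent versions of macOS, codesign’s verbose output includes a “Hash choices” line listing the digest algorithms present in the signature:

```shell
# Sketch: list the digest algorithms embedded in a library's signature.
# The path below is hypothetical. A signature that older systems can
# read needs sha1 listed alongside sha256 in "Hash choices".
LIB="EditReady.app/Contents/Frameworks/libexample.dylib"
if command -v codesign >/dev/null 2>&1; then
  codesign -d --verbose=4 "$LIB" 2>&1 | grep -i 'hash'
else
  echo "codesign is only available on macOS"
fi
```

Running this against both copies of a library makes the difference obvious at a glance, without digging through the full verbose output.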

We finally stumbled upon a tech note from Apple, which states:

Note: When you set the deployment target in Xcode build settings to 10.12 or higher, the code signing machinery generates only the modern code signature for Mach-O binaries. A binary executable is always unsuitable for systems older than the specified deployment target, but in this case, older systems also fail to interpret the code signature.

That seemed like a clue. Older versions of Mac OS X don’t support sha256 signing, and need the sha1 hash. However, all of our Xcode build targets clearly specify 10.8. There was another missing piece.

It turns out that the codesign tool, a command line utility invoked by Xcode, actually looks at the LC_VERSION_MIN_MACOSX load command within each binary it inspects, and decides which types of hashes to apply based on the data it finds there. In our case, when we compiled the dynamic library using the traditional “configure” and “make” commands, we hadn’t specified a minimum version (it isn’t otherwise necessary for this library), so it defaulted to the current version. By recompiling with the “-mmacosx-version-min=10.8” compiler flag, we got a library signed with both hashes, and the application ran properly on 10.10 as well.
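As a sketch (the library name and the autoconf-style build are assumptions for illustration), setting the deployment target at compile time looks like this; the otool line shows the load command that codesign consults:

```shell
# Rebuild the library with an explicit minimum deployment target so the
# code signing machinery emits the legacy sha1 hash alongside sha256.
# "libexample" and the configure/make build are hypothetical.
export CFLAGS="-mmacosx-version-min=10.8"
export LDFLAGS="-mmacosx-version-min=10.8"
echo "building with CFLAGS=$CFLAGS"
# ./configure && make          # run on macOS
# Afterwards, confirm the load command that codesign inspects:
# otool -l libexample.dylib | grep -A3 LC_VERSION_MIN_MACOSX
```

Checking the LC_VERSION_MIN_MACOSX value after a build is a quick way to catch this problem before it ships.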

Oh, and why did the build work on 10.8 in the first place? It turns out that versions of Mac OS X prior to 10.10.5 don’t validate the code signatures of libraries.

Photogrammetry with GIGAMacro images

One of the exciting possibilities in the AISOS space is the opportunity to combine technologies in novel ways. For example, combining RTI and photogrammetry may allow for 3D models with increased precision in surface details. Along these lines, we recently had the opportunity to combine our GIGAMacro with our typical photogrammetry process.

This work was inspired by Dr. Hancher from our Department of English. He brought us some wooden printing blocks bearing very, very fine surface carvings; his research interests include profiling the depth of the cuts. The marks are far too fine for our typical photogrammetry equipment, and while they might be well suited to RTI, the size of the blocks would make imaging with RTI very time consuming.

As we were pondering our options, one of our graduate researchers, Samantha Porter, pointed us to a paper she’d recently worked on which dealt with a similar situation.

By manually setting the GIGAMacro to image with a lot more overlap than is typical (we ran at a 66% overlap), and using a level of magnification which fully reveals the subtle surface details we were interested in, we were able to capture images well suited to photogrammetry. This process generates a substantial amount of data (a small wooden block consisted of more than 400 images), but it’s still manageable using our normal photogrammetry tools (Agisoft Photoscan).

After approximately 8 hours of processing, the results are impressive. Even the most subtle details are revealed in the mesh (the mesh seen below has been simplified for display in the browser, and its texture has been removed to better show the surface details). Because the high-overlap images can still be stitched using the traditional GIGAMacro toolchain, we can also generate high-resolution 2D images for comparison.

We’re excited to continue refining this technique to improve both its performance and its accuracy.