An Introduction to Automated RTI

Let me preface this by saying:

1) Hi. I’m Kevin Falcetano and this is my first AISOS blog post. I am an undergraduate technician working for AISOS and have worked on the construction of our RTI equipment for almost two months.

2) This project was made far easier to complete because of Leszek Pawlowicz. His thorough Hackaday documentation on building an RTI dome and control system from consumer components is the reason AISOS's very own RTI system came together successfully and on time. Another special thanks to the open software and materials from Cultural Heritage Imaging (CHI).

Okay, now that the introduction is out of the way, second things second.

What is RTI?

RTI stands for Reflectance Transformation Imaging. It is a method of digitally capturing the lighting characteristics of one face of an object by photographing it from a fixed camera position under many known point-light positions. The mathematical model involved produces a two-dimensional image that can be relit from virtually any lighting angle, so that all of the surface detail is preserved on a per-pixel basis. The basic idea is that light reflects off a surface differently, and predictably, depending on the angle of that surface. A visualization of surface normals, the vectors perpendicular to the surface at any given place, is provided below (credit CHI).

The information available is represented by yellow vectors, and the information we wish to calculate, the surface normals, is in red. So, given that we know the math behind how light reflects from a surface (which we do), and we know the light path angle with respect to a fixed camera, we're very close to calculating the normal vectors of the surface. We're only close because there are unknown constants involved, but if you've ever taken an algebra course, you can see there is a linear system that can be used to solve for those coefficients. We just need more data. That data comes in the form of images lit from more angles. More angles also help account for areas of the object that light from a certain direction never reaches because the object's own geometry occludes them. When all is said and done, an RTI image is generated. Although it is two-dimensional, an RTI image can mimic the way the real object scatters light at a resolution that matches the source images. Below are three images as an example of how the lighting changes between each photo.
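To make "linear system" a bit more concrete, here is my paraphrase of one common fitting model, the polynomial texture map (PTM), which comes up again when we talk about processing. For each pixel, brightness is approximated as a biquadratic function of the light direction's projection onto the image plane, (l_u, l_v):

\[ L(l_u, l_v) \approx a_0 l_u^2 + a_1 l_v^2 + a_2 l_u l_v + a_3 l_u + a_4 l_v + a_5 \]

With six unknown coefficients per pixel, six or more photos under known light directions give an overdetermined linear system that can be solved in a least-squares sense, and the surface normal can then be estimated from where the fitted function peaks. That is also why more lighting angles give a better fit, and why angles a given pixel never sees because of occlusion simply drop out as missing equations.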

 

Why RTI?

The openly available documentation for RTI explains its benefits better than I probably could, but put most simply, it's like having an image, but the image knows how that object could be lit. This means that details that could not be revealed in any one possible image are shown in full in an RTI image. One of the pitfalls of photogrammetry, for instance, is that it HATES reflective objects, since smooth surfaces and specular highlights make photogrammetry's hallmark point tracking very difficult. RTI doesn't care. RTI doesn't need point overlap because the process asks that you eliminate many of the variables associated with photogrammetry, e.g. the camera and object do not move and the lighting angles are pre-calculated from a sphere of known geometry. The GIGAmacro can get up close, but it still produces images with single lighting angles, so, all else equal, less information per pixel. GIGAmacro, of course, has the advantage of being able to capture many camera positions very quickly, which results in many more pixels. RTI's per-pixel information produces near-perfect normal maps of the surface, which are represented with a false-color standard. As an added benefit, our automated version of the workflow is blazing fast. Like, from start to RTI image file takes as little as five minutes, fast.

For a simple example of how RTI lets you relight an object, take a look at this coin.  Click the lightbulb icon and then click and drag your mouse to move the lighting.  This same data can be processed in a multitude of ways to reveal other details.

How RTI?

There are many ways to record data for RTI, including some very manually intensive methods that, for the sake of expedience, are definitely out of the question for AISOS. We operate under the assumption that researchers who use our space may have limited experience in any of their desired techniques, so the easier we can make powerful data acquisition and analysis methods, the better. We decided to use Leszek's method because it is both cost and time effective, thanks to how easily it can be automated. This method involves mounting one white LED for every lighting angle we want, up to 64 of them, inside an acrylic dome with a hole in the very top. The hole allows a camera to look vertically down through it at an object centered under the dome. The inside of the dome is painted black to minimize internal reflections, so the only light source for the object is the single LED we choose to turn on. Each LED is lit in turn, one per photo, and the resulting set of images can be processed into an RTI image file.
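To give a rough sense of what the automated sequence does, here is a heavily simplified, Arduino-style sketch of the capture loop. It is an illustration only, not Leszek's actual firmware: the LED count, timings, and helper functions are placeholders (the helpers are fleshed out in the build post).

// Illustration only, not the real firmware: timings are placeholders, and the
// helper functions are stubs standing in for the matrix and shutter code
// described in the build post.
const int NUM_LEDS = 64;                 // 48 for our small dome

void lightLed(int n)  { /* enable one column and one row of the LED matrix */ }
void allLedsOff()     { /* release every column and row */ }
void snapShutter()    { /* pulse the 2.5mm remote-shutter jack */ }

void runCaptureSequence() {
  for (int i = 0; i < NUM_LEDS; i++) {
    lightLed(i);                         // exactly one LED on
    delay(200);                          // placeholder: let the LED settle
    snapShutter();                       // take the photo
    delay(1000);                         // placeholder: wait out the exposure and card write
    allLedsOff();
  }
}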

This way, each LED can be turned on individually to photograph each lighting angle without moving or changing anything, with the shutter fired automatically. This means that after setup, the image capture process is completely automated. Ultimately, the goal for this build was: place dome over object, move in camera on our boom arm, focus lens, press button, RTI happens. Observe this accomplished goal in picture form:

You may notice in the above image that there are two domes of different sizes. We built two not only to accommodate different sizes of objects, but also to be able to use certain lenses, normally macro ones, that need to be closer to an object to be in focus. This allows us to pick a preferred lens for an object and then decide which dome to use with it so that the desired magnification and lens distance can be retained in most situations.

How all of this was built is outlined in a separate blog post.

Building an Automated RTI Dome

The RTI Build Experience

This is the second post in a series, written by  Kevin Falcetano.  See the first post for an introduction.

Though time-intensive, the RTI build was relatively simple. It was so simple because the hard stuff (circuit design, dome construction, control box construction, and programming) had already been done by Leszek Pawlowicz. This made the project more like a LEGO set than a massive undertaking.

I began with the control circuitry. The basic idea behind it is that each LED needs to be controlled individually, but it also has to be a type of LED that is bright enough and puts out consistent, quality white light. This, unfortunately, meant that individually addressable LEDs (which can be controlled over a single data cable with an Arduino library for the control protocol) just would not do. So this is where Leszek's build comes in. It calls for an eight-by-eight matrix of LEDs controlled by a special series of circuits connected to an Arduino Mega.

This allows a total of 64 LEDs to be driven off of 16 digital pins, where the columns carry the positive leads and the rows carry the negative leads. The Arduino cannot source enough current to drive these LEDs on its own, and that is why the circuitry is much more complicated. What is required is essentially a set of transistor-like gates on the positive and negative ends of the matrix so the current can be directed through one specified LED at a time.
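To make the row/column idea concrete, here is a hedged, Arduino-style sketch of how a single LED could be selected. The pin assignments are invented, and whether a given gate is enabled by HIGH or LOW depends on the actual driver circuits described below, so treat this as an illustration of the matrix logic rather than a copy of Leszek's firmware.

// Columns are switched on the positive side (high-side MOSFETs); rows are switched
// on the negative side (CAT4101 drivers). Pin assignments here are made up.
const int COL_PINS[8] = {22, 23, 24, 25, 26, 27, 28, 29};   // high-side gate control
const int ROW_PINS[8] = {30, 31, 32, 33, 34, 35, 36, 37};   // CAT4101 enable pins

void setupMatrix() {
  for (int i = 0; i < 8; i++) {
    pinMode(COL_PINS[i], OUTPUT);
    pinMode(ROW_PINS[i], OUTPUT);
    digitalWrite(COL_PINS[i], LOW);      // all columns off
    digitalWrite(ROW_PINS[i], LOW);      // all rows off
  }
}

// Light LED n (0-63): current can only flow through the one LED whose column is
// connected to the positive rail and whose row is allowed to sink current.
void lightLed(int n) {
  for (int i = 0; i < 8; i++) {          // everything off first
    digitalWrite(COL_PINS[i], LOW);
    digitalWrite(ROW_PINS[i], LOW);
  }
  digitalWrite(COL_PINS[n / 8], HIGH);   // open this column's high-side gate
  digitalWrite(ROW_PINS[n % 8], HIGH);   // enable this row's CAT4101 driver
}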

One such set of gates is the high-side MOSFET circuit, built on an Arduino Mega shield:

An Arduino shield is a piece of hardware meant to be mounted on top of an Arduino microcontroller to interface with it. In this case, a blank shield board was used to connect eight high-side MOSFET circuits to the digital pins that open the "gates" on the MOSFETs, connecting the positive supply to the desired column.

These components all had to be hand soldered to the shield board, but after quite a long period of time, everything was in its place without much of a hitch. The positive end of each MOSFET (the source) connects in parallel to the positive end of the power supply. Each gate (actually called the gate) gets connected to a corresponding Arduino digital pin to be switched open as necessary, and finally, the negative end of each MOSFET (the drain) gets connected to the corresponding pin of the positive ethernet cable that leads to the LEDs in the dome. The resistors and transistors shown on the board are there to adapt each MOSFET for high-side control. The other side of the shield connects the other eight Arduino digital pins to the row "gates" on the negative end, completing the circuit that controls each LED.

Those negative gates, controlled by CAT4101 LED drivers, were soldered onto a different board along with a beeper and USB control pins:

In this case, there is a normal resistor for baseline current limitation, which acts as a bottleneck for the maximum current through the LEDs. Alongside each resistor is a variable resistor to further reduce the current and vary/tune the intensity of each LED row. These CAT4s allow the negative side of the LED matrix to selectively connect to ground by way of the ground ethernet cable. Once all was soldered, it was a matter of mating this board, the Arduino Mega, and the MOSFET control shield.

It also helped to have a separate power distribution board, with positive and negative rails for the 5V Arduino power supply, and positive and negative rails for the 9V power supply that drives the LEDs.  After connecting the power leads up and stripping and adding pins to eight LEDs and two ethernet cables, it was time for the test: The moment of truth for my soldering and circuit building skills.

Aside from one cold solder joint that I had to fix, everything worked.

As you can probably see from the top left corner of the above image, there is a large pile of LEDs that all have wires soldered on, pins crimped, and heat shrink tubing applied to  them. This was a very fun intermediate step that requires no further explanation but will now be mildly complained about. That level of repetition is not actually fun, but very necessary.

Speaking of, the next thing on the list was painting the LEDs black. Since the goal of good RTI is to eliminate as many environmental variables as possible, the inside of the LED dome must be as non-reflective as we can make it, so that internal reflections don't accidentally create multiple other, albeit dimmer, light sources within. This means even the area around the actual LED chips had to be painted black, with the only other exception being a small portion of the positive pad so as not to mix up the polarity when it came time to wire everything together.

Now that this bit was out of the way, it was dome prep time. The domes we got were clear, which helped with marking the outside for LED positions and then transferring the marks to the inside. I did this using some clever geometry math mostly outlined by Leszek in his documents. I, however, had the idea to melt the marks into the inside of the dome using the soldering iron, which caused a rather unpleasant smell to fill up the room that may or may not have been an indicator of toxic fumes. This step was necessary so that the marks showed up even with the paint over them. Next was just that: painting both the small and big dome using exactly one entire can of matte black spray paint.

Above is an image from the spray booth inside Regis Center for Art, where I applied multiple coats of the paint in a highly impatient fashion. Everything turned out fine, since the drips in the small dome would be inconsequential to the build.

After everything was dry, it came time to mount the LEDs inside the big dome, since that was the one we decided to finish first to make sure everything was working as intended. Each LED was hot glued in place with the positive ends of all LEDs pointing in the counterclockwise direction.

The above photo shows the placement of the LEDs as well as part of the following step: the stripping and crimping of the many chains of wires that connect up the LED matrix. Each LED has a male Dupont pin header attached, while each chain of connectors has properly spaced female Dupont pin headers. Every concentric circle of eight LEDs represents a negative row, and every vertical set of eight LEDs (offset to help with lighting coverage) represents a positive column.

With all of the rows and columns wired up, the open ends of each string of wires were to connect with the male dupont pins I crimped onto the ends of both the positive and negative ethernet cables’ individual wires. Every part of the control circuitry was made to correspond with matching pins 1 through 8 on the ethernet cables, such that it becomes incredibly easy to know which Arduino pin corresponds with which row and column number and which LED. The next step was then to clean up the wiring and turn the dome into a more self-contained piece.

Behind this piece of special matte tape (3M chalkboard tape), which has about as much stick as a thirty-year-old Post-It, the connections to the ethernet cables are pushed down and hot glued in place. With the finishing touches in place on the dome, a successful test was run and it was time to contain the control circuitry in a cozy project box.

The Arduino, MOSFET shield, CAT4101 board, and power board were all hot glued in place after drilling the requisite holes for the bells and whistles. The two potentiometers were wired in and bolted on to control the LED on-time and delay, since these can vary between shooting scenarios. A reset button, preview button, and action button are all shown above, and are used to trigger the reset, single-light preview, and main capture functions in the Arduino's programming. This image was taken before I added the main 2.5mm jack used to snap the camera shutter and the switches that select the operation modes. At the right side of the box are the ethernet jacks for power to the dome (top is positive and bottom is negative), and the USB connection for cameras without a way of connecting to the 2.5mm shutter switch. Last is the OLED screen, which displays information that helps eliminate mistakes and aids debugging when repairs and/or changes must be made.

On the subject of the control box, I personally made changes to the construction in a few ways:

  • I accidentally busted a switch because the soldering iron was too hot, so I removed the sound on/off function which ended up being fine since I think I messed up the wiring of the beeper anyway.
  • I completely removed the USB shutter, servo shutter, and IR/Bluetooth shutter options because we only really need a 2.5mm auto shutter and a manual mode.
  • I added a small transistor circuit to short the 2.5mm jack to ground using an Arduino digital output in order to actually make the 2.5mm auto shutter function work (see the sketch just after this list).
  • I added a three way switch to reduce the number of switches, but increase the functions possible.
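The shutter addition mentioned a couple of bullets up needs surprisingly little code. Here is a hedged sketch of what that trigger could look like; the pin number and pulse length are placeholders, and the transistor arrangement (an NPN across the jack's shutter contact and ground, with its base driven through a resistor) is the generic approach rather than a diagram of exactly what is in our box.

// Hedged sketch of the 2.5mm auto-shutter trigger; pin and timing are placeholders.
const int SHUTTER_PIN = 2;               // drives the transistor's base through a resistor

void setupShutter() {
  pinMode(SHUTTER_PIN, OUTPUT);
  digitalWrite(SHUTTER_PIN, LOW);        // transistor off, shutter contact open
}

void snapShutter() {
  digitalWrite(SHUTTER_PIN, HIGH);       // transistor conducts, shorting the jack to ground
  delay(150);                            // hold the "press" long enough for the camera to register it
  digitalWrite(SHUTTER_PIN, LOW);        // release
}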

This is the front of the final form of the box:

After the box came wiring up the small dome. The steps were the same as for the big one, just scaled down, so I'll spare you the boring details other than the fact that this one contains only 48 LEDs, with eight columns and six rows.

With the hardware complete, I also had to change some of the Arduino code, which Leszek provides with his Hackaday project. The changes to the code are outlined below:

  • Removed sound function and USB and IR/Bluetooth shutter functions
  • Added a dome mode switch, so either dome can be used without reprogramming the control box (see the sketch just after this list)
  • Added a new white balance preview function for calibrating the camera because the old one was tedious
  • Added new display functions to show new modes above
  • Added a new function to snap the 2.5mm shutter
  • Added a function to skip intro screen because of impatience and changed the intro screen to display relevant version info of the new software build.
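And here is the sketch promised above for the dome mode switch. The change is small in code terms: reading the switch only has to set how many rows of the LED matrix get cycled. The pin number and names are placeholders, not the actual firmware.

// Hedged illustration of the dome-mode switch: the big dome uses 8 rows (64 LEDs),
// the small dome 6 rows (48 LEDs). The pin number is made up.
const int DOME_SWITCH_PIN = 3;

int numRows = 8;                         // default to the big dome

void readDomeMode() {
  pinMode(DOME_SWITCH_PIN, INPUT_PULLUP);
  if (digitalRead(DOME_SWITCH_PIN) == LOW) {
    numRows = 6;                         // small dome: 8 columns x 6 rows = 48 LEDs
  } else {
    numRows = 8;                         // big dome: 8 columns x 8 rows = 64 LEDs
  }
}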

Oh, and one last minute addition before this goes up: I added a remote preview button so that the exposure settings and focus can be adjusted from the computer without getting up to turn the preview function on and off, making the setup process a bit less tedious. This feature is just a wired button that plugs into the control box at the front and is completely optional since I kept the original button in place.
The final post is a conclusion and thoughts on the lessons learned from this build.

Building an Automated RTI Dome: Wrapping It Up

This is the third post in a series, written by  Kevin Falcetano.  See the first post for an introduction and the second post for the build process.

The RTI project was not without its share of problems, mistakes, and general annoyances, but it was ultimately both interesting and rewarding to go through. I’ve separated my concluding remarks into three distinct semi-chronological sections under which the trials, tribulations, and triumphs of this project are detailed.

Building

The construction of this project required many time-intensive repeated steps, but the hardest part, the planning, had already been worked out down to a detailed parts list and the very placement of the circuit components. Once again, a great many thanks to Leszek Pawlowicz for all of this work. In all of the soldering, stripping, crimping, and wiring, I made only a few easily correctable mistakes. I attribute that success to those thorough instructions and guiding images.

A few lessons were learned, still:

  • Helping hands are wonderful. Use them.
  • Lead-free solder may or may not kill you more slowly, but it has a higher melting point. What that really means is it requires a delicate balance between "hot enough to heavily oxidize your soldering tip" and "too cool to flow the solder quickly." Throw the balance too far to one side or the other and you get the same result: the heat spreads further over the extended heating period (by way of either too low a temperature or restricted conductivity through an oxide layer too thick to be affected by the flux core), melting plastics easily and possibly damaging sensitive components. I killed two switches and probably a piezo speaker this way. Be careful or use leaded solder.
  • Paint drips are fine if all you need is coverage.
  • 3M chalkboard tape may be matte black, which is wonderful for the inside of our domes, but it doesn’t stick. At all.
  • Hot glue is your friend. It's not pretty or perfect, but it's accessible. What use is a build if you can't repair it?
  • If one LED isn’t working in your matrix of LEDs, it is, logically, the LED that’s the problem. And by the LED being the problem I mean you are the problem. You put it in backwards but were too proud to check. The D in LED stands for diode – one way.
  • Be sure to have access to a Dremel when trying to put large square holes in a hobby box. Or have low standards.
  • When taking on a wiring project of this caliber, have an auto stripper handy. I may have lost my mind if I hadn’t used one.
  • Leave yourself some slack in the wires in case you break a dupont pin.
  • Don’t break dupont pins.

Calibrating and Processing

The calibration process was initially a bit of a headache, but I still must be grateful to Leszek and Cultural Heritage Imaging for their open software, without which more frustration would have ensued.

CHI provides a Java applet that generates lp files. The file extension stands for light positions (or light points) and is pretty self-explanatory: an lp file is a plain-text file listing the light angle for each of the lighting positions, in order from the first picture taken to the last. This is done automatically with some clever math and dodgy code. CHI has told us they have a more stable version in the works, but it's not quite done yet. Anyway, calibration is done using a black sphere (in this case also provided by CHI) that reflects each LED in the dome as a bright point. The sphere gets centered under the dome and a pass is run, taking a photo for each light angle. The rotational orientation of the dome relative to the camera is marked so it can be kept consistent for future use. Because the light positions don't change as long as the dome is rotated to the correct position over the object, the lp file may be reused.
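For reference, an lp file is nothing exotic. A made-up three-image example might look like this, with the image count on the first line and then, for each photo, the filename and the normalized x, y, z direction of the light that lit it (values below are illustrative only, not from our dome):

3
capture_01.jpg 0.8137 0.0000 0.5813
capture_02.jpg 0.5754 0.5754 0.5813
capture_03.jpg 0.0000 0.8137 0.5813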

The program uses an edge detection algorithm to figure out the radius, and in turn the center, of the sphere. After that, it uses the known geometry of the sphere and a highlight detection algorithm to find the reflection point of each LED in each picture, and then uses that point's position relative to the center and radius of the sphere to calculate the angle of each light. For H-RTI, this is supposed to be done every time with a reference sphere next to every subject, but as I said before, we can reuse the lp because everything is fixed. This applet is glitchy and cumbersome, but it works when treated properly. It will run out of memory if we use full-resolution photos from the Canon 5D, but downscaling them does not introduce any significant error into the calibration process. It also has very strict guidelines for filenames and a clunky default user interface, the indicators of purely functional software. It's useful, but tedious.
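For the curious, here is my rough summary of the math behind the highlight method (a paraphrase, not CHI's exact implementation). If the sphere's center is detected at pixel (c_x, c_y) with radius r, and the highlight in one image is found at (h_x, h_y), the sphere's surface normal under that highlight is

\[ \mathbf{n} = \left( \frac{h_x - c_x}{r}, \; \frac{h_y - c_y}{r}, \; \sqrt{1 - \frac{(h_x - c_x)^2 + (h_y - c_y)^2}{r^2}} \right) \]

and, approximating the view direction as \(\mathbf{v} = (0, 0, 1)\) straight down the camera axis, the light direction recorded for that image is the mirror reflection

\[ \mathbf{l} = 2(\mathbf{n} \cdot \mathbf{v})\, \mathbf{n} - \mathbf{v} \]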

Leszek has his own software that strips the lp file of its association with the calibration sphere image files and stores it for future use as a general lps file. This can then be used in his program to create a new lp for each RTI run associated with the new image files, and then it and the photos get run through various command line scripts for different RTI and PTM (polynomial texture map) processing methods. This ensures that, after initial calibration, subsequent workflow is very fast and efficient, with minimal fiddling required.

Customizing

Of course, there are some things specific to our use cases that a third party couldn’t anticipate, so some changes were made. The biggest change in functionality, but arguably the easiest change to make programming-wise, was the addition of a mode switch to toggle which dome is in use. This allows the automatic, manual, and preview functions of the control box to change depending on which dome is being used. There are more details on how I changed the hardware and code in the build blog post, but all in all it was relatively painless. Leszek’s code is very clean and manageable, so I was able to easily navigate and change what I needed (which was quite a bit). It worked perfectly from the beginning, but not for exactly what we needed it for.

Final Thoughts

My rough estimate puts a first-time build of this project from start to finish at around 80 hours. It was fun overall, and not terribly frustrating save a few parts. We are currently thinking through how to make a custom shield PCB to optimize the build process of this RTI dome’s control unit, since there may be demand for a pre-built rig from others at the University. It would be very difficult to streamline the dome building process considering it is such a custom endeavor. I could see a template being made for mounting the LEDs and placing them in the domes, possibly with ribbon cables, but it still seems pretty far off.

Although there are certain significant limitations, RTI appears, in our testing, to be a very versatile tool, especially when automated in the way we have it. We are working with researchers from across campus to explore the possibilities, and the speed and precision of these domes are a huge benefit to that work. In the future, we hope to be able to combine RTI and photogrammetry to produce models with surface detail we have yet to come close to being able to capture.

If you would like to take advantage of our ever-expanding imaging resources, or would simply like to chat with us about what we’re doing and how we’re doing it, feel free to contact aisos@umn.edu.

Interactive RTI example: http://aisos.elevator.umn.edu/asset/viewAsset/58d2c8587d58aee00df62673

Gopherbaloo: An Action-Packed and Fun-Filled Day of History!

By Andy Wilhide, DCL Research Assistant

On a cold, wintery Saturday afternoon, I followed some unusual visitors into Wilson Library: teenagers. They streamed in—by themselves, in pairs, in groups, and in families—talking excitedly and carrying books and bags with them. What brought them here on a weekend afternoon? Gopherbaloo, an annual event sponsored by the University of Minnesota Libraries and the Minnesota Historical Society’s History Day program. The afternoon included power conferences with History Day experts, research sessions, project workshops, and presentations on archives. Oh, and pictures with the History Day Moose—I got one too! There were raffles where students could win large exhibit boards, t-shirts, and other History Day swag.

The real prize was the opportunity for students to get some feedback on their History Day projects and to discover new resources that could help make their projects stronger. That’s where I came in—I was there to show off the Digital Content Library and help students, parents and teachers explore this unique archive.

This year’s History Day theme is “Taking A Stand.” Several students stopped by my table to check out the DCL. With the topics they had chosen, it was a bit hit and miss, but we did find some interesting materials, including dance photographs from the Alvin Ailey American Dance Theater, photographs and documentaries about Susan B. Anthony, a documentary about the L.A. Race Riots in 1992, and various materials connected to Copernicus, Kepler and Galileo—the scientists behind the theory of heliocentrism.

In preparation for the event, I made media drawers of potential History Day topics. These drawers are available to anyone who is logged into the DCL. One media drawer is dedicated to this year’s theme of “Taking A Stand.” Using the keywords “demonstrations” and “protests,” I found records ranging from a Communist rally in New York City (1930) to the Soweto Uprising in South Africa (1976) to the Poor People’s Campaign in Washington D.C. (1968) to Tiananmen Square Protests in China (1989). Many of these images came from the Department of History and the Department of Art History. One set of materials caught my eye—a series of photographs of the Iranian Revolution in 1978, taken by William Beeman, a professor in Anthropology.


Most History Day students rely on Google to find their images and resources, but we did find some materials that were not easily available through a Google search. Students, parents and teachers left my table excited to explore more of the treasures in the DCL, but from the comfort of home. Herein lies a challenge in sharing the DCL with public audiences: while public viewers can see most of what is on the DCL, they cannot download any of the images. This can be a deterrent for History Day students who need those images for their projects, which may be an exhibit, a documentary, a website or a performance. This is our first year connecting the DCL to History Day. We’ve made guest accounts for users outside of the U of M to access the DCL. We’ll see how it goes and report back!

The History Day State Competition will be held on the University of Minnesota campus, April 29, 2017.

Thank you to Lynn Skupeko and Phil Dudas from Wilson Library and the Minnesota History Day staff for inviting us to be part of Gopherbaloo. We hope to come back next year!

Links:
http://education.mnhs.org/historyday/
@MNHistoryDay [Twitter]

The DCL is available to anyone with a U of M x500 account. If you know of someone who is not affiliated with the U of M but would like a guest account to access the Digital Content Library, please have them contact Denne Wesolowski, weso0001@umn.edu

Meet Joan Assistant: The Meeting Room Reservation Solution

By Chris Scherr and Rebecca Moss

When we changed the layout of Anderson 110 from a cubicle farm and help desk setup to an open, customizable space, we turned four of the former offices into reservable meeting rooms. We created Google calendars for each so groups could reserve them, but spontaneous uses of these spaces were difficult to accommodate. We needed a solution that allowed folks in And110 to see if the rooms were reserved or free, and let them add a meeting on the spot.

Joan Assistant offered us the flexibility we were looking for, at a modest cost, and without the need to pay for upgrading the wiring in the room. Here is a quick video overview of how Joan Assistant works. We hope that sharing our experience might help others on campus who are looking for a lightweight solution to scheduling spaces.

Once we had the Joan Assistant, we had to figure out a way to connect it to our wireless network, which has a lot more security protocols than your average wifi setup. We gave the task to Chris Scherr, LATIS system engineer, who worked with Network Telecommunication Services (NTS) to figure out how to get it securely connected and online. Once that was accomplished, we bought two additional units and installed them by the meeting rooms.

There are three versions of Joan Assistant, and the only version supported on campus is the Executive, since enterprise WPA2 is needed to authenticate to the wireless network – https://joanassistant.com/us/store/.

The company is also offering a limited edition 9.7 inch screen which I assume is enterprise WPA2 ready:  https://joanassistant.com/news/9-7-limited-edition/

Setup/configuration:

Any unit in CLA can use our on-premises server; we would just need to work with the owner of the room calendar to grant the proper permissions to the Joan server.

Other colleges:

Several items are needed (any IT staff are more than welcome to contact Chris directly – cbs@umn.edu)

From my experience setting this up, here is the order I would follow for a new setup:

1) Obtain a sponsored account for wireless access for the devices (IT will not grant departmental accounts access to wireless).  Request here:  https://my-account.umn.edu/create-sponsor-acct

2) Get a departmental account to serve as the Google Calendar tie-in. This account can own the room calendars or be granted full access to existing room calendars. (https://my-account.umn.edu/create-dept-acct)

3) Get an on-premises server. (https://joanassistant.com/us/help/hosting-and-calendar-support/install-joan-premises-server-infrastructure/)

The simplest solution is to request a self-managed (SME) Linux VM from IT (1 core and 2 GB of RAM are adequate, although two network adapters are required). Ask them to download the .vmdk from the link above, as it is a preconfigured Ubuntu 14 VM. Once the VM was set up, I did the following configuration: limited ssh and 8081 traffic via iptables, changed the default root and joan user passwords, and set IP addresses / DNS names / CNAMEs via servicegateway.

4)  Joan Configurator (https://portal.joanassistant.com/devices/wifi-settings)

Download and configure each Joan device with the Joan Configurator, available from the link above. You will need to specify the DNS name or IP address of the on-premises server and authenticate to wireless via the sponsored account created in step 1, using PEAP. Once configured, charged, and unplugged from USB, the unit should display an 8-10 digit code. If the unit is not responding, gently shaking it will wake the device from standby.

5) Joan portal (https://portal.joanassistant.com/devices/)

Several items need to be configured here:

User Settings

a) Hosting selection –  Make sure On-premises is selected

b) User settings – You may want to set additional contact emails for low battery warnings etc.

Calendar Settings

a)  Calendar – Connect to google calendar via the departmental account created in step 2

b)  Room Resources – Scan and add the room resources you want your Joan device(s) to be able to display/manipulate.

Device settings:

a)  Devices – Click Add Device; you will be prompted to enter the 8-10 digit code from step 4 along with the time zone and default calendar.

b)  Optional settings – Logo, office hours, and features can be changed or defined here.

6)  Monitoring:  You can get quick information about your device from the on-premises server web portal.  It can be accessed here:   http://yourservername.umn.edu:8081/

Devices – This will show all devices configured to use your on-premises server. You can click on any device and get information about it. I would suggest renaming the devices to the room number or calendar name. Live View will confirm the server is sending the correct image to your unit(s). You can also check Charts to show battery levels over time, signal strength, disconnects, etc.

The e-ink screens require very little power so the batteries last a month or more before needing recharging. When batteries get low, it will send emails to whomever you designate. The power cords are standard ones and it takes about 6-8 hours to charge the device.

The units connect to the wall with a magnet so they will not be secure in an open space without additional configurations. Here in And110, we have them mounted on the door signs already located outside each door. We customized the units so they have the U of M logo. We are still in the testing stage with these devices, but our experience so far has been very positive.  Please contact us if you would like to learn more – latis@umn.edu – or come see them in person in Anderson 110 on the West Bank.

The Stamp Project: Extruding Vector Graphics for 3D Printing Using Tinkercad

By Rachel Dallman

I have recently been experimenting with 3D printing using the ETC Lab's MakerBot Replicator Mini, printing different open-source models I found on Thingiverse. I wanted to start printing my own models, but found traditional full-scale 3D modeling software like Blender and Autodesk Maya to be intimidating as a person with minimal modeling or coding experience. In my search for a user-friendly and intuitive modeling platform, I found Tinkercad – an extremely simplified browser-based program with built-in tutorials that allowed me to quickly and intuitively create models from my imagination. The best part about the program, for me, was the ability to import and extrude vector designs I had made in Illustrator.

Tinkercad's easy-to-use interface

For my first project using this tool, I decided to make a stamp using a hexagonal graphic I had made for the ETC Lab previously.

My original graphic is colored, but in order to extrude the vectors the way I wanted to, I had to edit the graphic to be purely black and white, without overlaps, meaning I needed to remove the background fill color. I also had to mirror the image so that it would stamp the text in the correct orientation (I actually failed to do this on my first attempt, and ended up using the print as a magnet since it would stamp the text backwards).  I’m using Illustrator to do all of this because that’s where I created the graphic, but any vector based illustration software will work (Inkscape is a great open source option!). You can also download any .SVG file from the internet (you can browse thousands at https://thenounproject.com/ and either purchase the file or give credit to the artist). If you’re confused about what parts of your image need to be black, it’s helpful to imagine that all of the black areas you create will be covered in ink. Below is a picture of what my image looked like in Illustrator after I had edited it.

To do this, I started by selecting each individual part of my graphic and changing the fill and stroke color to black, and removed the fill from the surrounding hexagon. To reflect the image, I selected everything and clicked Object > Transform > Reflect. My stamp-ready file looks like this:

In order for Tinkercad to read the file, I had to export it in .SVG format by going to File > Export > Export As… and choosing .SVG in the drop-down menu. If you're doing this in Illustrator, you'll want to use the following export settings:

I then opened Tinkercad and imported my file. Much to my dismay, when I first brought the .SVG file into Tinkercad, it couldn't recognize the file format, so I had to do some digging around online to figure out what was going on. I found that the problem was with the way Illustrator exports the .SVG file: it apparently exports at SVG version 1.1, while Tinkercad can only read version 1.0, so I had to manually mark the file as the earlier version. To do that, I downloaded Atom, an open-source code editor, pasted the following line of code at the very beginning of the file, and saved it. This step might be irrelevant to you depending on the software you're using, so be sure to attempt importing the file into Tinkercad before you change any of the code.

<?xml version="1.0"?>

I then imported the updated file, ending up with this solid hexagon. This was not what I wanted, and I assumed that Tinkercad was simply filling in the outermost outline it detected from my vector file. Apparently, the price of the program's simplicity is a number of limitations like this one.

After I noticed that it was possible to manually create hexagons in Tinkercad, I decided to go back into Illustrator and delete the surrounding hexagon and then simply build it back in Tinkercad after my text had been imported. Depending on the complexity of your design, you may decide to do it like I did and build simple shapes directly in Tinkercad, or you may want to upload multiple separate .SVG files that you can then piece together. This is what my new vector file looked like after I imported it.

Next, I wanted to make the base of the stamp, and a hexagonal ridge at the same height of my text that would stamp a line around my text like my original vector file. To do this, I selected the hexagonal prism, clicking and dragging it onto the canvas. I then adjusted the size and position visually by clicking and dragging the vertices (hold Shift if you want to keep the shape proportionate) until it fit the way I wanted it to. I then duplicated the first hexagon twice by copying and pasting. I then scaled one of those hexagons to be slightly smaller than the other and placed it directly on top of the other, until their difference was the border size that I wanted. I then switched the mode of the smaller hexagon to “Hole” in the righthand corner, so that my smaller hexagon would be cut out of the larger one, leaving me with my hexagonal border. Next, I positioned the hollow hexagon directly on top of the base, and extruded it to the same height as my letters, so that it would stamp. For precise measurements like this, I chose to type in the exact height I wanted in the righthand panel. My final stamp model looked like this:

Then, I downloaded the model as an .STL file, opened it in our MakerBot program, and sized it for printing. Around three hours later, my print was ready and looked like this:

 

As you can probably tell, the stamp has already been inked. While my print turned out exactly the way I planned it to, I found that the PLA material was not great for actually stamping. On my first stamp attempt, I could only see a few lines and couldn’t make out the text at all.

 

I assumed that the stamping problems had something to do with the stamp’s ability to hold ink, and the stiffness of the plastic. I decided to sand the stamp to create more grit for holding ink, and tried placing the stamp face up with the paper on top of it instead, allowing the paper to get into the grooves of the stamp. This process worked a bit better, but still didn’t have the rich black stamp I was hoping for.

 

Because of this difficulty with actually stamping my print, in the end, I actually preferred my “mistake” print that I had done without mirroring the text, and turned it into a magnet!

 

This process can be applied to any project you can think of, and I found the ability to work in 2D and then extrude extremely helpful for me, as I feel more comfortable in 2D design programs. Tinkercad was simple and easy to use, but its simplicity meant that I had to do a few workarounds to get the results I wanted. I’m still troubleshooting ways to make my stamp “stampable”, and would appreciate any ideas you all have! As always feel free to come explore with us for free in the ETC Lab by attending Friday Open Hours from 10:00am to 4:00pm, or email etclab@umn.edu for an appointment.

 

Painting in Virtual Reality: Exploring Google Tilt Brush

By Thaddeus Kaszuba-Dias

Currently in the ETC Lab we have been experimenting with the capabilities of Google Tilt Brush, a virtual reality (VR) program designed for the HTC Vive headset. This program is essentially a 3D sketchbook where you can create, draw, design, and experiment to your heart's content in a virtual 3D space. It takes the idea of drawing on paper and gives you a third dimension, with a very cool palette of brushes and effects to play with to make your pieces come to life.

Me playing around in Tilt Brush

Some of the questions we have been asking are “What are the capabilities of this program in CLA? If there are multiple people working with this program, what is the vocabulary like? How does that vocabulary shift when creating a 2D to 3D piece?” These are things that we in the ETC lab will be exploring more and more with the many diverse guests we have in our space!

As of now, Google Tilt Brush is a very interesting program for experimenting with imported 3D assets that others have developed in programs such as Maya, Google SketchUp, or Blender. Users can then add their own creative spins to these imports using the tools provided in Tilt Brush, and the results can be exported to be played with in their respective platforms.

Screenshot of a drawing on top of an imported model in Tilt Brush

There is also a capacity for storytelling in Google Tilt Brush. The creative, and perhaps perfectionist, mindset thrives there. For those wishing to express a single emotion, tell a story, or convey a thought visually, Google Tilt Brush seems to have just the right set of tools and atmosphere for that kind of work. All pieces created in Tilt Brush can be exported as 3D models to be uploaded to galleries, websites, or blogs. It could well be a revolutionary tool for creatives on a budget. And don't worry, it's free of charge to use here in the ETC Lab!

Animated .GIF recorded in Tilt Brush

The Virtuality Continuum for Dummies

By Michael Major

Do you watch TV, surf the internet, or occasionally leave your residence and talk to other human beings? If you can answer yes to any of those, you have probably heard of virtual reality, augmented reality, or mixed reality. Perhaps you have seen one of Samsung's Gear VR commercials, played Pokemon GO, taken a Microsoft Hololens for a test drive, or watched a YouTube video about the mysterious Magic Leap. If you are anything like me, then your first experiences with these new realities left you with a lot of questions: What do the terms virtual reality, augmented reality, and mixed reality even mean? What are the differences between VR, AR, and MR? Google searches may bring you to confusing articles about the science that makes the blending of realities possible, which is extremely overwhelming. So let's break these concepts down into terms that we can all understand.

The first step to understanding the virtuality continuum is grasping the difference between the real environment (blue section labeled RE on left in the diagram below) and a completely virtual environment (labeled VR on the right in the diagram below).

Source: http://smartideasblog.trekk.com/augmented-or-virtual-how-do-you-like-your-reality

The real environment is the physical space that you are in. For example, I am sitting in a real chair, at my real desk, in a real room as I am writing this.

I can feel the keys on my keyboard, see my co-workers walk to the break room, hear them describe their weekends, smell the coffee that gets brewed, and taste the sandwich that I brought for lunch.

A completely virtual environment is a digital environment that a user enters by wearing a headset.

So let’s say I load up theBlu (a VR application that lets the user experience deep sea diving) and put on the HTC Vive headset and a good pair of noise cancelling headphones. I will no longer see or hear any of my co-workers, or anything else from the room that I am physically standing in. Instead, I will see and hear giant whales! I am also able to look around in all directions as though I am actually there in the ocean.

The next step to understanding the Virtuality Continuum is knowing the difference between augmented reality (teal section labeled AR in the diagram below) and augmented virtuality (green section labeled AV in the diagram below).

Source: http://smartideasblog.trekk.com/augmented-or-virtual-how-do-you-like-your-reality

Augmented reality involves layering digital content on top of a live view of the physical space that you are in. A fun example of this is how Vespa, a company that sells motorized scooters, hired a company called 900lbs of Creative to create an augmented reality app that lets you customize your own scooter by holding your phone up to their ad in a magazine as if you were going to take a picture of it.

The app recognizes the blue pattern on the page of the magazine and then adds the 3D model of the scooter to the screen on top of that blue pattern. Without this app, the man would just be looking at a magazine sitting on a table, instead of being able to see both the magazine and a digital scooter that he can customize and even drive around on the table!

Augmented virtuality is when objects from the user’s real-world environment are added to the virtual environment that the user is experiencing. Let’s dive back into theBlu to explore an example of augmented virtuality. Imagine that I have added some sensors to the front of the Vive headset. These sensors have the ability to recognize my hands and track their movements. Now I can turn this completely virtual experience into an augmented virtuality experience in which I can see and use my hands inside the virtual environment.

Note: these sensors are not yet available for VR headsets (as of December 2016). However, Intel has a product called Intel RealSense Technology which allows cameras to sense depth and is used in some computer and mobile phone applications. But let’s imagine that I do have this kind of sensor for the Vive.

With this new technology, I could add cool features to theBlu such as the ability to pick up virtual seashells with my hands instead of using a controller to do so. Or I could swim around in the virtual ocean by doing a breaststroke instead of holding down a button on the controller. This would make my virtual experience much more immersive.

The last step in understanding the virtuality continuum is figuring out what people mean when they refer to mixed reality (green section labeled MR in the figure below).

Source: http://smartideasblog.trekk.com/augmented-or-virtual-how-do-you-like-your-reality

By definition, the term mixed reality means “the merging of real and virtual worlds to produce new environments and visualizations where physical and digital objects co-exist and interact in real time.” So augmented reality and augmented virtuality are both technically under the mixed reality umbrella because they are mixing the real world with digital content in some way or another. However, a company called Magic Leap has recently started to refer to their experience as a mixed reality experience in an effort to set themselves apart from augmented reality and augmented virtuality technologies. When Magic Leap uses the term mixed reality, it is meant to describe a technology that makes it difficult for the user to discern between what is real and what is digital, as if everything that the user is experiencing is part of the real world. I must admit, if the videos that Magic Leap has released are accurate, then their technology really is in a league of its own. Take a look at their website and decide for yourself.

There you have it, the Virtuality Continuum. Now you will know what people are talking about when they refer to virtual reality, augmented reality, or anything in between.

 

Virtual Reality and Immersive Content for Education

By Jake Buffalo

Virtual reality devices and applications are becoming increasingly popular, especially as the years go on and the technologies required to deliver virtual reality experiences are becoming more affordable. So, if you think virtual reality is a thing of the future, think again! There is a lot of content available today and suitable devices for experiencing virtual reality are not hard to find. Although there is a huge focus on video games for these types of devices, there is a lot you can do with them to create an educational experience for users.

In the ETC Lab, we have the HTC Vive and the Google Cardboard accessible for CLA students and professors to try out! In this blog post, I will give you a brief overview of each and let you know what kind of educational purposes they can have. We have a great list of some different apps and experiences that we found to be of interest. Take a look here: https://docs.google.com/document/d/1QJBMTpOtGAqF3P_5E7BMaUj518F3QBMV-OsroUe2nDw/edit#

 

HTC Vive

The HTC Vive is a headset that brings you into the world of virtual reality through a variety of different apps available in the Steam VR store online. The Vive system also includes controllers that allow you to move around in the different scenes and perform different actions depending on which application you are using.

There are many different apps available for the Vive, ranging anywhere from artistic and cultural experiences to immersive games. For example, you may be put “inside” an audiobook through immersive storytelling and placed inside scenes that bring the narration to life. There are also apps that put you inside the paintings of famous artists like Vincent Van Gogh or allow you to walk through a museum with different historical exhibits. The options really are endless and can be applied to a vast array of CLA majors and your specific interests!

 

Google Cardboard

The Google Cardboard is a handheld viewer that holds your smartphone and lets you look through its eyeholes at the screen to view immersive virtual content. Originally, the device was only available in actual paper cardboard, but now viewers come in all types of materials and designs, with plenty of sturdier options for a better experience. One cool thing you can do with the Cardboard is watch 360° videos on YouTube from different places around the world. You can go on a tour of historic locations you have always wanted to experience, like Buckingham Palace, or even watch Broadway's "The Lion King" from the perspective of the performers. There are some experiences that are more artistically based–allowing you to experience the "Dreams of Dali" or enter into Bosch's "Garden of Earthly Delights"–that may be relevant for different art majors at the U as well.

In addition, you can download apps on your iPhone or Android and use them in the Google Cardboard. There are many different apps available that relate to art, culture, science, journalism and more! For example, New York Times has different news stories that are available in a virtual reality narrative with the Cardboard. You can even experience the life of a neuron wandering around the human brain. Augmented reality is another feature available that overlays visual content on top of your real-life surroundings using the camera on your phone.

Overall, virtual reality is not just for “nerds”–there are programs available for everyone and we continue to find more possibilities every day. So, don’t forget to check out the list of apps we have made to help you get started with virtual reality at the top of the page and get in touch at etclab@umn.edu or come to our Friday 10am-4pm open hours if you want to try out some of these cool technologies!