The Stamp Project: Extruding Vector Graphics for 3D Printing Using Tinkercad

By Rachel Dallman

I have recently been experimenting with 3D printing using the ETC Lab's MakerBot Replicator Mini, printing different open-source models I found on Thingiverse. I wanted to start printing my own models, but as a person with minimal modeling or coding experience, I found traditional full-scale 3D modeling software like Blender and Autodesk Maya intimidating. In my search for a user-friendly and intuitive modeling platform, I found Tinkercad, an extremely simplified browser-based program with built-in tutorials that let me quickly and intuitively create models from my imagination. The best part of the program, for me, was the ability to import and extrude vector designs I had made in Illustrator.

Tinkercad's easy-to-use interface

For my first project using this tool, I decided to make a stamp using a hexagonal graphic I had made for the ETC Lab previously.

My original graphic is colored, but in order to extrude the vectors the way I wanted, I had to edit it to be purely black and white, without overlaps, which meant removing the background fill color. I also had to mirror the image so that it would stamp the text in the correct orientation (I actually failed to do this on my first attempt, and ended up using the print as a magnet since it stamped the text backwards). I'm using Illustrator for all of this because that's where I created the graphic, but any vector-based illustration software will work (Inkscape is a great open-source option!). You can also download any .SVG file from the internet (you can browse thousands at https://thenounproject.com/ and either purchase the file or give credit to the artist). If you're confused about which parts of your image need to be black, it helps to imagine that every black area you create will be covered in ink. Below is a picture of what my image looked like in Illustrator after I had edited it.

To do this, I started by selecting each individual part of my graphic, changing the fill and stroke color to black, and removing the fill from the surrounding hexagon. To reflect the image, I selected everything and clicked Object > Transform > Reflect. My stamp-ready file looks like this:

In order for Tinkercad to read the file, I had to export it in .SVG format by going to File > Export > Export As… and choosing .SVG in the drop-down menu. If you're doing this in Illustrator, you'll want to use the following export settings:

I then opened Tinkercad and imported my file. Much to my dismay, when I first brought the .SVG file into Tinkercad, it couldn't recognize the file format, and I had to do some digging around online to figure out what was going on. The problem turned out to be the way Illustrator exports the .SVG file: Illustrator exports SVG version 1.1, but Tinkercad could only read SVG 1.0, so I had to manually mark the file as the earlier version by adding a single line to the top of it. I downloaded Atom, an open-source code editor, pasted the following line at the very beginning of the file, and saved it. This step may not be necessary depending on the software you're using, so be sure to try importing the file into Tinkercad before you change any of the code.

<?xml version="1.0"?>
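If you'd rather not edit the file by hand, a small Python script can prepend that declaration for you. This is just a minimal sketch, assuming your exported file is named stamp.svg and sits in the same folder as the script:

SVG_PATH = "stamp.svg"  # assumption: the file exported from Illustrator

with open(SVG_PATH, "r", encoding="utf-8") as f:
    contents = f.read()

# Only add the declaration if the file doesn't already start with one
if not contents.lstrip().startswith("<?xml"):
    with open(SVG_PATH, "w", encoding="utf-8") as f:
        f.write('<?xml version="1.0"?>\n' + contents)

Either way, the end result is the same: the declaration becomes the very first line of the file.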

I then imported the updated file, ending up with this solid hexagon. This was not what I wanted, and I assumed that Tinkercad was simply filling in the outermost lines it detected in my vector file. Apparently, limitations like this are the price of the program's simplicity.

Once I noticed that it was possible to create hexagons directly in Tinkercad, I decided to go back into Illustrator, delete the surrounding hexagon, and simply rebuild it in Tinkercad after my text had been imported. Depending on the complexity of your design, you may decide to do it like I did and build simple shapes directly in Tinkercad, or you may want to upload multiple separate .SVG files that you can then piece together. This is what my new vector file looked like after I imported it.

Next, I wanted to make the base of the stamp, plus a hexagonal ridge at the same height as my text that would stamp a border around it, like in my original vector file. To make the base, I selected the hexagonal prism shape and dragged it onto the canvas, then adjusted its size and position visually by dragging the vertices (hold Shift if you want to keep the shape proportionate) until it fit the way I wanted. I then duplicated that hexagon twice by copying and pasting, scaled one of the copies to be slightly smaller than the other, and placed it directly on top of the larger one so that the difference between them was the border width I wanted. Switching the smaller hexagon's mode to "Hole" in the right-hand panel cut it out of the larger one, leaving me with my hexagonal border. Finally, I positioned the hollow hexagon directly on top of the base and extruded it to the same height as my letters so that it would stamp. For precise measurements like this, I chose to type the exact height I wanted into the right-hand panel. My final stamp model looked like this:

Then I downloaded the model as an .STL file, opened it in our MakerBot software, and sized it for printing. Around three hours later, my print was ready and looked like this:

 

As you can probably tell, the stamp has already been inked. While my print turned out exactly the way I planned it to, I found that the PLA material was not great for actually stamping. On my first stamp attempt, I could only see a few lines and couldn’t make out the text at all.

 

I assumed that the stamping problems had something to do with the stamp's ability to hold ink and the stiffness of the plastic. I decided to sand the stamp to create more grit for holding ink, and also tried placing the stamp face up with the paper on top of it, allowing the paper to press into the grooves of the stamp. This worked a bit better, but still didn't produce the rich black impression I was hoping for.

 

Because of this difficulty with actually stamping my print, in the end I preferred my "mistake" print, the one I had done without mirroring the text, and turned it into a magnet!

 

This process can be applied to any project you can think of, and I found the ability to work in 2D and then extrude extremely helpful, since I feel more comfortable in 2D design programs. Tinkercad was simple and easy to use, but its simplicity meant I had to do a few workarounds to get the results I wanted. I'm still troubleshooting ways to make my stamp "stampable", and would appreciate any ideas you have! As always, feel free to come explore with us for free in the ETC Lab by attending Friday Open Hours from 10:00am to 4:00pm, or email etclab@umn.edu for an appointment.

 

Painting in Virtual Reality: Exploring Google Tilt Brush

By Thaddeus Kaszuba-Dias

Currently in the ETC Lab we have been experimenting with the capabilities of Google Tilt Brush, a virtual reality (VR) program designed for the HTC Vive headset. The program is essentially a 3D sketchbook where you can create, draw, design, and experiment to your heart's content in a virtual 3D space. It takes the idea of drawing on paper and adds a third dimension, along with a very cool palette of brushes and effects to play with to make your pieces come to life.

Me playing around in Tilt Brush

Some of the questions we have been asking are: What are the capabilities of this program in CLA? If there are multiple people working with this program, what is the vocabulary like? How does that vocabulary shift when moving from a 2D piece to a 3D one? These are things that we in the ETC Lab will be exploring more and more with the many diverse guests we have in our space!

As of now, Google Tilt Brush is a very interesting program for experimenting with 3D models that others have developed in programs such as Maya, Google SketchUp, or Blender. Users can import these models and then add their own creative spins using the tools provided in Tilt Brush, and the results can be exported back out to be worked with in their respective platforms.

Screenshot of a drawing on top of an imported model in Tilt Brush

There is also a capacity for storytelling in Google Tilt Brush. The creative, and perhaps perfectionist, mindset thrives there. For those wishing to express a single emotion, tell a story, or express a thought visually, Tilt Brush seems to have just the right set of tools and atmosphere for that kind of work to thrive. All pieces created in Tilt Brush can be exported as 3D models to be uploaded to galleries, websites, or blogs. It could be a revolutionary tool for creatives on a budget. And don't worry, it's free of charge to use here in the ETC Lab!

Animated .GIF recorded in Tilt Brush

The Virtuality Continuum for Dummies

By Michael Major

Do you watch TV, surf the internet, or occasionally leave your residence and talk to other human beings? If you answered yes to any of those, you have probably heard of virtual reality, augmented reality, or mixed reality. Perhaps you have seen one of Samsung's Gear VR commercials, played Pokemon GO, taken a Microsoft HoloLens for a test drive, or watched a YouTube video about the mysterious Magic Leap. If you are anything like me, then your first experiences with these new realities left you with a lot of questions: What do the terms virtual reality, augmented reality, and mixed reality even mean? What are the differences between VR, AR, and MR? Google searches may bring you to confusing articles about the science that makes the blending of realities possible, which can be extremely overwhelming. So let's break these concepts down into terms that we can all understand.

The first step to understanding the virtuality continuum is grasping the difference between the real environment (the blue section labeled RE on the left in the diagram below) and a completely virtual environment (labeled VR on the right in the diagram below).

Source: http://smartideasblog.trekk.com/augmented-or-virtual-how-do-you-like-your-reality

The real environment is the physical space that you are in. For example, I am sitting in a real chair, at my real desk, in a real room as I am writing this.

I can feel the keys on my keyboard, see my co-workers walk to the break room, hear them describe their weekends, smell the coffee that gets brewed, and taste the sandwich that I brought for lunch.

A completely virtual environment is a digital environment that a user enters by wearing a headset.

So let’s say I load up theBlu (a VR application that lets the user experience deep sea diving) and put on the HTC Vive headset and a good pair of noise cancelling headphones. I will no longer see or hear any of my co-workers, or anything else from the room that I am physically standing in. Instead, I will see and hear giant whales! I am also able to look around in all directions as though I am actually there in the ocean.

The next step to understanding the Virtuality Continuum is knowing the difference between augmented reality (teal section labeled AR in the diagram below) and augmented virtuality (green section labeled AV in the diagram below).

Source: http://smartideasblog.trekk.com/augmented-or-virtual-how-do-you-like-your-reality

Augmented reality involves layering digital content on top of a live view of the physical space that you are in. A fun example of this is how Vespa, a company that sells motorized scooters, hired a company called 900lbs of Creative to create an augmented reality app that lets you customize your own scooter by holding your phone up to their ad in a magazine as if you were going to take a picture of it.

The app recognizes the blue pattern on the page of the magazine and then adds a 3D model of the scooter to the screen on top of that pattern. Without the app, you would just be looking at a magazine sitting on a table, instead of seeing both the magazine and a digital scooter that you can customize and even drive around on the table!

Augmented virtuality is when objects from the user’s real-world environment are added to the virtual environment that the user is experiencing. Let’s dive back into theBlu to explore an example of augmented virtuality. Imagine that I have added some sensors to the front of the Vive headset. These sensors have the ability to recognize my hands and track their movements. Now I can turn this completely virtual experience into an augmented virtuality experience in which I can see and use my hands inside the virtual environment.

Note: these sensors are not yet available for VR headsets (as of December 2016). However, Intel has a product called Intel RealSense Technology which allows cameras to sense depth and is used in some computer and mobile phone applications. But let’s imagine that I do have this kind of sensor for the Vive.

With this new technology, I could add cool features to theBlu such as the ability to pick up virtual seashells with my hands instead of using a controller to do so. Or I could swim around in the virtual ocean by doing a breaststroke instead of holding down a button on the controller. This would make my virtual experience much more immersive.

The last step in understanding the virtuality continuum is figuring out what people mean when they refer to mixed reality (green section labeled MR in the figure below).

Source: http://smartideasblog.trekk.com/augmented-or-virtual-how-do-you-like-your-reality

By definition, the term mixed reality means “the merging of real and virtual worlds to produce new environments and visualizations where physical and digital objects co-exist and interact in real time.” So augmented reality and augmented virtuality are both technically under the mixed reality umbrella because they are mixing the real world with digital content in some way or another. However, a company called Magic Leap has recently started to refer to their experience as a mixed reality experience in an effort to set themselves apart from augmented reality and augmented virtuality technologies. When Magic Leap uses the term mixed reality, it is meant to describe a technology that makes it difficult for the user to discern between what is real and what is digital, as if everything that the user is experiencing is part of the real world. I must admit, if the videos that Magic Leap has released are accurate, then their technology really is in a league of its own. Take a look at their website and decide for yourself.

There you have it, the Virtuality Continuum. Now you will know what people are talking about when they refer to virtual reality, augmented reality, or anything in between.

 

Virtual Reality and Immersive Content for Education

By Jake Buffalo

Virtual reality devices and applications are becoming increasingly popular, especially as the years go on and the technologies required to deliver virtual reality experiences are becoming more affordable. So, if you think virtual reality is a thing of the future, think again! There is a lot of content available today and suitable devices for experiencing virtual reality are not hard to find. Although there is a huge focus on video games for these types of devices, there is a lot you can do with them to create an educational experience for users.

In the ETC Lab, we have the HTC Vive and the Google Cardboard accessible for CLA students and professors to try out! In this blog post, I will give you a brief overview of each and let you know what kind of educational purposes they can have. We have a great list of some different apps and experiences that we found to be of interest. Take a look here: https://docs.google.com/document/d/1QJBMTpOtGAqF3P_5E7BMaUj518F3QBMV-OsroUe2nDw/edit#

 

HTC Vive

The HTC Vive is a headset that brings you into the world of virtual reality through a variety of different apps available in the Steam VR store online. The Vive system also includes controllers that allow you to move around in the different scenes and perform different actions depending on which application you are using.

There are many different apps available for the Vive, ranging anywhere from artistic and cultural experiences to immersive games. For example, you may be put “inside” an audiobook through immersive storytelling and placed inside scenes that bring the narration to life. There are also apps that put you inside the paintings of famous artists like Vincent Van Gogh or allow you to walk through a museum with different historical exhibits. The options really are endless and can be applied to a vast array of CLA majors and your specific interests!

 

Google Cardboard

The Google Cardboard is a handheld viewer that holds your smartphone and lets you look through its lenses at the screen to view immersive virtual content. Originally, the device was only available in actual cardboard, but it now comes in all types of materials and designs, with plenty of sturdier options for a better experience. One cool thing you can do with the Cardboard is watch 360° videos on YouTube from different places around the world. You can tour historic locations you have always wanted to experience, like Buckingham Palace, or even watch Broadway's "The Lion King" from the perspective of the performers. There are some experiences that are more artistically based–allowing you to experience the "Dreams of Dali" or enter Bosch's "Garden of Earthly Delights"–that may be relevant for different art majors at the U as well.

In addition, you can download apps on your iPhone or Android phone and use them with the Google Cardboard. There are many different apps available that relate to art, culture, science, journalism and more! For example, The New York Times offers news stories presented as virtual reality narratives for the Cardboard. You can even experience the life of a neuron wandering around the human brain. Augmented reality is another available feature, which overlays visual content on top of your real-life surroundings using the camera on your phone.

Overall, virtual reality is not just for "nerds"–there are programs available for everyone, and we continue to find more possibilities every day. So don't forget to check out the list of apps we made (linked at the top of this post) to help you get started with virtual reality, and get in touch at etclab@umn.edu or come to our Friday 10am-4pm open hours if you want to try out some of these cool technologies!

Exploring Sonic Pi: A Software For Live Coding Music


By Leela Li

The self-proclaimed "future of music," Sonic Pi is open-source software aimed at helping music instructors engage students in a different way: coding to make music. The software comes pre-installed on Raspbian Jessie, but can also be run on other operating systems.

Having taken a few introductory computer science classes in the past, I was drawn to this idea of programming music – as opposed to the traditional route of picking up a real instrument – and it's what pushed me to make this my first Raspberry Pi project.

The Raspberry Pi is a small single-board computer that comes with all of the essentials built in. You only need to insert an SD card and plug in a monitor, mouse, keyboard, and power supply to get it up and running. One of its main purposes is to give the "average Joe" something small, powerful, and flexible enough to tinker with, to build knowledge of our increasingly digital world, and hopefully to contribute to it as well. My goal was to write a program in Sonic Pi that played "Canon in D".

Getting started was simple enough. Once the Raspberry Pi was booted up, I opened Sonic Pi and followed the instructions listed on the Raspberry Pi site. These covered the basics, such as making sounds with MIDI note numbers and manipulating basic parameters of notes to change the way they sound.

Now came the hard part: my prior knowledge of programming did nothing to make up for my lack of knowledge of music.

In Sonic Pi, there are 128 MIDI note numbers (0 to 127), divided across 11 octaves (0 to 10). Having only taken three months of piano lessons back in the fifth grade, the word "octave" was nothing but foreign to me.

Staring at the online sheet music for the most basic version of "Canon," I worked to translate each note into its corresponding MIDI note number. After an hour or two, the conclusion was obvious: this was well beyond the capacity of my musically untalented, 22-year-old self.
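For anyone attempting the same translation, the arithmetic itself is simple even if the music theory isn't. Below is a minimal Python sketch of the note-name-to-MIDI conversion, assuming the standard numbering Sonic Pi uses, where middle C (C4) is MIDI note 60; the note list is just one common transcription of the Canon's repeating bass line:

# Semitone offsets within an octave (sharps and flats included)
SEMITONES = {"C": 0, "C#": 1, "Db": 1, "D": 2, "D#": 3, "Eb": 3, "E": 4,
             "F": 5, "F#": 6, "Gb": 6, "G": 7, "G#": 8, "Ab": 8, "A": 9,
             "A#": 10, "Bb": 10, "B": 11}

def note_to_midi(name):
    """Convert a note name like 'D4' or 'F#3' to a MIDI note number (C4 = 60)."""
    pitch, octave = name[:-1], int(name[-1])  # assumes single-digit octaves
    return 12 * (octave + 1) + SEMITONES[pitch]

# One common transcription of Canon in D's ground bass:
print([note_to_midi(n) for n in ["D3", "A2", "B2", "F#2", "G2", "D2", "G2", "A2"]])
# -> [50, 45, 47, 42, 43, 38, 43, 45]

The numbers it prints can be dropped straight into Sonic Pi's play command.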

Overall, Sonic Pi is an amazing platform for those interested in creating music through a different medium. Just make sure you don’t make my mistake and remember to brush up on the musical aspect prior to diving in.

If you’re still not convinced, just look below to see what you could do with Sonic Pi!

Example of what one can do with Sonic Pi:

"Aerodynamic" by Daft Punk

https://www.youtube.com/watch?v=cydH_JAgSfg


Building a photogrammetry turntable

As we explained in our photogrammetry introduction, photogrammetry fundamentally involves taking many photos of an object from multiple angles.  This is pretty straightforward, but it can also be time-consuming.  A typical process involves a camera on a tripod being moved slightly between each shot.  An alternative is to put the object on a turntable, so the camera can remain in one place.  This is still pretty fiddly though – moving the turntable slightly, taking a photo, then repeating 30 or 40 times.  It also introduces opportunities for error – the camera settings might be bumped, or the object might fall over.

To us, this sounded like a great opportunity for some automation.  As part of the LATIS Summer Camp 2016, we challenged ourselves to build an automated turntable, which could move a fixed amount, then trigger a camera shutter, repeating until the object had completed a full circle.

Fortunately, we weren’t the first people to have this idea, and we were able to draw upon the work of many other projects.  In particular, we used the hardware design from the MIT Spin Project, along with some of the software and electrical design from Sparkfun’s Autodriver Getting Started Guide.  We put the pieces together and added a bit of custom code and hardware for the camera trigger.

The first step was getting all the parts.  This is our current build sheet, though we’re making some adjustments as we continue to test.

Build sheet: https://docs.google.com/spreadsheets/d/18FVIXNNT8n3cJQVWfgfFD0G26C9ARmLLhKIKlKBOokA/pubhtml

We also had to do some fabrication, using a laser cutter and 3D printer.  Fortunately, here at the University of Minnesota, we can leverage the XYZ Lab.  The equipment there made short work of our acrylic, and our 3D-printed gear came out beautifully.

With the parts on hand and the enclosure fabricated, it was mostly just a matter of putting it all together.  We started with a basic electronics breadboard to do some testing and experimentation.

Our cameras use a standard 2.5mm "minijack" connector (like a headphone jack, but smaller) for camera control.  These are very easy to work with – they have just three wires: a ground and two others.  Connecting one of those wires to ground triggers the focus function; connecting the other triggers the shutter.  A single basic transistor is all that's necessary to give our Arduino the ability to control these functions.

The basic wiring for the motor control follows the hookup guide from Sparkfun, especially the wiring diagram towards the end. The only other addition we made was a simple momentary button to start and stop the process.
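Logically, the firmware's job is simple. Here is a rough Python sketch of the loop it runs; the real build does the equivalent in an Arduino sketch, and the helper functions and numbers below are hypothetical placeholders rather than our actual settings:

import time

SHOTS_PER_REVOLUTION = 36              # assumption: one photo every 10 degrees
MICROSTEPS_PER_REVOLUTION = 200 * 16   # assumption: 200-step motor, 16x microstepping

def wait_for_button():
    """Hypothetical placeholder: block until the momentary button is pressed."""

def trigger_shutter():
    """Hypothetical placeholder: pulse the transistor on the 2.5mm shutter line."""

def rotate(microsteps):
    """Hypothetical placeholder: advance the stepper this many microsteps via the Autodriver."""

wait_for_button()
for shot in range(SHOTS_PER_REVOLUTION):
    trigger_shutter()
    time.sleep(2)  # give the camera time to focus and write the image
    rotate(MICROSTEPS_PER_REVOLUTION // SHOTS_PER_REVOLUTION)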

Once we were confident that the equipment was working, we disassembled the whole thing and put it back together with a solder-on prototyping board and some hardware connectors.  Eventually, we'd like to fabricate a PCB for this, which would be even more robust and compact. The Spin project has PCB plans available, though their electronics setup is a little different. If anyone has experience laying out PCBs and wants to work on a project, let us know!

While building the first turntable was pretty time-consuming, the next one could be put together in only a few hours.  We're going to continue tuning the software for this one to find the ideal settings.  If there are folks on campus who'd like to build their own, just let us know!

 


Video: https://www.youtube.com/watch?v=txXyAkVK_tE

Introduction to Photogrammetry

Whether you're working on VR, 3D printing, or innovative research, capturing real-world objects in three dimensions is an increasingly common need.  There are a lot of technologies to aid in this process.  When people think of 3D capture, the first thing that often comes to mind is a laser scanner – a laser beam that moves across an object, capturing data about the surface.  They look very "Hollywood" and impressive.

Another common type of capture is structured-light capture.  In structured-light 3D capture, different patterns are projected onto an object (using something like a traditional computer projector).  A camera then looks at how the patterns are stretched or blocked, and calculates the shape of the surface from there.  This is also how the Microsoft Kinect works, though it uses invisible (infrared) patterns.

Both of these approaches can deliver high precision results, and have some particular use cases.  But they require specialized equipment, and often specialized facilities.  There’s another technology that’s much more accessible: photogrammetry.

Calculated Camera Positions


In very simple terms, photogrammetry involves taking many photos of an object from different sides and angles.  Software then uses those photos to reconstruct a 3D representation, including the surface texture.  The term photogrammetry actually encapsulates a wide variety of processes and outputs, but in the 3D space we're specifically interested in stereophotogrammetry.  This process involves finding the overlap between photos.  From that overlap, the original camera positions can be calculated, and once you know the camera positions, you can use triangulation to calculate the position in space of any given point.
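To make that last step concrete, here's a toy Python example of triangulating a single point from two camera rays, assuming we already know each camera's position and the direction in which it sees the point (real photogrammetry software does this for millions of points, while also handling feature matching, lens distortion, and refinement):

import numpy as np

def triangulate(center1, dir1, center2, dir2):
    """Return the point closest to both rays (camera center + viewing direction)."""
    # Solve, in the least-squares sense, center1 + t1*dir1 == center2 + t2*dir2
    A = np.column_stack([dir1, -dir2])
    b = center2 - center1
    (t1, t2), *_ = np.linalg.lstsq(A, b, rcond=None)
    # The two rays rarely intersect exactly, so take the midpoint of the closest points
    return (center1 + t1 * dir1 + center2 + t2 * dir2) / 2

# Two cameras one meter apart, both looking at a point roughly 2m in front of them
c1, c2 = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
d1 = np.array([0.5, 0.0, 2.0]) / np.linalg.norm([0.5, 0.0, 2.0])
d2 = np.array([-0.5, 0.0, 2.0]) / np.linalg.norm([-0.5, 0.0, 2.0])
print(triangulate(c1, d1, c2, d2))  # ≈ [0.5, 0.0, 2.0]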

The process itself is very compute-intensive, so it needs a powerful computer (or a patient user).  Photogrammetry benefits from very powerful graphics cards, so it’s currently best suited to use on customized PCs.

One of the most exciting parts of photogrammetry is that it doesn't require photos taken in a controlled lighting situation.  You can experiment with photogrammetry using just a mobile device and the 123D Catch application.  Photogrammetry can even be performed using still frames extracted from video – walking around an object while capturing video, for example.

For users looking to get better results, we're going to be writing some guides on optimizing the process. A good-quality digital camera, a tripod, and some basic lighting equipment can dramatically improve the results.

Because it uses simple images, photogrammetry is also well suited to recovering 3D data from imaging platforms like drones or satellites.

Photogrammetry Software

There are a handful of popular tools for photogrammetry.  One of the oldest and most established tools is PhotoScan from AgiSoft.  PhotoScan is a “power user” tool, which allows for many custom interventions to optimize the photogrammetry process.  It can be pretty intimidating for new users, but in the hands of the right user it’s very powerful.

An easier (and still very powerful) alternative is Autodesk Remake.  Remake doesn't expose the same level of control that PhotoScan has, but in many cases it can deliver a stellar result without any tweaking.  It also has sophisticated tools for touching up 3D objects after the conversion process.  An additional benefit is that it can output models for a variety of popular 3D printers.  Remake is free for educational uses as well.

There are also photogrammetry tools for specialized use cases.  We've been experimenting with Pix4D, a photogrammetry toolset designed specifically for drone imaging.  Because Pix4D knows about different models of drones, it can automatically correct for camera distortion and the types of drift that are common with drones.  Pix4D also has special drone control software, which can handle the capture side, ensuring that the right number of photos are captured with the right amount of overlap.

CORS: The Internet’s security “bouncer”

One of the realities of the modern web is that every new technology needs to balance functionality and security.  In this article, we talk about one particular case where this balance comes into play, and how it may affect working with virtual reality technology on the web.

When building a website, it’s not unusual to embed resources from one website inside another website.  For example, an image on a webpage (loaded via the “img” tag) can point to a JPEG stored on an entirely different server.  Similarly, Javascript files or other resources might come from remote servers.  This introduces some potential for security issues for consumers of web content.  For example, if your website loads a Javascript file from another site, and a hacker is able to modify that file, you’ll be loading potentially malicious code on your website.  Browsers normally address the dangers of mixed-origin content by tightly controlling the ways in which scripts from different servers can talk to other servers on the Internet.  This area of security is called “cross origin protection.”

For example, let’s say we have a webpage, Foo.com.  This webpage loads in our browser, along with an associated Javascript file, Foo.js, which is responsible for loading additional assets and managing interactive content on the page.  This Foo.js file, then, attempts to load some image content from Bar.com and plop it into a <canvas> tag on our webpage.  This all seems innocent enough so far, right?… WRONG!

In fact, this is a major security risk.  For example, imagine Bar.com is a web server that displays scanned documents.  For illustrative purposes, let's pretend Bar.com is actually "IRS.com", and contains millions of users' scanned tax records.  In the scenario above, without any security measures in place, our Foo.js file would be able to reach into Bar.com, grab a secret file, plop it into the page's <canvas> tag, read out the contents, and store it back to the Foo.com server for further exploitation.  The server administrator at Foo.com would then have access to millions of users' tax records that had been maliciously sniped from Bar.com.  It's easy to see, then, that scripts like Foo.js can quickly become a security risk.  Content that "crosses origins"–that loads from separate places on the Internet–needs to be prevented from co-mingling in potentially malicious ways.

The solution?  Browsers, by default, will block this type of cross-origin content loading altogether!  If your website and script are being loaded from Foo.com, then your browser forces the website to “stick to its lane”.  Foo.com webpages will only be allowed to load other content that comes from Foo.com, and will be blocked from loading–and potentially exploiting–content from Bar.com.

Cross-origin protection and WebVR

This basic situation plays out in many different ways within web technology.  A cross-origin image can’t be read back into an HTML <canvas>, and, importantly for our conversation, can’t be used as a WebGL texture.  WebGL is the technology underlying all of the web-based virtual reality tools like AFrame and ThreeJS.  The specifics of why cross-domain images can’t be used as textures in WebGL are pretty fascinating, but also pretty complicated.

In practice, what this means is that if your virtual reality Javascript is stored on one server, it can't easily load images or videos stored on another server.  Unfortunately, this can be pretty restrictive when trying to create WebVR content; even within the University, we often have resources split across many servers.

Cross-Origin Resource Sharing (CORS)

Fortunately, there’s a solution called Cross Origin Resource Sharing.  This is a way to tell your web servers to explicitly opt-in to cross-domain uses of their content.  It allows a webserver like Bar.com to say “I expect to send resources to scripts at Foo.com, so allow those requests to go through and load into the browser.”  It’s basically the Internet equivalent of telling the bouncer at your favorite club to put your buddy on a VIP access list, rather than leaving him standing at the door.  As long as the bouncer…erm, browser…sees that a specific source of data is vouched for, it will allow the requests to go through.

whiteboard drawing of a browser requesting content from bar.com and getting blocked by stick-figure CORS bouncer guy

Doing these CORS checks requires some extra communication between the browser and the server, so occasionally the browser skips them.  However, when creating VR content in particular, sometimes we want to explicitly ask the browser to perform a CORS check so that the content can be loaded into a secure element like an HTML <canvas> or a WebGL texture for VR display.   In this case, the "crossorigin" attribute on HTML elements is necessary.  If we load an image using the HTML code <img src="http://bar.com/image.jpg" crossorigin="anonymous"/> the browser will perform a CORS check before loading the image for the user to view.  Assuming the server hosting the image (in this case, Bar.com) has CORS allowed for the target website (Foo.com), that image will be considered safe for things like loading into an HTML <canvas> or using as a WebGL texture on Foo.com.  In this way, VR websites hosted on one server can continue to load pre-approved resources from other remote servers, as long as those servers provide the necessary "OK" for CORS.
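On the server side, that "OK" is just an extra response header (Access-Control-Allow-Origin).  As a rough illustration rather than a production setup, here's a minimal Python file server that attaches the header, assuming we want to share our media with pages served from foo.com:

from http.server import HTTPServer, SimpleHTTPRequestHandler

class CORSRequestHandler(SimpleHTTPRequestHandler):
    def end_headers(self):
        # Tell browsers that pages from foo.com may read these responses,
        # e.g. into a <canvas> or a WebGL texture
        self.send_header("Access-Control-Allow-Origin", "https://foo.com")
        super().end_headers()

if __name__ == "__main__":
    # Serve the current directory on port 8000 with the CORS header attached
    HTTPServer(("", 8000), CORSRequestHandler).serve_forever()

In practice you'd set the equivalent header in whatever web server actually hosts the media (Apache, nginx, and friends all support this in their configuration).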

CORS troubleshooting

Even if you’re doing everything right in the browser and on the server, CORS can still provide some headaches.  When you encounter a CORS-related failure, the errors that are generated are often opaque and hard to unpack.  Things like caching within the browser can also make these errors feel sporadic and harder to track down: one minute it may look like your image is suddenly loading (or suddenly not loading) correctly, when in fact what you’re seeing is a previously-cached, older version of the image or associated Javascript that your browser has stored behind the scenes.

Even worse, some browsers have broken or unreliable implementations of CORS.  For example, Safari on Mac OS X cannot currently load videos via CORS, regardless of the browser and server settings.  As a general tip, if you see errors in your browser’s web developer console that mention any sort of security restriction, start by looking into whether you’ve come up against a CORS-related issue.

Detecting Spherical Media Files

In many ways, VR is still a bit of a "wild west" as far as technology goes.  There are very few true standards, and those that do exist haven't been implemented widely.

Recently, we've been looking at how to automatically identify spherical (equirectangular) photos and videos so they can be displayed properly in our Elevator digital asset management tool.  "Why is this such a problem in the first place?" you may be wondering.  Well, spherical photos and videos are packaged so that they resemble pretty much any other photo or video.  At this point, we're working primarily with images from our Ricoh Theta spherical cameras, which save photos as .JPG files and videos as .MP4 files.  Our computers recognize these file types as photo and video files – which they are – but don't have an automatic way of detecting the "special sauce": the fact that they're spherical!  You can open these files in your standard photo/video viewer, but they look a little odd and distorted:

R0010012

So, we clearly need some way of detecting if our photos and videos were shot with a spherical camera.  That way, when we view them, we can automatically plop them into a spherical viewer, which can project our photos and videos into a spherical shape so they can be experienced as they were intended to be experienced!  As it turns out, this gets a bit messy…

Let’s start by looking at spherical photos.  We hypothesized that there must be metadata within the files to identify them as spherical.  The best way to investigate a file in a case like this is with ExifTool, which extracts metadata from nearly every media format.

While there’s lots of metadata in an image file (camera settings, date and time information, etc.), our Ricoh Theta files had some very promising additional items:

Projection Type : equirectangular
Use Panorama Viewer : True
Pose Heading Degrees : 0.0
Pose Pitch Degrees : 5.8
Pose Roll Degrees : 2.8

Additional googling reveals that the UsePanoramaViewer attribute has its origins in Google Street View's panoramic metadata extensions.  This is somewhere in the "quasi-standard" category – no standards body has agreed on this as the way to flag panoramic images, but manufacturers have adopted it.
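Since ExifTool exposes these tags, checking a photo programmatically is straightforward.  Here's a minimal Python sketch, assuming exiftool is installed and on your PATH (the filename is just our sample Theta image):

import json
import subprocess

def is_spherical_photo(path):
    """Ask exiftool for the GPano tags and check for an equirectangular projection."""
    result = subprocess.run(
        ["exiftool", "-json", "-ProjectionType", "-UsePanoramaViewer", path],
        capture_output=True, text=True, check=True)
    tags = json.loads(result.stdout)[0]
    return str(tags.get("ProjectionType", "")).lower() == "equirectangular"

print(is_spherical_photo("R0010012.JPG"))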

Video, on the other hand, is a little harder to deal with at the moment.  Fortunately, it has the promise of becoming easier in the future.  There's a "request for comments" proposing a standard for spherical video metadata.  This RFC is specifically focused on storing spherical metadata in web-delivery files (WebM and MP4), using a special identifier (a "UUID") and some XML.

Right now, reading that metadata is pretty problematic, and none of the common video tools can display it.  However, open-source projects are moving quickly to adopt it, and Google is already leveraging this metadata with files uploaded to YouTube.  In the case of the Ricoh cameras we use, the desktop video conversion tool has recently been updated to incorporate this type of metadata as well.
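Until the tooling catches up, one blunt option is to look for the proposed standard's XML markers directly in the file.  This is a naive Python sketch rather than a proper implementation: it assumes the file follows the V1 spherical RFC, the filename is hypothetical, and a real tool would parse the MP4/WebM box structure instead of scanning raw bytes:

def has_spherical_metadata(path, chunk_size=1024 * 1024):
    """Scan a video file for the GSpherical tag used by the spherical video RFC."""
    marker = b"GSpherical:Spherical"
    previous_tail = b""
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                return False
            # Check the chunk plus a little overlap so the marker can't hide on a boundary
            if marker in previous_tail + chunk:
                return True
            previous_tail = chunk[-(len(marker) - 1):]

print(has_spherical_metadata("R0010013.MP4"))  # hypothetical Theta video file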

One of the most exciting parts of working in VR right now is that the landscape is changing on a week-by-week basis.  Problems are being solved quickly, and new problems are being discovered just as quickly.

Sharing code in higher ed

The “sharing first” sentiment is gaining momentum across academia…

The Economist recently ran an article about commercial applications of code from higher education.  While LATIS Labs isn’t exactly planning to churn out million-dollar software to help monetize eyeballs or synergize business practices, we do want to be sharing software.

We believe that sharing software is a part of our responsibility as developers at a public institution.  Of course we’ll be releasing code – we’re in higher education.  This “sharing first” sentiment is also gaining momentum in other parts of academia, from open textbooks, to open access journals, to open data (see some related links below).

We also believe that releasing code makes for better code.  At a big institution like the University of Minnesota, it's easy to cut corners on software development by relying on private access to databases or by making assumptions about your users.  Writing with an eye towards open source forces you to design software the right way, forces you to document your code, and forces you to write software you're proud of.

As we work on things in LATIS Labs, you’ll find them at github.com/umn-latis.   Clone them, fork them, file issues on them.  We’ll keep sharing.

Resources & references