Meet Joan Assistant: The Meeting Room Reservation Solution

By Chris Scherr and Rebecca Moss

When we changed the layout of Anderson 110 from a cubicle farm and help desk setup to an open, customizable space, we turned four of the former offices into reservable meeting rooms. We created Google calendars for each so groups could reserve them, but spontaneous uses of these spaces were difficult to accommodate. We needed a solution that allowed folks in And110 to see if the rooms were reserved or free, and let them add a meeting on the spot.

Joan Assistant offered us the flexibility we were looking for, at a modest cost, and without the need to pay for upgrading the wiring in the room. Here is a quick video overview of how Joan Assistant works. We hope that sharing our experience might help others on campus who are looking for a lightweight solution to scheduling spaces.

Once we had the Joan Assistant, we had to figure out how to connect it to our wireless network, which has far more stringent security protocols than your average Wi-Fi setup. We gave the task to Chris Scherr, LATIS system engineer, who worked with Network Telecommunication Services (NTS) to figure out how to get it securely connected and online. Once that was accomplished, we bought two additional units and installed them by the meeting rooms.

There are three versions of Joan Assistant; the only version supported on campus is the Executive, since WPA2 Enterprise is required to authenticate to the wireless network: https://joanassistant.com/us/store/

The company also offers a limited-edition 9.7-inch screen, which I assume is also WPA2 Enterprise ready: https://joanassistant.com/news/9-7-limited-edition/

Setup/configuration:

Any unit in CLA can use our on-premises server; we would just need to work with the owner of the room calendar to grant the proper permissions to the Joan server.

Other colleges:

Several items are needed (IT staff are more than welcome to contact Chris directly – cbs@umn.edu).

From my experience setting this up, here is the order I would follow for a new setup:

1) Obtain a sponsored account for wireless access for the devices (IT will not grant departmental accounts access to wireless).  Request here:  https://my-account.umn.edu/create-sponsor-acct

2) Get a departmental account to serve as the Google Calendar tie-in. This account can own the room calendars or be granted full access to existing room calendars. (https://my-account.umn.edu/create-dept-acct)

3) Get an on-premises server. (https://joanassistant.com/us/help/hosting-and-calendar-support/install-joan-premises-server-infrastructure/)

The simplest solution is to request a self-managed (SME) Linux VM from IT (1 core and 2 GB of RAM is adequate, although two network adapters are required). Ask them to download the .vmdk from the link above, as it is a preconfigured Ubuntu 14 VM. Once the VM was set up, I did the following configuration: limited SSH and port 8081 traffic via iptables, changed the default root and joan user passwords, and set IP addresses, DNS names, and CNAMEs via ServiceGateway.
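
As a rough illustration of the firewall step, rules along these lines would limit the VM to SSH and Joan portal traffic. This is only a sketch: the management subnet shown is a placeholder, and your allowed ranges will depend on your environment.

# Allow loopback and established traffic, then SSH (from a placeholder
# management subnet) and the Joan portal on port 8081; drop everything else.
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -s 10.0.0.0/8 -j ACCEPT
iptables -A INPUT -p tcp --dport 8081 -j ACCEPT
iptables -P INPUT DROP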

4)  Joan Configurator (https://portal.joanassistant.com/devices/wifi-settings)

Download the Joan Configurator from the link above and use it to configure each Joan device. You will need to specify the DNS name or IP address of the on-premises server and authenticate to wireless via the sponsored account created in step 1 using PEAP. Once configured, charged, and unplugged from USB, the unit should display an 8-10 digit code. If the unit is not responding, gently shaking it will wake the device from standby.

5) Joan portal (https://portal.joanassistant.com/devices/)

Several items need to be configured here:

User Settings

a) Hosting selection –  Make sure On-premises is selected

b) User settings – You may want to set additional contact emails for low battery warnings etc.

Calendar Settings

a)  Calendar – Connect to Google Calendar via the departmental account created in step 2

b)  Room Resources – Scan and add the room resources you want your Joan device(s) to be able to display/manipulate.

Device Settings

a)  Devices – Click Add device; you will be prompted to enter the 8-10 digit code from step 4, along with a time zone and default calendar.

b)  Optional settings – Logo, office hours, and features can be changed/defined here

6) Monitoring: You can get quick information about your devices from the on-premises server web portal. It can be accessed here: http://yourservername.umn.edu:8081/

Devices – This will show all devices configured to use your on-premises server. You can click on any device to get information about it. I would suggest renaming each device to its room number or calendar name. Live View will confirm the server is sending the correct image to your unit(s). You can also check Charts to see battery levels over time, signal strength, disconnects, etc.
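
If you also want a scripted check that the server is up, a simple HTTP probe against the portal port works. This is a minimal sketch assuming the placeholder hostname above; it only confirms that the portal answers on port 8081.

# Hypothetical reachability check for the Joan on-premises portal
curl -sf http://yourservername.umn.edu:8081/ > /dev/null && echo "Joan server reachable" || echo "Joan server unreachable"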

The e-ink screens require very little power, so the batteries last a month or more before needing a recharge. When batteries get low, the system sends emails to whomever you designate. The power cords are standard USB cables, and it takes about 6-8 hours to charge a device.

The units attach to the wall with a magnet, so they will not be secure in an open space without additional measures. Here in And110, we have them mounted on the door signs already located outside each door. We customized the units so they display the U of M logo. We are still in the testing stage with these devices, but our experience so far has been very positive. Please contact us if you would like to learn more – latis@umn.edu – or come see them in person in Anderson 110 on the West Bank.

The Stamp Project: Extruding Vector Graphics for 3D Printing Using Tinkercad

By Rachel Dallman

I have recently been experimenting with 3D printing using the ETC Lab’s MakerBot Replicator Mini, printing different open-source models I found on Thingiverse. I wanted to start printing my own models, but as a person with minimal modeling or coding experience, I found traditional full-scale 3D modeling software like Blender and Autodesk Maya intimidating. In my search for a user-friendly and intuitive modeling platform, I found Tinkercad – an extremely simplified browser-based program with built-in tutorials that allowed me to quickly and intuitively create models from my imagination. The best part of the program, for me, was the ability to import and extrude vector designs I had made in Illustrator.

Tinkercad’s easy-to-use interface

For my first project using this tool, I decided to make a stamp using a hexagonal graphic I had made for the ETC Lab previously.

My original graphic is colored, but in order to extrude the vectors the way I wanted, I had to edit the graphic to be purely black and white, without overlaps, which meant removing the background fill color. I also had to mirror the image so that it would stamp the text in the correct orientation (I actually failed to do this on my first attempt, and ended up using the print as a magnet since it would stamp the text backwards). I used Illustrator to do all of this because that’s where I created the graphic, but any vector-based illustration software will work (Inkscape is a great open-source option!). You can also download any .SVG file from the internet (you can browse thousands at https://thenounproject.com/ and either purchase a file or give credit to the artist). If you’re confused about which parts of your image need to be black, it helps to imagine that every black area you create will be covered in ink. Below is a picture of what my image looked like in Illustrator after I had edited it.

To do this, I started by selecting each individual part of my graphic and changing the fill and stroke color to black, and removed the fill from the surrounding hexagon. To reflect the image, I selected everything and clicked Object > Transform > Reflect. My stamp-ready file looks like this:

In order for Tinkercad to read the file, I had to export it in .SVG format by going to File > Export > Export As… and choosing .SVG in the drop-down menu. If you’re doing this in Illustrator, you’ll want to use the following export settings:

I then opened Tinkercad and imported my file. Much to my dismay, when I first brought the .SVG file into Tinkercad, it couldn’t recognize the file format, so I had to do some digging online to figure out what was going on. I found that the problem was with the way Illustrator exports the .SVG file: Illustrator exports it as .SVG version 1.1, but Tinkercad can only read .SVG 1.0, so I had to manually revert the file to the previous version. I downloaded Atom, an open-source code editor, pasted the following line of code at the very beginning of the file, and saved it. This step might be irrelevant to you depending on the software you’re using, so be sure to try importing the file into Tinkercad before you change any of the code.

<?xml version="1.0"?>
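
If you’d rather not install a code editor just to prepend one line, a shell one-liner can make the same edit. This is a sketch assuming GNU sed and a file named stamp.svg (the filename is a placeholder for your own export):

# Hypothetical: insert the XML 1.0 declaration as the first line of the exported SVG
sed -i '1i <?xml version="1.0"?>' stamp.svg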

I then imported the updated file, ending up with this solid hexagon. This was not what I wanted, and I assumed that Tinkercad was simply filling in the outermost lines it detected in my vector file. Apparently, the price of the program’s simplicity is a number of limitations.

After I noticed that it was possible to create hexagons manually in Tinkercad, I decided to go back into Illustrator, delete the surrounding hexagon, and simply rebuild it in Tinkercad after my text had been imported. Depending on the complexity of your design, you may decide to do it like I did and build simple shapes directly in Tinkercad, or you may want to upload multiple separate .SVG files that you can then piece together. This is what my new vector file looked like after I imported it.

Next, I wanted to make the base of the stamp, along with a hexagonal ridge at the same height as my text that would stamp a border around it, like my original vector file. To do this, I selected the hexagonal prism, clicking and dragging it onto the canvas. I adjusted the size and position visually by clicking and dragging the vertices (hold Shift if you want to keep the shape proportionate) until it fit the way I wanted. I then duplicated the first hexagon twice by copying and pasting, scaled one of the copies to be slightly smaller than the other, and placed it directly on top of the other, until the difference between them was the border size I wanted. I then switched the mode of the smaller hexagon to “Hole” in the right-hand panel, so that the smaller hexagon would be cut out of the larger one, leaving me with my hexagonal border. Finally, I positioned the hollow hexagon directly on top of the base and extruded it to the same height as my letters, so that it would stamp. For precise measurements like this, I chose to type the exact height I wanted into the right-hand panel. My final stamp model looked like this:

Then I downloaded the model as an .STL file, opened it in our MakerBot program, and sized it for printing. Around three hours later, my print was ready and looked like this:


As you can probably tell, the stamp has already been inked. While my print turned out exactly the way I planned it to, I found that the PLA material was not great for actually stamping. On my first stamp attempt, I could only see a few lines and couldn’t make out the text at all.


I assumed that the stamping problems had something to do with the stamp’s ability to hold ink and the stiffness of the plastic. I decided to sand the stamp to create more grit for holding ink, and tried placing the stamp face up with the paper on top of it instead, allowing the paper to get into the grooves of the stamp. This worked a bit better, but still didn’t produce the rich black stamp I was hoping for.


Because of this difficulty with stamping my print, in the end I actually preferred my “mistake” print that I had done without mirroring the text, and turned it into a magnet!


This process can be applied to any project you can think of, and I found the ability to work in 2D and then extrude extremely helpful, as I feel more comfortable in 2D design programs. Tinkercad was simple and easy to use, but its simplicity meant that I had to do a few workarounds to get the results I wanted. I’m still troubleshooting ways to make my stamp “stampable” and would appreciate any ideas you have! As always, feel free to come explore with us for free in the ETC Lab by attending Friday Open Hours from 10:00am to 4:00pm, or email etclab@umn.edu for an appointment.


Painting in Virtual Reality: Exploring Google Tilt Brush

By Thaddeus Kaszuba-Dias

In the ETC Lab, we have been experimenting with the capabilities of Google Tilt Brush, a virtual reality (VR) program designed for the HTC Vive headset. This program is essentially a 3D sketchbook where you can create, draw, design, and experiment to your heart’s content in a virtual 3D space. It takes the idea of drawing on paper and adds a third dimension, with a very cool palette of brushes and effects to play with to make your pieces come to life.

Me playing around in Tilt Brush

Some of the questions we have been asking are: What are the capabilities of this program in CLA? If multiple people are working with this program, what is the vocabulary like? How does that vocabulary shift when moving from a 2D to a 3D piece? These are things that we in the ETC Lab will be exploring more and more with the many diverse guests we have in our space!

As of now, Google Tilt Brush is a very interesting program for experimenting with 3D assets that others have developed in other programs such as Maya, Google SketchUp, or Blender. Users can then add their own creative spins to these imports using the tools provided in Google Tilt Brush. The results can also be exported and played with on their respective platforms.

Screenshot of a drawing on top of an imported model in Tilt Brush

There is also a capacity for storytelling in Google Tilt Brush. The creative, and perhaps perfectionist, mindset thrives there: for those wishing to express a single emotion, tell a story, or convey a thought visually, Google Tilt Brush seems to offer just the right tools and atmosphere. All pieces created in Tilt Brush can be exported as 3D models and uploaded to galleries, websites, or blogs. It could be a revolutionary tool for creatives on a budget. And don’t worry, it’s free of charge to use here in the ETC Lab!

Animated .GIF recorded in Tilt Brush

The Virtuality Continuum for Dummies

By Michael Major

Do you watch TV, surf the internet, or occasionally leave your residence and talk to other human beings? If you answered yes to any of those, you have probably heard of virtual reality, augmented reality, or mixed reality. Perhaps you have seen one of Samsung’s Gear VR commercials, played Pokémon GO, taken a Microsoft HoloLens for a test drive, or watched a YouTube video about the mysterious Magic Leap. If you are anything like me, your first experiences with these new realities left you with a lot of questions: What do the terms virtual reality, augmented reality, and mixed reality even mean? What are the differences between VR, AR, and MR? Google searches may bring you to confusing articles about the science that makes the blending of realities possible, which can be extremely overwhelming. So let’s break these concepts down into terms that we can all understand.

The first step to understanding the virtuality continuum is grasping the difference between the real environment (the blue section labeled RE on the left in the diagram below) and a completely virtual environment (labeled VR on the right in the diagram below).

Source: http://smartideasblog.trekk.com/augmented-or-virtual-how-do-you-like-your-reality

The real environment is the physical space that you are in. For example, I am sitting in a real chair, at my real desk, in a real room as I am writing this.

I can feel the keys on my keyboard, see my co-workers walk to the break room, hear them describe their weekends, smell the coffee that gets brewed, and taste the sandwich that I brought for lunch.

A completely virtual environment is a digital environment that a user enters by wearing a headset.

So let’s say I load up theBlu (a VR application that lets the user experience deep sea diving) and put on the HTC Vive headset and a good pair of noise cancelling headphones. I will no longer see or hear any of my co-workers, or anything else from the room that I am physically standing in. Instead, I will see and hear giant whales! I am also able to look around in all directions as though I am actually there in the ocean.

The next step to understanding the virtuality continuum is knowing the difference between augmented reality (the teal section labeled AR in the diagram below) and augmented virtuality (the green section labeled AV in the diagram below).

Source: http://smartideasblog.trekk.com/augmented-or-virtual-how-do-you-like-your-reality

Augmented reality involves layering digital content on top of a live view of the physical space that you are in. A fun example of this comes from Vespa, a company that sells motorized scooters, which hired a creative agency called 900lbs of Creative to build an augmented reality app that lets you customize your own scooter by holding your phone up to their ad in a magazine, as if you were going to take a picture of it.

The app recognizes the blue pattern on the page of the magazine and then renders a 3D model of the scooter on the screen on top of that pattern. Without the app, you would just be looking at a magazine sitting on a table, instead of seeing both the magazine and a digital scooter that you can customize and even drive around on the table!

Augmented virtuality is when objects from the user’s real-world environment are added to the virtual environment that the user is experiencing. Let’s dive back into theBlu to explore an example of augmented virtuality. Imagine that I have added some sensors to the front of the Vive headset. These sensors have the ability to recognize my hands and track their movements. Now I can turn this completely virtual experience into an augmented virtuality experience in which I can see and use my hands inside the virtual environment.

Note: these sensors are not yet available for VR headsets (as of December 2016). However, Intel has a product called Intel RealSense Technology which allows cameras to sense depth and is used in some computer and mobile phone applications. But let’s imagine that I do have this kind of sensor for the Vive.

With this new technology, I could add cool features to theBlu such as the ability to pick up virtual seashells with my hands instead of using a controller to do so. Or I could swim around in the virtual ocean by doing a breaststroke instead of holding down a button on the controller. This would make my virtual experience much more immersive.

The last step in understanding the virtuality continuum is figuring out what people mean when they refer to mixed reality (green section labeled MR in the figure below).

Source: http://smartideasblog.trekk.com/augmented-or-virtual-how-do-you-like-your-reality

By definition, the term mixed reality means “the merging of real and virtual worlds to produce new environments and visualizations where physical and digital objects co-exist and interact in real time.” So augmented reality and augmented virtuality are both technically under the mixed reality umbrella because they are mixing the real world with digital content in some way or another. However, a company called Magic Leap has recently started to refer to their experience as a mixed reality experience in an effort to set themselves apart from augmented reality and augmented virtuality technologies. When Magic Leap uses the term mixed reality, it is meant to describe a technology that makes it difficult for the user to discern between what is real and what is digital, as if everything that the user is experiencing is part of the real world. I must admit, if the videos that Magic Leap has released are accurate, then their technology really is in a league of its own. Take a look at their website and decide for yourself.

There you have it, the Virtuality Continuum. Now you will know what people are talking about when they refer to virtual reality, augmented reality, or anything in between.


Virtual Reality and Immersive Content for Education

By Jake Buffalo

Virtual reality devices and applications are becoming increasingly popular, especially as the technology required to deliver virtual reality experiences becomes more affordable. So, if you think virtual reality is a thing of the future, think again! There is a lot of content available today, and suitable devices for experiencing virtual reality are not hard to find. Although there is a huge focus on video games for these types of devices, there is a lot you can do with them to create an educational experience for users.

In the ETC Lab, we have the HTC Vive and the Google Cardboard available for CLA students and professors to try out! In this blog post, I will give you a brief overview of each and describe the kinds of educational purposes they can serve. We have also compiled a great list of different apps and experiences that we found to be of interest. Take a look here: https://docs.google.com/document/d/1QJBMTpOtGAqF3P_5E7BMaUj518F3QBMV-OsroUe2nDw/edit#


HTC Vive

The HTC Vive is a headset that brings you into the world of virtual reality through a variety of apps available in the Steam VR store online. The Vive system also includes controllers that allow you to move around in scenes and perform different actions depending on which application you are using.

There are many different apps available for the Vive, ranging anywhere from artistic and cultural experiences to immersive games. For example, you may be put “inside” an audiobook through immersive storytelling and placed inside scenes that bring the narration to life. There are also apps that put you inside the paintings of famous artists like Vincent Van Gogh or allow you to walk through a museum with different historical exhibits. The options really are endless and can be applied to a vast array of CLA majors and your specific interests!


Google Cardboard

The Google Cardboard is a handheld viewer that holds your smartphone and lets you look through its eyeholes at the screen to view immersive virtual content. Originally, the device was only available in actual cardboard, but viewers now come in all types of materials and designs, with plenty of sturdier options for a better experience. One cool thing you can do with the Cardboard is watch 360° videos on YouTube from different places around the world. You can tour historic locations you have always wanted to experience, like Buckingham Palace, or even watch Broadway’s “The Lion King” from the perspective of the performers. There are also more artistically based experiences–allowing you to experience the “Dreams of Dali” or enter Bosch’s “Garden of Earthly Delights”–that may be relevant for different art majors at the U as well.

In addition, you can download apps on your iPhone or Android phone and use them in the Google Cardboard. There are many different apps available relating to art, culture, science, journalism, and more! For example, The New York Times has different news stories that are available as virtual reality narratives with the Cardboard. You can even experience the life of a neuron wandering around the human brain. Augmented reality is another available feature, which overlays visual content on top of your real-life surroundings using the camera on your phone.

Overall, virtual reality is not just for “nerds”–there are programs available for everyone, and we continue to find more possibilities every day. So, don’t forget to check out the list of apps we made to help you get started with virtual reality (linked at the top of this post), and get in touch at etclab@umn.edu or come to our Friday 10am-4pm open hours if you want to try out some of these cool technologies!