First Light

We made good progress in the AISOS space today. The photogrammetry (and later, RTI) camera position is in place. We were lucky to inherit a nice copy stand that was already in the space, which makes camera positioning really easy. We were even able to capture our first object, albeit very roughly (the lighting needs plenty of adjustment).


Later in the day, we got the call that our GigaMacro had arrived. Anything that comes in a massive wooden crate is bound to be exciting.


We don’t have our final work surface for the GigaMacro yet, but we did some initial assembly and testing. Everything seems to be working as expected. It’s definitely going to have a learning curve as we get familiar with all the variables that can be controlled on the GigaMacro. For now, you can watch it wiggle.

All cleaned up

The AISOS space has now been cleaned, painted, and the ethernet jacks are in. Most of our equipment is on-site, with the exception of the GigaMacro, which should be arriving next week.

We’re going to begin experimenting with different arrangements for the space. We’ve also got a few more pieces of furniture to move in.


Getting Rolling

We’re really excited to be moving forward with AISOS. We’re currently in the process of preparing our space, and ordering all of our equipment. We’re hoping to have most of the equipment installed and ready to go by the end of August.

If you’re curious to learn more, send us an email. Otherwise, look for a tour soon!

Building a photogrammetry turntable

As we explained in our photogrammetry introduction, photogrammetry fundamentally involves taking many photos of an object from multiple angles.  This is pretty straightforward, but it can also be time consuming.  A typical process involves a camera on a tripod, moved slightly between each shot.  An alternative is to put the object on a turntable, so the camera can remain in one place.  This is still pretty fiddly though – moving the turntable slightly, taking a photo, then repeating 30 or 40 times.  It also introduces opportunities for error – the camera settings might be bumped, or the object might fall over.

To us, this sounded like a great opportunity for some automation.  As part of the LATIS Summer Camp 2016, we challenged ourselves to build an automated turntable, which could move a fixed amount, then trigger a camera shutter, repeating until the object had completed a full circle.
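The control loop is simple at heart: turn a fixed amount, shoot, repeat until the circle closes. Here's a minimal sketch of that logic in Python (the real build runs on an Arduino; the step counts, photo count, and function names here are our own illustrative assumptions, not the actual firmware):

```python
STEPS_PER_REV = 200 * 16  # e.g. a 200-step motor with 16x microstepping (assumed)

def capture_schedule(photos=40, steps_per_rev=STEPS_PER_REV):
    """Plan one full revolution: how far to turn before each photo."""
    base, remainder = divmod(steps_per_rev, photos)
    # Spread any leftover steps across the first few moves so the
    # turntable ends exactly where it started.
    return [base + (1 if i < remainder else 0) for i in range(photos)]

def run(schedule, step_motor, trigger_shutter):
    """Drive the turntable: move, shoot, repeat until the object has gone full circle."""
    for move in schedule:
        step_motor(move)      # advance the stepper a fixed amount
        trigger_shutter()     # fire the camera, then repeat
```

On the Arduino, `step_motor` and `trigger_shutter` would talk to the AutoDriver board and the camera trigger circuit; the scheduling logic is the same either way.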

Fortunately, we weren’t the first people to have this idea, and we were able to draw upon the work of many other projects.  In particular, we used the hardware design from the MIT Spin Project, along with some of the software and electrical design from Sparkfun’s Autodriver Getting Started Guide.  We put the pieces together and added a bit of custom code and hardware for the camera trigger.

The first step was getting all the parts.  This is our current build sheet, though we’re making some adjustments as we continue to test.

[googleapps domain="docs" dir="spreadsheets/d/18FVIXNNT8n3cJQVWfgfFD0G26C9ARmLLhKIKlKBOokA/pubhtml" query="widget=true&headers=false" width="100%" height="500" /]

We also had to do some fabrication, using a laser cutter and 3d printer.  Fortunately, here at the University of Minnesota, we can leverage the XYZ Lab.  The equipment there made short work of our acrylic, and our 3d printed gear came out beautifully.

With the parts on hand and the enclosure fabricated, it was mostly just a matter of putting it all together.  We started with a basic electronics breadboard to do some testing and experimentation.

Our cameras use a standard 2.5mm “minijack” (like a headphone jack, but smaller) connector for camera control.  These are very easy to work with – they just have three wires (a ground and two others).  By connecting one wire to ground, you can trigger the focus function.  The other wire will trigger the shutter.  A single basic transistor is all that’s necessary to give our Arduino the ability to control these functions.
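The focus-then-shutter sequence is easy to model. Here's a sketch of the trigger logic as a Python stand-in for the Arduino code (the class and line names are our own; in hardware, "grounding a line" means switching the transistor on):

```python
import time

class RemoteTrigger:
    """Models the 2.5mm minijack remote: pulling a line to ground activates it.
    In the real build, a transistor driven by an Arduino pin does the grounding."""
    def __init__(self):
        self.log = []  # record of which lines were grounded, for testing

    def ground(self, line, hold=0.05):
        self.log.append(line)  # stand-in for switching the transistor on
        time.sleep(hold)       # hold the line low briefly, then release

def take_photo(trigger, focus_delay=0.0):
    trigger.ground("focus")    # half-press: let the camera autofocus
    time.sleep(focus_delay)    # give the camera time to lock focus
    trigger.ground("shutter")  # full press: fire the shutter
```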

The basic wiring for the motor control follows the hookup guide from Sparkfun, especially the wiring diagram towards the end. The only other addition we made was a simple momentary button to start and stop the process.

Once we were confident that the equipment was working, we disassembled the whole thing and put it back together with a solder-on prototyping board and some hardware connectors.  Eventually, we’d like to fabricate a PCB for this, which would be even more robust and compact. The Spin project has PCB plans available, though their electronics setup is a little different. If anyone has experience laying out PCBs and wants to work on a project, let us know!

While building the first turntable was pretty time consuming, the next one could be put together in only a few hours.  We’re going to continue tuning the software for this one, to find the ideal settings.  If there are folks on campus who’d like to build their own, just let us know!


[youtube https://www.youtube.com/watch?v=txXyAkVK_tE&w=560&h=315]

Introduction to Photogrammetry

Whether you’re working on VR, 3D printing, or innovative research, capturing real-world objects in three dimensions is an increasingly common need.  There are a lot of technologies to aid in this process.  When people think of 3D capture, the first thing that often comes to mind is a laser scanner – a laser beam that moves across an object, capturing data about the surface.  They look very “Hollywood” and impressive.

Another common type of capture is structured-light capture.  In structured light 3d capture, different patterns are projected onto an object (using something like a traditional computer projector).  A camera then looks at how the patterns are stretched or blocked, and calculates the shape of the surface from there.  This is also how the Microsoft Kinect works, though it uses invisible (infrared) patterns.

Both of these approaches can deliver high precision results, and have some particular use cases.  But they require specialized equipment, and often specialized facilities.  There’s another technology that’s much more accessible: photogrammetry.

Calculated Camera Positions

In very simple terms, photogrammetry involves taking many photos of an object, from different sides and angles.  Software then uses those photos to reconstruct a 3d representation, including the surface texture.  The term photogrammetry actually encapsulates a wide variety of processes and outputs, but in the 3d space, we’re specifically interested in stereophotogrammetry.  This process involves finding the overlap between photos.  From there, the original camera positions can be calculated.  Once you know the camera positions, you can use triangulation to calculate the position in space of any given point.
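To make the triangulation step concrete, here's a toy example with the simplest possible setup: two idealized, side-by-side pinhole cameras whose positions are already known (all numbers are invented). A point's shift between the two images – its disparity – is enough to recover its depth:

```python
def depth_from_disparity(x_left, x_right, focal_px, baseline_m):
    """Depth (in meters) of a point seen by two rectified side-by-side cameras.

    x_left/x_right: the point's image x-coordinate in each view (pixels)
    focal_px: focal length in pixels; baseline_m: camera separation in meters
    """
    disparity = x_left - x_right  # nearer points shift more between views
    if disparity <= 0:
        raise ValueError("point must lie in front of both cameras")
    return focal_px * baseline_m / disparity

# A point at x=120 in the left image and x=100 in the right, with a
# 1000-pixel focal length and cameras 10cm apart, sits 5m away.
```

Real photogrammetry software solves a much harder version of this – dozens of cameras at unknown positions – but the geometric idea is the same.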

The process itself is very compute-intensive, so it needs a powerful computer (or a patient user).  Photogrammetry benefits from very powerful graphics cards, so it’s currently best suited to use on customized PCs.

One of the most exciting parts of photogrammetry is that it doesn’t require photos taken in a controlled lighting situation.  You can experiment with performing photogrammetry using a mobile device and the 123D Catch application.  Photogrammetry can even be performed using still frames extracted from video – walking around an object capturing video, for example.

For users looking to get better results, we’re going to be writing some guides on optimizing the process. A good quality digital camera with a tripod, and some basic lighting equipment can dramatically improve the results.

Because it uses simple images, photogrammetry is also well suited to recovering 3D data from imaging platforms like drones or satellites.

Photogrammetry Software

There are a handful of popular tools for photogrammetry.  One of the oldest and most established tools is PhotoScan from Agisoft.  PhotoScan is a “power user” tool, which allows for many custom interventions to optimize the photogrammetry process.  It can be pretty intimidating for new users, but in the hands of the right user it’s very powerful.

An easier (and still very powerful) alternative is Autodesk Remake.  Remake doesn’t expose the same level of control that PhotoScan has, but in many cases it can deliver a stellar result without any tweaking.  It also has sophisticated tools for touching up 3d objects after the conversion process.  An additional benefit is the ability to output models for a variety of popular 3D printers.  Remake is free for educational use as well.

There are also photogrammetry tools for specialized use cases.  We’ve been experimenting with Pix4D, which is a photogrammetry toolset designed specifically for drone imaging.  Because Pix4D knows about different models of drones, it can automatically correct for camera distortion and the types of drift that are common with drones.  Pix4D also has special drone control software, which can handle the capture side, ensuring that the right number of photos are captured, with the right amount of overlap.