Code Signing Gotcha on macOS 10.12 Sierra

Most Mac developers have, at one time or another, struggled with an issue related to code signing. Code signing is the process by which a cryptographic “signature” is embedded into an application, allowing the operating system to confirm that the application hasn’t been tampered with. This is a powerful tool for preventing forgery and hacking attempts. It can be pretty complicated to get it all right though.

We recently ran into an issue in which a new test build of EditReady was working fine on our development machines (running macOS 10.12 Sierra), and was working fine on the oldest version of macOS we support for EditReady (Mac OS X 10.8.5), but wasn’t working properly on Mac OS X 10.10. That seemed pretty strange – it worked on versions of the operating system both older and newer than 10.10, so we would expect it to work there as well.

The issue was related to code signing: the operating system was reporting an error with one of the libraries that EditReady uses. Libraries are chunks of code designed to be reusable across applications. It’s important that they be code signed as well, since the code inside them gets executed. Normally, when an application is exported from Xcode, all of the libraries inside it are signed along with it. Everything appeared to be in order – Apple’s diagnostic tools like codesign and spctl reported no problems.
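For anyone who wants to run the same checks, these are roughly the commands we used (the application path here is just a placeholder):

    codesign --verify --deep --strict --verbose=2 /Applications/EditReady.app
    spctl --assess --type execute --verbose /Applications/EditReady.app

Both of these came back clean, which is what made the failure so puzzling.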

The library that was failing was one we had recently recompiled. When we compared the old version of the library with the new one, the only difference we saw was in the types of cryptographic hashes used to sign them. The old version of the library was signed with both the sha1 and sha256 algorithms, whereas the new version was signed with only sha256.
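You can see which hash algorithms a signature contains by asking codesign for verbose details. The library paths below are placeholders and the exact fields vary between OS versions, but the output looks something like this (codesign writes its details to stderr, hence the redirect):

    # old library (dual signed)
    codesign -dvvv old/libexample.dylib 2>&1 | grep "Hash choices"
    Hash choices=sha1,sha256

    # newly recompiled library
    codesign -dvvv new/libexample.dylib 2>&1 | grep "Hash choices"
    Hash choices=sha256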

We finally stumbled upon a tech note from Apple, which states:

Note: When you set the deployment target in Xcode build settings to 10.12 or higher, the code signing machinery generates only the modern code signature for Mach-O binaries. A binary executable is always unsuitable for systems older than the specified deployment target, but in this case, older systems also fail to interpret the code signature.

That seemed like a clue. Older versions of Mac OS X don’t understand sha256 code signatures, and need the sha1 hash. However, the deployment target in all of our Xcode projects is clearly set to 10.8. There was another missing piece.

It turns out that the codesign tool, the command line utility that Xcode invokes to do the actual signing, looks at the LC_VERSION_MIN_MACOSX load command within each binary it signs, and decides which types of hashes to apply based on what it finds there. In our case, when we compiled the dynamic library using the traditional “configure” and “make” commands, we hadn’t specified a minimum version (it isn’t otherwise necessary for this library), so it defaulted to the current OS version. By recompiling with the “-mmacosx-version-min=10.8” compiler flag, we got a library that codesign signs with both hashes, and an application that runs properly on 10.10.
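To see which minimum version a binary was built against, you can dump its load commands with otool, and passing the flag through a typical autoconf-style build looks roughly like this (the library name is a placeholder, and some projects want the flag supplied differently):

    # check the minimum OS version recorded in the binary
    otool -l libexample.dylib | grep -A 3 LC_VERSION_MIN_MACOSX

    # rebuild with an explicit minimum version
    ./configure CFLAGS="-mmacosx-version-min=10.8" LDFLAGS="-mmacosx-version-min=10.8"
    make

Setting the MACOSX_DEPLOYMENT_TARGET environment variable before running configure has the same effect with Apple’s compilers.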

Oh, and why did the sha256-only build still work on 10.8? It turns out that versions of Mac OS X prior to 10.10.5 don’t validate the code signatures of libraries at all.

Photogrammetry with GIGAMacro images

One of the exciting possibilities in the AISOS space is the opportunity to combine technologies in novel ways. For example, combining RTI (Reflectance Transformation Imaging) and photogrammetry may allow for 3d models with increased precision in surface details. Along these lines, we recently had the opportunity to do some work combining our GIGAMacro with our typical photogrammetry process.

This work was inspired by Dr. Hancher from our Department of English. He brought us some wooden printing blocks, which are covered with very fine surface carvings. His research interests include profiling the depth of the cuts. The marks are far too fine for our typical photogrammetry equipment, and while they might be well suited to RTI, the size of the blocks would make imaging them with RTI very time consuming.

As we were pondering our options, one of our graduate researchers, Samantha Porter, pointed us to a paper she’d recently worked on which dealt with a similar situation.

By manually setting the GIGAMacro to image with a lot more overlap than is typical (we ran at a 66% overlap), and using a level of magnification which fully reveals the subtle surface details we were interested in, we were able to capture images well suited to photogrammetry. This process generates a substantial amount of data (a small wooden block consisted of more than 400 images), but it’s still manageable using our normal photogrammetry tools (Agisoft Photoscan).

After approximately 8 hours of processing, the results are impressive. Even the most subtle details are revealed in the mesh (the mesh seen below has been simplified for display in the browser, and has had its texture removed to better show the surface details). Because the high-overlap images can still be stitched using the traditional GIGAMacro toolchain, we can also generate high resolution 2d images for comparison.

We’re excited to continue to refine this technique, to increase the performance and the accuracy.