Code Signing Gotcha on macOS 10.12 Sierra

Most Mac developers have, at one time or another, struggled with an issue related to code signing. Code signing is the process by which a cryptographic “signature” is embedded into an application, allowing the operating system to confirm that the application hasn’t been tampered with. This is a powerful tool for preventing forgery and hacking attempts. It can be pretty complicated to get it all right though.

We recently ran into an issue in which a new test build of EditReady was working fine on our development machines (running macOS 10.12 Sierra), and was working fine on the oldest version of the operating system we support for EditReady (Mac OS X 10.8.5), but wasn’t working properly on Mac OS X 10.10. That seemed pretty strange – it worked on versions of the operating system both older and newer than 10.10, so we would expect it to work there as well.

The issue was related to code signing – the operating system was reporting an error with one of the libraries that EditReady uses. Libraries are chunks of code designed to be reusable across applications. It’s important that they be code signed as well, since the code inside them gets executed. Normally, when an application is exported from Xcode, all of the libraries inside it are signed. Everything appeared correct – Apple’s diagnostic tools like codesign and spctl reported no problems.
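For reference, these are the sorts of checks we’re talking about. This is just a sketch – the application path is an example, and the exact output wording varies between OS releases:

# Verify the signature on the app bundle and everything nested inside it
codesign --verify --deep --strict --verbose=2 /Applications/EditReady.app

# Ask Gatekeeper whether it would allow the app to run
spctl --assess --verbose /Applications/EditReady.app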

The library that was failing was one we had recently recompiled. When we compared the old version of the library with the new one, the only difference we saw was in the types of cryptographic hashes being applied. The old version of the library was signed with both the SHA-1 and SHA-256 algorithms, whereas the new version was signed only with SHA-256.
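If you want to see which hash algorithms are present in a signature yourself, codesign can print them. A minimal sketch, with the library path as a placeholder – look for the hash-related lines (“Hash type”, “CandidateCDHash”) in the output:

# Print signature details for a library, including which digests it carries
codesign -d -vvv /path/to/library.dylib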

We finally stumbled upon a tech note from Apple, which states:

Note: When you set the deployment target in Xcode build settings to 10.12 or higher, the code signing machinery generates only the modern code signature for Mach-O binaries. A binary executable is always unsuitable for systems older than the specified deployment target, but in this case, older systems also fail to interpret the code signature.

That seemed like a clue. Older versions of Mac OS X don’t support SHA-256 signing and need the SHA-1 hash. However, all of our Xcode deployment targets clearly specified 10.8. There was another missing piece.

It turns out that the codesign tool, a command line utility invoked by Xcode, actually looks at the LC_VERSION_MIN_MACOSX load command within each binary it inspects. It then decides which types of hashes to apply based on the data it finds there. In our case, when we compiled the dynamic library using the traditional “configure” and “make” commands, we hadn’t specified a minimum version (it’s not otherwise necessary for this library), so it defaulted to the version of the OS we were building on. By recompiling with the “-mmacosx-version-min=10.8” compiler flag, we were able to build an application that ran properly on 10.8 and up.
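For anyone hitting the same problem, here’s roughly what that looks like on the command line. The library name is a placeholder and your configure invocation will differ, but the idea is to pass an explicit deployment target to the compiler and then confirm the load command with otool:

# Check which minimum OS version is recorded in the binary –
# codesign uses this to decide which hash types to emit
otool -l libexample.dylib | grep -A 3 LC_VERSION_MIN_MACOSX

# Rebuild with an explicit deployment target so codesign includes
# the older SHA-1 hash alongside SHA-256
./configure CFLAGS="-mmacosx-version-min=10.8" LDFLAGS="-mmacosx-version-min=10.8"
make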

Oh, and why did it work on 10.8 in the first place? It turns out that versions of Mac OS X prior to 10.10.5 don’t validate the code signatures of libraries.

Parsing and plotting OMNIC Spectra SPA files with R and PHP

This is a quick “howto” post describing how to parse OMNIC Spectra SPA files, in case anyone goes a-google’n for a similar solution in the future.

SPA files consist of some metadata, along with the data stored as little-endian float32 values. The files contain a basic manifest near the start, including the offset and run length for the data. The start offset is stored at byte 386 (a two-byte integer), and the run length at byte 390 (another two-byte integer). The data itself is strictly the little-endian floats – no start and stop markers, no control characters.
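Before writing a full parser, you can sanity-check those manifest values from the command line. A quick sketch, assuming a little-endian machine (od interprets multi-byte values in native byte order) and a file named sample.spa as a placeholder:

# Print the two-byte start offset stored at byte 386
od -A d -t u2 -j 386 -N 2 sample.spa

# Print the two-byte run length stored at byte 390
od -A d -t u2 -j 390 -N 2 sample.spa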

These files are pretty easy to parse and plot, at least to get a simple display. Here’s some R code to read and plot an SPA:

library(ggplot2)

pathToSource <- "fill_in_your_path"
to.read <- file(pathToSource, "rb")

# Read the start offset (unsigned two-byte little-endian integer at byte 386)
seek(to.read, 386, origin = "start")
startOffset <- readBin(to.read, "int", n = 1, size = 2, signed = FALSE, endian = "little")

# Read the data length in bytes (unsigned two-byte little-endian integer at byte 390)
seek(to.read, 390, origin = "start")
readLength <- readBin(to.read, "int", n = 1, size = 2, signed = FALSE, endian = "little")

# Seek to the start of the data
seek(to.read, startOffset, origin = "start")

# The data is four-byte floats
floatCount <- readLength / 4

# Read all our floats and close the connection
floatData <- readBin(to.read, "double", n = floatCount, size = 4, endian = "little")
close(to.read)

floatDataFrame <- as.data.frame(floatData)
floatDataFrame$ID <- seq.int(nrow(floatDataFrame))
p.plot <- ggplot(data = floatDataFrame, aes(x = ID, y = floatData))
p.plot + geom_line() + theme_bw()

In my particular case, I needed to plot these files from PHP, and I already had a pipeline that shells out to gnuplot to plot other types of data. So, in case it’s helpful to anyone, here’s the same plotting in PHP.

<?php

function generatePlotForSPA($source, $targetFile) {

    $sourceFile = fopen($source, "rb");

    // Read the data offset (unsigned little-endian 16-bit integer at byte 386)
    fseek($sourceFile, 386);
    $targetOffset = current(unpack("v", fread($sourceFile, 2)));
    if($targetOffset > filesize($source)) {
        fclose($sourceFile);
        return false;
    }

    // Read the data length (unsigned little-endian 16-bit integer at byte 390)
    fseek($sourceFile, 390);
    $dataLength = current(unpack("v", fread($sourceFile, 2)));
    if($dataLength + $targetOffset > filesize($source)) {
        fclose($sourceFile);
        return false;
    }

    // Read the raw float data and write it to a scratch file for gnuplot
    fseek($sourceFile, $targetOffset);
    $rawData = fread($sourceFile, $dataLength);
    fclose($sourceFile);

    $rawDataOutputPath = $source . "_raw_data";
    $outputFile = fopen($rawDataOutputPath, "wb");
    fwrite($outputFile, $rawData);
    fclose($outputFile);

    // gnuplot script template; the raw floats are piped in on stdin
    $gnuScript = "set terminal png size {width},{height};
        set output '{output}';

        unset key;
        unset border;

        plot '<cat' binary filetype=bin format='%float32' endian=little array=1:0 with lines lt rgb 'black';";

    $targetScript = str_replace("{output}", $targetFile, $gnuScript);
    $targetScript = str_replace("{width}", 500, $targetScript);
    $targetScript = str_replace("{height}", 400, $targetScript);
    $gnuPath = "gnuplot";
    $outputScript = "cat \"" . $rawDataOutputPath . "\" | " . $gnuPath . " -e \"" . $targetScript . "\"";
    exec($outputScript);

    if(!file_exists($targetFile)) {
        return false;
    }
    return true;
}
?>

Transcoding Modern Formats

Since I’ve been working on a tool in this space recently, I thought I’d write something up in case it helps folks unravel how to think about transcoding these days.

The tool I’ve been working on is EditReady, a transcoding app for the Mac. But why would you want to transcode in the first place?

Dailies

After a day of shooting, there are a lot of people who need to see the footage from the day. Most of these folks aren’t equipped with editing suites or viewing stations – they want to view footage on their desktop or mobile device. That can be a problem if you’re shooting ProRes or similar.

Converting ProRes, DNxHD, or MPEG-2 footage to H.264 with EditReady is fast and easy. With bulk metadata editing and custom file naming, managing all the files from the set becomes simpler and easier to track.

One common workflow would be to drop all the footage from a given shoot into EditReady. Use the “set metadata for all” command to attach a consistent reel name to all of the clips. Do some quick spot-checks on the footage using the built-in player to make sure it’s what you expect. Use the filename builder to tag all the footage with the reel name and the file creation date. Then, select the H.264 preset and hit convert. Now anyone who needs the footage can easily take the proxies with them on the go, without needing special codecs or players, and regardless of whether they’re working on a PC, a Mac, or even a mobile device.

If your production is being shot in the Log space, you can use the LUT feature in EditReady to give your viewers a more traditional “video levels” daily. Just load a basic Log to Video Levels LUT for the batch, and your converted files will more closely resemble graded footage.

Mezzanine Formats

Even though many modern post production tools can work natively with H.264 from a GoPro or iPhone, there are a variety of downsides to that type of workflow. First and foremost is performance. When you’re working with H.264 in an editor or color correction tool, your computer has to constantly work to decompress the H.264 footage. Those are CPU cycles that aren’t being spent generating effects, responding to user interface clicks, or drawing your previews. Even apps that endeavor to support H.264 natively often get bogged down, or have trouble with all of the “flavors” of H.264 that are in use. For example, mixing and matching H.264 from a GoPro with H.264 from a mobile phone often leads to hiccups or instability.

By using EditReady to batch transcode all of your footage to a format like ProRes or DNxHD, you get great performance throughout your post production pipeline, and more importantly, you get consistent performance. Since you’ll generally be exporting these formats from other parts of your pipeline as well – getting ProRes effects shots for example – you don’t have to worry about mix-and-match problems cropping up late in the production process either.

Just like with dailies, the ability to apply bulk or custom metadata to your footage during your initial ingest also makes management easier for the rest of your production. It also makes your final output faster – transcoding from H.264 to another format is generally slower than transcoding from a mezzanine format. Nothing takes the fun out of finishing a project like watching an “exporting” bar endlessly creep along.

Modernization

The video industry has gone through a lot of digital formats over the last 20 years. As Mac OS X has been upgraded over the years, it’s gotten harder to play some of those old formats. There’s a lot of irreplaceable footage stored in formats like Sorensen Video, Apple Intermediate Codec, or Apple Animation. It’s important that this footage be moved to a modern format like ProRes or H.264 before it becomes totally unplayable by modern computers. Because EditReady contains a robust, flexible backend with legacy support, you can bring this footage in, select a modern format, and click convert. Back when I started this blog, we were mostly talking about DV and HDV, with a bit of Apple Intermediate Codec mixed in. If you’ve still got footage like that around, it’s time to bring it forward!

Output

Finally, the powerful H.264 transcoding pipeline in EditReady means you can generate beautiful, deliverable H.264 more rapidly than ever. Just drop in your final, edited ProRes, DNxHD, or even uncompressed footage and generate a high quality H.264 file for delivery. It’s never been this easy!

See for yourself

We released a free trial of EditReady so you can give it a shot yourself. Or drop me a line if you have questions.

Getting Old in a Maturing Industry

I recently returned from the 2013 NAB Expo in Las Vegas. For those who are unfamiliar, the NAB Expo is the biggest trade show and convention for the domestic film and broadcast industry. For one week every year, the Las Vegas Convention Center plays host to 100,000 film, video, and radio professionals, who come to check out the latest gear, learn what’s hot, and just generally hang out with a group of like-minded folks. It’s a “who’s who” of the business, and it’s far more massive than you can possibly imagine.

This was, as best as I can recall, my 12th year attending NAB. Some of those years have been quick hit-and-run visits (fly in in the morning, and out in the evening) and some have been incredibly intense week-long affairs, building and manning the booth for Divergent Media. This year was somewhere in the middle – a week in Las Vegas, but only smaller events for the company, and plenty of time to explore the show.

In the aftermath of the show, I’ve been reflecting on the changes I’ve seen, and what it means about the industry.

When I first went to NAB in 2001, there was a sharp divide between “indie” producers and the professionals. The desktop video revolution was still picking up steam, as DV cameras got better and “affordable” editing software gained pro features. Indie filmmaking involved a lot of clever repurposing, scraping by, and bending rules. Figuring out how to build your own Steadicam approximation, repurposing Home Depot lights for your shoot, and assembling FireWire drives from bare enclosures to save a few bucks.

There was precious little “prosumer” gear on the market – the pricing model still favored rental houses and big shops. Indies were largely ignored on the show floor. Apple and a few others saw where things were going, but the “big guys” were indifferent or downright disparaging.

Over the following four or five years, the “indie” part of “indie filmmaking” dropped off – everyone was an indie filmmaker, and everyone was a pro. Equipment became much more affordable and much better, software became cheaper, and everyone began to accept the new reality.

This was a particularly exciting time in the industry because we were right on the edge of what technology was capable of. Realtime effects, HD editing, multicam editing – you could see Moore’s Law in action. Each year, the quality of what could be done on a reasonable budget (or done at all) improved immensely.

The bursting of the financial bubble had a big impact on NAB, as it did on every trade show. Attendance dropped, and the pace of innovation slowed. But the bigger change is what began to happen around 2009, and built speed over the following years. The part of the industry we used to call “indie” grew up and became mature, and nothing replaced it to hurl rocks at the old guard. Today, innovation is no longer a matter of “last year that wasn’t possible and now it is,” or “last year this was $10k and now it’s $1k,” but rather “last year that took four steps and now it takes three” or “now it looks 5% better.”

Take, for example, the upstart camera manufacturer Red. Red was the star of the 2006 NAB show – they announced the Red One, a 4K “cinema” camera at a price that indies could potentially afford. They were aiming to disrupt the camera market, and filmmaking in general.

While Red had teething problems, by late 2007 it was a real product and accomplished a lot of what its creators set out to do. People who had been shooting films on $250,000 Sony F950s or actual film cameras began working with this $30,000 rig. Red bet on affordable pixel density, and for a while, they won.

The problem with disrupting a market is that if your competition survives the disruption, you have to compete on an ongoing basis. Now, in 2013, we have 4K (or higher) cameras from every manufacturer, at prices that in some cases substantially undercut Red. There are new kids on the block, like Blackmagic Design. And big vendors like Sony have caught up on pricing, marketing, and features. Whereas Red was once the star of the show, they’re now a bit of an also-ran.

The industry has matured in such a way that it gives filmmakers more choice for lower prices. You can’t pick a bad camera anymore. You can’t pick a bad editor. The industry has caught up with the market it serves. It’s no longer in need of a massive disruption. You don’t need clever tricks or dumpster diving – you just need talent.

It’s not just true in cameras of course – Final Cut Pro disrupted editing, Blackmagic’s revival of Resolve disrupted color correction, the LED revolution disrupted lighting, and obviously distribution is just one long stream of disruption.

The reality is that the film and video industry has grown up, and reached a point of stability. While there may be a new “indie filmmaking” revolution in the future, it doesn’t feel like it’s right around the corner. This is reflected in the NAB Expo. It feels like the demographics are skewing a bit older. I still feel like I’m one of the “young” folks there, even though I’m substantially older than when I first attended. There’s less razzle-dazzle – fewer vendors going to extremes to show up the vendor at the next booth. It’s less circus, less sexy, just business.

Divergent Opinions: NAB 2013

On this week’s episode of the media industry’s most influential podcast, Mike and I discuss our expectations for NAB 2013. We also dive into Apple’s new Final Cut Pro X marketing push and what it means for the NLE sector.