Loading JPEG-compressed pyramidal TIFF files in the browser

Quick bit of backstory. In the world of digital preservation and archiving, there’s a standard called IIIF (pronounced triple-eye-eff). It’s designed to provide interoperability for media (initially images, but now other types as well) between different archival platforms. So, if you’ve got two high resolution images stored in two different archives, IIIF would let you load both images side by side in a viewer for comparison. That’s pretty cool! In theory!

The reality of IIIF in practice is a little more boring – it’s become shorthand for “zooming image viewer,” and there seem to be relatively few interesting examples of interoperability. Because IIIF is basically only used by nonprofit archives and museums, the tooling isn’t necessarily the latest and greatest. For zooming images (the focus of this article), one option is to precompute all of your image pyramids and write them out as individual tiles on a server, along with a JSON manifest. However, storing tens of thousands of tiny JPEGs on a storage platform like S3 has its downsides in terms of cost, complexity and performance.

Alternatively, you can run a IIIF server which does tiling on-the-fly. On-the-fly tiling can be really CPU and IO intensive, so these servers often rely on healthy caches to deliver good performance. One workaround is to store your source images in a pre-tiled, single-file format like a pyramidal TIFF file. It’s basically what it sounds like – a single .tif file, which internally stores your image at a bunch of different zoom levels, broken up into small (usually 256×256 pixel) tiles. These individual tiles can be JPEG compressed to minimize storage size. A tile server can very quickly serve these tiles in response to a request, because no actual image processing is required – just disk IO. The disadvantage of this approach is that, in a cloud-first world, tile servers add a lot of cost and complexity.

Our Elevator app has supported zooming images for years, for both standard high resolution images and specialized use cases like SVS and CZI files from microscopes. Initially, our backend processors did this using the many-files approach described above. We used vips to generate deepzoom pyramids, which are essentially a set of folders each containing JPEG tiles (sometimes tens of thousands). These were copied to S3, and could be loaded directly from S3 by our custom Leaflet plugin. Because the images follow a predictable naming convention (zoom_level/x_y.jpeg), there’s no translation needed to find the right tiles for a given image coordinate. This had some definite downsides though – writing all those files was disk intensive, copying them to S3 was slow and sometimes flaky (hitting rate limits, for example), and deleting a pyramid was an O(n) operation – one call per tile.
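
For reference, the tile lookup in that scheme is nothing more than string building – something along these lines (a hypothetical helper for illustration, not our actual plugin code):

// rough sketch of the deepzoom naming convention described above
const deepzoomTileUrl = function(baseUrl, coords) {
    return `${baseUrl}/${coords.z}/${coords.x}_${coords.y}.jpeg`;
};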

As a quick fix to deal with some specific rate limit issues, we first moved to a tar+index approach using an in-house tool called Tarrific (thanks James!). After using vips to make the deepzoom pyramid, we tarred the files (without gzip compression) and copied the single tar file to the server. Tarrific then produced a JSON index of the tar, with byte offsets for each image. That was stored on the server as well. We were able to update our Leaflet plugin to read the index file (itself gzip encoded), then do range requests into the tar to access the files we needed, which were still just JPEGs. This solved the S3 side of the equation, giving us a single file to upload or delete. However, I didn’t love having a totally proprietary format, and it still involved a bunch of extra disk operations during encoding.
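
To illustrate the mechanics, here’s a rough sketch of that client-side lookup – the names and the exact index layout here are hypothetical, but the idea is the same: fetch the index once, then issue range requests into the tar.

// rough sketch of the tar + index approach (hypothetical names and index layout)
const loadTarIndex = async function(indexUrl) {
    // the index file is gzip encoded; served with a gzip Content-Encoding, the browser inflates it for us
    const response = await fetch(indexUrl);
    return response.json(); // e.g. { "12/34_56.jpeg": { "offset": 1024, "size": 8192 }, ... }
};

const fetchTileFromTar = async function(tarUrl, index, tilePath) {
    const entry = index[tilePath];
    // HTTP ranges are inclusive, so the last byte is offset + size - 1
    const response = await fetch(tarUrl, {
        headers: {
            Range: `bytes=${entry.offset}-${entry.offset + entry.size - 1}`,
        },
    });
    // the bytes inside the tar are still ordinary JPEGs
    return response.blob();
};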

That brings us up to the present-ish day. Now, I should mention, Elevator doesn’t support IIIF. That’s mostly because I run it as a dictatorship and I don’t find IIIF that useful. But it’s also because the IIIF format imposes some costs that wouldn’t fit well into our AWS attribution model. That said, I’ve always tried to keep IIIF in mind as something we could support in the future, if there was a good use case. To that end, I recently spent some time looking at different file formats which would be well suited for serving via a IIIF server. In pursuit of that I did a deep dive on pyramidal TIFFs. As I dug in, it seemed like in principle they should be able to be served directly to the browser, using range requests, much like we’d done with tar files. In fact, there are related formats like GeoTIFF that seem to do exactly that. Those other formats (or specifically, their tooling) didn’t seem well suited to the massive scale of images we deal with though (think tens or hundreds of gigapixels).

I recently had a long train ride across Sri Lanka (#humblebrag) and decided to see if I could make a pyramidal TIFF file work directly in Leaflet without any server side processing. Turns out the answer is yes, hence this post!

Beginning at the beginning, we’re still using VIPS to do our image processing, using the tiffsave command.

vips tiffsave sourceFile --tile --pyramid --compression jpeg -Q 90 --tile-width 256 --tile-height 256 --bigtiff --depth onepixel outputFile.tiff

(The onepixel flag is there to maintain compatibility with our existing Leaflet plugin, which assumes that pyramids start from a 1×1 scale instead of a single tile scale.)

The trick with pyramidal TIFF files is that even though they support using JPEG compression on the image tiles, they don’t actually store full JPEGs internally. Each zoom level shares a single set of quantization tables, and the individual tiles don’t have a full set of JPEG headers. Fortunately, JPEG is a super flexible format when it comes to manipulation, as it’s a linear sequence of marker-delimited segments. Nothing is based on specific byte offsets, which would have made this a nightmare.
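
To make that concrete, here’s a rough sketch (not part of our plugin, just an illustration) that walks the marker-delimited segments at the start of a JPEG buffer, stopping when it reaches the start-of-scan marker where the entropy-coded image data begins:

// rough sketch: list the marker segments at the start of a JPEG buffer
const listJPEGSegments = function(buffer) {
    const bytes = new Uint8Array(buffer);
    const segments = [];
    let offset = 2; // skip the FFD8 start-of-image marker
    while (offset < bytes.length) {
        const marker = (bytes[offset] << 8) | bytes[offset + 1];
        if (marker === 0xFFDA || marker === 0xFFD9) {
            // SOS (scan data follows) or EOI (end of image) - stop walking
            segments.push({marker: marker.toString(16), offset: offset});
            break;
        }
        // every other segment carries a big-endian length that includes the two length bytes
        const length = (bytes[offset + 2] << 8) | bytes[offset + 3];
        segments.push({marker: marker.toString(16), offset: offset, length: length});
        offset += 2 + length;
    }
    return segments;
};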

Rather than writing a full TIFF parser from scratch, we’re able to leverage geotiff.js to do a lot of the heavy lifting. It handles reading the TIFF header and determining the number of zoom levels (image file directories, or IFDs, in TIFF parlance). From there, we can determine the offsets for specific tiles and get the raw data for each tile. The basic code for doing that is below (without the Leaflet-specific bits), though you can check out the (very poorly organized) git repo for the full Leaflet plugin and parser.

// first, a method to fetch the TIFF image headers.
// PathToYourTiffFile holds the URL of the pyramidal TIFF.
var tiff;
var image;
const loadIndex = async function() {
    tiff = await GeoTIFF.fromUrl(PathToYourTiffFile);
    image = await tiff.getImage();
};

var subimages = {};

// get the subimage for a given zoom level from the overall image.
// abuse globals because we've already broken the seal on that.
const getSubimage = async function getSubimage(coords) {
    // the headers for each zoom level need to be fetched once.
    // maxZoom is the deepest Leaflet zoom level (set elsewhere in the plugin);
    // IFD 0 in the TIFF is the full resolution layer, so we index from the other end.
    if(subimages[coords.z] == undefined) {
        subimages[coords.z] = await tiff.getImage(maxZoom - coords.z);
    }
    const subimage = subimages[coords.z];
    return subimage;
};


// fetch the raw JPEG tile data. Note that this isn't a valid jpeg.
// coords is an object with z, x, and y properties. 
const fetchRawJPEGTile = async function(coords) {

    const tileSize = 256;
    const subimage = await getSubimage(coords);
    const numTilesPerRow = Math.ceil(subimage.getWidth() / subimage.getTileWidth());
    const numTilesPerCol = Math.ceil(subimage.getHeight() / subimage.getTileHeight());
    const index = (coords.y * numTilesPerRow) + coords.x;
    // do this with our own fetch instead of geotiff so that we can get parallel requests
    // we need to trick the browser into thinking we're making the request
    // against different files or it won't parallelize the requests
    const offset = subimage.fileDirectory.TileOffsets[index];
    const byteCount = subimage.fileDirectory.TileByteCounts[index];
    const response = await fetch(PathToYourTiffFile + "?random=" + Math.random(), {
        headers: {
            Range: `bytes=${offset}-${offset + byteCount - 1}`,
        },
    });
    const buffer = await response.arrayBuffer();
    return buffer;
}

Using the above code, we can fetch the contents of a tile like this.

await loadIndex();
let myTile = await fetchRawJPEGTile({x: 10, y: 10, z:10});

There’s one big catch here though – as I mentioned, the contents of a pyramidal TIFF file are JPEG compressed, but they’re not actual JPEGs.

geotiff.js has the ability to return tiles as raster (RGB) images, but that wasn’t a great option for us for a couple of reasons. First, our existing Leaflet plugin counts on being able to work with <img> tags, and getting the raster data back into an <img> tag (presumably roundtripping through a <canvas>?) would have been clunky. Second, geotiff.js uses its own internal JPEG decoder, which seems like a waste given that every modern chip can do that decoding in hardware. Instead, I went down the path of trying to turn the JPEG-compressed tiles into actual JPEG files.

Through a mix of trial and error with a hex editor, reading various TIFF and JPEG specifications, and the very readable libjpeg and jpegtran source, I arrived at the code below, which gloms together just enough data to make a valid JPEG file.

// helper to turn a hex string (pairs of hex digits) into a Uint8Array
const hexStringToUint8Array = function(hexString) {
    return new Uint8Array(hexString.match(/.{2}/g).map(function(byte) { return parseInt(byte, 16); }));
};

const parseTileToJPEG = async function parseTileToJPEG(data, coords) {
    const subimage = await getSubimage(coords);
    const uintRaw = new Uint8Array(data);
    // magic adobe header which forces the jpeg to be interpreted as RGB instead of YCbCr
    const rawAdobeHeader = hexStringToUint8Array("FFD8FFEE000E41646F626500640000000000");
    // allocate the output array
    var mergedArray = new Uint8Array(rawAdobeHeader.length + uintRaw.length + subimage.fileDirectory.JPEGTables.length - 2 - 2 - 2);
    mergedArray.set(rawAdobeHeader);
    // the shared JPEGTables data starts with a start-of-image marker and ends with an end-of-image marker; strip both
    mergedArray.set(subimage.fileDirectory.JPEGTables.slice(2, -2), rawAdobeHeader.length);
    // the raw tile data also leads with a start-of-image marker we have to strip
    mergedArray.set(uintRaw.slice(2), rawAdobeHeader.length + subimage.fileDirectory.JPEGTables.length - 2 - 2);
    const url = URL.createObjectURL(new Blob([mergedArray], {type: "image/jpeg"}));
    return url;
};

So, with all of those bits put together, we can arrive at the following sample invocation, which will give us back a blob URL which can be set as the src of an <img>. Obviously this could be refactored in a variety of ways – I’m adapting our Leaflet code to make it more readable, but running within Leaflet means we do some abusive things vis-a-vis async/await.

let coords = {x: 10, y: 10, z:10};
let myTile = await fetchRawJPEGTile(coords);
let tileImageSource = await parseTileToJPEG(myTile, coords);

Let’s review the advantages of this approach. First, we’re able to generate a single .tiff file in our image processing pipeline. On our EC2 instances with relatively slow disk IO, this is meaningfully faster than generating a deepzoom pyramid. Synchronizing it to S3 is orders of magnitude faster as well. Working with the file on S3 is now a single file operation instead of O(n). And finally, the files we’re writing will be directly compatible with a IIIF image server if we decide we need to support that in the future.

In an ideal world, I’d love to see the IIIF spec roll in support for this approach to image handling, and just drop the requirement for a server. However, IIIF servers do lots of other stuff as well – dynamically rescaling images, rotating them, etc – so that’s probably not going to happen.

Reverse Engineering a LumeCube

One of our upcoming hardware projects (more on this later) requires a very bright, controllable light source. In the past, we’ve just used bare LEDs with our own cooling, power, control, etc. Not being someone who really understands how electricity works though, I always find it a hassle to get a reliable setup. For this project, we instead decided to just give a LumeCube a try. They’re far from cheap, but they’re very bright, and hopefully have rock solid reliability.

LumeCube has an iOS app which allows for control over Bluetooth. Although it also has a USB port, that’s only used for charging. We wanted to be able to do basic brightness control from within the Python application that will run the rest of the hardware. It doesn’t appear anyone has gone to the trouble of reverse engineering the LumeCube before, so we figured we’d give it a go. There were just two challenges:

  1. We’ve never reverse engineered a Bluetooth communications protocol.
  2. We don’t really understand how Bluetooth works.

Sticking with our philosophy of obtaining minimum-viable-knowledge, we started Googling “how to reverse engineer a bluetooth device” and ended up on this Medium post by Uri Shaked. Uri pointed us towards the “nRF Connect” app, which allows you to scan for Bluetooth devices and enumerate all of their characteristics (look at me using the lingo).

Acting on the assumption that LumeCube wasn’t going to go out of their way to secure their Bluetooth connection, we figured brightness control would be pretty straightforward. It was just a matter of tracking down the characteristic ID and the structure of the data. To do that, we began by listing all of the characteristics in nRF Connect. Then we would disconnect and launch the official app. From there, we could adjust the brightness, then flip back to nRF Connect, rescan the characteristics, and see what had changed.

The relevant characteristic popped out very quickly. The third byte of 33826a4d-486a-11e4-a545-022807469bf0 varied from 0x00 to 0x64, or 0-100. Writing new values back to that characteristic confirmed that hunch – huzzah!

Once we’d identified the characteristic, it was just a matter of implementing some controls in Python. For that we used Bleak, which offers good documentation and cross-platform support. Below is a quick example that sets the brightness to 100 (0x64).

import asyncio
import logging
from bleak import BleakClient

address = "819083F8-A230-4F61-8F94-CB69FF63D340"
LIGHT_UUID = "33826a4d-486a-11e4-a545-022807469bf0"
#logging.basicConfig(level=logging.DEBUG)

async def run(address):
    async with BleakClient(address) as client:
        # the third byte of the payload is the brightness, 0x00-0x64 (0-100)
        await client.write_gatt_char(LIGHT_UUID, bytearray(b"\xfc\xa1\x64\x00"), True)

loop = asyncio.get_event_loop()
loop.run_until_complete(run(address))

2015 Alfa Romeo 4C Coupe

I’m selling my 2015 Alfa Romeo 4C Coupe in Rosso Alfa over black leather. The car has 13,262 miles and I’m asking $42,000 OBO.

I’m the original owner of this car – I waited patiently (kind of) for the non-LE cars so that I could spec it myself, and took delivery in August 2015. I’m selling because I bought a 1979 Alfa Spider in the fall, and we don’t have space to keep both “fun” cars.

The car has always been serviced at Alfa Romeo of Minneapolis (the selling dealer) and has followed the Alfa servicing guide. Last year it had its 5 year service, including the timing belt and water pump. It’s only ever needed routine servicing.

I’ve used this car as my daily driver during summer months, as well as for road trips around Minnesota. I had the car fully wrapped in paint protection film after purchasing, so that we could go on adventures without worrying about a stray stone or an unexpected gravel road. It’s a ridiculous car which can’t help but make you giggle, with its absurd turbo whooshes and exhaust growl. Surprisingly comfortable for longer distances as well, with cruise control, bluetooth, and decent seats.

Spec

  • Rosso Alfa over black leather
  • Silver 5-hole 18″/19″ wheels
  • Red calipers
  • “Standard package” (Race exhaust, Bi-Xenon, cruise control, park assist, “premium” audio)
  • Window sticker

Modifications

  • Full body 3M Pro self healing paint protection film (PPF). Covers every panel, including the headlights.
  • Alfaworks UK “Helmholtz” exhaust with carbon tips (reduces highway drone). Sale includes the stock “race” exhaust as well.
  • Carbon fiber shift paddle extensions (snap on / snap off)

Condition

  • The rear tires were replaced in mid-2019 and have about 2500 miles on them. The fronts are original and have plenty of life left.
  • Car is mechanically perfect – it’s never had a problem.
  • There are a few nicks in the paint protection film, where it has sacrificed itself to a stone. They’re detailed in the photos. There’s also a spot at the rear where the film has lifted a bit – this could be trimmed back. I’ve taken the “don’t pull on a scab” approach.
  • There’s one nick in the passenger rear wheel, where I shamefully clipped a curb.

Incident history

When the PPF was originally fitted, a quarter-sized bit of clear coat lifted and had to be repaired (see thread). In 2016, I was an idiot and cracked the passenger side side-sill while putting the car on ramps. The piece was removed and repaired. In 2017, the car was rear-ended in stop-and-go traffic. Repair involved replacing the rear bumper cover and lower diffuser. After repair, the rear bumper was re-PPF’d.

Miscellaneous Other Stuff

The sale includes everything I got with the car – the car cover, charger, manuals, keys, etc. I believe I’ve also got various bits of swag that Alfa sent early owners, which I’m happy to pass on.

For more information, contact me via cmcfadden@gmail.com.

Photos

In addition to the gallery below, you can download the full resolution images.

Faults

Hacking Framework MacOS versions

We recently got bitten by a version of the bug mentioned in my last post here. That’s an issue where codesign uses sha256 hashes instead of sha1, causing crashes on launch on MacOS 10.10.5. In this case however, the framework was a third-party binary, which we couldn’t just recompile. Instead, we needed to hack the MacOS version to trick codesign. To check the version of a framework, you can use otool:

otool -l <framework> | grep -A 3 LC_VERSION_MIN_MACOSX

In this case, it was reporting 10.12. We need 10.10. The nice thing about values like this is that they tend to follow a really specific layout in the binary, since the loader needs to find them. The header file for the MacOS loader is pretty readable, and tells us where to look. LC_VERSION_MIN_MACOSX starts with a value of 0x24, and has this structure:

struct version_min_command {
    uint32_t    cmd;        /* LC_VERSION_MIN_MACOSX or
                               LC_VERSION_MIN_IPHONEOS */
    uint32_t    cmdsize;    /* sizeof(struct min_version_command) */
    uint32_t    version;    /* X.Y.Z is encoded in nibbles xxxx.yy.zz */
    uint32_t    sdk;        /* X.Y.Z is encoded in nibbles xxxx.yy.zz */
};


Pop open your hex editor and search for the first 0x24. The first “12” (0x0C) we find after that is the minor part of the MacOS version field, and the second is the SDK version. So we just change our 0x0C to a 0x0A and save it. Then we can run the otool command again to confirm the versions. Now, codesign will apply both sha1 and sha256 hashes when we build. Hoorah!
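
If you’d rather script the change than hunt through a hex editor, here’s a rough sketch of the same patch in Node.js. It assumes a thin (single-architecture), little-endian, 64-bit Mach-O file – a universal binary would need the fat header handled first – and the path is a placeholder.

// rough sketch: patch the LC_VERSION_MIN_MACOSX version field in a thin 64-bit Mach-O file
const fs = require("fs");

const LC_VERSION_MIN_MACOSX = 0x24;
const MH_MAGIC_64 = 0xfeedfacf;

const patchMinMacOSVersion = function(path, major, minor) {
    const buf = fs.readFileSync(path);
    if (buf.readUInt32LE(0) !== MH_MAGIC_64) {
        throw new Error("not a thin 64-bit Mach-O file");
    }
    const ncmds = buf.readUInt32LE(16); // number of load commands, from mach_header_64
    let offset = 32;                    // load commands start right after the 32 byte header
    for (let i = 0; i < ncmds; i++) {
        const cmd = buf.readUInt32LE(offset);
        const cmdsize = buf.readUInt32LE(offset + 4);
        if (cmd === LC_VERSION_MIN_MACOSX) {
            // version is X.Y.Z encoded in nibbles xxxx.yy.zz; the sdk field at offset + 12 could be patched the same way
            buf.writeUInt32LE((major << 16) | (minor << 8), offset + 8);
            fs.writeFileSync(path, buf);
            return true;
        }
        offset += cmdsize;
    }
    return false;
};

// e.g. make the framework report 10.10 instead of 10.12 (placeholder path)
patchMinMacOSVersion("MyFramework.framework/Versions/A/MyFramework", 10, 10);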

Code signing Gotcha on MacOS 10.12 Sierra

Most Mac developers have, at one time or another, struggled with an issue related to code signing. Code signing is the process by which a cryptographic “signature” is embedded into an application, allowing the operating system to confirm that the application hasn’t been tampered with. This is a powerful tool for preventing forgery and hacking attempts. It can be pretty complicated to get it all right though.

We recently ran into an issue in which a new test build of EditReady was working fine on our development machines (running MacOS 10.12, Sierra), and was working fine on the oldest version of MacOS we support for EditReady (Mac OS X 10.8.5), but wasn’t working properly on Mac OS X 10.10. That seemed pretty strange – it worked on versions of the operating system older and newer than 10.10, so we would expect it to work there as well.

The issue was related to code signing – the operating system was reporting an error with one of the libraries that EditReady uses. Libraries are chunks of code which are designed to be reusable across applications. It’s important that they be code signed as well, since the code inside them gets executed. Normally, when an application is exported from Xcode, all of the libraries inside it are signed. Everything appeared right – Apple’s diagnostic tools like codesign and spctl reported no problems.

The library that was failing was one that we had recently recompiled. When we compared the old version of the library with the new one, the only difference we saw was in the types of cryptographic hashes being applied. The old version of the library was signed with both the sha1 and sha256 algorithms, whereas the new version was only signed with sha256.

We finally stumbled upon a tech note from Apple, which states

Note: When you set the deployment target in Xcode build settings to 10.12 or higher, the code signing machinery generates only the modern code signature for Mach-O binaries. A binary executable is always unsuitable for systems older than the specified deployment target, but in this case, older systems also fail to interpret the code signature.

That seemed like a clue. Older versions of Mac OS X don’t support sha256 signing, and need the sha1 hash. However, all of our Xcode build targets clearly specify 10.8. There was another missing piece.

It turns out that the codesign tool, which is a command line utility invoked by Xcode, actually looks at the LC_VERSION_MIN_MACOSX load command within each binary it inspects. It then decides which types of hashes to apply based on the data it finds there. In our case, when we compiled the dynamic library using the traditional “configure” and “make” commands, we hadn’t specified a minimum version (it’s not otherwise necessary for this library), so the deployment target defaulted to the current OS version. By recompiling with the “-mmacosx-version-min=10.8” compiler flag, we were successfully able to build an application that ran on 10.8.

Oh, and what about 10.8? It turns out that versions of Mac OS X prior to 10.10.5 don’t validate the code signatures of libraries.

Parsing and plotting OMNIC Specta SPA files with R and PHP

This is a quick “howto” post to describe how to parse OMNIC Specta SPA files, in case anyone goes a-google’n for a similar solution in the future.

SPA files consist of some metadata, along with the data as little endian float32. The files contain a basic manifest right near the start, including the offset and runlength for the data. The start offset is at byte 386 (two byte integer), and the run length is at 390 (another two byte int). The actual data is strictly made up of the little endian floats – no start and stop, no control characters.

These files are pretty easy to parse and plot, at least to get a simple display. Here’s some R code to read and plot an SPA:

# ggplot2 is used for the plot at the end
library(ggplot2)

pathToSource <- "fill_in_your_path";
to.read = file(pathToSource, "rb");

# Read the start offset
seek(to.read, 386, origin="start");
startOffset <- readBin(to.read, "int", n=1, size=2);
# Read the length
seek(to.read, 390, origin="start");
readLength <- readBin(to.read, "int", n=1, size=2);

# seek to the start
seek(to.read, startOffset, origin="start");

# we'll read four byte chunks
floatCount <- readLength/4

# read all our floats
floatData <- c(readBin(to.read,"double",floatCount, size=4))

floatDataFrame <- as.data.frame(floatData)
floatDataFrame$ID<-seq.int(nrow(floatDataFrame))
p.plot <- ggplot(data = floatDataFrame,aes(x=ID, y=floatData))
p.plot + geom_line() + theme_bw()

In my particular case, I need to plot them from PHP, and already have a pipeline that shells out to gnuplot to plot other types of data. So, in case it’s helpful to anyone, here’s the same plotting in PHP.

<?php

function generatePlotForSPA($source, $targetFile) {

    $sourceFile = fopen($source, "rb");

    fseek($sourceFile, 386);
    $targetOffset = current(unpack("v", fread($sourceFile, 2)));
    if($targetOffset > filesize($source)) {
        return false;
    }
    fseek($sourceFile, 390);
    $dataLength = current(unpack("v", fread($sourceFile, 2)));
    if($dataLength + $targetOffset > filesize($source)) {
        return false;
    }

    fseek($sourceFile, $targetOffset);

    $rawData = fread($sourceFile, $dataLength);
    $rawDataOutputPath = $source . "_raw_data";
    $outputFile = fopen($rawDataOutputPath, "w");
    fwrite($outputFile, $rawData);
    fclose($outputFile);
    $gnuScript = "set terminal png size {width},{height};
        set output '{output}';

        unset key;
        unset border;

    plot '<cat' binary filetype=bin format='%float32' endian=little array=1:0 with lines lt rgb 'black';";

    $targetScript = str_replace("{output}", $targetFile, $gnuScript);
    $targetScript = str_replace("{width}", 500, $targetScript);
    $targetScript = str_replace("{height}", 400, $targetScript);
    $gnuPath = "gnuplot";
    $outputScript = "cat \"" . $rawDataOutputPath . "\" | " . $gnuPath . " -e \"" . $targetScript . "\"";
    exec($outputScript);
    if(!file_exists($targetFile)) {
        return false;
    }
    return true;
}
?>

Transcoding Modern Formats

Since I’ve been working on a tool in this space recently, I thought I’d write something up in case it helps folks unravel how to think about transcoding these days.

The tool I’ve been working on is EditReady, a transcoding app for the Mac. But why do you want to transcode in the first place?

Dailies

After a day of shooting, there are a lot of people who need to see the footage from the day. Most of these folks aren’t equipped with editing suites or viewing stations – they want to view footage on their desktop or mobile device. That can be a problem if you’re shooting ProRes or similar.

Converting ProRes, DNxHD or MPEG2 footage with EditReady to H.264 is fast and easy. With bulk metadata editing and custom file naming, the management of all the files from the set becomes simpler and more trackable.

One common workflow would be to drop all the footage from a given shot into EditReady. Use the “set metadata for all” command to attach a consistent reel name to all of the clips. Do some quick spot-checks on the footage using the built in player to make sure it’s what you expect. Use the filename builder to tag all the footage with the reel name and the file creation date. Then, select the H.264 preset and hit convert. Now anyone who needs the footage can easily take the proxies with them on the go, without needing special codecs or players, and regardless of whether they’re working on a PC, a Mac, or even a mobile device.

If your production is being shot in the Log space, you can use the LUT feature in EditReady to give your viewers a more traditional “video levels” daily. Just load a basic Log to Video Levels LUT for the batch, and your converted files will more closely resemble graded footage.

Mezzanine Formats

Even though many modern post production tools can work natively with H.264 from a GoPro or iPhone, there are a variety of downsides to that type of workflow. First and foremost is performance. When you’re working with H.264 in an editor or color correction tool, your computer has to constantly work to decompress the H.264 footage. Those are CPU cycles that aren’t being spent generating effects, responding to user interface clicks, or drawing your previews. Even apps that endeavor to support H.264 natively often get bogged down, or have trouble with all of the “flavors” of H.264 that are in use. For example, mixing and matching H.264 from a GoPro with H.264 from a mobile phone often leads to hiccups or instability.

By using EditReady to batch transcode all of your footage to a format like ProRes or DNxHD, you get great performance throughout your post production pipeline, and more importantly, you get consistent performance. Since you’ll generally be exporting these formats from other parts of your pipeline as well – getting ProRes effects shots for example – you don’t have to worry about mix-and-match problems cropping up late in the production process either.

Just like with dailies, the ability to apply bulk or custom metadata to your footage during your initial ingest also makes management easier for the rest of your production. It also makes your final output faster – transcoding from H.264 to another format is generally slower than transcoding from a mezzanine format. Nothing takes the fun out of finishing a project like watching an “exporting” bar endlessly creep along.

Modernization

The video industry has gone through a lot of digital formats over the last 20 years. As Mac OS X has been upgraded over the years, it’s gotten harder to play some of those old formats. There’s a lot of irreplaceable footage stored in formats like Sorensen Video, Apple Intermediate Codec, or Apple Animation. It’s important that this footage be moved to a modern format like ProRes or H.264 before it becomes totally unplayable by modern computers. Because EditReady contains a robust, flexible backend with legacy support, you can bring this footage in, select a modern format, and click convert. Back when I started this blog, we were mostly talking about DV and HDV, with a bit of Apple Intermediate Codec mixed in. If you’ve still got footage like that around, it’s time to bring it forward!

Output

Finally, the powerful H.264 transcoding pipeline in EditReady means you generate beautiful deliverable H.264 more rapidly than ever. Just drop in your final, edited ProRes, DNxHD, or even uncompressed footage and generate a high quality H.264 for delivery. It’s never been this easy!

See for yourself

We released a free trial of EditReady so you can give it a shot yourself. Or drop me a line if you have questions.

2006 Lotus Elise For Sale

I’m selling a 2006 Lotus Elise in Magnetic Blue. It’s got 40,450 miles on it. The car has the touring package, as well as the hardtop, soft top, and starshield. All the recalls are done. All the fluids (coolant, oil, clutch/brake) were done in 2013. The brakes and rear tires have about 4000 miles on them.

I bought the car from Jaguar Land Rover here in the Twin Cities in December of 2010. They sold the car originally, and then took it back on trade from the original owner, so I’m the second owner and the car has always been in the area.

The car is totally stock – no modifications whatsoever. No issues that I’m aware of. Cosmetically, I think it’s in very nice shape – the starshield at the front has some wax under one of the edges that kind of bothers me, but I’ve always been afraid to start picking at it.

If you’ve got questions about the car, or would like to take a look, let me know. I can be reached at cmcfadden@gmail.com or at 612-702-0779.

Asking $32,000.