Code Signing Gotcha on macOS 10.12 Sierra

Most Mac developers have, at one time or another, struggled with an issue related to code signing. Code signing is the process by which a cryptographic “signature” is embedded into an application, allowing the operating system to confirm that the application hasn’t been tampered with. This is a powerful tool for preventing forgery and hacking attempts. It can be pretty complicated to get it all right though.

We recently ran into an issue in which a new test build of EditReady was working fine on our development machines (running macOS 10.12 Sierra), and was working fine on the oldest version of macOS we support for EditReady (Mac OS X 10.8.5), but wasn’t working properly on Mac OS X 10.10. That seemed pretty strange – it worked on versions of the operating system older and newer than 10.10, so we would expect it to work there as well.

The issue was related to code signing – the operating system was reporting an error with one of the libraries that EditReady uses. Libraries are chunks of code designed to be reusable across applications. It’s important that they be code signed as well, since the code inside them gets executed. Normally, when an application is exported from Xcode, all of the libraries inside it are signed. Everything appeared correct – Apple’s diagnostic tools like codesign and spctl reported no problems.

The library that was failing was one that we had recently recompiled. When we compared the old version of the library with the new one, the only difference we saw was in the types of cryptographic hashes being applied. The old version of the library was signed with both SHA-1 and SHA-256 hashes, whereas the new version was signed only with SHA-256.

We finally stumbled upon a tech note from Apple, which states:

Note: When you set the deployment target in Xcode build settings to 10.12 or higher, the code signing machinery generates only the modern code signature for Mach-O binaries. A binary executable is always unsuitable for systems older than the specified deployment target, but in this case, older systems also fail to interpret the code signature.

That seemed like a clue. Older versions of Mac OS X don’t support SHA-256 code signatures, and need the SHA-1 hash. However, all of our Xcode deployment targets clearly specify 10.8. There was another missing piece.

It turns out that the codesign tool, which is a command line utility invoked by Xcode, actually looks at the LC_VERSION_MIN_MACOSX load command within each binary it inspects. It then decides which types of hashes to apply based on the data it finds there. In our case, when we compiled the dynamic library using the traditional “configure” and “make” commands, we hadn’t specified a minimum version (it’s not otherwise necessary for this library), so it defaulted to the current OS version. By recompiling with the “-mmacosx-version-min=10.8” compiler flag, we got a library signed with both hash types, and an application that ran everywhere from 10.8 on up.
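
For anyone chasing the same problem, here’s roughly how we checked and fixed it from the command line. Treat this as a sketch: the library name below is a placeholder, and the right way to pass the flags depends on your library’s build system.

# Inspect the minimum OS version recorded in the dylib’s load commands
otool -l libexample.dylib | grep -A 3 LC_VERSION_MIN_MACOSX

# See which hash types codesign embedded in the existing signature
codesign -dvvv libexample.dylib 2>&1 | grep -i hash

# Rebuild with an explicit minimum version so both the sha1 and sha256
# signatures get generated, then re-sign and re-export as usual
export CFLAGS="-mmacosx-version-min=10.8"
export LDFLAGS="-mmacosx-version-min=10.8"
./configure && make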

Oh, and why did the original build work on 10.8 in the first place? It turns out that versions of Mac OS X prior to 10.10.5 don’t validate the code signatures of libraries at all.

Parsing and plotting OMNIC Specta SPA files with R and PHP

This is a quick “howto” post to describe how to parse OMNIC Specta SPA files, in case anyone goes a-google’n for a similar solution in the future.

SPA files consist of some metadata, along with the data stored as little endian float32 values. The files contain a basic manifest right near the start, including the offset and run length (in bytes) for the data. The start offset is at byte 386 (a two byte integer), and the run length is at byte 390 (another two byte int). The actual data is strictly made up of the little endian floats – no start and stop markers, no control characters.

These files are pretty easy to parse and plot, at least to get a simple display. Here’s some R code to read and plot an SPA:

# Requires ggplot2 for the plot at the end
library(ggplot2)

pathToSource <- "fill_in_your_path"
to.read <- file(pathToSource, "rb")

# Read the start offset (two byte little endian integer at byte 386)
seek(to.read, 386, origin="start")
startOffset <- readBin(to.read, "int", n=1, size=2, endian="little")
# Read the data length in bytes (two byte little endian integer at byte 390)
seek(to.read, 390, origin="start")
readLength <- readBin(to.read, "int", n=1, size=2, endian="little")

# seek to the start of the data
seek(to.read, startOffset, origin="start")

# the data is float32, so four bytes per value
floatCount <- readLength/4

# read all our floats, then close the file
floatData <- readBin(to.read, "double", n=floatCount, size=4, endian="little")
close(to.read)

# plot the values against their index
floatDataFrame <- as.data.frame(floatData)
floatDataFrame$ID <- seq.int(nrow(floatDataFrame))
p.plot <- ggplot(data = floatDataFrame, aes(x=ID, y=floatData))
p.plot + geom_line() + theme_bw()

In my particular case, I need to plot them from PHP, and already have a pipeline that shells out to gnuplot to plot other types of data. So, in case it’s helpful to anyone, here’s the same plotting in PHP.

<?php

function generatePlotForSPA($source, $targetFile) {

    $sourceFile = fopen($source, "rb");

    // The data start offset and run length (in bytes) are stored as
    // two byte little endian unsigned integers at bytes 386 and 390.
    fseek($sourceFile, 386);
    $targetOffset = current(unpack("v", fread($sourceFile, 2)));
    if($targetOffset > filesize($source)) {
        fclose($sourceFile);
        return false;
    }
    fseek($sourceFile, 390);
    $dataLength = current(unpack("v", fread($sourceFile, 2)));
    if($dataLength + $targetOffset > filesize($source)) {
        fclose($sourceFile);
        return false;
    }

    // Seek to the data and copy the raw little endian float32 values
    // out to a temporary file for gnuplot to read.
    fseek($sourceFile, $targetOffset);

    $rawData = fread($sourceFile, $dataLength);
    fclose($sourceFile);
    $rawDataOutputPath = $source . "_raw_data";
    $outputFile = fopen($rawDataOutputPath, "w");
    fwrite($outputFile, $rawData);
    fclose($outputFile);
    // Build a gnuplot script that reads the raw little endian floats on stdin
    $gnuScript = "set terminal png size {width},{height};
        set output '{output}';

        unset key;
        unset border;

    plot '<cat' binary filetype=bin format='%float32' endian=little array=1:0 with lines lt rgb 'black';";

    $targetScript = str_replace("{output}", $targetFile, $gnuScript);
    $targetScript = str_replace("{width}", 500, $targetScript);
    $targetScript = str_replace("{height}", 400, $targetScript);
    $gnuPath = "gnuplot";
    $outputScript = "cat \"" . $rawDataOutputPath . "\" | " . $gnuPath . " -e \"" . $targetScript . "\"";
    exec($outputScript);
    if(!file_exists($targetFile)) {
        return false;
    }
    return true;
}
?>

Transcoding Modern Formats

Since I’ve been working on a tool in this space recently, I thought I’d write something up in case it helps folks unravel how to think about transcoding these days.

The tool I’ve been working on is EditReady, a transcoding app for the Mac. But why do you want to transcode in the first place?

Dailies

After a day of shooting, there are a lot of people who need to see the footage from the day. Most of these folks aren’t equipped with editing suites or viewing stations – they want to view footage on their desktop or mobile device. That can be a problem if you’re shooting ProRes or similar.

Converting ProRes, DNxHD or MPEG2 footage with EditReady to H.264 is fast and easy. With bulk metadata editing and custom file naming, the management of all the files from the set becomes simpler and more trackable.

One common workflow would be to drop all the footage from a given shot into EditReady. Use the “set metadata for all” command to attach a consistent reel name to all of the clips. Do some quick spot-checks on the footage using the built-in player to make sure it’s what you expect. Use the filename builder to tag all the footage with the reel name and the file creation date. Then, select the H.264 preset and hit convert. Now anyone who needs the footage can easily take the proxies with them on the go, without needing special codecs or players, and regardless of whether they’re working on a PC, a Mac, or even a mobile device.

If your production is being shot in the Log space, you can use the LUT feature in EditReady to give your viewers a more traditional “video levels” daily. Just load a basic Log to Video Levels LUT for the batch, and your converted files will more closely resemble graded footage.

Mezzanine Formats

Even though many modern post production tools can work natively with H.264 from a GoPro or iPhone, there are a variety of downsides to that type of workflow. First and foremost is performance. When you’re working with H.264 in an editor or color correction tool, your computer has to constantly work to decompress the H.264 footage. Those are CPU cycles that aren’t being spent generating effects, responding to user interface clicks, or drawing your previews. Even apps that endeavor to support H.264 natively often get bogged down, or have trouble with all of the “flavors” of H.264 that are in use. For example, mixing and matching H.264 from a GoPro with H.264 from a mobile phone often leads to hiccups or instability.

By using EditReady to batch transcode all of your footage to a format like ProRes or DNxHD, you get great performance throughout your post production pipeline, and more importantly, you get consistent performance. Since you’ll generally be exporting these formats from other parts of your pipeline as well – getting ProRes effects shots for example – you don’t have to worry about mix-and-match problems cropping up late in the production process either.

Just like with dailies, the ability to apply bulk or custom metadata to your footage during your initial ingest also makes management easier for the rest of your production. It also makes your final output faster – transcoding from H.264 to another format is generally slower than transcoding from a mezzanine format. Nothing takes the fun out of finishing a project like watching an “exporting” bar endlessly creep along.

Modernization

The video industry has gone through a lot of digital formats over the last 20 years. As Mac OS X has been upgraded over the years, it’s gotten harder to play some of those old formats. There’s a lot of irreplaceable footage stored in formats like Sorensen Video, Apple Intermediate Codec, or Apple Animation. It’s important that this footage be moved to a modern format like ProRes or H.264 before it becomes totally unplayable by modern computers. Because EditReady contains a robust, flexible backend with legacy support, you can bring this footage in, select a modern format, and click convert. Back when I started this blog, we were mostly talking about DV and HDV, with a bit of Apple Intermediate Codec mixed in. If you’ve still got footage like that around, it’s time to bring it forward!

Output

Finally, the powerful H.264 transcoding pipeline in EditReady means you can generate beautiful, deliverable H.264 more rapidly than ever. Just drop in your final, edited ProRes, DNxHD, or even uncompressed footage and generate a high quality H.264 file for delivery. It’s never been this easy!

See for yourself

We released a free trial of EditReady so you can give it a shot yourself. Or drop me a line if you have questions.

2006 Lotus Elise For Sale

I’m selling a 2006 Lotus Elise in Magnetic Blue. It’s got 40,450 miles on it. The car has the touring package, as well as the hardtop, soft top, and Starshield. All the recalls are done. All the fluids (coolant, oil, clutch/brake) were done in 2013. The brakes and rear tires have about 4,000 miles on them.

I bought the car from Jaguar Land Rover here in the Twin Cities in December of 2010. They sold the car originally, and then took it back on trade from the original owner so I’m the second owner of the car and it’s always been in the area.

The car is totally stock – no modifications whatsoever. No issues that I’m aware of. Cosmetically, I think it’s in very nice shape – the starshield at the front has some wax under one of the edges that kind of bothers me, but I’ve always been afraid to start picking at it.

If you’ve got questions about the car, or would like to take a look, let me know. I can be reached at cmcfadden@gmail.com or at 612-702-0779.

Asking $32,000.

MOOCs: Solving the wrong problem

Over the last six months, the world of “e-learning” has been totally and completely overrun by the MOOC, or massive open online course. In my years involved with technology in higher education, I’ve never seen another concept suck the oxygen out of the academic technology space in this fashion. While not fundamentally new, the MOOC has totally dominated the e-learning conversation and dramatically shifted higher education agendas.

I’m a proponent of the core concepts of the MOOC. Reducing barriers to knowledge is a good thing. An education model based on keeping knowledge locked up in (forgive me) an ivory tower is fundamentally unsustainable in a digital world. I also believe that large, lecture-based classes can be inefficient uses of student and instructor time, and there are plenty of cases of subpar instruction in higher education.

The dramatic rise of the MOOC within the mainstream press and higher education discourse has resulted in traditional institutions rushing to join the party. It is this action that I believe is shortsighted, poorly reasoned, and potentially destructive.

Let me begin by articulating some of my assumptions. The term “MOOC” (we could have a whole separate article on how horrible the acronym is, but it’s too late now) means different things to different people. To some extent, it has become a “unicorn” term – a way for the uninformed to sound prescient without needing to back their words with knowledge. The types of courses I’m addressing here are those taught in the traditional large lecture hall, usually introductory courses. These courses typically fill a lecture hall with disinterested freshmen, and are often taught by a disinterested graduate student. Coursework generally consists of readings, discussion sections, and multiple choice and short answer tests.

The MOOC movement instead takes the lectures, breaks them into bite-sized chunks, and offers them presented by articulate, informed professors. Students would watch these lectures online, participate in online discussion with other students, and then take standard assessments. Because this type of course can be infinitely scaled, it can be opened to any and all interested parties – traditional students as well as those learning simply “for fun”. A single course can effectively support an unlimited number of students with negligible overhead.

I should note that, although I tend to think about these issues in terms of the institution I’m most familiar with – the University of Minnesota – I believe the lessons and missteps I’ll discuss are being repeated throughout the higher education system.

Dissecting the MOOC

As I stated at the beginning, I think the MOOC concept is perfectly fine. Taken on its own, the typical MOOC outlined above is a reasonable approach to replacing large lecture coursework. If it means lectures are better designed, more polished, and delivered with more passion and energy, that’s even better.

I do take issue with the notion that the MOOC is a righteous democratization of knowledge, opening it up to those learners simply interested for the joy of learning. I think this is a complete red herring argument, presented by MOOC backers to deflect valid criticism. The statistics bear this out: retention rates for MOOCs are consistently in the 10-15% range. It turns out that although most people think learning for fun sounds like a good idea, the reality is that it rarely happens. Moreover, the MOOC format does not lend itself to answering a specific question or providing a specific skill – training and tutorial organizations like lynda.com and the Khan Academy are far better suited to these types of needs. If my end goal is to learn Photoshop or integral calculus, I’m unlikely to participate in an ongoing, sequenced course like a typical MOOC.

That said, I don’t otherwise find serious fault with the way MOOCs are being run by firms like Coursera and Udacity. If they’re able to sustainably produce a product that users are interested in, great.

The higher education response

The response from higher ed, on the other hand, is seriously flawed.

Let’s begin with the language. The companies behind the MOOC use language that assumes that traditional classroom instruction is broken. To hear them describe it, the typical lecture hall is a post-apocalyptic wasteland of bored students and depressed faculty, droning on, with little to no learning taking place. Rather than responding with an impassioned, evidence-based defense of this form of instruction, higher education has by and large accepted this characterization and decided that MOOCs are indeed the answer. Institutions have entered this discussion from a defensive, reactionary position.

Are large lecture courses perfect? Certainly not. In fact, I’m a huge proponent of the blended model, which seeks to shift lecture delivery to polished, segmented, online videos, with class time dedicated to deep dives and discussion. But I do think large lecture courses have value as well. A good lecturer can make the experience engaging and participatory, even in a lecture hall with hundreds of students. And the simple act of trudging across campus to sit in a lecture hall for 50 minutes acts as a cognitive organizer – a waypoint on a student’s week. Keep in mind, these courses are typically made up primarily of eighteen-year-old freshmen, struggling with life on campus and figuring out how to organize their time and lives.

These large lecture courses almost always include a group discussion component. This is a time for students to participate in a mediated, directed discussion. More importantly, this is a time for the nurturing and growth that I believe is critical to the undergraduate experience. Seeing classmates, hearing their inane or insightful questions, listening to them articulate the things that they’re struggling with – these are all important but difficult to quantify parts of learning, which are not captured by a solitary or participation-optional online experience. Even a highly functional synchronous online discussion is inherently less likely to go down an unexpected rabbit hole in the exciting and invigorating way that a great classroom discussion can – even in a large lecture hall.

Instead of making the case for traditional courses and shifting the conversation to ways to make them even better (and more affordable), institutions have rushed to turn their own courses into MOOCs, and offer them to the world. This brings me to my next serious concern about this movement.

Why does anyone care about your MOOC? If the MOOC concept is taken to its logical end, the world needs a single instance of any given course. Institutions seem to be unwilling to acknowledge this. Instead, there’s an assumption they’ll build the courses and the world will beat a path to their door. Why would I choose to take a course from the University of Minnesota, when I can take the same course from Harvard or Stanford? Are institutions ready to compete at the degree of granularity this type of environment allows?

The rush to create MOOCs reminds me a bit of the Little Free Library movement. I lack the metrics to convincingly argue this point, but as an outside observer, it seems that people are generally more interested in creating the libraries than using them, and it feels like the number of libraries equals or exceeds the number of users. I believe that the MOOC landrush risks resulting in a similar imbalance. If every school, from the prestigious to the second or third tier, rushes to offer MOOC forms of their courses, there is likely to be an abundance of supply, without the requisite demand.

Heating Buildings is Expensive. MOOCs are by and large the stereotypical “bubble” product. There’s little to no business model behind them, either for their corporate backers or participating institutions. Although that’s probably fine for the companies delivering the courses – their overhead costs are relatively low – it’s a huge issue for institutions with massive infrastructure and staff overhead. If we’re moving to a model where the cost of coursework is dramatically reduced, it presents existential threats for the other missions of the institution, or for the very existence of the institution altogether. While it’s assumed that institutions will continue to charge as part of the degree-granting process, nobody seems to think that a course delivered via a MOOC should be priced similarly to a course delivered in a classroom.

How does the institution benefit? How does offering a MOOC make the institution a better place? Less expensive options for receiving course credit are certainly a benefit for students, but that is a separate issue. Higher education is far too expensive, but MOOCs are not the sole solution. In general, the value proposition for the University is esoteric – simply offering the course supposedly means a wider audience will enhance the content, and its existence will enhance your institution’s standing in the world. These arguments are, at best, hopelessly idealistic. Because there’s no business model to speak of for MOOCs, Universities are left shouldering the costs of creating the courses with little to no expectation of having that value returned.

An alternative path

Having great courses delivered by great instructors with great instructional design is… great. I’d much rather take Introduction to Economics from Paul Krugman than from a grad student. Having that type of content available is inarguably better for the world.

I believe the path forward is to leverage the best content, and combine it with the high-touch, in-person experience that makes undergraduate education so important, particularly for traditional students. Mediated discussions, in-person office hours, and writing assignments graded by people with (ostensibly) actual writing skills are the types of growth activities that create the high functioning, well-rounded people our society needs.

It’s also crucial for higher education to begin pushing back against the language of the broken classroom. Although institutions are indeed broken in innumerable ways, by and large, instruction actually works pretty well.

It’s critical as well that a clear distinction is drawn between MOOCs and online education in general. Along with Jude Higdon, I teach an online course which is in many ways the anti-MOOC. Our students do large amounts of writing, which is copiously annotated by the instructors. Students participate in synchronous chat sessions and office hours with the instructors. Although lecture content is reused, the interpersonal interaction is real, genuine, and frequent. This concept is, by design, not scalable. But I believe the benefits in terms of breadth and depth offered to students by this experience are demonstrably better than those offered by a MOOC. Institutions need to be honest about these tradeoffs.

A MOOC is not a magic bullet. It will not solve higher education’s substantial woes. It will create new woes.

Your MOOC will almost certainly not make a dent in the universe. The world will not beat a path to your door, and you still need to pay to maintain the doors.

Pebble, Day One

My Pebble watch arrived yesterday. This was a project that I kickstarted back in May of 2012. While moderately behind schedule, they’ve delivered a functioning, elegant product which does what it was supposed to.

It’ll take some time for the Pebble ecosystem to develop. Right now, in addition to serving as a watch, the Pebble can control music playback on my iPhone, and display notifications (for example, show me a text message as it comes in). Eventually, I suspect we’ll see whole new types of applications for this breed of glanceable, connected device.

Already, I’m finding the notification functionality pretty attractive. It’s great to not have to pull my phone out of my pocket (particularly when all bundled up in the winter) to see who’s calling or check an iMessage. The build quality seems excellent, and the whole device works pretty slick. I’m still getting used to the whole notion of wearing a watch, having not done so since the 90s (!!), but so far I can give it a thumbs up.


Waiting for Energon Cubes

Energy storage is a key component in our inevitable move away from fossil fuels. If we ever want renewables to take over for base-load demand (having a wind farm keep your fridge running even when the wind isn’t blowing), or drive long distances in plug-in electrics, we’ll need a serious revolution in energy storage.

This is a field I’m very excited about, both in the near term and the long term. It’s an area where there are still big problems to be solved, with lots of opportunities for real ground-up innovation in basic physics, materials science, chemistry and manufacturing.

There’s a need to begin developing our language around energy storage, and to develop a more thorough understanding of the tradeoffs involved. This has been made abundantly clear by the coverage surrounding the battery issues of the Boeing 787 Dreamliner. Most mainstream press has been unable or unwilling to cover the science behind the battery issues, or to accurately explain the decision making that led to the selection of the type of batteries involved.

When we talk about energy storage, there are two key factors to consider: energy density and specific energy. Energy density is how much energy you can fit into a given space (megajoules/liter), and specific energy is how much energy you can “fit” into a given mass (megajoules/kg).

Tesla Roadster.

Let’s look at a concrete example. The Tesla Roadster relies on a large battery pack, made up of many lithium ion cells. The battery pack weighs 450 kilograms, and has a volume of approximately 610 liters. It stores 53 kilowatt hours of energy (190 MJ). So, it has a specific energy of roughly 0.42 MJ/kg and an energy density of roughly 0.31 MJ/liter.

For comparison, let’s look at the Lotus Elise (I could have said “my Lotus Elise” but I didn’t, because I’m classy), which is fundamentally the same car running on gasoline. It can carry 11 gallons of gasoline in its fuel tank. Gasoline has a specific energy of 46 MJ/kg, and an energy density of 36 MJ/liter (those of you screaming about efficiency, hold on). The 29 kilograms of gasoline in a full tank represent 1334 MJ of energy, approximately 7 times more than the 450 kilogram Tesla battery pack holds. Frankly, it’s a wonder the Tesla even moves at all!
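
To make the distinction concrete, here’s the arithmetic behind those figures, using only the numbers quoted above:

\[
\text{Tesla pack: } \frac{190\ \text{MJ}}{450\ \text{kg}} \approx 0.42\ \text{MJ/kg}, \qquad \frac{190\ \text{MJ}}{610\ \text{L}} \approx 0.31\ \text{MJ/L}
\]
\[
\text{Lotus tank: } 29\ \text{kg} \times 46\ \text{MJ/kg} \approx 1334\ \text{MJ}, \qquad \frac{1334\ \text{MJ}}{190\ \text{MJ}} \approx 7
\]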

Lotus Elise. I think this one is particularly attractive, and probably driven by a very nice person.

Now, it’s important to add one more layer of complexity here. Internal combustion engines aren’t particularly efficient at actually moving your car. They’re very good at turning gasoline into heat. The very best gasoline engines achieve approximately 30% efficiency at their peak, so of that 1334 MJ in the Lotus’ tank, perhaps only 400 MJ are actually used to move the car. The rest is used to cook the groceries that you probably shouldn’t have put in the trunk. The electric drivetrain in the Tesla, on the other hand, is closer to 85% efficient – roughly 160 MJ of that 190 MJ pack actually reaches the wheels, so the real-world gap is far smaller than the raw storage numbers suggest.

That’s a quick example of why understanding some of the engineering, science, and math behind energy storage is important – without means for comparison, it can be difficult to grasp the tradeoffs that have been made, and why products end up being designed the way that they are.

I’ll dig deeper into the specific types of battery technologies on the market and the horizon in a future post. At the moment, they’re all within approximately the same ballpark for energy density and specific energy, and simply offer different tradeoffs in terms of charge times, safety, and longevity.

Batteries are not the only way to store energy though, and aren’t nearly as sexy as some of the alternatives.

Fuel cells have fallen out of vogue a bit over the last few years. While Honda forges on, most of the excitement seems to have been supplanted, for now, with acceptance of the fact that the lack of a large-scale hydrogen distribution network dooms them to a chicken-or-the-egg fate for the foreseeable future. Fuel cells operate by combining stored hydrogen with oxygen from the air, to release energy. Because hydrogen can be made by electrolyzing water, fuel cells are a feasible way of storing energy generated by renewable sources.

Due to the increased efficiency of an electric drivetrain and the high specific energy (though much lower energy density) of hydrogen, a fuel cell drivetrain can rival gasoline for overall system efficiency. Unfortunately, they achieve all of this using a variety of exotic materials, resulting in costs that are completely unrealistic (think hundreds of thousands of dollars per car) and look likely to remain there for the foreseeable future. That said, just today came word of a fuel cell technology-sharing deal between BMW and Toyota – perhaps there’s still some life in this space.

There’s another type of energy storage, which excites me most of all – these are the “use it or lose it” short term energy storage technologies, which are designed primarily to replace batteries in hybrid drivetrains, or to smooth short term power interruptions in fixed installations.

I’d like to explore these further in depth in the future, but for now, a quick survey is appropriate. The technology I’m most interested in is kinetic energy storage in the form of flywheels. At its most basic, you take a wheel, get it spinning, and then couple it to a generator to convert the motion back into electricity.

This is an old technology. Traditionally, you used very heavy wheels, spinning relatively slowly. This type of system is sometimes used in place of batteries for short term power in data centers. In the last few years, flywheels have gotten interesting for smaller-scale applications as well, thanks to modern materials science. A small amount of mass spinning very fast can store the same amount of energy as a large amount of mass spinning very slowly. Modern materials and manufacturing mean it’s realistic to build a hermetically sealed flywheel which can spin at hundreds of thousands of RPM. Ricardo has done just that, as has Torotrak.
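
The reason speed beats mass is just the standard formula for rotational kinetic energy (nothing flywheel-specific): stored energy grows linearly with mass but with the square of rotational speed, so doubling the spin rate is worth as much as quadrupling the mass.

\[
E = \tfrac{1}{2} I \omega^{2}, \qquad I_{\text{solid disc}} = \tfrac{1}{2} m r^{2} \quad\Rightarrow\quad E = \tfrac{1}{4} m r^{2} \omega^{2}
\]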

These systems have the advantage of being relatively lightweight, simple and low-cost. While they don’t store a large amount of energy, they’re ideal for regenerative braking and increasing the overall efficiency of an ICE drivetrain.

Another category of energy storage is thermal storage. These are what they sound like – means to store heat (from solar most often) for extended periods of time. This is another old technology, with some interesting new twists. Remember that gasoline engines turn lots of their energy into heat. Some manufacturers are experimenting with systems which can convert some of that heat into energy, using good old fashioned steam.

A final type of storage which doesn’t fit nicely into any category is compressed air. This week, Peugeot-Citroen (PSA) unveiled their compressed air hybrid drivetrain. This system uses compressed air pressurized by an onboard pump, driven through regenerative braking and other “waste” energy capture. While more complex than a flywheel, total energy storage is much greater as well, and PSA is talking of 30% reductions in emissions thanks to this technology. Tata has also experimented with cars using the MDI compressed air drivetrain, which is designed to be “fueled” by an offboard compressor.

As I noted at the beginning, part of what makes me excited about this space is that it’s not a solved problem. There are loads of companies all around the world creating innovative solutions. Most of them will probably fade away, but some have a reasonable chance of replacing or supplementing the “status quo” energy storage options we have today. Interestingly as well, no one country is dominating the research in this space. The UK, in keeping with tradition, has a large number of very small companies working on projects (their “cottage industries” are often actually housed in cottages!), while the US does this sort of development primarily via research institutions, and other countries rely on government-run labs.

Until all are one, bah weep grana weep ninny bon.