Getting Old in a Maturing Industry

I recently returned from the 2013 NAB Expo in Las Vegas. For those who are unfamiliar, the NAB Expo is the biggest trade show and convention for the domestic film and broadcast industry. For one week every year, the Las Vegas Convention Center plays host to 100,000 film, video, and radio professionals, who come to check out the latest gear, learn what’s hot, and just generally hang out with a group of like-minded folks. It’s a “who’s who” of the business, and it’s far more massive than you can possibly imagine.

This was, as best as I can recall, my 12th year attending NAB. Some of those years have been quick hit-and-run visits (fly in in the morning, fly out in the evening) and some have been incredibly intense week-long affairs, building and manning the booth for Divergent Media. This year was somewhere in the middle – a week in Las Vegas, but only smaller events for the company, and plenty of time to explore the show.

In the aftermath of the show, I’ve been reflecting on the changes I’ve seen, and what it means about the industry.

When I first went to NAB in 2001, there was a sharp divide between the “indie” producer and the professionals. The desktop video revolution was still picking up steam, as DV cameras got better and “affordable” editing software gained pro features. Indie filmmaking involved a lot of clever repurposing, scraping by, and bending rules: figuring out how to build your own Steadicam approximation, repurposing Home Depot lights for your shoot, and assembling FireWire drives from bare enclosures to save a few bucks.

There was precious little “prosumer” gear on the market – the pricing model still favored rental houses and big shops. Indies were largely ignored on the show floor. Apple and a few others saw where things were going, but the “big guys” were indifferent or downright disparaging.

Over the following four or five years, the “indie” part of “indie filmmaking” dropped off – everyone was an indie filmmaker, and everyone was a pro. Equipment became much more affordable and much better, software became cheaper, and everyone began to accept the new reality.

This was a particularly exciting time in the industry because we were right on the edge of what technology was capable of. Realtime effects, HD editing, multicam editing – you could see Moore’s Law in action. Each year, the quality of what could be done on a reasonable budget (or done at all) improved immensely.

The bursting of the financial bubble had a big impact on NAB, as it did on every trade show. Attendance dropped, and the pace of innovation slowed. But the bigger change is what began to happen around 2009, and built speed over the following years. The part of the industry we used to call “indie” grew up and became mature, and nothing replaced it to hurl rocks at the old guard. Today, innovation is no longer a matter of “last year that wasn’t possible and now it is,” or “last year this was $10k and now it’s $1k,” but rather “last year that took four steps and now it takes three” or “now it looks 5% better.”

Take, for example, the upstart camera manufacturer Red. Red was the star of the 2006 NAB show – they announced the Red One, a 4K “cinema” camera at a price that indies could potentially afford. They were aiming to disrupt the camera market, and filmmaking in general.

While Red had teething problems, by late 2007 it was a real product and accomplished a lot of what its creators set out to do. People who had been shooting films on $250,000 Sony F950s or actual film cameras began working with this $30,000 rig. Red bet on affordable pixel density, and for a while, they won.

The problem with disrupting a market is that if your competition survives the disruption, you have to compete on an ongoing basis. Now, in 2013, we have 4K (or higher) cameras from every manufacturer, at prices that in some cases substantially undercut Red. There are new kids on the block, like Blackmagic Design. And big vendors like Sony have caught up on pricing, marketing, and features. Whereas Red was once the star of the show, they’re now a bit of an also-ran.

The industry has matured in such a way that it gives filmmakers more choice for lower prices. You can’t pick a bad camera anymore. You can’t pick a bad editor. The industry has caught up with the market it serves. It’s no longer in need of a massive disruption. You don’t need clever tricks or dumpster diving – you just need talent.

It’s not just true in cameras of course – Final Cut Pro disrupted editing, Blackmagic’s revival of Resolve disrupted color correction, the LED revolution disrupted lighting, and obviously distribution is just one long stream of disruption.

The reality is that the film and video industry has grown up, and reached a point of stability. While there may be a new “indie filmmaking” revolution in the future, it doesn’t feel like it’s right around the corner. This is reflected in the NAB Expo. It feels like the demographics are skewing a bit older. I still feel like I’m one of the “young” folks there, even though I’m substantially older than when I first attended. There’s less razzle-dazzle – fewer vendors going to extremes to show up the vendor at the next booth. It’s less circus, less sexy, just business.

Hopped up on Lithium

The ongoing saga of the Boeing 787 Dreamliner has resulted in a surge of partial or completely misleading stories about modern battery technology. While I’m far from an expert in the field, it’s one I follow closely, and I think I can contribute an “interested outsider” perspective on the state of rechargeable batteries and related technologies, circa 2013.

Let’s start by talking terminology. Lithium-ion is an umbrella term covering a whole family of technologies. Simply knowing that a given application (like an airplane) makes use of “lithium-ion batteries” tells you very little about the performance, safety, and reliability characteristics of those batteries.

Battery technology is a materials-science-intensive field, so it should come as no surprise that material choice is the key differentiator between batteries in the lithium-ion family. The three core components of a battery are the cathode, the anode, and the electrolyte which separates them.

While there are hundreds of combinations of materials in use, depending on the intended application (and the patent pools of their backers), the most meaningful differentiator to be aware of is the type of positive electrode (cathode) in use.

The three primary families of cathode materials, and those worth knowing a little something about, are lithium-cobalt, lithium-iron-phosphate, and lithium-manganese. Each has different pros, cons, and risks.

A further note about terminology here – seeing types of electrodes written in this fashion might cause you to think that other terminology, like lithium-polymer, also refers to electrode choice. Unfortunately, it’s just confusing terminology. In fact, lithium-polymer refers to the electrolyte, and a lithium-polymer battery can use any of the above mentioned electrode materials. Your laptop, for example, almost certainly uses lithium-polymer batteries with lithium-cobalt cathodes.

In addition to being track five on Nevermind, lithium is a highly reactive material. If you’re familiar with the reaction of potassium being dropped into a beaker of water, it’s very similar.

Now, a battery doesn’t contain pure lithium. That’s why you’re not on fire right now. The lithium is bonded with another material – that’s the cobalt, iron-phosphate, etcetera. These molecules also include oxygen. When exposed to high temperatures, these bonds can break down, resulting in nice, reactive lithium, along with fire’s friend, oxygen. In the case of a battery, a high-temperature situation can result from poor charging circuitry, short circuits, punctures, or other external trauma. Since a battery generally consists of many cells, a single failed cell can easily produce enough heat to initiate a chain reaction.

Lithium-cobalt is the most common type of lithium-ion cathode, and delivers high energy density, relatively low-cost manufacturing, and decent longevity when managed properly. The primary downside is that the lithium-cobalt bond is relatively weak, meaning these are generally the batteries at fault when you hear about lithium battery fires. See, for example, the 787.

The most common alternative to lithium-cobalt is lithium-iron-phosphate. The A123 Systems batteries I’ve written about in the past are a derivative of this technology. The lithium-iron-phosphate bond is inherently more stable, even when abused or severely heated. The structure of the lithium-iron-phosphate molecule is such that it takes far more energy to free the lithium. Thus, these batteries are ideal for environments in which safety is key – automotive uses for example.

Now, it’s fair to ask why the Boeing 787 doesn’t use this type of battery. I’m obviously not privy to the internal engineering decisions at Boeing, but I can hazard a guess. First off, the battery design for the 787 was locked in 2005 or 2006. Back then, the technology for lithium-iron-phosphate was relatively immature and volume use wasn’t common. Additionally, for a given power output, a lithium-iron-phosphate battery will be larger and heavier than a corresponding lithium-cobalt design – this would have been even more pronounced in 2005.
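
To make that size-and-weight tradeoff concrete, here’s a rough back-of-the-envelope sketch. The energy densities are ballpark assumptions on my part, not vendor specs, but the shape of the result holds:

```python
# Rough pack-mass comparison for a fixed energy budget.
# Energy densities (Wh/kg) are ballpark assumptions, not vendor specs.
CHEMISTRIES = {
    "lithium-cobalt": 180,          # assumed, typical of consumer cells
    "lithium-iron-phosphate": 100,  # assumed, conservative
}

def pack_mass_kg(energy_wh, wh_per_kg):
    """Mass of a pack storing energy_wh, ignoring packaging overhead."""
    return energy_wh / wh_per_kg

target_wh = 2000  # hypothetical battery budget
for name, density in CHEMISTRIES.items():
    print(f"{name}: {pack_mass_kg(target_wh, density):.0f} kg")
# lithium-cobalt: 11 kg, lithium-iron-phosphate: 20 kg -- roughly the
# weight penalty a designer would have to swallow for the safer chemistry.
```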

In addition, the types of situations in which a lithium-iron-phosphate design is “safer” don’t commonly occur on an aircraft. For example, if the relatively small battery of a 787 is engulfed in flames, there are far, far bigger issues to worry about. The risk in an automotive implementation is that a relatively minor accident that damages the battery pack could cause a thermal runaway condition. There don’t tend to be “relatively minor” accidents involving massive jets. The other types of issues that can cause problems with batteries should be possible to mitigate through external controls – smart chargers with fused links in the case of overvoltage, etcetera. When we finally learn (if we learn) what caused the issues on the 787, I would suspect we’ll find that at least part of the cause was poor design or manufacturing issues surrounding these systems, rather than in the battery cells themselves.

There are a variety of other cathode chemistries in various applications. In particular, lithium manganese oxide and related manganese compounds provide better longevity and performance in harsh environments, but don’t yet excel in general purpose situations.

Supercapacitors represent another, related family of energy storage technologies which occasionally spawns a lot of interest, without necessarily a lot of results. Like all capacitors, the supercapacitor (née ultracapacitor) stores a static charge using a variety of different materials. A supercapacitor can take on energy very quickly, hold it for a relatively long time, and survive a far greater number of charge cycles than a chemical battery. Unfortunately, supercapacitors store a relatively small amount of energy and are thus more appropriate to high-output, low-duration implementations. Over time, capacity is improving, but the overlap between supercapacitors and traditional batteries is still relatively small – power tools and a few other small gadgets. Cost is still a limiting factor as well.
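
The gap is easy to see with the basic capacitor energy formula, E = ½CV². Here’s a minimal comparison, using assumed round-number specs for a large commercial supercapacitor cell and an ordinary 18650-class lithium-ion cell:

```python
# Stored energy: supercapacitor vs. chemical cell.
# Both sets of specs are assumed round-number values for illustration.

def capacitor_energy_wh(capacitance_f, voltage_v):
    """E = 0.5 * C * V^2, converted from joules to watt-hours."""
    return 0.5 * capacitance_f * voltage_v**2 / 3600.0

def battery_energy_wh(voltage_v, amp_hours):
    """Nominal energy of a chemical cell."""
    return voltage_v * amp_hours

print(f"supercap:    {capacitor_energy_wh(3000, 2.7):.1f} Wh")  # ~3 Wh, from a can-sized cell
print(f"lithium-ion: {battery_energy_wh(3.6, 2.5):.1f} Wh")     # ~9 Wh, from a finger-sized 18650
```

Three times the energy in a small fraction of the volume – that’s why the chemical battery still owns anything that needs to run for hours.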

Longer term, supercapacitors have a lot of potential in energy recovery applications – for example, regenerative braking. But, beware startups promising orders of magnitude advances in supercapacitor technology. There are many out there making such claims, and none have been able to demonstrate solid evidence of their viability.

The reality is that, barring some “out of left field” advance, battery technology looks set to improve in relatively small steps as materials science advances, nanotech manufacturing processes improve, and overall volume drives down costs. An electric car that can charge in seconds and deliver a 500-mile range seems unlikely in the coming decade. But the more relatively-decent electric cars you buy today, the more realistic that future car becomes. I’m sure Tesla, Nissan, and Fisker would appreciate it as well.

Crowdfunding: a buzz-worthy buzzword

I recently wrote about my attitudes toward Kickstarter, the imperfect but exciting crowdfunding platform. Now, I’d like to turn my attention to a related and similarly exciting use of crowdfunding as an investment vehicle.

(Aside: for those who are interested in Kickstarter, be sure to read this analysis of Kickstarter’s impact on CES from The Verge)

First, if you’re not already familiar with the concept of an accredited investor it’s worth a brief review. In short, if your income tax didn’t go up at the start of 2013, you’re not an accredited investor. Don’t feel bad – we’re much more fun than them.

Normally, in order to purchase equity in a company, the company either needs to be listed on a public exchange or, at a minimum, needs to have registered a security with the SEC. Needless to say, that presents a non-trivial barrier to entry. Startup companies have been prohibited from raising funds by appealing to the general public. That’s why you can’t get shares in a company when you invest via Kickstarter.

This situation has changed dramatically over the last eighteen months. First, increased interest in the general notion of crowdfunding has resulted in clever “circumvention” of some of these restrictions (more on this in a moment). Second, the JOBS Act, passed in March of 2012, creates (or will create) a variety of provisions for direct investment in startups, regardless of personal income status.

It’s important to draw a distinction between this type of crowdfunding and person-to-person lending, as they often get mentioned in the same breath. Person-to-person lending, either as a non-profit endeavor via services like Kiva, or as a for-profit endeavor like Lending Club, connects lenders with borrowers directly. Groups of lenders pool money for a borrower, who repays the loan to those lenders at a reasonable interest rate (or none at all, in some cases). Lenders can, potentially, resell loans on a secondary market, but generally these are long-term, illiquid investments. Lending Club loans help fund debt consolidation, home and auto refinancing, and other types of general consumer borrowing, and the model has proven fairly successful.

These types of investments are regulated primarily on a state-by-state basis. Although they’re an interesting alternative to other types of investment, I don’t find them particularly exciting. Instead, I’d like to highlight two startups in the crowdfunding space which I am incredibly excited about: Fundrise and Solar Mosaic.

When discussing Kickstarter, I explained that I invest because I believe the world will be moderately improved if the thing I’m backing exists. In much the same way, both Fundrise and Solar Mosaic provide the opportunity to invest in the creation of a thing – a building in the case of Fundrise, a solar installation in the case of Solar Mosaic.

Both projects offer somewhat similar narratives. They aim to fund things which require large initial capital investments, and offer relatively predictable returns over a fixed period of time. And both projects aim to let you “do good” in your community, without having to repress your capitalistic urges entirely.

Solar installations offer a fairly straightforward business proposition. After the initial installation, solar panels produce power at a relatively predictable rate, which can be sold back into the power grid or directly to a customer. Rather than seeking to fund massive generating plants in the desert, Solar Mosaic targets rooftop or other localized solar installs. This type of installation provides a tangible, visible form of renewable energy production directly within communities. By investing via Solar Mosaic, investors have the opportunity to see their investment take shape, while also generating a reasonable return.
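
The arithmetic behind those returns is simple enough to sketch. Every figure below is a placeholder I invented for illustration – not Solar Mosaic’s actual terms:

```python
# Simple payback model for a hypothetical rooftop solar installation.
# All numbers are invented placeholders, not Solar Mosaic's terms.
install_cost = 100_000    # up-front cost, dollars (hypothetical)
capacity_kw = 25          # system size (hypothetical)
capacity_factor = 0.18    # assumed average output as a fraction of nameplate
price_per_kwh = 0.15      # assumed sale price of the power, dollars

annual_kwh = capacity_kw * capacity_factor * 24 * 365
annual_revenue = annual_kwh * price_per_kwh
payback_years = install_cost / annual_revenue

print(f"{annual_kwh:,.0f} kWh/yr -> ${annual_revenue:,.0f}/yr, "
      f"payback in {payback_years:.1f} years")
```

With these made-up numbers, the installation pays for itself in about seventeen years, and panels commonly carry twenty-five-year warranties – everything produced after payback is return.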

Solar Mosaic is currently limited to small investors in California and New York, as well as accredited investors anywhere. As the SEC sorts out the implementation of the JOBS act, that’s likely to change. They’ve recently moved out of beta and seem to be “going vertical.”

While Solar Mosaic is interesting and exciting, Fundrise falls into the category of “please take my money now” – I think it’s absolutely brilliant. For a fantastic breakdown of the process which led to the creation of Fundrise, and the current legal status, check out this in-depth article from The Atlantic Cities. I’d like to offer my own interpretation, as I see Fundrise as a solution to a problem I’ve identified in the past, but never been able to articulate.

I’ve often been confused by the presence of large numbers of abandoned, dilapidated buildings right in the heart of cities with sky-high property values. How can property sit vacant in a place like San Francisco or Washington DC?

When you dig a bit deeper though, it becomes clear that you’re dealing with a classic “bootstrapping” problem. The value of land is so high that the value of the building becomes relatively insignificant. A piece of land with a cat-infested, burned-out shell of a building may only cost marginally less than a piece of land with a mixed-use retail structure. Investors therefore have little incentive to take on the risk and complexity of buying and rehabbing the “ugly” property.

In cities with lower property values, non-profits, foundations, or neighborhood-revitalization groups might step in to renovate a property. But, when the cost of entry is in the millions or tens of millions of dollars range, even the most well-heeled foundation can only have limited impact.

Fundrise circumvents this problem by seeking investment from individual community members or others interested in revitalization. Collectively, the investors purchase the property and fund the renovation or construction. The property is then leased to a preselected tenant or tenants. Investors are then repaid via rent paid by the tenant, as well as the increase in the value of the property.
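
Again, the underlying math is refreshingly boring. A toy model, with every number invented:

```python
# Toy return model for a hypothetical Fundrise-style project.
# Every figure here is invented for illustration.
purchase_and_rehab = 2_000_000  # total project cost (hypothetical)
annual_rent = 180_000           # lease income (hypothetical)
operating_costs = 60_000        # taxes, maintenance, management (hypothetical)

net_income = annual_rent - operating_costs
cash_yield = net_income / purchase_and_rehab
print(f"net income ${net_income:,} -> {cash_yield:.1%} annual cash yield")

investor_stake = 1_000
print(f"a ${investor_stake:,} stake earns ~${investor_stake * cash_yield:,.0f}/yr")
# ...before any appreciation in the property's value.
```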

While it doesn’t make sense in all cities (including, very likely, my own), I consider this a potentially transformative solution to a vexing problem.

Taking a step back, it’s worth asking whether all of these crowdfunding and person-to-person lending platforms represent a meaningful “alternative” to traditional investing, or whether they simply supplement the standard vehicles.

Much has been written about Gen Y being scared of investing due to the fallout from the financial bubble bursting. Thirty-somethings are leaving their money tucked under mattresses or in low-yield (essentially no-yield) savings accounts.

As a certified thirty-something, I don’t share these fears. However, I’m also a person who values rationality. I want to invest in a company based on a belief that the company has a bright future. Should that belief prove accurate, I’d like to see my investment grow. Over the long term, that may still be the way the markets behave, but the degree of irrationality over the last five years has shaken my confidence. A political party willing to risk government default, combined with an incredibly unclear global economic outlook, means that investing feels far more like rolling dice than it did even ten years ago.

Solar Mosaic, Fundrise, and a whole raft of other companies in this space provide an opportunity to see your investment take shape on a community level, and to know that it will succeed or fail based on far more localized, micro-economic variables. They’re not risk-free by any stretch, but at least if they fail you can take solace in the fact that they’ll feel very bad about it.

Accepting Fracking

Or “How I stopped worrying and learned to love the injection of sand-derived proppants at a rate necessary to exceed the pressure gradients of shale.”

I’m an environmentally-minded liberal. I understand the science of global climate change, and understand that it is occurring at an alarming rate. I know that fossil fuels are finite, and that energy and energy-derived issues (mass migration from global warming, drought) are going to be the basis for wars and chaos in the coming century. I acknowledge that I lead an embarrassingly energy-inefficient life as compared to the rest of the world.

I want to live in a world in which all of our energy needs are met exclusively by windfarms on the plains and solar farms in the southwest, and in which we drive cars powered by electricity or hydrogen derived from electrolysis.

I am also, unfortunately, a realist and a bit of a pessimist. I’ve come to accept that we’re unlikely to make the jump from the current energy landscape to the ideal I’ve outlined in anything approaching the near term. I also believe that some of the issues I’ve outlined can’t wait. I’ve – grudgingly – accepted that nuclear (fission) will not gain prominence anytime soon. For all these reasons, I’ve come to accept fracking.

Let’s not candy-coat things. Fracking is really bad. It pollutes groundwater. It may cause earthquakes. It has the potential to free radioactive materials. Burning natural gas contributes to climate change, and the methane which can leak during fracking is an even more potent greenhouse gas than CO2.

All of this is very uncool. In a world of black and white absolutes, fracking would be a nonstarter.

But, big parts of reality are uncool. It’s easy to get hung up debating the ways in which fracking is worse than the environmental ideal – in which our choices are either “perfect” or “evil”. But, all the while, coal and oil power plants are spewing massive amounts of carbon, and people are dying in conflicts over oil.

The harms of fracking are largely localized and, in my opinion, are outweighed by the benefits. Burning natural gas contributes to the greenhouse effect, but far, far less (on a per-kilowatt-hour basis) than coal or oil. The other toxic materials emitted (sulfur, mercury) are negligible. Because I believe that slowing climate change is the number one imperative facing humanity over the medium-term, the tradeoffs of fracking-derived natural gas seem acceptable to me.

That said, there are a number of downsides and issues which need to be addressed.

Fracking makes energy derived from natural gas far too cheap. While this has had many positive short-term implications for the American economy, including helping drive the “insourcing” boom, it creates a playing field on which other types of energy cannot compete. It also encourages profligate energy use. Emily Bazelon commented on a recent episode of the Slate Political Gabfest that our incredibly wasteful energy consumption is one of the key social issues we’ll live to regret, and I couldn’t agree more.

Natural gas relies on existing distribution systems, and does very little to encourage the reconstruction that our power grid so desperately needs. It does nothing to disrupt the model of consumer-adjacent power generation – power plants located near population centers instead of near energy sources. A shift away from this model is critical for realistic renewable utilization, and natural gas gets us no closer.

So, as an environmentally-conscious liberal who has accepted fracking, how can I continue to have a meaningful voice in the energy conversation? There are a few things I’ll push for with my voice, my vote, and my dollars.

We need a carbon tax. We need to push up the cost of energy derived from fracking, to account for the environmental impacts of not only the burning of natural gas, but also the fracking process. We need a carbon tax which makes the worst climate change offenders intolerably expensive, even in the short term. And we need to use the revenue from that carbon tax to fund the reconstruction of our power grid and the continued development of renewables. We need to stop ceding the solar technology industry to the Chinese and the Germans.

Is accepting fracking merely a case of selling out liberal causes? I don’t think so. But I think it’s important that we apply the pressure necessary to make sure that fracking is a transitional energy source. It goes a long way towards stopping the bleeding – a United States powered entirely by natural gas is in a far better place environmentally, geopolitically, and financially than our current state. It’s just not the place we need to create for our grandchildren.

(Ideally we’d accept nuclear as well, but I’m far too pessimistic to reasonably expect that. And, you know, as long as the plants aren’t in my backyard.)

Knowing Enough

I’m a fraud. I’ve convinced people that I know far more than I actually do. They seem to have bought it. By and large, this has served me well and I’ve embraced it.

This post was originally going to be “knowing enough math,” targeted at developers. That post will have to wait though, as I want to write a bit about the ways in which I try to force myself to encounter a breadth of subject matter and perspectives.

Let’s parse some terms first. I’m not sure we have a word to fully describe the notion I’m interested in. “Curiosity” seems to imply a certain degree of superficiality or a lack of intellectual depth. “Polymath” is a term which, I suspect, cannot be used in describing oneself without sounding seriously pretentious. And that’s coming from me.

To me, “knowing enough” means having a depth of knowledge about a given topic, sufficient to allow you to know that there’s more to know. Donald Rumsfeld is, on the whole, a disastrously awful train wreck of a human being. That said, I give him credit for popularizing the notion of “known unknowns” – knowing the things you don’t know.

Much has been written about the dangers of a hyperpersonalized and siloed web. In addition to the detrimental impacts this has on our communities and relationships, it also means we’re less likely to be exposed to surprises or to be delighted by the unexpected.

Especially for those of us involved in technology, it’s exceedingly easy to live in this bubble. Daily stops at The Verge, Engadget, Daring Fireball and TechCrunch provide a set of very similar perspectives on the same news, and can easily eat up a day of casual browsing.

Fortunately, there exist some great options for being exposed to a diverse range of subjects. Traditional print publications like The Economist or The New Yorker provide both depth and breadth. I’m often surprised by the number of technology stories that I first learn about from The Economist, because they’re just far enough outside my daily bubble to have been totally off my radar.

The last few years have also seen the creation of a range of high quality “curation” websites, which aim to pull together interesting material from across the web. If you’re not already reading Kottke, Longform, and The Feature, I’d highly recommend it.

Things get really interesting when you take it one step further. Embrace the internet time-sink a little bit. Dive into Wikipedia and build some subject matter expertise. Don’t just read the short article about flywheel energy storage (blog post coming soon!), check out the websites of the companies mentioned or take a look at the papers which are cited.

Conferences are another great way – though often more domain-specific – to get exposed to information which may have no immediate benefit for you. One of my favorite things about attending WWDC each year is the opportunity to attend sessions which have no relevance to work I’m doing at the moment. Sitting in a session about game optimization on iOS has no bearing on the work I do day-to-day. But inevitably, concepts I’m exposed to in sessions like this will circle back and become relevant in the future.

This is where the concept of “known unknowns” comes into play. When I sit in a random session at WWDC, I’m not aiming to learn everything there is to know about mobile OpenGL implementation or normalizing data from an accelerometer. What I do pick up on is the broad concepts – I’ll remember that, should I need to read data from an accelerometer at some point in the future, there are techniques for normalizing the data. From there, I can dive in and do the necessary research on specifics. The internet isn’t great at helping you take that first step – knowing that there’s a solution out there. Once you have an inkling of a solution, the internet can take over and provide the necessary depth.
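
To make that concrete with a toy example: I couldn’t write production accelerometer code walking out of that session, but I left knowing that something like a simple low-pass filter exists. A minimal sketch of the idea (my own invention, not Apple’s sample code):

```python
# Toy exponential low-pass filter for noisy accelerometer readings --
# the kind of technique you file away as a "known unknown".
def low_pass(samples, alpha=0.1):
    """Smooth a stream of readings; alpha sets how much each new sample counts."""
    smoothed = []
    current = samples[0]
    for reading in samples:
        current = alpha * reading + (1 - alpha) * current
        smoothed.append(current)
    return smoothed

noisy = [0.0, 0.9, 0.1, 1.1, -0.2, 1.0, 0.05, 0.95]
print(low_pass(noisy))  # the jitter is damped; the underlying motion remains
```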

I suspect none of this is interesting to most people. And, I suspect it comes across as deeply condescending to almost everyone. But the frequency with which I encounter people who eschew the acquisition of knowledge on a variety of topics (sports, politics, entertainment, and so on) tells me that it’s worth talking about. And, it’s the holidays, so nobody will likely read this anyways.

Happy New Year to you and yours.

Learning to Count in Chinese

This week brought news that a bankruptcy judge has approved the sale of the assets of A123 Systems Inc. to Wanxiang Group of China. This makes me sad – not because I’m a protectionist afraid of the Chinese, but because of what it symbolizes about the future of energy research and engineering in the United States.

Let’s start with a quick review of A123 Systems. A123 designs and manufactures rechargeable batteries based around lithium nanophosphate technology. In the future, I’m hoping to do more writing about different energy storage technologies, but it’s worth briefly exploring this one. Lithium nanophosphate is an evolution of traditional lithium iron phosphate batteries. Nanophosphate batteries have a wider usable state-of-charge window, meaning they’re able to output usable voltage deeper into their discharge cycle. Additionally, they’re more robust across a range of charge/discharge environments and cycles. Finally, they’re relatively safe and resistant to runaway conditions. They’re not exceedingly energy dense (lithium cobalt oxide is better), but for a variety of applications it’s a pretty cool technology.
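
A rough way to picture the “wider state of charge” point – the cell sizes and windows below are invented for illustration, not A123’s real numbers:

```python
# Usable energy given a chemistry's practical state-of-charge window.
# Cell sizes and windows are invented for illustration, not A123 specs.
def usable_wh(nominal_wh, soc_window):
    return nominal_wh * soc_window

conventional = usable_wh(100, 0.70)   # assumed: usable between ~15% and ~85% charge
nanophosphate = usable_wh(100, 0.90)  # assumed: flatter discharge curve, wider window

print(f"conventional:  {conventional:.0f} Wh usable of 100 Wh nominal")
print(f"nanophosphate: {nanophosphate:.0f} Wh usable of 100 Wh nominal")
```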

The A123 technology grew out of research conducted at MIT, and the company principals are MIT researchers. The technology has been used in electric vehicles (the Fisker Karma), high-density grid energy storage, and a variety of industrial applications that benefit from the particulars of this technology.

So, if the technology is relatively robust, why is the company being sliced up by a bankruptcy court in Delaware? A123 certainly had some missteps along their path to commercialization. More fundamentally, as many energy startups have discovered, building a high technology, manufacturing-centric company from scratch is extremely capital intensive. A123 was simply unable to generate enough business quickly enough, and other options for funding did not appear in time.

Ok, a new business didn’t sell enough stuff, so they went away. Pretty normal, right? Heck, most businesses don’t survive. Why get so bent out of shape about this one?

I think the bankruptcy of A123, and the subsequent asset sale, is a canary in the coalmine, a harbinger of things to come.

Green technology, and anything even tangentially related to green technology, has become incredibly politicized. The country experienced a brief flurry of excitement around green technology from 2007 through 2011, spurred in part by a loan program from the Department of Energy. The failure of Solyndra, struggles at other energy startups, and a frigid political climate made the Department of Energy extremely conservative about doling out large loans to green startups. A number of startups that had built assumptions about DOE financing into their business plans suddenly found those resources unavailable.

In the past, these companies may have been able to find more traditional investment through capital markets. Unfortunately, two factors have conspired to make that all but impossible for many of these startups. First, the fallout from the 2008 financial collapse dramatically increased the difficulty of finding capital on the scale necessary for these types of endeavors – finding a few million dollars to fund a smartphone app startup with a quick exit strategy is one thing, finding a billion dollars to build a manufacturing pipeline is a bit more difficult.

A far larger long term issue is the dramatic decline in domestic energy costs in the United States, as a result of the increased availability of natural gas from fracking. While the economic argument in favor of green energy was once relatively easy to make, given a bit of hand waving and optimistic projections, the low cost of natural gas has made that argument much more difficult. Because we don’t put a price on carbon in this country, these factors remove much of the economic incentive to invest in green technologies, given a limited opportunity for substantial short-term competitiveness in the domestic energy market.

While I enjoy low energy prices and cool smartphone apps as much as anyone, I’m also aware that the Earth is warming, fossil fuels are being depleted, and world energy use is rising.

As I discussed when writing about Kickstarter, making things is fundamentally difficult. If you think building an espresso machine from scratch is complicated, try building a car. Companies in this space need huge amounts of capital over long timespans, and many of them will fail. Some will fail because they have bad ideas. What worries me is the ones that fail because the ideas haven’t been given enough time or support.

A123 is, I believe, an example of the latter. Proven, working technology, already in the marketplace. An iterative improvement on other proven technology. And, most importantly, a collection of smart scientists and engineers continuing to iterate on future improvements. While we can all hope for massive leaps forward (see lithium-air), the history of successful “innovation” looks much more like A123 than EEStor.

So, A123 went bankrupt, and Wanxiang bid approximately double what Johnson Controls was willing to offer. Some members of Congress have now stepped in with some typical anti-Chinese protectionist arguments, rooted in a distorted understanding of the deal and the technology. While there’ll likely be hearings and protestation, it seems likely that this deal will go through.

The Chinese, like the rest of the world, are operating on a much more realistic understanding of the future energy landscape. It’s laughable to imagine a world in which things like high capacity batteries, photovoltaics, or wind turbines are declining in importance. And yet, we as a nation are choosing to cede our stakes in those technologies to other nations. Once again, this isn’t about protectionism or nationalism, as I have no jingoistic qualms about buying Chinese batteries or Korean solar cells. This is about global competitiveness, economic viability, and maintaining the places we love.

A handful of other “green” companies are on the brink. Fisker is currently scrambling for funding, having seen their future massively disrupted by the suspension of their DOE loans. Tesla, while in a far more robust financial situation, will require large amounts of external capital for the foreseeable future. MiaSole and Solibro have already been sold to China. I’m worried about who’s next.

It’d be more fun to light cigars

Kickstarter, the dominant player in the crowd-funding market, has seen rapid growth and adoption by a wide range of creative endeavors. Dollars committed have nearly quadrupled each year since launch.

With that growth, there’ve been an ever-increasing number of Kickstarter horror stories. Some, like Eyez by ZionEyez, appear to have been outright fraud. Others, like Geode, are well-intentioned projects that dramatically underestimate complexity and eventually give up. And for the vast majority, proposed timelines prove woefully ambitious.

I’ve been backing projects on Kickstarter for about a year, so it seemed like a good time to reflect on the ups and downs, and think about where it’s going from here. We covered some of this on episode 36 of Divergent Opinions but I want to dive a bit deeper. If you want an insider’s perspective on the Kickstarter process, I highly recommend It Will Be Exhilarating by Studio Neat.

I’ve personally backed 16 Kickstarter projects. Of those, 12 have been successful in achieving their funding goals. That’s well above the Kickstarter average of approximately 43% of projects meeting their goals. The total amount I’ve contributed is $604.

I treat Kickstarter primarily as a marketplace for interesting items. I’ll dig into different ways to think about Kickstarter in a bit. But my primary motivation for backing projects is that I want the item which is offered as a reward at a given funding level. To a lesser extent, I occasionally back projects for purely altruistic purposes. I suspect that I’m in the majority with this thought process.

Of the projects that have successfully been funded, 9 promised some sort of physical deliverable (books, gadgets, etc). Out of those 9, none have delivered, and 5 have missed their promised deadlines, in some cases by a substantial amount. Of the projects with digital deliverables (music, ebooks), 2 have delivered and none are behind schedule.

That simple statistic provides a pretty good insight into the ramp in complexity between creating a work (writing a book for example) and actually delivering physical copies of that work to people around the world.

So, of my successful projects, 16% have delivered on their promise, representing just 8% of the funds I’ve contributed. This doesn’t come across as a particularly solid investment.

But. I still fund things. Why is that?

Innovation is hard. Having a new idea and seeing it through to fruition is hard. Working out how to ship thousands of widgets around the world is hard. Hell, having boxes made to hold thousands of widgets is hard.

I fund projects because I want the things they’re promising. But more than that, I want the things they’re promising to exist in the world. And I want to support the people who feel passionately about creating those things.

Let’s look at one project in particular, the ZPM Espresso Machine. Of the projects I’ve backed, this is the most severely behind schedule. The promise is a cafe-grade espresso machine for a reasonable amount of money. This is an incredibly ambitious project, involving not just manufacturing, but manufacturing an object that involves high pressure, heat, liquids and computer control.

At this point, I’d realistically guess that there’s substantially less than a 50% chance that this project will deliver a working, reliable machine to me. And yet, I’m not upset. Yes, I’d like to have that espresso machine, and I contributed funds to the project with the mindset of purchasing the object. More importantly, I want an object like this to exist. The creators saw a gap in the espresso machine market – cheap machines that produce slightly strong coffee are available starting at under $20, real espresso machines cost closer to $1000. While they may not succeed with this project, they’re pushing that market forward, making it clear that there’s a desire, and hopefully learning a lot so that next time, they can succeed.

Kickstarter is approximately three years old. Three years ago, it would have been laughable to think that a couple grad students without offices or factories could produce products that could sit on store shelves alongside products from Krups, Belkin, or Timex. It’s slightly less laughable now. And it’s getting more serious all the time.

I believe that we’re entering a new stage of Kickstarter-backed product development. There’s been enough time for a degree of expertise, specialization, and institutional knowledge to develop around these projects. Kickstarter itself is beginning to get more savvy about filtering out projects that are unlikely to succeed. And projects themselves are beginning to appreciate the importance of bringing in outside experts early on.

While it may be too late for these projects, both the ZPM Espresso Machine and the Digital Bolex (of which I’m not a backer) have recently brought in outside project management and production professionals. This, to me, should be de rigueur for any serious hardware Kickstarter project.

Kickstarter, the company, needs projects to be successful. Not just successfully funded, but it needs them to successfully deliver on promises. They’re likely to face substantial legal challenges, in addition to a loss of credibility in the marketplace, if they don’t. And yet, they’ve always taken a hands-off approach to the projects that use their platform. Unlike Quirky, which shepherds projects through the process, Kickstarter is simply a website and a payment platform. I suspect they’ll need to reevaluate that if they want to survive.

In the meantime, I’ll continue to back projects that represent ideas which I believe will make the world a slightly more pleasurable place to be. I’ll continue to support people with big ideas. I know that, barring cases of outright fraud, the frustration I feel towards a project that fails to deliver is nothing compared to the frustration felt by the creators who’ve spent every waking moment trying to follow through on their commitments. In many cases, these projects represent years or decades of dreaming, prototyping, and sketching. While it may seem old-hat in Internet time, we’re still in the very early days of these sorts of projects, and I believe the future is bright.

Onshored, Outsourced, Overhyped

Apple CEO Tim Cook recently discussed plans to bring some Mac manufacturing to the US in 2013. Much of the coverage has focused on the labor aspects of this move – what it says about wages for American workers versus Chinese workers.

While labor costs are surely one part of any decision to move manufacturing, I haven’t seen enough exploration of other aspects, particularly as they relate to the specifics of Mac manufacture.

Tim Cook has repeatedly stated that labor costs are a relatively minor part of the decision to locate in China. Despite being an avowed Apple fanboi, the lefty in me knows to take anything a CEO says about labor with a grain of salt. If labor in China were double the cost of labor in California, you can be certain that any other logistical issues would be dealt with. However, it’s clear that there are many more issues at play, and the importance of labor cost disparities will be increasingly diminished over time.

While the people sitting on the assembly lines in Foxconn factories doing the day-to-day manufacturing represent relatively unskilled labor, Apple has pointed out in the past that their manufacturing operations in China rely on a huge number of on-the-ground engineers. These are the sorts of people that develop the manufacturing process, figure out quality control, and make sure that Apple can build tens of millions of iPhones and iPads each quarter.

So, what’s changed? Has the US suddenly trained the number of engineers necessary to run these types of operations domestically? Has some great leap in manufacturing changed that dynamic? I think not. So what other reasons would Apple have to move some manufacturing back to the US, and how could it be feasible?

First, it’s important to note that Tim Cook explicitly said “an existing Mac product line” will be the target of the on-shoring move. (It’s worth noting, though not relevant here, that Mr. Cook also made it clear that Apple won’t be running this operation directly, but will be funding manufacturing partners, as they have in China.) The iOS devices ship in such massive scale that it seems unlikely that the end-to-end manufacturing capabilities (everything from making the cardboard boxes to the earbuds) will be available outside China anytime soon. So, which Mac makes the most sense to move back to the US?

Proudly trading my fanboi hat for my lefty hat, I’m going to operate on the assumption that even Apple doesn’t make decisions like this for altruistic reasons. There’s a business case that’s been made, and that’s what is being acted on. With that in mind, we can review the broad categories of existing product lines and make some determinations.

First, let’s take the laptop lines. To my mind, the laptop lines, and in particular the Air line, have more in common with the iPad and iPhone lines than with the desktop Mac lines. They’re manufactured in similar ways, with relatively few customization options, are relatively compact, and ship in huge numbers. To me, if the plan were to move MacBook production to the US, we would also hear about similar plans for the iOS device line – it would indicate a dramatic shift in the manufacturing processes in use.

The Mac Mini is, in many ways, a screenless Macbook. It’s a candidate, but given the low profile of the device it seems unlikely.

We’ve already seen indications that some iMacs are being assembled in the US. If you’re not familiar with the differences between Assembled in the USA and Made in the USA, it’s worth taking a moment to read up. While there’s some gray area here, the assembly of some iMacs in the US seems unlikely to be the end goal. The central feature of the iMac – its gorgeous, large LCD – is unlikely to be made in the US anytime soon. LCD fabrication facilities are long-term investments, and it appears that Apple has recently made large investments in LCD manufacturing facilities overseas. The late 2012 iMacs also rely on specialized welding technology for the manufacture of the enclosure. Because this process requires substantial capital investment, which has clearly taken place in China, it’s safe to guess that they won’t be scrapping that investment in 2013 for a domestic move.

That leaves us with the near-dead, oft-forgotten Mac Pro line. We know that something Mac Pro-ish is coming in 2013. Why does the Mac Pro make sense to manufacture domestically?

First, it’s a low-volume product. It’s likely to stay a low-volume product. The market for trucks is unlikely to change substantially in the near future.

The Mac Pro is the only “traditionally” manufactured product in the Apple line. Because of the larger enclosure, the build process is closer to the assembly of a Dell or HP tower than to the assembly of a Mac Mini. This dramatically reduces the scale of the engineering problems that need to be solved for volume manufacture. Components are mounted in an enclosure and connected with cables. The Mac Pro also uses far more “off the shelf” components than any other Apple product. CPUs come from Intel (made, largely, in the US). Graphics cards come from either AMD or Nvidia. Commercial spinning disk drives, standard-sized RAM, and full-size power supplies round out the product. While we can’t guess what the future Mac Pro will look like, it seems unlikely to deviate all that radically. The Mac Pro doesn’t include a screen. The components it does use from overseas are compact and easy to transport, and are manufactured by existing, known suppliers.

Domestic manufacturing of the Mac Pro also has a variety of logistical benefits. The small volume means ramping an all-new manufacturing operation from zero is reasonably achievable. The reduced seasonal variability of the Mac Pro line means the more rigid American labor market is less of an issue.

The Mac Pro is by far the bulkiest product in the Apple line. Approximately 540 Mac Pros can fit in a standard 40-foot shipping container. Shipping costs have been relatively volatile in 2012, due to global economic uncertainties and carryover capacity oversupply from the 2008 financial collapse. At the moment, the cost to move a 40-foot container from Shanghai to the West Coast is approximately $2000. While $4 per unit represents a relatively small cost in overall purchase price terms, this represents a large fixed cost on this product line as compared to other Apple products. For comparison, the same shipping container can hold more than 120,000 iPhone 5 boxes, representing less than two cents per iPhone.
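
For anyone who wants to check my math:

```python
# Ocean freight cost per unit, using the figures above.
container_cost = 2000         # approximate Shanghai -> West Coast, 40-foot container
mac_pros_per_container = 540
iphones_per_container = 120_000

print(f"Mac Pro:  ${container_cost / mac_pros_per_container:.2f} per unit")           # ~$3.70
print(f"iPhone 5: {container_cost / iphones_per_container * 100:.1f} cents per unit")  # ~1.7 cents
```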

The Mac Pro is also (I believe) the product most likely to be custom-configured. Buyers have far more customization options, from drives to RAM to more esoteric options like fibre channel cards. Customized products represent inventory risks. By moving manufacturing to the United States, Apple can likely reduce the inventory costs associated with stocking pre-built configurations. They could, conceivably, move to a just-in-time approach, wherein your specific Mac Pro doesn’t start down the line until the purchase has occurred, without having to rely on air freight from Asia.

There are a variety of intangible benefits as well – for example, having products manufactured in the US may make it easier to keep manufacturing decisions a secret.

If this is accurate, and we see a domestically manufactured Mac Pro in 2013, I think it’s likely we’ll see a gradual shift in other Mac products, beginning with the iMac and ending with (least likely) the laptops. Given the current landscape, it continues to be exceedingly unlikely that your iPhone will have “Made in the USA” stamped on it any time soon.

So, does the shift of some manufacturing to the US represent a change of heart at Apple? Is it suddenly responding to those who say they’d happily pay more for a domestically manufactured product? Is it about creating jobs in the US? I suspect not. It’s business.

Why iFrame is a good idea

I’ve seen some hilariously uninformed posts about the new Apple iFrame specification. Let me take a minute to explain what it actually is.

First off, contrary to what the fellow at the Washington Post writes, it’s not really a new format. iFrame is just a way of using formats that we already know and love. As the name suggests, iFrame is an I-frame-only H.264 specification, using AAC audio. An intraframe version of H.264, eh? Sounds a lot like AVC-Intra, right? Exactly. And for exactly the same reasons – edit-ability. Whereas AVC-Intra targets the high end, iFrame targets the low end.

Even when used in intraframe mode, H.264 has some huge advantages over the older intraframe codecs like DV or DVCProHD. For example, significantly better entropy coding, adaptive quantization, and potentially variable bitrates. There are many others. Essentially, it’s what happens when you take DV and spend another 10 years working on making it better. That’s why Panasonic’s AVC-Intra cameras can do DVCProHD-quality video at half (or less) the bitrate.

Why does iFrame matter for editing? Anyone who’s tried to edit video from one of the modern H.264 cameras without first transcoding to an intraframe format has experienced the huge CPU demands and sluggish performance. Behind the scenes it’s even worse. Because interframe H.264 can have very long GOPs, displaying any single frame can rely on dozens or even hundreds of other frames. Because of the complexity of H.264, building these frames is very high-cost. And it’s a variable cost. Decoding the first frame in a GOP is relatively trivial, while decoding a B-frame in the middle can be hugely expensive.
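
Here’s a toy model of why that cost is so lumpy. In an intra-only stream every frame stands alone; in a long-GOP stream, a frame can lean on everything back to the previous I-frame. (Real H.264 reference handling is far more involved – this is just the shape of the problem.)

```python
# Toy model: how many frames must be touched to display frame N?
# Assumes each frame depends on all frames since the last I-frame,
# a simplification of real H.264 reference rules.
def frames_to_decode(target, gop_length):
    last_i_frame = (target // gop_length) * gop_length
    return target - last_i_frame + 1

for frame in (0, 7, 14):
    print(f"frame {frame:2d}: intra-only={frames_to_decode(frame, 1)}, "
          f"long GOP (15)={frames_to_decode(frame, 15)}")
# Intra-only is always 1; the long-GOP stream can need a whole GOP's worth.
```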

Programs like iMovie mask that from the user in some cases, but at the expense of high overhead. And anyone who’s imported AVC-HD video into Final Cut Pro or iMovie knows that there’s a long “importing” step – behind the scenes, the applications are transcoding your video into an intraframe format, like Apple Intermediate or ProRes. It sort of defeats one of the main purposes of a file-based workflow.

You’ve also probably noticed the amount of time it takes to export a video in an interframe format. Anyone who’s edited HDV in Final Cut Pro has experienced this. With DV, doing an “export to QuickTime” is simply a matter of Final Cut Pro rewriting all of the data to disk – it’s essentially a file copy. With HDV, Final Cut Pro has to do a complete re-encode of the whole timeline, to fit everything into the new GOP structure. Not only is this time-consuming, but it’s essentially a generation loss.

iFrame solves these issues by giving you an intraframe codec, with modern efficiency, which can be decoded by any of the H.264 decoders that we already know and love.

Having this as an optional setting on cameras is a huge step forward for folks interested in editing video. Hopefully some of the manufacturers of AVC-HD cameras will adopt this format as well. I’ll gladly trade a little resolution for instant edit-ability.