The 2013 Academy Awards ceremony has brought the chaos in the VFX industry to the forefront. This week on Divergent Opinions, Mike and I covered this story, as well as a bunch of cool research.
Over the last six months, the world of “e-learning” has been overrun by the MOOC, or massive open online course. In my years involved with technology in higher education, I’ve never seen another concept suck the oxygen out of the academic technology space in this fashion. While not fundamentally new, the MOOC has dominated the e-learning conversation and dramatically shifted higher education agendas.
I’m a proponent of the core concepts of the MOOC. Reducing barriers to knowledge is a good thing. An education model based on keeping knowledge locked up in (forgive me) an ivory tower is fundamentally unsustainable in a digital world. I also believe that large, lecture-based classes can be inefficient uses of student and instructor time, and there are plenty of cases of subpar instruction in higher education.
The dramatic rise of the MOOC within the mainstream press and higher education discourse has resulted in traditional institutions rushing to join the party. It is this action that I believe is shortsighted, poorly reasoned, and potentially destructive.
Let me begin by articulating some of my assumptions. The term “MOOC” (we could have a whole separate article on how horrible the acronym is, but it’s too late now) means different things to different people. To some extent, it has become a “unicorn” term – a way for the uninformed to sound prescient without needing to back their words with knowledge. The types of courses I’m addressing here are those taught in the traditional large lecture hall, usually introductory courses. These courses typically fill a lecture hall with uninterested freshmen, and are often taught by an equally uninterested graduate student. Coursework generally consists of readings, discussion sections, and multiple choice and short answer tests.
The MOOC movement instead takes the lectures, breaks them into bite-sized chunks, and offers them presented by articulate, informed professors. Students would watch these lectures online, participate in online discussion with other students, and then take standard assessments. Because this type of course can be infinitely scaled, it can be opened to any and all interested parties – traditional students as well as those learning simply “for fun”. A single course can effectively support an unlimited number of students with negligible overhead.
I should note that, although I tend to think about these issues in terms of the institution I’m most familiar with – the University of Minnesota – I believe the lessons and missteps I’ll discuss are being repeated throughout the higher education system.
Dissecting the MOOC
As I stated at the beginning, I think the MOOC concept is perfectly fine. Taken on its own, the typical MOOC outlined above is a reasonable approach to replacing large lecture coursework. If it means lectures are better designed, more polished, and delivered with more passion and energy, that’s even better.
I do take issue with the notion that the MOOC is a righteous democratization of knowledge, opening it up to learners in it simply for the joy of learning. I think this is a complete red herring, presented by MOOC backers to deflect valid criticism. The statistics bear this out: retention rates for MOOCs are consistently in the 10-15% range. It turns out that although most people think learning for fun sounds like a good idea, the reality is that it rarely happens. Moreover, the MOOC format does not lend itself to answering a specific question or providing a specific skill – training and tutorial organizations like lynda.com and the Khan Academy are far better suited to these types of needs. If my end goal is to learn Photoshop or integral calculus, I’m unlikely to participate in an ongoing, sequenced course like a typical MOOC.
That said, I don’t otherwise find serious fault with the way MOOCs are being run by firms like Coursera and Udacity. If they’re able to sustainably produce a product that users are interested in, great.
The higher education response
The response from higher ed, on the other hand, is seriously flawed.
Let’s begin with the language. The companies behind the MOOC use language that assumes that traditional classroom instruction is broken. To hear them describe it, the typical lecture hall is a post-apocalyptic wasteland of bored students and depressed faculty, droning on, with little to no learning taking place. Rather than responding with an impassioned, evidence-based defense of this form of instruction, higher education has by and large accepted this characterization and decided that MOOCs are indeed the answer. Institutions have entered this discussion from a defensive, reactionary position.
Are large lecture courses perfect? Certainly not. In fact, I’m a huge proponent of the blended model, which seeks to shift lecture delivery to polished, segmented, online videos, with class time dedicated to deep dives and discussion. But I do think large lecture courses have value as well. A good lecturer can make the experience engaging and participatory, even in a lecture hall with hundreds of students. And the simple act of trudging across campus to sit in a lecture hall for 50 minutes acts as a cognitive organizer – a waypoint on a student’s week. Keep in mind, these courses are typically made up primarily of eighteen-year-old freshmen, struggling with life on campus and figuring out how to organize their time and lives.
These large lecture courses almost always include a group discussion component. This is a time for students to participate in a mediated, directed discussion. More importantly, this is a time for the nurturing and growth that I believe is critical to the undergraduate experience. Seeing classmates, hearing their inane or insightful questions, listening to them articulate the things that they’re struggling with – these are all important but difficult to quantify parts of learning, which are not captured by a solitary or participation-optional online experience. Even a highly functional synchronous online discussion is inherently less likely to go down an unexpected rabbit hole in the exciting and invigorating way that a great classroom discussion can – even in a large lecture hall.
Instead of making the case for traditional courses and shifting the conversation to ways to make them even better (and more affordable), institutions have rushed to turn their own courses into MOOCs, and offer them to the world. This brings me to my next serious concern about this movement.
Why does anyone care about your MOOC? If the MOOC concept is taken to its logical end, the world needs a single instance of any given course. Institutions seem to be unwilling to acknowledge this. Instead, there’s an assumption they’ll build the courses and the world will beat a path to their door. Why would I choose to take a course from the University of Minnesota, when I can take the same course from Harvard or Stanford? Are institutions ready to compete at the degree of granularity this type of environment allows?
The rush to create MOOCs reminds me a bit of the Little Free Library system. I lack the metrics to convincingly argue this point, but as an outside observer, it seems that people are generally more interested in creating the libraries than using them, and it feels like the number of libraries equals or exceeds the number of patrons. I believe that the MOOC landrush risks resulting in a similar imbalance. If every school, from the prestigious to the second or third tier, rushes to offer MOOC forms of their courses, there is likely to be an abundance of supply, without the requisite demand.
Heating Buildings is Expensive. MOOCs are by and large the stereotypical “bubble” product. There’s little to no business model behind them, either for their corporate backers or participating institutions. Although that’s probably fine for the companies delivering the courses – their overhead costs are relatively low – it’s a huge issue for institutions with massive infrastructure and staff overhead. If we’re moving to a model where the cost of coursework is dramatically reduced, it presents existential threats for the other missions of the institution, or the very existence of the institution altogether. While it’s assumed that institutions will continue to charge as part of the degree-granting process, nobody seems to think that a course delivered via a MOOC should be priced similarly to a course delivered in a classroom.
How does the institution benefit? How does offering a MOOC make the institution a better place? Less expensive options for receiving course credit are certainly a benefit for students, but that is a separate issue. Higher education is far too expensive, but MOOCs are not the sole solution. In general, the value proposition for the University is esoteric – simply offering the course supposedly means a wider audience will enhance the content, and its existence will enhance your institution’s standing in the world. These claims are, at best, hopelessly idealistic. Because there’s no business model to speak of for MOOCs, Universities are left shouldering the costs of creating the courses with little to no expectation of having that value returned.
An alternative path
Having great courses delivered by great instructors with great instructional design is… great. I’d much rather take Introduction to Economics from Paul Krugman than a grad student. Having that type of content available is inarguably better for the world.
I believe the path forward is to leverage the best content, and combine it with the high-touch, in-person experience that makes undergraduate education so important, particularly for traditional students. Mediated discussions, in-person office hours, and writing assignments graded by people with (ostensibly) actual writing skills are the types of growth activities that create the high functioning, well-rounded people our society needs.
It’s also crucial for higher education to begin pushing back against the language of the broken classroom. Although institutions are indeed broken in innumerable ways, by and large, instruction actually works pretty well.
It’s critical as well that a clear distinction is drawn between MOOCs and online education in general. Along with Jude Higdon, I teach an online course which is in many ways the anti-MOOC. Our students do large amounts of writing, which is copiously annotated by the instructors. Students participate in synchronous chat sessions and office hours with the instructors. Although lecture content is reused, the interpersonal interaction is real, genuine, and frequent. This concept is, by design, not scalable. But I believe the benefits in terms of breadth and depth offered to students by this experience are demonstrably better than those offered by a MOOC. Institutions need to be honest about these tradeoffs.
A MOOC is not a magic bullet. It will not solve higher education’s substantial woes. It will create new woes.
Your MOOC will almost certainly not make a dent in the universe. The world will not beat a path to your door, and you still need to pay to maintain the doors.
This week on Tech Chatter, we discuss the “juicy middle” of cloud service migration for higher ed – individual applications hosted via a Software as a Service (SaaS) model. What are the pros, what are the cons, what are the potential pitfalls?
My Pebble watch arrived yesterday. This was a project that I kickstarted back in May of 2012. While moderately behind schedule, they’ve delivered a functioning, elegant product which does what it was supposed to.
It’ll take some time for the Pebble ecosystem to develop. Right now, in addition to serving as a watch, the Pebble can control music playback on my iPhone, and display notifications (for example, show me a text message as it comes in). Eventually, I suspect we’ll see whole new types of applications for this breed of glance-able, connected device.
Already, I’m finding the notification functionality pretty attractive. It’s great to not have to pull my phone out of my pocket (particularly when all bundled up in the winter) to see who’s calling or check an iMessage. The build quality seems excellent, and the whole device works pretty slick. I’m still getting used to the whole notion of wearing a watch, having not done so since the 90s (!!), but so far I can give it a thumbs up.
This week on Divergent Opinions, we round up the news from the week, with a focus on some new and some not-so-new codec news.
The ongoing saga of the Boeing 787 Dreamliner has resulted in a surge of partial or completely misleading stories about modern battery technology. While I’m far from an expert in the field, it’s one I follow closely, and I think I can contribute an “interested outsider” perspective on the state of rechargeable batteries and related technologies, circa 2013.
Let’s start by talking terminology. Lithium-Ion is an umbrella term, which represents a whole family of technologies. Simply knowing that a given application (like an airplane) makes use of “lithium-ion batteries” tells you very little about the performance, safety, and reliability characteristics of those batteries.
Battery technology is a materials-science-intensive field, so it should come as no surprise that material choice is the key differentiator between batteries in the lithium-ion family. The three core components of a battery are the cathode, the anode, and the electrolyte that separates them.
While there are hundreds of combinations of materials in use, depending on the intended application (and the patent pools of their backers), the most meaningful differentiation to be aware of is the types of positive electrodes (cathodes) in use.
The three primary families of cathode materials, and those worth knowing a little something about, are lithium-cobalt, lithium-iron-phosphate, and lithium-manganese. Each has different pros, cons, and risks.
A further note about terminology here – seeing types of electrodes written in this fashion might cause you to think that other terminology, like lithium-polymer, also refers to electrode choice. Unfortunately, it’s just confusing terminology. In fact, lithium-polymer refers to the electrolyte, and a lithium-polymer battery can use any of the above mentioned electrode materials. Your laptop, for example, almost certainly uses lithium-polymer batteries with lithium-cobalt cathodes.
Now, a battery doesn’t contain pure lithium. That’s why you’re not on fire right now. The lithium is bonded with another material – that’s the cobalt, iron-phosphate, etcetera. These molecules also include oxygen. When exposed to high temperatures, these bonds can break down, resulting in nice, reactive lithium, along with fire’s friend, oxygen. In the case of a battery, a high-temperature situation can result from poor charging circuitry, short circuits, punctures or other external trauma. Since a battery generally consists of many cells, a single failed cell can easily produce enough heat to initiate a chain reaction.
Lithium-cobalt is the most common type of lithium-ion cathode, and delivers high energy density, relatively low cost manufacturing, and decent longevity when managed properly. The primary downside is that the lithium-cobalt bond is relatively weak, meaning these are generally the type of lithium battery at fault when you hear about battery fires. See, for example, the 787.
The most common alternative to lithium-cobalt is lithium-iron-phosphate. The A123 Systems batteries I’ve written about in the past are a derivative of this technology. The lithium-iron-phosphate bond is inherently more stable, even when abused or severely heated. The structure of the lithium-iron-phosphate molecule is such that it takes far more energy to free the lithium. Thus, these batteries are ideal for environments in which safety is key – automotive uses for example.
Now, it’s fair to ask why the Boeing 787 doesn’t use this type of battery. I’m obviously not privy to the internal engineering decisions at Boeing, but I can hazard a guess. First off, the battery design for the 787 was locked in 2005 or 2006. Back then, the technology for lithium-iron-phosphate was relatively immature and volume use wasn’t common. Additionally, for a given power output, a lithium-iron-phosphate battery will be larger and heavier than a corresponding lithium-cobalt design – this would have been even more pronounced in 2005.
In addition, the types of situations in which a lithium-iron-phosphate design is “safer” don’t commonly occur on an aircraft. For example, if the relatively small battery of a 787 is engulfed in flames, there are far, far bigger issues to worry about. The risk in an automotive implementation is that a relatively minor accident that damages the battery pack could cause a thermal runaway condition. There don’t tend to be “relatively minor” accidents involving massive jets. The other types of issues that can cause problems with batteries should be possible to mitigate through external controls – smart chargers with fused links in the case of overvoltage, etcetera. When we finally learn (if we learn) what caused the issues on the 787, I would suspect we’ll find that at least part of the cause was poor design or manufacturing issues surrounding these systems, rather than in the battery cells themselves.
There are a variety of other cathode chemistries in various applications. In particular, lithium manganese oxide and related manganese compounds provide better longevity and performance in harsh environments, but don’t yet excel in general purpose situations.
Supercapacitors represent another, related family of energy storage technologies which occasionally spawns a lot of interest, without necessarily a lot of results. Like all capacitors, the supercapacitor (née ultracapacitor) stores a static charge using a variety of different materials. A supercapacitor can store energy very quickly, for a relatively long time, and survives a far greater number of charge cycles than a chemical battery. Unfortunately, supercapacitors store a relatively small amount of energy and are thus more appropriate to high-output, low-duration implementations. Over time, capacity is improving, but the overlap between supercapacitors and traditional batteries is still relatively small – power tools and a few other small gadgets. Cost is still a limiting factor as well.
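To put a number on that “relatively small amount of energy”: a capacitor stores E = ½CV². Here’s a quick sketch using illustrative figures for a large commercial supercapacitor cell (roughly 3000 farads at 2.7 volts, weighing about half a kilogram) – these are my assumed figures, not specifications from any particular product:

```python
# A capacitor stores E = 1/2 * C * V^2. Even a very large
# supercapacitor holds little energy next to a chemical battery.
# Figures below are illustrative assumptions: a large commercial
# cell of ~3000 F at 2.7 V, weighing ~0.5 kg.

capacitance_f = 3000.0
voltage_v = 2.7
mass_kg = 0.5

energy_j = 0.5 * capacitance_f * voltage_v ** 2
energy_wh = energy_j / 3600.0                  # joules -> watt-hours
specific_energy_wh_kg = energy_wh / mass_kg

print(f"Stored energy: {energy_j / 1000:.1f} kJ ({energy_wh:.2f} Wh)")
print(f"Specific energy: ~{specific_energy_wh_kg:.1f} Wh/kg")
```

That works out to roughly 11 kJ, or about 3 watt-hours – around 6 Wh/kg, versus triple-digit Wh/kg for typical lithium-ion cells. Hence “high-output, low-duration.”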
Longer term, supercapacitors have a lot of potential in energy recovery applications – for example, regenerative braking. But, beware startups promising orders of magnitude advances in supercapacitor technology. There are many out there making such claims, and none have been able to demonstrate solid evidence of their viability.
The reality is that, barring some “out of left field” advance, battery technology looks set to improve in relatively small steps as materials science advances, nanotech manufacturing processes improve, and overall volume drives down costs. An electric car that can charge in seconds and deliver a 500 mile range seems unlikely in the coming decade. But the more relatively-decent electric cars you buy today, the more realistic that future car becomes. I’m sure Tesla, Nissan, and Fisker would appreciate it as well.
We’re incredibly excited to announce Phosphor, our brand new app. Phosphor makes it easy to put animations and other types of motion on the web, without requiring plugins or special browser video support. We’re eager to see how it gets utilized.
On this episode of Divergent Opinions, we cover the ins and outs of the Phosphor development process, the motivation behind the app, and some of the ways we hope people will use it.
Energy storage is a key component in our inevitable move away from fossil fuels. If we ever want renewables to take over for base-load demand (having a wind farm keep your fridge running even when the wind isn’t blowing), or drive long distances in plug-in electrics, we’ll need a serious revolution in energy storage.
This is a field I’m very excited about, both in the near term and the long term. It’s an area where there are still big problems to be solved, with lots of opportunities for real ground-up innovation in basic physics, materials science, chemistry and manufacturing.
There’s a need to begin developing our language around energy storage, and to develop a more thorough understanding of the tradeoffs involved. This has been made abundantly clear by the coverage surrounding the battery issues of the Boeing 787 Dreamliner. Most mainstream press has been unable or unwilling to cover the science behind the battery issues, or to accurately explain the decision making that led to the selection of the type of batteries involved.
When we talk about energy storage, there are two key factors to consider: energy density and specific energy. Energy density is how much energy you can fit into a given space (megajoules/liter), and specific energy is how much energy you can “fit” into a given mass (megajoules/kg).
Let’s look at a concrete example. The Tesla Roadster relies on a large battery pack, made up of many lithium-ion cells. The battery pack weighs 450 kilograms, and has a volume of approximately 610 liters. It stores 53 kilowatt hours of energy (190 MJ). So, it has a specific energy of 0.42 MJ/kg and an energy density of 0.31 MJ/liter.
For comparison, let’s look at the Lotus Elise (I could have said “my Lotus Elise” but I didn’t, because I’m classy), which is fundamentally the same car running on gasoline. It can carry 11 gallons of gasoline in its fuel tank. Gasoline has a specific energy of 46 MJ/kg, and an energy density of 36 MJ/liter (those of you screaming about efficiency, hold on). The 29 kilograms of gasoline in a full tank represent 1334 MJ of energy, approximately 7 times more than the 450 kilogram Tesla battery pack. Frankly, it’s a wonder the Tesla even moves at all!
Now, it’s important to add one more layer of complexity here. Internal combustion engines aren’t particularly efficient at actually moving your car. They’re very good at turning gasoline into heat. The very best gasoline engines achieve approximately 30% efficiency at their peak, so of that 1334 MJ in the Lotus’ tank, perhaps only 400 MJ are actually used to move the car. The rest is used to cook the groceries that you probably shouldn’t have put in the trunk. The electric drive train in the Tesla on the other hand is closer to 85% efficient.
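If you’d like to check the arithmetic, here’s a quick sketch that reruns the comparison above (the vehicle figures and the 30%/85% efficiency numbers are the rough ones from this post, not precise specifications):

```python
# Back-of-the-envelope comparison of the Tesla Roadster battery pack
# and the Lotus Elise fuel tank, using the rough figures from the text.

MJ_PER_KWH = 3.6

# Tesla Roadster battery pack
pack_mass_kg = 450
pack_volume_l = 610
pack_energy_mj = 53 * MJ_PER_KWH            # 53 kWh is about 190 MJ

specific_energy = pack_energy_mj / pack_mass_kg   # MJ/kg
energy_density = pack_energy_mj / pack_volume_l   # MJ/liter

# Lotus Elise fuel tank: ~11 gallons, ~29 kg of gasoline
fuel_mass_kg = 29
fuel_energy_mj = fuel_mass_kg * 46          # gasoline: ~46 MJ/kg

# Only a fraction of the stored energy actually moves the car
gasoline_usable_mj = fuel_energy_mj * 0.30  # best-case ICE efficiency
electric_usable_mj = pack_energy_mj * 0.85  # electric drivetrain

print(f"Battery: {specific_energy:.2f} MJ/kg, {energy_density:.2f} MJ/liter")
print(f"Gasoline on board: {fuel_energy_mj:.0f} MJ, ~{gasoline_usable_mj:.0f} MJ usable")
print(f"Battery on board:  {pack_energy_mj:.0f} MJ, ~{electric_usable_mj:.0f} MJ usable")
```

Run the numbers and gasoline’s seven-fold raw energy advantage shrinks to roughly two and a half times once drivetrain efficiency is factored in.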
That’s a quick example of why understanding some of the engineering, science, and math behind energy storage is important – without means for comparison, it can be difficult to grasp the tradeoffs that have been made, and why products end up being designed the way that they are.
I’ll dig deeper into the specific types of battery technologies on the market and the horizon in a future post. At the moment, they’re all within approximately the same ballpark for density and specific energy, and simply offer different tradeoffs in terms of charge times, safety, and longevity.
Batteries are not the only way to store energy though, and aren’t nearly as sexy as some of the alternatives.
Fuel cells have fallen out of vogue a bit over the last few years. While Honda forges on, most of the excitement seems to have been supplanted, for now, by acceptance that the lack of a large-scale hydrogen distribution network dooms them to a chicken-or-the-egg fate for the foreseeable future. Fuel cells operate by combining stored hydrogen with oxygen from the air to release energy. Because hydrogen can be made by electrolyzing water, fuel cells are a feasible way of storing energy generated by renewable sources.
Due to the increased efficiency of an electric drivetrain and the high specific energy (though lower energy density) of hydrogen, a fuel cell drivetrain can rival gasoline for overall system efficiency. Unfortunately, they achieve all of this using a variety of exotic materials, resulting in costs that are completely unrealistic (think hundreds of thousands of dollars per car) and look likely to remain there for the foreseeable future. That said, just today saw word of a fuel cell technology-sharing deal between BMW and Toyota – perhaps there’s still some life in this space.
There’s another type of energy storage, which excites me most of all – these are the “use it or lose it” short term energy storage technologies, which are designed primarily to replace batteries in hybrid drivetrains, or to smooth short term power interruptions in fixed installations.
I’d like to explore these in more depth in the future, but for now, a quick survey is appropriate. The technology I’m most interested in is kinetic energy storage in the form of flywheels. At its most basic, you take a wheel, get it spinning, and then couple it to a generator to convert the motion back into electricity.
This is an old technology. Traditionally, you used very heavy wheels, spinning relatively slowly. This type of system is sometimes used in place of batteries for short term power in data centers. In the last few years, flywheels have gotten interesting for smaller-scale applications as well, thanks to modern materials sciences. A small amount of mass spinning very fast can store the same amount of energy as a large amount of mass spinning very slowly. Modern materials and manufacturing mean it’s realistic to build a hermetically sealed flywheel which can spin at hundreds of thousands of RPM. Ricardo has done just that, as has Torotrak.
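The “small mass spinning very fast” point falls directly out of the physics: a spinning disc stores E = ½Iω², and because rotational speed is squared, speed buys far more storage than mass. A rough sketch with assumed (not sourced) figures:

```python
import math

# Kinetic energy of a spinning flywheel: E = 1/2 * I * omega^2,
# with moment of inertia I = 1/2 * m * r^2 for a solid disc.
# The masses, radii, and speeds below are illustrative assumptions.

def flywheel_energy_j(mass_kg, radius_m, rpm):
    """Energy stored in a solid-disc flywheel, in joules."""
    inertia = 0.5 * mass_kg * radius_m ** 2
    omega = rpm * 2 * math.pi / 60          # convert RPM to rad/s
    return 0.5 * inertia * omega ** 2

# Traditional data-center-style flywheel: heavy and slow
heavy = flywheel_energy_j(mass_kg=500, radius_m=0.5, rpm=3_000)

# Modern sealed flywheel: a fraction of the mass, spinning fast
light = flywheel_energy_j(mass_kg=5, radius_m=0.1, rpm=100_000)

print(f"Heavy/slow: {heavy / 1e6:.2f} MJ")   # ~3.1 MJ
print(f"Light/fast: {light / 1e6:.2f} MJ")   # ~1.4 MJ, at 1% of the mass
```

With one percent of the mass, the fast wheel stores energy in the same ballpark as the heavy one – and since energy scales with the square of RPM, doubling its speed would quadruple its storage.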
These systems have the advantage of being relatively lightweight, simple and low-cost. While they don’t store a large amount of energy, they’re ideal for regenerative braking and increasing the overall efficiency of an ICE drivetrain.
Another category of energy storage is thermal storage. These are what they sound like – means to store heat (most often from solar) for extended periods of time. This is another old technology, with some interesting new twists. Remember that gasoline engines turn much of their energy into heat. Some manufacturers are experimenting with systems which can convert some of that heat back into energy, using good old fashioned steam.
A final type of storage which doesn’t fit nicely into any category is compressed air. This week, Peugeot-Citroen (PSA) unveiled their compressed air hybrid drivetrain. This system uses compressed air pressurized by an onboard pump, driven through regenerative braking and other “waste” energy capture. While more complex than a flywheel, total energy storage is much greater as well, and PSA is talking of 30% reductions in emissions thanks to this technology. Tata has also experimented with cars using the MDI compressed air drivetrain, which is designed to be “fueled” by an offboard compressor.
As I noted at the beginning, part of what makes me excited about this space is that it’s not a solved problem. There are loads of companies all around the world creating innovative solutions. Most of them will probably fade away, but some have a reasonable chance of replacing or supplementing the “status quo” energy storage options we have today. Interestingly as well, no one country is dominating the research in this space. The UK, in keeping with tradition, has a large number of very small companies working on projects (their “cottage industries” are often actually housed in cottages!), while the US does this sort of development primarily via research institutions, and other countries rely on government-run labs.
Until all are one, bah weep grana weep ninny bon.