The 2013 Academy Awards ceremony has brought the chaos in the VFX industry to the forefront. This week on Divergent Opinions, Mike and I covered this story, as well as a bunch of cool research.
Over the last six months, the world of “e-learning” has been totally and completely overrun by the MOOC, or massive open online course. In my years involved with technology in higher education, I’ve never seen another concept suck the oxygen out of the academic technology space in this fashion. While not fundamentally new, the MOOC has totally dominated the e-learning conversation and dramatically shifted higher education agendas.
I’m a proponent of the core concepts of the MOOC. Reducing barriers to knowledge is a good thing. An education model based on keeping knowledge locked up in (forgive me) an ivory tower is fundamentally unsustainable in a digital world. I also believe that large, lecture-based classes can be inefficient uses of student and instructor time, and there are plenty of cases of subpar instruction in higher education.
The dramatic rise of the MOOC within the mainstream press and higher education discourse has resulted in traditional institutions rushing to join the party. It is this action that I believe is shortsighted, poorly reasoned, and potentially destructive.
Let me begin by articulating some of my assumptions. The term “MOOC” (we could have a whole separate article on how horrible the acronym is, but it’s too late now) means different things to different people. To some extent, it has become a “unicorn” term – a way for the uninformed to sound prescient without needing to back their words with knowledge. The types of courses I’m addressing here are those taught in the traditional large lecture hall, usually introductory courses. These courses typically fill a lecture hall with disinterested freshmen, and are often taught by a disinterested graduate student. Coursework generally consists of readings, discussion sections, and multiple choice and short answer tests.
The MOOC movement instead takes the lectures, breaks them into bite-sized chunks, and offers them presented by articulate, informed professors. Students would watch these lectures online, participate in online discussion with other students, and then take standard assessments. Because this type of course can be infinitely scaled, it can be opened to any and all interested parties – traditional students as well as those learning simply “for fun”. A single course can effectively support an unlimited number of students with negligible overhead.
I should note that, although I tend to think about these issues in terms of the institution I’m most familiar with – the University of Minnesota – I believe the lessons and missteps I’ll discuss are being repeated throughout the higher education system.
Dissecting the MOOC
As I stated at the beginning, I think the MOOC concept is perfectly fine. Taken on its own, the typical MOOC outlined above is a reasonable approach to replacing large lecture coursework. If it means lectures are better designed, more polished, and delivered with more passion and energy, that’s even better.
I do take issue with the notion that the MOOC is a righteous democratization of knowledge, opening it up to those learners simply interested for the joy of learning. I think this is a complete red herring argument, presented by MOOC backers to deflect valid criticism. The statistics bear this out: retention rates for MOOCs are consistently in the 10-15% range. It turns out that although most people think learning for fun sounds like a good idea, the reality is that it rarely happens. Moreover, the MOOC format does not lend itself to answering a specific question or providing a specific skill – training and tutorial organizations like lynda.com and the Khan Academy are far better suited to these types of needs. If my end goal is to learn Photoshop or integral calculus, I’m unlikely to participate in an ongoing, sequenced course like a typical MOOC.
That said, I don’t otherwise find serious fault with the way MOOCs are being run by firms like Coursera and Udacity. If they’re able to sustainably produce a product that users are interested in, great.
The higher education response
The response from higher ed, on the other hand, is seriously flawed.
Let’s begin with the language. The companies behind the MOOC use language that assumes that traditional classroom instruction is broken. To hear them describe it, the typical lecture hall is a post-apocalyptic wasteland of bored students and depressed faculty, droning on, with little to no learning taking place. Rather than responding with an impassioned, evidence-based defense of this form of instruction, higher education has by and large accepted this characterization and decided that MOOCs are indeed the answer. Institutions have entered this discussion from a defensive, reactionary position.
Are large lecture courses perfect? Certainly not. In fact, I’m a huge proponent of the blended model, which seeks to shift lecture delivery to polished, segmented, online videos, with class time dedicated to deep dives and discussion. But I do think large lecture courses have value as well. A good lecturer can make the experience engaging and participatory, even in a lecture hall with hundreds of students. And the simple act of trudging across campus to sit in a lecture hall for 50 minutes acts as a cognitive organizer – a waypoint in a student’s week. Keep in mind, these courses are typically made up primarily of eighteen-year-old freshmen, struggling with life on campus and figuring out how to organize their time and lives.
These large lecture courses almost always include a group discussion component. This is a time for students to participate in a mediated, directed discussion. More importantly, this is a time for the nurturing and growth that I believe is critical to the undergraduate experience. Seeing classmates, hearing their inane or insightful questions, listening to them articulate the things that they’re struggling with – these are all important but difficult to quantify parts of learning, which are not captured by a solitary or participation-optional online experience. Even a highly functional synchronous online discussion is inherently less likely to go down an unexpected rabbit hole in the exciting and invigorating way that a great classroom discussion can – even in a large lecture hall.
Instead of making the case for traditional courses and shifting the conversation to ways to make them even better (and more affordable), institutions have rushed to turn their own courses into MOOCs, and offer them to the world. This brings me to my next serious concern about this movement.
Why does anyone care about your MOOC? If the MOOC concept is taken to its logical end, the world needs a single instance of any given course. Institutions seem to be unwilling to acknowledge this. Instead, there’s an assumption they’ll build the courses and the world will beat a path to their door. Why would I choose to take a course from the University of Minnesota, when I can take the same course from Harvard or Stanford? Are institutions ready to compete at the degree of granularity this type of environment allows?
The rush to create MOOCs reminds me a bit of the Little Free Library system. I lack the metrics to convincingly argue this point, but as an outside observer, it seems that people are generally more interested in creating the libraries than using them, and it feels like the number of libraries equals or exceeds the number of patrons. I believe the MOOC land rush risks producing a similar imbalance. If every school, from the prestigious to the second or third tier, rushes to offer MOOC forms of their courses, there is likely to be an abundance of supply, without the requisite demand.
Heating Buildings is Expensive. MOOCs are by and large the stereotypical “bubble” product. There’s little to no business model behind them, either for their corporate backers or participating institutions. Although that’s probably fine for the companies delivering the courses – their overhead costs are relatively low – it’s a huge issue for institutions with massive infrastructure and staff overhead. If we’re moving to a model where the cost of coursework is dramatically reduced, it presents existential threats for the other missions of the institution, or the very existence of the institution altogether. While it’s assumed that institutions will continue to charge as part of the degree-granting process, nobody seems to think that a course delivered via a MOOC should be priced similarly to a course delivered in a classroom.
How does the institution benefit? How does offering a MOOC make the institution a better place? Less expensive options for receiving course credit are certainly a benefit for students, but that is a separate issue. Higher education is far too expensive, but MOOCs are not the sole solution. In general, the value proposition for the university is esoteric – simply offering the course supposedly means the world will enhance the content by bringing a wider audience, and its existence will enhance your institution’s standing in the world. These hopes are, at best, hopelessly idealistic. Because there’s no business model to speak of for MOOCs, universities are left shouldering the costs of creating the courses with little to no expectation of having that value returned.
An alternative path
Having great courses delivered by great instructors with great instructional design is… great. I’d much rather take Introduction to Economics from Paul Krugman than from a grad student. Having that type of content available is inarguably better for the world.
I believe the path forward is to leverage the best content, and combine it with the high-touch, in-person experience that makes undergraduate education so important, particularly for traditional students. Mediated discussions, in-person office hours, and writing assignments graded by people with (ostensibly) actual writing skills are the types of growth activities that create the high functioning, well-rounded people our society needs.
It’s also crucial for higher education to begin pushing back against the language of the broken classroom. Although institutions are indeed broken in innumerable ways, by and large, instruction actually works pretty well.
It’s critical as well that a clear distinction is drawn between MOOCs and online education in general. Along with Jude Higdon, I teach an online course which is in many ways the anti-MOOC. Our students do large amounts of writing, which is copiously annotated by the instructors. Students participate in synchronous chat sessions and office hours with the instructors. Although lecture content is reused, the interpersonal interaction is real, genuine, and frequent. This concept is, by design, not scalable. But I believe the benefits in terms of breadth and depth offered to students by this experience are demonstrably better than those offered by a MOOC. Institutions need to be honest about these tradeoffs.
A MOOC is not a magic bullet. It will not solve higher education’s substantial woes. It will create new woes.
Your MOOC will almost certainly not make a dent in the universe. The world will not beat a path to your door, and you still need to pay to maintain the doors.
This week on Tech Chatter, we discuss the “juicy middle” of cloud service migration for higher ed – individual applications hosted via a Software as a Service (SaaS) model. What are the pros, what are the cons, what are the potential pitfalls?
My Pebble watch arrived yesterday. This was a project that I kickstarted back in May of 2012. While moderately behind schedule, they’ve delivered a functioning, elegant product which does what it was supposed to.
It’ll take some time for the Pebble ecosystem to develop. Right now, in addition to serving as a watch, the Pebble can control music playback on my iPhone, and display notifications (for example, show me a text message as it comes in). Eventually, I suspect we’ll see whole new types of applications for this breed of glance-able, connected device.
Already, I’m finding the notification functionality pretty attractive. It’s great to not have to pull my phone out of my pocket (particularly when all bundled up in the winter) to see who’s calling or check an iMessage. The build quality seems excellent, and the whole device works pretty slick. I’m still getting used to the whole notion of wearing a watch, having not done so since the 90s (!!), but so far I can give it a thumbs up.
This week on Divergent Opinion, we round up the news from the week, with a focus on some new and some not-so-new codec news.
The ongoing saga of the Boeing 787 Dreamliner has resulted in a surge of partial or completely misleading stories about modern battery technology. While I’m far from an expert in the field, it’s one I follow closely, and I think I can contribute an “interested outsider” perspective on the state of rechargeable batteries and related technologies, circa 2013.
Let’s start by talking terminology. Lithium-Ion is an umbrella term, which represents a whole family of technologies. Simply knowing that a given application (like an airplane) makes use of “lithium-ion batteries” tells you very little about the performance, safety, and reliability characteristics of those batteries.
Battery technology is a materials-science intensive field, so it should come as no surprise that material choice is the key differentiator between batteries in the lithium-ion family. The three core components of a battery are the cathode, the anode, and the electrolyte that separates them.
While there are hundreds of combinations of materials in use, depending on the intended application (and the patent pools of their backers), the most meaningful differentiation to be aware of is the types of positive electrodes (cathodes) in use.
The three primary families of cathode materials, and those worth knowing a little something about, are lithium-cobalt, lithium-iron-phosphate, and lithium-manganese. Each has different pros, cons, and risks.
A further note about terminology here – seeing types of electrodes written in this fashion might cause you to think that other terminology, like lithium-polymer, also refers to electrode choice. Unfortunately, it’s just confusing terminology. In fact, lithium-polymer refers to the electrolyte, and a lithium-polymer battery can use any of the above mentioned electrode materials. Your laptop, for example, almost certainly uses lithium-polymer batteries with lithium-cobalt cathodes.
Now, a battery doesn’t contain pure lithium. That’s why you’re not on fire right now. The lithium is bonded with another material – that’s the cobalt, iron-phosphate, etcetera. These molecules also include oxygen. When exposed to high temperatures, these bonds can break down, resulting in nice, reactive lithium, along with fire’s friend, oxygen. In the case of a battery, a high-temperature situation can result from poor charging circuitry, short circuits, punctures or other external trauma. Since a battery generally consists of many cells, a single failed cell can easily produce enough heat to initiate a chain reaction.
Lithium-cobalt is the most common type of lithium-ion cathode, and delivers high energy density, relatively low cost manufacturing, and decent longevity when managed properly. The primary downside is that the lithium-cobalt bond is relatively weak, meaning these are generally the lithium batteries at fault when you hear about battery fires. See, for example, the 787.
The most common alternative to lithium-cobalt is lithium-iron-phosphate. The A123 Systems batteries I’ve written about in the past are a derivative of this technology. The lithium-iron-phosphate bond is inherently more stable, even when abused or severely heated. The structure of the lithium-iron-phosphate molecule is such that it takes far more energy to free the lithium. Thus, these batteries are ideal for environments in which safety is key – automotive uses for example.
Now, it’s fair to ask why the Boeing 787 doesn’t use this type of battery. I’m obviously not privy to the internal engineering decisions at Boeing, but I can hazard a guess. First off, the battery design for the 787 was locked in 2005 or 2006. Back then, the technology for lithium-iron-phosphate was relatively immature and volume use wasn’t common. Additionally, for a given power output, a lithium-iron-phosphate battery will be larger and heavier than a corresponding lithium-cobalt design – this would have been even more pronounced in 2005.
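That size-and-weight tradeoff is easy to see with some back-of-envelope arithmetic. The specific-energy figures below are rough, era-typical assumptions on my part (not Boeing’s actual numbers), but they illustrate why a weight-sensitive design might favor lithium-cobalt:

```python
# Back-of-envelope cell mass for a fixed energy requirement.
# Specific-energy values (Wh/kg) are illustrative assumptions only,
# loosely typical of mid-2000s cells; real numbers vary widely.
SPECIFIC_ENERGY_WH_PER_KG = {
    "lithium-cobalt": 180.0,          # assumed typical LCO cell
    "lithium-iron-phosphate": 110.0,  # assumed typical LFP cell
}

def pack_mass_kg(energy_wh, chemistry):
    """Mass of cells needed to store energy_wh, ignoring packaging overhead."""
    return energy_wh / SPECIFIC_ENERGY_WH_PER_KG[chemistry]

target_wh = 2000.0  # hypothetical battery capacity
for chem in SPECIFIC_ENERGY_WH_PER_KG:
    print(f"{chem}: {pack_mass_kg(target_wh, chem):.1f} kg")
```

With these assumed figures, the lithium-iron-phosphate pack comes out roughly 60% heavier for the same stored energy – a meaningful penalty on an aircraft, and one that would have been even larger circa 2005.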
In addition, the types of situations in which a lithium-iron-phosphate design is “safer” don’t commonly occur on an aircraft. For example, if the relatively small battery of a 787 is engulfed in flames, there are far, far bigger issues to worry about. The risk in an automotive implementation is that a relatively minor accident that damages the battery pack could cause a thermal runaway condition. There don’t tend to be “relatively minor” accidents involving massive jets. The other types of issues that can cause problems with batteries should be mitigable through external controls – smart chargers with fused links in the case of overvoltage, etcetera. When we finally learn (if we learn) what caused the issues on the 787, I suspect we’ll find that at least part of the cause was poor design or manufacturing issues surrounding these systems, rather than in the battery cells themselves.
There are a variety of other cathode chemistries in various applications. In particular, lithium manganese oxide and related manganese compounds provide better longevity and performance in harsh environments, but don’t yet excel in general purpose situations.
Supercapacitors represent another, related family of energy storage technologies which occasionally spawns a lot of interest, without necessarily a lot of results. Like all capacitors, the supercapacitor (née ultracapacitor) stores a static charge using a variety of different materials. A supercapacitor can store energy very quickly, for a relatively long time, and survives a far greater number of charge cycles than a chemical battery. Unfortunately, supercapacitors store a relatively small amount of energy and are thus more appropriate to high-output, low-duration implementations. Over time, capacity is improving, but the overlap between supercapacitors and traditional batteries is still relatively small – power tools and a few other small gadgets. Cost is still a limiting factor as well.
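The energy gap is easy to quantify with the standard capacitor energy formula, E = ½CV². The specific part values below are my own illustrative assumptions (a large commercial-grade supercapacitor versus an ordinary 18650 lithium-ion cell), but the ratio they show is representative:

```python
# Comparing stored energy: a large supercapacitor vs. a modest li-ion cell.
# Part values are illustrative assumptions, not specific products.

def capacitor_energy_wh(capacitance_f, voltage_v):
    """Energy stored in a capacitor, E = 1/2 * C * V^2, converted to Wh."""
    joules = 0.5 * capacitance_f * voltage_v ** 2
    return joules / 3600.0

def battery_energy_wh(amp_hours, nominal_voltage_v):
    """Approximate energy in a battery cell: capacity times nominal voltage."""
    return amp_hours * nominal_voltage_v

supercap = capacitor_energy_wh(3000.0, 2.7)  # assumed large 3000 F, 2.7 V supercap
cell = battery_energy_wh(2.5, 3.6)           # assumed typical 18650 li-ion cell

print(f"supercapacitor: {supercap:.2f} Wh")  # about 3 Wh
print(f"li-ion cell:    {cell:.2f} Wh")      # about 9 Wh
```

Even a physically large supercapacitor stores roughly a third of the energy of a small battery cell – which is why supercapacitors shine at delivering bursts of power, not at holding a day’s worth of energy.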
Longer term, supercapacitors have a lot of potential in energy recovery applications – for example, regenerative braking. But, beware startups promising orders of magnitude advances in supercapacitor technology. There are many out there making such claims, and none have been able to demonstrate solid evidence of their viability.
The reality is that, barring some “out of left field” advance, battery technology looks set to improve in relatively small steps as materials science advances, nanotech manufacturing processes improve, and overall volume drives down costs. An electric car that can charge in seconds and deliver a 500 mile range seems unlikely in the coming decade. But the more relatively-decent electric cars you buy today, the more realistic that future car becomes. I’m sure Tesla, Nissan, and Fisker would appreciate it as well.