It’s been an interesting year for Google’s most famous side project. After emerging from the company’s suitably mysterious X Lab in April, Glass appeared across the roundtable from Charlie Rose, gave conference attendees a skydiver’s eye view at Google I/O, strutted down the catwalk at New York Fashion Week and shared the stage with California Governor Jerry Brown as he signed a bill into law allowing self-driving cars on the state’s roads.
Yet, there’s still more that we don’t know about Google Glass than we know about it, despite its status as the highest-profile attempt at making wearable computing the next big thing. Public demonstrations of the tech have so far only hinted at its full potential. The promise of Glass echoes that of wearable computing in general, a promise that’s remained largely unfulfilled despite decades of research driven by everyone from the military to DIYers.
That’s not to say those years haven’t been eventful; the very definition of wearable computing has changed during that time. Most recently, it’s become intertwined with the idea of augmented reality. It’s a relatively new term, but in the broadest sense it’s something that goes back decades — even centuries. Eyeglasses and sunglasses restore or enhance our vision, electricity and the light bulb free us from a dependence upon daylight, and the automobile and other means of transportation have expanded the space we consider home, to name just a few examples. But the wearable technologies of today, and those promised for the future, are augmentations of a different sort: not just augmentations of ourselves and our surroundings, but of existing technologies.
The Key Elements
Today, wearable computing is largely considered to be an evolution of the smartphone, which is close to being a “wearable” technology itself. But the real history of wearable computing as we know it goes back quite a bit further — well before the first cellphones, let alone the first smartphones. In those early devices, bulky and obtrusive as they were, we could see the key roots of the modern wearable computer: the PC and the camera.
The influence of the computer, and the personal computer in particular, is difficult to overstate. As researcher and game designer Ian Bogost articulated particularly well in a recent essay on Alan Turing for The Atlantic, the computer is not a device designed for a specific task, but a device designed to simulate other devices, or “just a particular kind of machine that works by pretending to be another machine.” The more advanced and capable computers become, the more devices they are able to simulate and, ultimately, replace.
ENIAC, the first general-purpose electronic computer.
That’s become clearer than ever with the advent of the personal computer, which in recent decades has drawn people away from the television, the radio, the calculator and countless other devices. More recently, we’ve seen that shift again with smartphones and tablets pulling people away from PCs, telephones, cameras and video game consoles. In each case, the new technology replacing the old has taken on a more central role in people’s lives. Whereas the personal computer became a hub in the home, the smartphone has become a source of ever-present connectivity and a near-constant accessory. Wearable computing promises to extend that always-on connection even further and, potentially, change the nature of what it means to be “connected.”
Just as important is that other key device: the camera. As portability and an always-available (or mostly available) internet connection separated the smartphone from the personal computer, a constantly active camera is one of the key factors that distinguishes many of today’s wearable computers from the smartphone. That wasn’t always immediately evident, as many early wearable computing efforts were focused on specific tasks. Indeed, the device widely considered to be the first wearable computer, conceived in 1955 and ultimately tested in 1961 by Edward Thorp, was designed to give its wearer the upper hand at roulette. It was certainly wearable — built into a shoe and controlled by a toe tap, with an earphone providing musical tones for output — and it did perform a basic computing task (timing both the roulette ball and wheel), but general purpose it was not. As we moved into the era of the PC, though, we soon saw new notions of the wearable computer, and clear indications that the camera would be its killer app.
Without a camera, a wearable computer is just that: a computer you can wear. It’s more portable, always accessible and opens up new possibilities of its own, but it isn’t that far removed from the traditional notion of a PC. It has a screen, an input device or two and applications for a variety of tasks. With a camera, a wearable computer doesn’t just become a device able to capture pictures and record video; it becomes able to constantly monitor its surroundings (something further aided by GPS and various sensors). That makes a name like “Glass” all the more appropriate. The screen in front of your eye is less of a “screen” than a clear view of your environment with an overlay on top of it; a new way to look at things, rather than something new to look at.
Wearables Take Shape
The importance of the camera to wearable computing is nowhere more evident than in the work of Steve Mann, an MIT alum and key pioneer in the field. Mann built his first wearable device — a backpack-mounted computer with a camera viewfinder attached to a helmet — in 1981, and he hasn’t let up since. While that device stretched the definition of “wearable,” Mann would continually adapt his systems over the years, shrinking them down and making them less cumbersome with each variation.
Those systems included the many variations of what he dubbed the WearComp, which helped establish the archetypal image of the wearable computer: a small computer, generally worn on the hip, a portable input device and a wearable display. It also, of course, had a camera, which, by the mid-1990s, Mann was using to broadcast live video to the web. Later, he would miniaturize the display even further with his EyeTap device, which looks remarkably like Google’s Glass — albeit with decidedly more of a DIY flavor. These days, Mann is venturing into cyborg territory, with a display he claims is now “permanently attached” and doesn’t come off his skull without “special tools.” That detail became something of an issue when employees at a McDonald’s in France tried to pull the system off of him earlier this year — an event that brought more attention to wearable computing than just about anything outside of Google Glass recently. Beyond that, Mann’s also been using what he calls a mind mesh, or brain-computer interface, and is involved with a company called InteraXon working on thought-controlled computing.
That same progression toward something more wearable than luggable (minus the more cyborg-minded efforts) can also be seen in the work of Thad Starner, a contemporary of Mann’s at MIT, who has also been donning his own devices for decades now. Those were first based on the Hip-PC design from Doug Platt — essentially a homebuilt 286 computer worn, more or less, on the hip. The devices also made use of Reflection Technology’s Private Eye head-mounted display (a popular option among early enthusiasts), along with a Twiddler one-handed keyboard for input.
Starner’s broader approach to wearable computing has been quite a bit different from Mann’s over the years, though, initially aiming to provide something closer to a personal assistant than a computer designed to interact with one’s surroundings. His work has provided augmentations of sorts, however, including the Remembrance Agent, which he developed with Bradley Rhodes. That system constantly monitored what the user was doing and provided a list of relevant documents — effectively augmenting human memory, in Starner’s words. Mann explored this type of augmentation in Cyborg: Digital Destiny and Human Possibility in the Age of the Wearable Computer, the book he wrote with Hal Niedzviecki, noting a shift from “smart things” to “smart people.”
In that respect, both Mann and Starner (and others in the field) also owe a debt to Douglas Engelbart, who not only invented the chorded keyboard used in many early wearable computers, but wrote the landmark paper “Augmenting Human Intellect.” In it, Engelbart draws on the work of Vannevar Bush and his pre-hypertext idea of a “memex” system to explore new ways we can augment our thinking.
Many of Mann’s and Starner’s other contributions also boil down to the simple idea that a wearable computer should be worn all (or most) of the time. That’s an idea now echoed by Google, and one that will ultimately need to be broadly accepted for wearable computing to have anywhere near the success of smartphones or tablets. It’s one that Google clearly thinks is possible, and a notion extended by Mann, who suggests in Cyborg that “one day we will all feel naked without our wearable computer.” That can already be said for many people and their smartphones. Incidentally, Starner has since gone on to work on Glass at Google, while Mann continues to focus on his own efforts as a tenured professor at the University of Toronto.
Naturally, not all pioneering work in wearable computing has been done in the academic world. Indeed, DARPA has been exploring the technology since the 1990s and, along with industrial use, military applications have proven to be among the most practical uses of wearable computing in the pre-consumer era.
In most instances, these go back to the devices designed for specific tasks, but they also provide clear examples of AR as we understand it today. Initiatives like the US Army’s Land Warrior program offered up many of the archetypal elements of wearable computing, including a heads-up display that offered maps, thermal vision and an improved targeting system to soldiers. That program was canceled in 2007, but the equipment went on to see some use in Iraq. Similar “future soldier” efforts continue in the US and a number of other countries around the world.
On a different front, there’s another technology developed concurrently with wearable computing that was also once promised to be the next big thing: virtual reality. While the two are closely linked in some ways (both offer wearable displays and input devices), they are completely removed in others, with VR focusing more on an inward view of the digital world versus an outward projection. Still, VR is hardly a relic of the recent past. Earlier this year, the Oculus Rift — a relatively low-cost wearable display with motion-tracking capabilities — reignited buzz with a successful Kickstarter project, offering hope of a new future for the technology in gaming.
Valve’s Michael Abrash also sees a continued place for virtual reality in the near-term as a sort of stopgap technology until full-fledged AR becomes feasible, explaining in a recent blog post that “interaction with the real world and especially with other people is why AR is the right target in the long run.” He added, however, that it “makes sense to do VR now, and push it forward as quickly as possible, but at the same time to continue research into the problems unique to AR, with an eye to tilting more and more toward AR over time as it matures.”
Of course, many of the most popular images of wearable computing come not from the real world, but from science fiction. For many, their first image of a head-mounted display and AR came from movies like The Terminator and RoboCop, both of which offered a cyborg’s perspective, with continually updated information laid on top of their field of vision. The 1980s and ’90s also gave us the cyberpunk futures provided by the likes of William Gibson and Neal Stephenson, which featured a different sort of cyborg: one who was augmented, but still mostly human.
Wearable computing continues to be a recurring theme in science fiction today. Eran May-raz and Daniel Lazo’s short film Sight garnered a fair amount of attention earlier this year with its vision of a hyper-augmented future replete with computers that have receded into nothing more than a pair of contact lenses (something not quite as far-fetched as it sounds). That trend also naturally extends to video games, with titles like Deus Ex: Human Revolution bringing cyberpunk-style visions of augmented humans back to the fore.
The Google Factor
It was another short video, but not (quite) a science fiction one, that brought wearable computing more attention than ever earlier this year. Google officially unveiled Project Glass with its “One Day” video, showing not the gear itself, but instead what the wearer sees: everything from simple reminders to directions to video calls that all appear to simply float in the user’s field of vision. It admittedly showed far more than what its prototypes are capable of, but according to Google, it’s not too far from what we’ll eventually see. The gear itself also seems to be fairly impressive, even in its current state: self-contained, comparatively discreet and able to capture (relatively) high-quality still images and video.
In addition to Starner, the project has drawn a number of experts in the field to the secretive X Lab, including current project lead Babak Parviz, who previously worked on contact lens displays. By all accounts, though, it’s Google co-founder Sergey Brin who is the driving force behind the effort — as evidenced by his enthusiasm for the project during the big Glass demonstration / stunt at Google I/O. Brin also seems to have thought about some of the broader implications of wearable computing — talking at length, for instance, about the ways it could lead to some genuinely new types of photography. It could be another instance of a new medium shaping the message.
For all their similarities, though, there are some marked differences between Google’s Glass and Steve Mann’s wearable computing efforts. That’s perhaps most evident in the ways they promise to let the wearer interact with their surroundings. Whereas Google is pursuing an approach that “doesn’t come between you and the physical world,” as Parviz said in a Wired interview, Mann sees wearable computers as offering something closer to a “mediated reality” — one that allows wearers to tailor their environment to suit themselves, even blocking ads and billboards in real life, just as an ad blocker filters ads on the web. Mann himself calls this type of mediation “Personal Imaging,” and says in Cyborg that it will be “one of the most far-reaching and important aspects of the coming wearable cybernetics revolution.”
Despite those different approaches, though, both efforts represent a shift away from the traditional notion of computing in one key respect. As Starner explained in an interview with Technology Review earlier this year, one of the key things he’s hoping to do with Glass is “make mobile systems that help the user pay more attention to the real world as opposed to retreating from it.” That inevitably raises a number of other questions about how the technology will change our lives. Will we wonder what we’re missing if we venture into a new place without our wearable computer?
Of course, while all the attention it has garnered may cause some to suspect otherwise, Google is far from the only company that has been experimenting with wearable computing. Xybernaut and Via Inc. were two early providers of ready-made wearable computers in the 1990s. They saw some limited success in the industrial and enterprise markets but little from their efforts to reach a broader consumer audience. They did bring some all-too-rare media attention to wearable computers, though, providing an alternative to the DIY route for those interested in dabbling in the field. Xybernaut would ultimately fall far from its status as a leader in a then-small industry, however, by drawing fraud charges from the SEC in 2005 and filing for bankruptcy shortly thereafter.
More recently, companies like Motorola and Kopin have continued to focus on industrial-minded applications for wearable computing (still one of the more viable markets), and countless others have sold or attempted to sell standalone wearable displays over the years, albeit with little success. Google will also have some competition when it gets around to releasing Glass — Vuzix is promising to release its own set of Android-based “Smart Glasses” in mid-2013, and earlier this year Olympus announced a heads-up display designed to be paired with a smartphone.
Bridging the Gap: Towards a Wearable Future
Wearable computing may end up being the next big thing, but it still isn’t just one thing. Much of what is actually winding up in consumers’ hands now consists not of full-fledged computers, but of things like smart watches and fitness monitors, which offer portions of the functionality promised by the wearables of the future. Such products also come in less obtrusive and more fashionable form factors, giving them broader consumer appeal than the sci-fi-inspired heads-up displays and cybernetics developed by the likes of Google, Mann and Starner.
Take the Pebble Smartwatch, for example — a Kickstarter success story and a consumer product-in-progress keenly anticipated by its nearly 69,000 backers and the tech industry alike. Pebble functions as a watch, fitness computer and media player, but can also be seen as a sort of smartphone satellite that serves as your phone’s secondary screen by providing notifications and remote control capabilities. Pebble works as a standalone device, but reaches its full potential when paired with an iPhone or Android handset.
Additionally, there’s another subset that’s more apparel than computer and is entirely dependent upon coupling with external devices. Adidas miCoach and Nike+ technology are two examples that have established a significant user base with sensor-laden garments and shoes. For now, the technology tracks fitness information like heart rate, distance traveled and elevation gained during workouts using sensors woven into the fabric and a small external pod packed with an accelerometer, GPS, magnetometer and gyroscope.
Despite their consumer acceptance, the systems are still in their infancy and in the process of being fine-tuned both in terms of hardware design and how the gathered data is used. Making the technology even more wearable is one big part of that. Simon Drabble, director of Adidas miCoach, said the goal is to reach a point where there’s “no longer a consideration of ‘Am I putting on wearable technology or am I just putting on a normal piece of clothing or footwear?'” In other words, the key, as he sees it, is for people to don wearable technology without giving it a second thought.
That’s a goal shared by companies like Massachusetts-based mc10, which is focused on making devices more comfortable to wear. It’s developed what it calls “conformal electronics,” which are thin, flexible integrated circuits that can stretch and twist. This has enabled the creation of a biometric sensor — in a stretchy sticker form factor — with the potential to read vital signs, sense concussions, monitor seizures and more.
David Icke, CEO and founder of mc10, sees these conformal electronics as a necessary part of the future. His company’s Biostamp is the size of a Band-Aid and thin enough to go largely unnoticed by its wearer. He said that today “most people are working with rigid, boxy sensors, and that’s not ideal for high adoption or compliance. If you really want to make it ubiquitous, if you just have a sticker you can apply and forget, that’s where you want to go.” Using the same flexible circuit technology as the Biostamp, mc10 is also developing flexible micro solar cells to help address one of the greatest limitations of current mobile technology: power, or more accurately, not having enough of it.
Saving power has become a major focus for portable device designers, and it’s largely improvements in silicon efficiency that have allowed screen resolutions to grow even as chassis have slimmed. Rechargeable batteries, in contrast, have improved at a far slower rate. Brooks Kincaid, an ex-Googler turned co-founder of Imprint Energy, is out to change that. Imprint is developing a new battery technology that’s both robust and flexible enough to be used in wearable ways. The company is a couple of years away from a commercial product, but says its zinc-based chemistry will allow for higher energy density and cheaper production than the lithium polymer cells that power most present-day gadgets. What’s more, these flexible batteries are printable and non-volatile — so they’re stable sans packaging — which opens up a host of potential battery form factors.
“Thinness and flexibility are key issues for any wearable device,” Kincaid told us, and Imprint’s tech is being crafted with the importance of such characteristics in mind. “If we can create batteries that are thin, dynamically flexible and customizable to different shapes and sizes, then we’ll be able to provide device manufacturers more design freedom.”
The freedom afforded by the flexible technologies promised by mc10, Imprint and others will be critical moving forward, according to Jennifer Darmour, a designer with the Artefact Group, who has worked with the likes of Microsoft, HTC and Google. She echoed the same view held by Drabble: the key to unlocking a broader market is to fully integrate the technology with our clothing, as opposed to merely attaching devices to ourselves and what we wear.
“If you’re asking me to wear it, it’s gotta look good,” she said. “Nobody wants to wear a bunch of technology bolted on their bodies.”
She and Drabble are hardly alone in this belief. “Smart clothing” has become a considerably broader field in recent years, with the introduction of everything from a Microsoft Research-designed dress that displays tweets to more practical applications like The North Face’s jackets with inbuilt PMP controls and the aforementioned sports apparel replete with sensors. Of course, wearable technology doesn’t necessarily mean “wearable computing,” as advances have also made new types of materials possible. That’s something even William Gibson pondered recently in a piece on the future of fashion for The Wall Street Journal:
The real future of clothing, of course, belongs to unsettling, change-driving new technologies. To nano-beaded fabrics that clean and re-groom themselves as they hang in your closet. To relatively weightless materials, packable as silk, cool or cozy as required. To the function-based repurposing of natural-wonder materials, like silk and cashmere. To the realm of performance materials, technical fabrics, many of which are currently produced in Switzerland–and are as expensive per yard as decent Italian leather. This sort of innovation feels like part of the actual future that’s arrived slightly early, the opposite of futuristic.
For many, a future filled with more wearable devices is inevitable. They will be part of a wider world of connected devices that “permeates our cities, our dwellings, our objects, our clothing, and eventually our bodies,” as Joseph Paradiso of the MIT Media Lab put it. Some, like Google’s Parviz, are especially optimistic about that pace of change — he told Wired earlier this year it’s his expectation that “in three to five years it will actually look unusual and awkward when we view someone holding an object in their hand and looking down at it. Wearable computing will become the norm.” Still, others are considerably more skeptical.
If there is a consensus on one thing it’s that, as Brin has said, the technology simply needs to “get out of the way” for it to become widely accepted. If it is to move forward, the wearable technology of the future will be comfortable, fashionable and unobtrusive, and provide us with valuable data about ourselves and the world around us in useful, easy-to-understand ways. It will also, undoubtedly, raise new issues of privacy, and new fears that we are becoming too dependent upon and too consumed with our technology.
But even Google Glass is still in the future, and it remains to be seen whether it or a later device will bring wearable computing close to the level of acceptance that smartphones and tablets have achieved in the past decade. Glass has already helped the cause in one key respect, though: it’s gotten more people than ever talking about — and excited about — wearable computing, and that’s no small feat for a technology that has largely been confined to experimental research, battlefields and science fiction.
Photographs From Top: Sergey Brin (David Paul Morris/Bloomberg via Getty Images); ENIAC (APIC/Getty Images); Steve Mann (AP Photo/Charles Krupa); Alex Pentland, Steve Mann, Thad Starner and Rehmi Post (Pam Berry/The Boston Globe via Getty Images); Screenshot from RoboCop; Pebble smartwatch; mc10 Biostamp; Sergey Brin and Diane Von Furstenberg (AP Photo/Seth Wenig)
This article first appeared in Distro Issue #70.