Google Glass is for hipsters.
At least, that’s the impression you get if you look at the content that’s been posted online. Promotional videos feature young, photogenic, tech-friendly twentysomethings, grinning as they swing from trapezes, kite-surf across azure waters, or scale iconic rock faces.
Glass is part of a broader category of products that enable augmented reality (AR); indeed, Glass may be a false start, or at the least an expensive experiment. AR is the great-great-grandchild of attempts to display information on car windshields or inside fighter jets.
We’ve had AR games that use a smartphone’s camera to superimpose images on the world for a few years. Companies like Recon, which started out making heads-up displays for ski goggles, have recently introduced viable contenders to Glass; Oculon is doing the same thing. Google’s investment in Magic Leap is a clear sign that it’s playing a long game here, as is Facebook’s purchase of Oculus.
This time around, however, AR is catching on. That’s because AR has always been about context—adding new information to the world around us—and we finally have the abundant data, portable computing, and ubiquitous connectivity to make that context useful.
While today Glass and its ilk look like AR for the one percent, I suspect the reality will be quite different. Most of the killer apps for augmented reality are far less glamorous. Consider four examples:
- If you’re a picker in a warehouse, you’re constantly retrieving and replacing boxes of shoes. A heads-up display could help you pick the most efficient route (Zappos warehouse workers walk an average of fifteen miles a day). It could also identify what’s in boxes at a glance by scanning printed codes; a rough sketch of the routing idea follows this list.
- If you’re a maintenance worker for a utility, goggles could help you service the complex tangles of wiring and plumbing beneath the street. They’d show you which lines were live, and which ones ran to which apartments.
- If you’re an inspector, glasses that record and annotate will make it easier to document defects, log your work, and compare what’s changed since your last visit, whether you’re checking a factory floor or a restaurant kitchen.
- If you’re building a home, goggles could overlay a 3D blueprint of the house, showing you where to frame a wall or drive a nail as the structure went up around you. Later, the homeowner could don their own glasses, see the joists and beams behind the drywall, and know where to hang a shelf. No more stud finders.
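To make the picker example concrete, here’s a minimal sketch of the kind of routing a heads-up display might do. The coordinates and the nearest-neighbor heuristic are illustrative assumptions, not the logic of any real warehouse system.

```python
from math import dist

# Hypothetical pick locations, expressed as (aisle, shelf) grid coordinates.
pick_list = [(12, 3), (2, 8), (7, 1), (12, 9), (4, 4)]

def plan_route(start, stops):
    """Greedy nearest-neighbor ordering: always walk to the closest remaining stop.
    Not optimal, but enough to show how a display could trim miles off a shift."""
    route, here, remaining = [], start, list(stops)
    while remaining:
        nearest = min(remaining, key=lambda stop: dist(here, stop))
        remaining.remove(nearest)
        route.append(nearest)
        here = nearest
    return route

print(plan_route(start=(0, 0), stops=pick_list))
# [(4, 4), (7, 1), (12, 3), (12, 9), (2, 8)]
```

A production system would fold in aisle layout, congestion, and order priority, but even a greedy ordering beats wandering the floor with a paper pick list.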
Need more convincing? Look at the DAQRI smart helmet. The future of AR is blue-collar jobs.
And there, as they say, is the rub.
Recent years have seen an economic recovery without a corresponding recovery in employment. A visit to a bookstore or a big-box retailer shows that we’re replacing the front office with digital channels; in the back office, everything from supply chains to HR to treasury is becoming software.
AR isn’t just a display technology. Because it’s interactive, it’s also a data-collection technology. When you read a tablet, it reads you back. When you learn from smart goggles, you’re making them smarter. That factory worker isn’t just becoming more efficient; she’s recording her efficiency. Goggles are the new punch clocks.
Efficiency might sound like a noble goal, but it’s a race humans are ill-equipped to win. Today’s workplace pits humans against machines, and the machines are getting better every day.
Consider Boeing’s wing-painting robots:

> Inside a sealed building in Boeing’s widebody-jet assembly plant, two robotic machines glide along tracks on either side of a 106-foot 777 wing laid flat, their heads reaching out like animatronic dinosaurs nibbling at the giant wing.
>
> These new robot-painting machines can wash, apply solvent to remove dirt, rinse and then spray two different paint types. They reach even into complex spaces inside the open wing root that must be painted for corrosion protection.
>
> Manually, it takes a team of painters 4½ hours to do the first coat. The robots do it in 24 minutes with perfect quality. Boeing began using the machine in February. By midsummer, all 777 wings will be painted this way.
A recent article in the Atlantic concludes that the robots have already taken many blue-collar jobs. The piece cites research suggesting that as the cost of technology falls, we invest more in machines and less in people.
Paul Krugman suggests a similar fate for jobs, and speculates that automation is one reason why manufacturing work is returning to U.S. shores—unlike humans, robots are cheap wherever you put them.
In Race Against the Machine, MIT’s Erik Brynjolfsson and Andrew McAfee conclude that we aren’t in a period of technological stagnation, but rather one in which technology is accelerating, and taking many traditional jobs with it.
In the workplace, AR serves two functions. The first is to enable workers to do their jobs better, boosting effectiveness with access to a prosthetic brain. But as we’ve seen, AR is also tracking worker efficiency, which will inevitably be compared to machine efficiency. And those machines are damned efficient. Have a look at some of ABB’s picker robots in action across a variety of industries. Then consider that these robots don’t get sick, don’t ask for a raise, and can work around the clock.
There’s another, more subversive reason why Blue Collar AR leads inevitably to a decline in jobs. Not only does a heads-up display record a worker’s efficiency, it also records how they do the job. That means it’s building an abundant data set for machines to analyze. Feed in enough video of enough people doing the same task, measure the results, and you can learn the best way to do it. At that point, a machine can do it better.
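As a toy illustration of that loop, here’s a sketch that takes hypothetical logs of recorded task runs and surfaces the fastest technique that still meets a quality bar. The data, labels, and threshold are invented for the example; nothing here comes from a real AR vendor.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical log entries captured by smart goggles:
# (worker_id, technique_label, seconds_to_complete, defects)
task_runs = [
    ("w01", "two-pass spray", 310, 0),
    ("w02", "two-pass spray", 295, 1),
    ("w03", "single heavy coat", 250, 3),
    ("w04", "single heavy coat", 240, 2),
    ("w05", "two-pass spray", 305, 0),
]

def best_technique(runs, max_avg_defects=1.0):
    """Group recorded runs by technique and return the fastest one
    whose average defect count stays under the quality threshold."""
    by_technique = defaultdict(list)
    for _, technique, seconds, defects in runs:
        by_technique[technique].append((seconds, defects))
    avg_time = {
        technique: mean(s for s, _ in observations)
        for technique, observations in by_technique.items()
        if mean(d for _, d in observations) <= max_avg_defects
    }
    return min(avg_time, key=avg_time.get)

print(best_technique(task_runs))  # -> two-pass spray
```

Once an aggregate like that exists, the “best way” is no longer tacit knowledge in a worker’s hands; it’s a specification a machine can be programmed against.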
Just wait until unions figure out where AR is headed. They’ll riot.
Where this leads is a concentration of capital with those who own the data and the means of production, and away from those who supply the labor.
It doesn’t have to be this way, of course. It doesn’t have to be bad. When the industrial era gave us steam power, it freed up muscle, and the eventual consequence was niceties like the weekend and the 40-hour work-week.
We look at unemployment as a failure, a problem to be addressed. It could just as easily be counted a success: a freeing of humans from drudgery. Maybe an idle workforce is a surplus rather than a drag, and maybe we should be retraining it and adjusting society’s expectations of productivity.
Workers in the US get far less vacation time than their counterparts in other countries. According to a post on Payscale, US workers don’t even take the time off they’ve earned.
Computing has given us the ability to make workers efficient, record how they do things, prove their inefficiency against machines, and ultimately replace them. It’s a massive readjustment of the balance between labor and capital. Our expectations have to adjust accordingly.
As Nike CEO Mark Parker said, “technology has shrunk the world so we can grow it anew.” In the face of a robotic workforce, a software-driven back-office, and digital channels, we need to decide how we’d like to regrow it—and that regrowth will likely include a very different split between people and capital.
Update: This Guardian piece by Nichole Gracely sums up what it feels like to work in a highly regulated, sensor-driven environment.
> According to Amazon’s metrics, I was one of their most productive order pickers – I was a machine, and my pace would accelerate throughout the course of a shift. What they didn’t know was that I stayed fast because if I slowed down for even a minute, I’d collapse from boredom and exhaustion.
Amazon is pretty far ahead of the curve here, having acquired robot maker Kiva Systems outright. The company’s core competitive advantage is ruthless efficiency in logistics; here’s what a Kiva-enabled warehouse looks like.