
This trifecta drives the next decade of tech

The next decade of technology rests on three big pillars. They look independent today, but they’re inextricably intertwined. And when they finally work in concert, it will be nothing short of transformative for our species.

If that seems a bit breathless and overblown, hear me out: The convergence of big data, smart agents, and new interfaces is coming, and it’ll change how we interact with other people, the world around us, and even ourselves.

Big Data


Today big data is enterprise technology. It’s used to analyze markets, risk, fraud, consumers, energy sources, and more. But soon, it will be a consumer tool. Every decade, some big technology finds its way from the military-industrial complex to common use, because it’s simply too valuable to pass up. That’s how we got the Internet, smartphones, and personal computers.

Already, we have feeds of data—Facebook, photo streams, our email inboxes. And dozens of other data sources that are about us, but not owned by us, paint a hitherto unthinkably precise personal history of each of us. Stitch together phone records, bank transactions, tax filings, doctors’ visits, and even music playlists, and you have a perfect life feed.
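To make that concrete, here is a minimal sketch in Python of how such a “life feed” might be stitched together. The sources, field names, and sample events are all invented for illustration; real data would have to come from each provider’s export or API.

    from dataclasses import dataclass
    from datetime import datetime
    from typing import Iterable, List

    @dataclass
    class LifeEvent:
        timestamp: datetime
        source: str    # e.g. "bank", "phone", "music"
        summary: str   # one human-readable line

    def build_life_feed(*sources: Iterable[LifeEvent]) -> List[LifeEvent]:
        """Merge events from separate silos into one chronological feed."""
        events = [event for silo in sources for event in silo]
        return sorted(events, key=lambda event: event.timestamp)

    # Hypothetical silos standing in for real exports.
    bank = [LifeEvent(datetime(2016, 3, 1, 9, 5), "bank", "Paid $4.50 at a cafe")]
    phone = [LifeEvent(datetime(2016, 3, 1, 8, 40), "phone", "12-minute call with Mom")]
    music = [LifeEvent(datetime(2016, 3, 1, 8, 15), "music", "Played a running playlist")]

    for event in build_life_feed(bank, phone, music):
        print(event.timestamp, event.source, event.summary)

Even this toy version shows the scale problem: once the silos are merged, the feed is complete, chronological, and far longer than anyone would ever read.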

But the vast majority of this personal data is something we’ll never look at once it’s saved. How many Facebook posts have you revisited? How many Flickr pictures do you browse? Few of our Tweets, uploads, or messages get a second glance. That means for consumers, big data becomes a life feed we never look at. In fact, Christopher Nguyen (who built GMail) pointed out to me that the sole purpose of Big Data is to give machines something to look at. If it’s to be useful, we need something to chew on it for us, separating the wheat from the chaff.

And that thing is a personal agent.

Smart agents

We’ve abdicated huge swaths of memory to machines already, from mapping cities to remembering birthdays, appointments, and phone numbers. The more we turn over to them, the more they can help. Google Now is able to tell me when to leave for my appointment because it knows where I am, what’s in my calendar, and what road conditions are like. It’s a Stone Soup of knowledge, filled by tidbits from each of us. And we want the machines’ help.
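A rough sketch of the arithmetic involved (not how Google Now actually works; the times and the traffic estimate below are invented) is simply back-solving a departure time from a calendar entry and an estimated travel time:

    from datetime import datetime, timedelta

    def suggest_departure(appointment_start: datetime,
                          travel_minutes: float,
                          buffer_minutes: float = 10) -> datetime:
        """Leave early enough to cover estimated travel plus a safety buffer."""
        return appointment_start - timedelta(minutes=travel_minutes + buffer_minutes)

    # Hypothetical inputs: the calendar says 14:00, a traffic service estimates 35 minutes.
    meeting = datetime(2016, 5, 12, 14, 0)
    print(suggest_departure(meeting, travel_minutes=35).strftime("%H:%M"))  # 13:15

The calculation is trivial; what makes the agent useful is that it already holds the calendar, the location, and the traffic data, so nobody has to do the stitching by hand.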

“Your mind is for having ideas, not holding ideas,” says David Allen, the founder of the Getting Things Done movement. In this context, smart agents become a specialized form of artificial intelligence that knows us better than we know ourselves, or at least, is able to think about the ideas we aren’t holding at any given time.

They can manage our attention; we’re jungle-surplus hardware, lacking discipline and prone to addiction, wired to reinforce the obvious neural pathways, driven by squirts of dopamine and the firing of nearby neurons. Those agents can go beyond memory, helping us to overcome our biological liabilities, making better, wiser decisions about health, finance, behavioural economics, and more. They’ll have a ton of vital insight to share with us.

But what should they do when they’ve figured out something important?

They should interrupt us.

New interfaces

The way these agents interface with us is going to change dramatically. Already, mobile devices are moving towards voice instructions and spoken feedback. And there’s a lot of talk about immersive environments—augmented and virtual reality*—with Facebook, Google, and Microsoft investing heavily in companies like Oculus and Magic Leap, as well as home-grown tech like Glass and HoloLens. Even with all this investment, the implications aren’t properly understood.

Disney’s Bei Yang thinks we should broaden the definition of virtual reality by realizing that “VR is really about the human body as an input/output mechanism. It’s about spoofing inputs into the human perceptual system to create desired effects.”

I think of interruption as the new interface. Interruption can come in many forms: A tap or buzz on your skin, interrupting normal touch; a sound or voice in your ear, interrupting normal audio; some photons on your retina, interrupting normal sight.

But interruption alone is annoying. It’s interruption with context that matters. That’s why this trifecta is so powerful: The agent measures your reaction to its interruption. It learns. It quickly becomes the most amazing butler ever, an Alfred to your Batman—unfailingly polite, always discreet, even though it knows all your secret identities.
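One naive way to picture that feedback loop, purely as a sketch and not any shipping product’s algorithm, is an agent that keeps a score for each interruption channel, nudges the score up when you respond and down when you dismiss, and routes the next interruption to whichever channel you currently tolerate best:

    class InterruptionAgent:
        """Toy agent that learns which channel (buzz, voice, glance) you respond to."""

        def __init__(self, channels):
            self.scores = {channel: 0.5 for channel in channels}

        def choose_channel(self) -> str:
            # Interrupt via the channel with the best track record so far.
            return max(self.scores, key=self.scores.get)

        def record_reaction(self, channel: str, responded: bool, rate: float = 0.2) -> None:
            # Move the score toward 1 if you engaged, toward 0 if you dismissed.
            target = 1.0 if responded else 0.0
            self.scores[channel] += rate * (target - self.scores[channel])

    agent = InterruptionAgent(["wrist buzz", "earbud voice", "lock-screen card"])
    agent.record_reaction("earbud voice", responded=False)  # ignored during a meeting
    agent.record_reaction("wrist buzz", responded=True)     # a quiet tap got through
    print(agent.choose_channel())  # wrist buzz

A real agent would condition those scores on context (time of day, location, who you’re with), which is where the big data and the new interface finally meet.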

The early versions of this are all around us. One of the reasons Slack commands such a high valuation is that it’s the platform through which specialized AI joins the workforce. But there’s also Apple’s 3D Touch on a screen, patterns of haptic feedback on a wristwatch, even Bluetooth headset cues.

So new interfaces become smart agents with context.

Conclusions

I love the idea of smart agents, backed by data science, insinuating themselves into our lives like this. To be sure, the ethical dilemmas and security risks are manifold. But ultimately, it’s a cognitive upgrade—perhaps one that helps us manage our lives, and the planet, better.

The whole is far greater than the sum of its parts, and when you start looking at tech this way, you see the individual components—a notification screen, an automated financial advisor, a new force-feedback touchscreen—as steps along the path.

[Figure: the three foundations form a cycle]

The singularity isn’t a switch; it’s a thousand tiny nudges combining data science, machine learning (or more specifically, narrow-domain AI) and contextual interruption.


 

*I’m conflating the two terms here, because virtual reality is a superset of augmented reality, in that one of the realities it can render and add to is the real-world one. They’re different, of course; Ori Inbar is adamant that AR “has to emerge from the real world and relate to it, should not distract you from the real world; and must add to it.” Clearly VR goes beyond that definition.

 



4 responses to “This trifecta drives the next decade of tech”

  1. DaveD

    Regarding interruptions: also see NoUI or backpocket apps (by Golden Krishna – http://www.nointerface.com/book/).

    I’m most excited about the long term vision where we send signals directly to the brain.

  2. Ann Wuyts

    “But the vast majority of this personal data is something we’ll never look at once it’s saved,” yep. I mostly revisit those things when I need something (bank statements, email threads, and even tweets to retrieve a URL – searching those is a horror, though). The only things I occasionally revisit are my Flickr feed (nostalgia) and blog (resources). The issue with these feeds, and why we don’t revisit them, is often also that:

    1.) They have only temporary use (e.g. your Visa statements: you check that they are correct, and where you used to toss them, now they are saved digitally)
    2.) They weren’t created for us in the first place (e.g. Facebook updates, which we post for others more than for ourselves, I believe; the same goes for the record of Visa statements, which is for government and banking use more than for us)

    Some efforts are being undertaken to make this useful to us for a longer period of time (not least, I believe, because that will give companies a reason to keep storing that data) — for example, banking dashboards which show your changing spending patterns over time, and Facebook’s ‘a year ago’ and similar nostalgia reminders. (The main goal there is to increase interaction.)

    However, I do wonder how much those ‘long term’ feeds can contribute to our *now* lives. For smart agents to be of use, to give relevant suggestions and interruptions, I believe they mostly need to be aware of the details of the past three months, maybe a high-level overview of the past few years. (For example holiday destination pattern for a travel website.)

    But that whole stream of data of the past 10 years of my life? Hardly relevant anymore. Facebook’s recurring assessment of the ‘highlights’ of my life/year? Hilarious! It’s the little details of the last few weeks which guide my decisions and contain hints at my interests and objectives at the moment. Smart agents will need to look ‘deeper’ at more varied information gathered in a shorter period of time.

    (Which basically means most services still don’t have an excuse to store our information indefinitely in such detail, unless we explicitly push it to them for that reason (FB, Flickr). Imho, there is little to no value in saving five-year-old records, not for the companies, and not for the users.)

    So volume, variety, veracity, but most of all velocity — not just in processing, but also in data scope. What’s relevant moves in and out quickly.

  3. […] pursuing the same approach as we do. The closest thing was a theory of Alistair Croll (see THIS TRIFECTA DRIVES THE NEXT DECADE OF TECH) which he presented during the data summit. So I have contacted him, and he confirmed that our […]

  4. David Aferiat

    The smart agents are coming into focus. They are the bots that will soon manage our interaction with smartphones as we move away from the app/OS model. We’re on this path ourselves strategically as we bring machine-based learning and trading tech to mobile at Trade-Ideas.com.