When Is a New Tech ‘Ahead of Its Time’?

By Clive Thompson | onezero.medium.com
It’s a tricky thing to figure out
When a bunch of techies are beavering away on a new tool, it can be tricky to figure out:
Is this a genuinely useful new thing? Or is it just some expensive prototype, a pipe dream that’ll never take off?
You see this question raised lately in the debate over “Web3” — the idea that the existing Internet is too monopolized, and a new one should be built using decentralized blockchains. But the question is perennial. All throughout history, people have debated whether nascent technologies will ever be viable enough to take off.
For example, the early mobile phone was mocked as something so wildly expensive that only self-important finance blowhards would ever find it useful. (Seriously: At a party in NYC in 1996, I pulled out a mobile phone I’d rented for a weekend for some in-the-field reporting; everyone at the party, all young folks employed in new media, laughed and laughed.) In contrast, the Segway was heavily touted — by some of the biggest innovators in technology, with actual track records, like Steve Jobs and Jeff Bezos — as an invention so catalytic that cities would be re-engineered around it. Whoops.
So, how do you tell one from the other? When you see a new, nascent technology being proffered by its inventor, how do you know whether it’s something that could become huge?
[Image: “Young girl taking a Kodak picture of her doll,” via Library of Congress]
Separating desire from viability
There are two related questions here: a) Could this new prototype ever work well enough and affordably enough that it could be in wide(r) use? And more alchemically, b) does it offer enough people a sufficiently interesting and useful new ability that they’d change their behavior around it? Do we desire this new thing?
I think b) is, of the two, the much harder question to answer. There are a lot of convoluted reasons why a technology becomes desirable. Sometimes it’s because the tech solves a problem that’s low on Maslow’s pyramid, like clean-water engineering. Everyone wants that. (Indeed, technologies that are critical to basic existence are often infrastructural and civic.) But even with many consumer technologies — i.e. when you’re buying something that isn’t for basic survival — you can detect when a new tech triggers a novel, previously latent desire.
Personal cameras did that. In the late 19th century, people were very familiar with photography, but the demand for owning and carrying around a camera wasn’t obvious until the Brownie came out. Suddenly, everyday people discovered photography was delightful for personal expression, and a way to document the arc of their lives.
But other times in consumer tech, b) is much trickier to discern. GPS chips in our phones: Did people really want that? On the one hand, GPS gives your phone enormous utility, as with turn-by-turn maps. On the other hand, GPS lets authorities track your every move, which most people find icky. Worse, the market tends to seal off options, making it difficult to know whether people really prefer the current state of affairs. It’s nearly impossible to buy a phone now that doesn’t have GPS, or which has a trustworthy hardware “off” switch; and even if you turn your GPS off in your software options, apps (like maps) nag you to turn it back on. These lock-ins are what make determining b) such an analytical swamp.
So let me stick with a) for a second and, to the extent that it’s possible, ponder it separately from b).
To wit: How can you tell when a new technology — expensive, buggy, a Rube Goldberg prototype — will ever be viable enough so we can even get to b)?
[Image: “Wright Brothers,” via the National Archives]
Is there a roadmap? The Wright flyer vs. the jetpack
While thinking about this recently, I hit upon this blog post from last year by Benedict Evans. He chews over this question by looking at barriers. When a tech is nascent, what’s preventing it from becoming viable?
One of Evans’ core points is that you need to look at the technical roadmap for a new technology. If it’s janky and buggy and expensive, is there a clear way to see how it could improve?
He compares the Wright brothers’ flyer to early jetpacks…
The Wright Flier could only fly 200 metres, and the Rocket Belt could only fly for 21 seconds. But the Flier was a breakthrough of principle. There was no reason why it couldn’t get much better, very quickly, and Blériot flew across the English Channel just six years later. There was a very clear and obvious path to make it better. Conversely, the Rocket Belt flew for 21 seconds because it used almost a litre of fuel per second — to fly like this for half an hour you’d need almost two tonnes of fuel, and you can’t carry that on your back. There was no roadmap to make it better without changing the laws of physics. We don’t just know that now — we knew it in 1962.
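The fuel arithmetic in that quote is easy to check with a back-of-the-envelope calculation. The burn rate and flight time come straight from the quote; the propellant density is my added assumption (the Bell Rocket Belt ran on concentrated hydrogen peroxide, roughly 1.3 kilograms per litre):

```python
# Back-of-the-envelope check of Evans's Rocket Belt math.
# From the quote: "almost a litre of fuel per second," and a half-hour
# target flight. The density is an assumption: the Bell Rocket Belt
# burned concentrated hydrogen peroxide, roughly 1.3 kg per litre.

burn_rate_l_per_s = 0.9        # "almost a litre of fuel per second"
flight_time_s = 30 * 60        # half an hour, in seconds
density_kg_per_l = 1.3         # assumed propellant density

fuel_litres = burn_rate_l_per_s * flight_time_s
fuel_tonnes = fuel_litres * density_kg_per_l / 1000

print(f"{fuel_litres:.0f} litres, about {fuel_tonnes:.1f} tonnes of propellant")
# -> 1620 litres, about 2.1 tonnes: right around Evans's "almost two tonnes"
```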
As Evans notes, for a technology to have serious technical promise, it usually has a set of proximal next steps. Early microprocessors were expensive and janky, but one could imagine the proximal industrial processes that would make them cheaper, and make it possible to produce them in sizeable enough quantities — at which point you get the personal computer. These were all steps of magnitude but not direction; of quantity but not essential quality.
So if you can articulate the technical roadmap, then yeah, the new gewgaw could make it.
[Image: Jetson personal flying car]
Waiting for a piece of the puzzle to emerge
But things get complex here quickly, too. What if the success of a new technology relies on a particular component being invented/perfected, without which the whole will never work?
Evans quotes a pre-Wright inventor from the 19th century noting that he, too, had all the ideas in place for an airplane. The problem was he didn’t yet have a useful power source. He only had the steam engine to work with, which was too heavy to use in flight. Once the lighter internal-combustion engine emerged, suddenly the Wright-style flyer was possible.
That makes the question of “is this new tech ever gonna work” more complicated, right? Consider the jetpack again. It has, right now, no “roadmap” to viability because power sources are too heavy. It’s like the pre-Wright airplane. But what if a power source with a wildly superior weight-to-power ratio suddenly emerged? Then jetpacks might quite suddenly become viable.
That sounds pie-in-the-sky, like something that’s foolish to bet on. Frankly, knowing what I know right now, I wouldn’t bet on it. Innovating in power sources means innovating with basic chemistry, and as the electric-car battery folks know, that’s damn hard stuff. Plus, jetpacks have all manner of hazardous exhausts, likely making them utterly impractical for safe everyday use.
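To put rough numbers on why the power source is the sticking point: hydrocarbon fuels carry vastly more energy per kilogram than today’s best batteries. A quick sketch, using ballpark figures (mine, not Evans’s):

```python
# Rough sketch of the power-source gap, using ballpark specific-energy
# figures in megajoules per kilogram. Illustrative assumptions only.
specific_energy_mj_per_kg = {
    "kerosene/jet fuel": 43.0,    # typical hydrocarbon fuel
    "lithium-ion battery": 0.9,   # ~250 Wh/kg, near today's best cells
}

fuel = specific_energy_mj_per_kg["kerosene/jet fuel"]
battery = specific_energy_mj_per_kg["lithium-ion battery"]
print(f"Hydrocarbon fuel packs ~{fuel / battery:.0f}x more energy per kilogram")
# -> ~48x. A "wildly superior" power source would have to close a gap of
#    well over an order of magnitude, which means new basic chemistry.
```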
On the other hand, a personal flying device that uses lots of small rotors — a personal drone, as it were — has no hot-exhaust problem. It’s jetpack-adjacent. And the explosion and encheapification of drone tech in the last decade (rotors, motors, batteries, self-guidance AI and sensors) means people are actually building these things… and they’re not, on the surface, anywhere near as impractically nuts as jetpacks are/were.
When innovation is happening somewhere *else* — and the dominoes topple
A constellation of smaller, seemingly unrelated innovations can, in other words, suddenly transport a tech from being “pie in the sky” to “totally viable.”
That’s what happened with deep learning. The algorithms for layered neural nets — the same ones we use today — were known in the 90s. But back then they were considered feeble and useless for practical purposes; computers weren’t powerful enough. Over the next two decades, though, the computer-gaming industry brought the cost of GPUs way, way down, while the emergence of cloud computing — for stuff like webmail, online docs, photo-sharing, and the like — put massive computing power on tap. By the early 2010s, Geoff Hinton and his team at the University of Toronto were able to throw so much compute at deep learning that it finally worked.
Deep learning seemed to go — quite suddenly — from “never gonna work” to “holy crap, it works great.” In reality, with 20/20 hindsight, we can see that all those subsystems, all that catalytic technology, were slowly coming together. At the time, though, that was a lot harder to see.
I don’t really have a firm conclusion here; I’m just thinking out loud. But this question is really a corker.