A few things I’ve noticed about how technology works
One of my earliest memories is my father bringing home the family’s first computer: a Commodore 64. I was much too young to use it at the time; it was primarily for my older brother. But as soon as I was old enough, I spent countless hours staring at that blue screen (to be fair, 60-70% of that time was probably just waiting for things to load). I remember playing Pong on an Atari. I remember playing Shadowgate on my friend’s Apple IIGS. I remember the first time I played Super Mario Brothers. When I took ‘typing’ class in junior high, we learned on the new Macintosh Classic. When I started college in 1996, the school had just installed ethernet(!) connections in every dorm on campus.
I’ve had a long relationship with digital technology. First simply as a user, then as a novice developer and designer, and always as a curious observer. Over that time I’ve become increasingly obsessed. Like the evolution of technology itself, my interest was slow and incremental at first, and then suddenly fast and boundless.
Since understanding this stuff became my job when I joined Undercurrent in 2008, some ideas about technology have become lenses and points of view that I return to as I try to make sense of how things are changing, and what can be done.
This is a digital world, so none of this is etched in stone. But from what I’ve seen so far, these things seem to be true.
In her 2014 TED talk, Margaret Gould Stewart, Director of Product Design at Facebook, shared that it took the designer over 280 hours to refine the pixels on a redesign of the little "like" button. That's 280 hours of designing, testing, and iterating to perfect a small but vital element of the social web, one that is seen on average 22 billion times a day across over 7.5 million websites.
When people describe the best way to develop technology products it often sounds a lot like the scientific method: hypothesis, test, learn. This is a great place to start; when done well, and fast, it goes a long way to solving problems that users actually care about. But, intelligence and brute logic can only get you so far when your success depends on the behavior of real live human beings. The difference between promising and legendary depends on intuition, creativity, and taste. The best technology companies in the world find ways to balance these forces.
In the early, make-or-break months at Pandora, their CTO had the entire company vote on what each of them thought it "would be stupid for us not to do in the next 90 days". Pandora's a great example of bringing together computer and human intelligence. The service's uncanny ability to know what its listeners want to hear is the result of machine learning, observing the unfathomably huge data set of user behavior, and real live music experts who help the computers understand music.
In an increasingly computerized world, it's easy to forget the strange things that make us human, and to overlook the role of that humanness in how we do (or don't) interact with technology. We find ourselves surrounded by screens, and forget that humans still play a critical role in creating what's on those screens. As Frank Chimero writes in his brilliant essay What Screens Want, "What screens want needs to match up with what we want."
In 2004 it was difficult to predict whether Friendster, MySpace, or Facebook would be most successful. Looking back, it turns out that the true measure of your wisdom wasn’t whether or not you bet correctly, but rather whether or not you bet at all.
The people who skeptically asked, "Is this social networking thing really going to stick around?" are the ones who got left behind. The other people (call them brave, foolish, or just optimistic) who simply jumped in and started learning are the ones leading the way today. Over and over again technology has shown us that it will find its purpose. Just give it time.
Here are a few useful questions to ask about technologically driven change:
On the other hand, 99% of the time, this turns out to be a dangerously useless question to ask:
History shows that all of these questions are difficult to answer accurately. The fact that they're all hard to answer, however, doesn't make them all equal. The difference is that the first group of questions pushes you to imagine what could be. It pushes the people asking them toward exploration and action. It inspires critical thinking and creative problem solving.
On the other hand, questioning possibility itself provides perilous cover for inaction. It’s cheap comfort for those asking the question, allowing them to tell themselves and each other that maybe nothing needs to be done at all.
Wouldn't that be nice? I bet Steve Ballmer wishes that were true…
Former Microsoft CEO Steve Ballmer laughs off the iPhone shortly after its launch in early 2007
For decades, the automotive industry seemed impenetrable to upstarts. Tesla barely made it out of its start-up phase, but now it has the safest car ever tested, the best-rated car in the history of Consumer Reports, and it's blowing other auto stocks out of the water. Just because the car business is hard doesn't mean it can't be completely transformed by a former software engineer.
The Michael Porter-approved walls that you've been carefully building around your business for the past 50 years won't protect you from technological disruption. On an elemental level, digital technology transforms a world of disparate information into a universal language of 1s and 0s. This is why digital doesn't respect boundaries. Regardless of the industry, digitization will uncover inefficiencies and create value by enabling information to move more easily: between people and people, people and machines, and machines and machines.
When Google began, they unlocked knowledge trapped in books, libraries, newspapers, and people's brains as it was becoming digitized, and enabled it to be shared and accessed more easily. In line with their stated mission ("to organize the world's information and make it universally accessible and useful"), Google has disrupted knowledge itself. But, they didn't stop there. Media empires toppled as advertisers flocked to a new service that provided a more effective method for putting the right message in front of the right person at the right time. Looking ahead, Google's investments in artificial intelligence, robotics, and even aerospace should be setting off alarm bells at all kinds of companies that wouldn't have guessed even a year ago that they'd find themselves competing head to head with Google. ("Aren't they just a search company?" Ha.)
The mobile messaging app WhatsApp was launched in 2009, and was acquired by Facebook five years later for $19 billion (roughly one-third the market cap of Ford Motor Company). Many people were baffled by the size of the deal. I say look at the number that matters: as of December 2013, WhatsApp had over 400 million monthly active users (the population of the United States is roughly 313 million).
This isn't a new idea. If no one uses your product, you don't have a business. The difference today, however, is how easy it is to find out if anyone wants to use your product before you've gone too far in the wrong direction. Questions about revenue, competition, or market strategy might be perfectly legitimate. But good answers to any of those questions hinge on making something that people actually use. That's why the source of advantage is now speed: how quickly can you find out if you have something that people will use?
It's no accident that Amazon is one of the most successful companies in the world right now, and continues to enjoy stratospheric stock performance in spite of razor-thin profits. As noted in Jeff Bezos' 2013 Letter to Shareholders (a perennial must-read), Amazon has developed an experimentation platform called "weblab", on which they're able to run almost 2,000 experiments per year. This is one of the ways that Amazon is able to quickly pilot and validate new ways to add value to the customer experience. In 2013, they prototyped, validated, and launched a simple solution called "Ask an Owner" that routes product questions from users considering a purchase to customers who have already purchased the item. In less than a year after launch, millions of these questions had been asked and answered.
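Amazon hasn't published weblab's internals, but the statistical heart of any experimentation platform like it is a simple hypothesis test comparing two conversion rates. Here's a minimal sketch in Python; the function name and the numbers are illustrative, not Amazon's:

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that A and B convert identically
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Variant B converts at 5.5% vs. the control's 5.0%, 20,000 users each
z, p = two_proportion_z_test(1000, 20000, 1100, 20000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

At that sample size, even a half-point lift clears the conventional p < 0.05 threshold, which is part of why running thousands of small experiments a year pays off.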
Unlike Amazon, most big companies only know how to make big bets. With that baggage weighing them down, they seek certainty in an uncertain world. Well-intentioned executives build a gauntlet of questions, intended to ensure success, that end up killing innovation: "What’s the business model?" "Will this scale?" "What’s the plan for year one?" "What’s the ROI?" None of it matters. Your job as a business leader is to answer this question first, and fast: Are people using it?
In 2009, Jack Dorsey's friend James McKelvey, an artisan glassblower, could only accept cash at his studio. The overhead costs and complexity of becoming a credit card merchant were too much trouble for such a small business. Recognizing the user need and the market opportunity, Jack and James went to their local TechShop and a month later had a working prototype of what would eventually become Square, the disruptive payments company now valued at over $8 billion.
Everywhere you look, the costs of experimentation are approaching zero. The wide availability of tools, resources, and development frameworks makes the decision to launch a website or mobile application practically a non-issue. The question is no longer "Can we make it?" but rather "What should we make?" And while this trend has been happening in the world of software for decades, it’s now crossing over into the world of hardware, too, as the means of production become digitized.
When Bre Pettis and his friends at NYC Resistor wanted an affordable 3D printer to serve their hardware hacking needs in 2009, they found it would be dramatically less expensive to use open-source hardware designs, expired patents, and their own crude production facilities to build 3D printers themselves than to buy any of the versions then on the market. At the time, the leading manufacturers of 3D printers, 3D Systems and Stratasys, offered only "professional"-grade machines costing hundreds of thousands of dollars. Four years later, Stratasys acquired MakerBot for over $400 million. It would have cost Stratasys far less than that to build its own version of the MakerBot in 2009, but the company lacked the ability, will, or vision to do so.
This new reality changes the equation on both sides, for the disruptor as well as those threatened by disruption. The barriers to experimentation have fallen for everyone. Outside large, established organizations, people with little to no industry experience can go from idea to viable solution in a tiny fraction of the time and cost it would have taken even 10 years ago. But, even more importantly, outsiders can pursue these opportunities without decades of institutional bureaucracy and organizational complexity slowing them down.
This is why the cost of inaction is so much higher than people think. It's only a matter of time until someone discovers the breakthrough solution that will disrupt your business. And that timeline is shrinking by the day. Companies that want to thrive and grow rather than die of obsolescence must break down internal barriers to experimentation, and fight to create environments where new ideas can flourish as easily inside as they can outside.
Gabe Newell, co-founder and CEO at Valve, in an interview with the Washington Post, January 2014:
We had to think about if we’re going to be in a business that’s changing that quickly, how do we avoid institutionalizing one set of production methods in such a way that we can’t adapt to what’s going to be coming next.
The set of those requirements led to decisions about not having titles, not having organization structures, and things like that because as useful as they are in the short-term in the long-term they really end up hurting you a lot.
The companies that are leading our economy and shaping our future are working in a completely new way. At Undercurrent, we call these companies responsive organizations, defined by a new set of operating values:
With over 240 million active users, Twitter is a global broadcasting platform unlike anything the world has ever seen. Unsurprisingly, the differences in how it operates compared to a legacy media company like CBS are profound. Why does Twitter only need 2,700 employees to do what they do, while CBS needs over 20,000? Obviously, CBS is a much larger and more diversified business. The stock market may choose CBS at the moment; but if we look beyond their income statements, has CBS or Twitter made a bigger impact on our world since Twitter's launch in 2006?
Why is it that Twitter is able to make such a relatively huge dent in the universe with its team of 2,700, while CBS is merely able to sustain their business with a team of over 20,000?
When Twitter’s SVP of Engineering joined Twitter in 2012, he recognized that they would need to hold on to a few critical organizational principles if they wanted to maintain their speed and success at scale:
1. Build strong teams first. Assign them problems later.
2. Keep teams together.
3. Go modular. Remove dependencies.
4. Establish a short, regular ship cycle.
This is just a tiny glimpse of how Twitter and CBS are different. But, this difference is representative of a fundamentally different mental model for how to organize and operate a global organization.
Spotify, another 21st-century global media company, continually reevaluates its operating practices as it seeks to improve its ability to innovate and ship faster than its rivals. In 2012, agile coaches Henrik Kniberg and Anders Ivarsson published "Scaling Agile at Spotify" (PDF), which lays out a contemporary approach to organizing people and capabilities in service of user needs. Their elemental team units, called "squads", are designed to feel and operate like mini startups, with the autonomy to ship, and learn, fast.
As Albert Einstein put it so eloquently, "We cannot solve our problems with the same thinking we used when we created them." The same goes for how we work.
You can’t use yesterday’s way of working to build tomorrow’s solutions.
Today we’re excited to announce we’ve closed the acquisition of Waze. This fast-growing community of traffic-obsessed drivers is working together to find the best routes from home to work, every day.
Here's something special about digital things: they don't get used up when they get used. In economics, things like this are known as "non-rival" goods: "a good whose consumption by one consumer" does not prevent "simultaneous consumption by other consumers". For example, you and I can both read the same Wikipedia article, watch the same streaming movie on Netflix, or listen to the same song on iTunes, simultaneously and without conflict.
This makes digital things particularly amenable to network effects, when the value of a product increases as more people use it.
As more users watch, rate, and upload videos to YouTube, the site gets better for everyone. As more people post and follow each other on Tumblr, the site gets better for everyone. As more people join Facebook, connecting and sharing with each other, the site gets better for everyone. Almost any popular website you can name benefits from network effects in some way.
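A rough way to quantify that intuition is Metcalfe's law, the old heuristic that a network's value scales with the number of possible connections between its users, roughly n² for n users. This is a back-of-the-envelope sketch, not a claim about any particular site above:

```python
def potential_connections(n_users: int) -> int:
    """Metcalfe's law heuristic: a network of n users has
    n * (n - 1) / 2 possible pairwise connections."""
    return n_users * (n_users - 1) // 2

# Doubling the user base roughly quadruples the potential connections:
print(potential_connections(1_000))  # 499500
print(potential_connections(2_000))  # 1999000
```

Linear growth in users, quadratic growth in connections: that asymmetry is why every new member makes the whole network more valuable for everyone already there.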
And now, as hardware becomes simply "software wrapped in plastic", as Brad Feld likes to say, these same possibilities and expectations can be applied far beyond the bounds of a web browser. When MakerBot was acquired by Stratasys, much of the company's value was actually contained in Thingiverse, MakerBot's community of engineers and hardware hackers. Chris Anderson's company 3D Robotics is creating a platform for unmanned aerial vehicles that gets better as more people become users. The software inside Tesla's cars will make the cars smarter over time as more drivers take to the road.
If you’re the creator of a digital product or service, you can’t take this opportunity for granted. Figuring out how users can create value for each other is now a fundamental and critical aspect of building successful products for a digital world.
Sir Tim Berners-Lee:
When I say I invented the web, I really just put together the last few pieces out of a construction kit, which had already been made. So I was looking at a problem of there being lots and lots of different sorts of systems for keeping information and there was already hypertext systems and there was already the internet. The internet had spread across America, and it had just gotten into Europe, so when I was looking at the problem of all these different information systems being incompatible, so you couldn’t easily get information from one to the other, I realized that we could make them all look like one big virtual information system just by taking the ideas of hypertext on one side and then using internet protocols to connect all the computers, we would make what I call, for better or worse, World Wide Web: W W W.
Good, worthwhile ideas are abundant. There have never been more of them; it's never been easier to find them; and it's never been easier to bring them to life. It shouldn't be a surprise, then, that innovation (the act of turning novel ideas into something valuable) has become simultaneously essential and elusive. This dilemma stems from a misconception about the nature of innovation in a digital world. Innovation is not dependent on originality.
This means that companies that depend on innovation need to change their approach. Hiding your most brilliant minds away in a high-security research fortress will only hinder your ability to discover and build upon valuable ideas growing outside your walled garden. And on top of that, the real challenge is who can get their ideas in front of real users fastest: who can ship first, so that the creators can observe, gather data, learn, iterate, and integrate more good ideas to make their product better.
The community-powered consumer product company Quirky isn’t successful because of the originality of their ideas. While the products are novel, the company’s success comes from how easily and quickly they are able to take all the good ideas that are floating around out there and bring them to life. They’ve created a platform that enables anyone with a good idea to become an inventor by sharing their idea with the rest of the community and enabling that community to fertilize that seed with thousands of other good ideas. Their process allows them to go from idea to real products on retail shelves in a fraction of the time it takes more traditional competitors.
This spring, when their internet-enabled air conditioner (a joint venture with GE) became the number-one-selling air conditioner on Amazon, it wasn't because no one had ever thought of it before. It was because they were shipping it 90 days after the idea was brought to them.
(For more on this topic, I highly recommend chapter 5 of The Second Machine Age.)
On August 25, 1991, a Finnish student posted a humble note to a Usenet newsgroup, inviting people to join his new project, which in his words was "just a hobby, won't be big and professional like gnu". Twenty-three years later, Linux, the open-source operating system originally conceived by Linus Torvalds, has evolved into one of the world's most popular operating systems and the preferred system for servers, high-security environments, and supercomputers. Linux achieved this success with a large, diverse, and disorganized collection of contributors, free and open access to its source code, and constant iteration and variation of the product.
This is exactly the kind of approach that would have terrified a traditional corporation. Who’s in charge? Who’s responsible for quality assurance? How are we going to keep track of bugs? How can we trust that people won’t steal it or corrupt it? How will we possibly keep track of who’s working on what, when? Unlike its most notable competitor, Microsoft Windows, Linux embraced complexity and adaptivity in its development. And since then, other winning operating systems, including iOS and Android, have also found success by creating systems that enable independent contributors to flourish with limited top-down control.
Whether it's by design or by accident, organizations of scale can no longer avoid complexity in today's world. As people (and increasingly everything else around us) become connected, the nature of our relationships, both human and machine, becomes very complex very quickly. Whether we're talking about most of the world's population, or a few hundred thousand employees of a global corporation, the scale of our interconnectedness is simply too much for any human, or even group of humans, to comprehend. It is literally impossible for anyone to even observe accurately, let alone attempt to control or design from the top down.
Like a 21st century Midas, digital technology makes everything it touches complex.
In her book Complexity: A Guided Tour, Melanie Mitchell defines these systems we see springing up around us by the following shared attributes:
1. Complex collective behavior: Consisting of large networks of individual components (ants, B cells, neurons, stock-buyers, Website creators), each typically following relatively simple rules with no central control or leader. It is the collective actions of vast numbers of components that give rise to the complex, hard-to-predict, and changing patterns of behavior that fascinate us.
2. Signaling and information processing: All these systems produce and use information and signals from both their internal and external environments.
3. Adaptation: All these systems adapt—that is, change their behavior to improve their chances of survival or success—through learning or evolutionary processes.
Our goal should be to maximize these attributes, and make them as efficient and effective as possible.
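Mitchell's first attribute, many components following simple local rules with no leader, is easy to see in a toy simulation. The cellular automaton below is my own illustrative sketch, not an example from her book: each cell repeatedly adopts the majority state of its immediate neighborhood, and large-scale order emerges without any central coordinator.

```python
import random

def step(cells):
    """One update of a local-majority rule on a ring: each cell adopts
    the majority state of itself and its two neighbors. No cell sees
    the global pattern, and nothing coordinates the updates."""
    n = len(cells)
    return [
        1 if cells[(i - 1) % n] + cells[i] + cells[(i + 1) % n] >= 2 else 0
        for i in range(n)
    ]

random.seed(42)
cells = [random.randint(0, 1) for _ in range(60)]  # random initial "noise"
for _ in range(10):
    cells = step(cells)

# After a few steps the noise tends to settle into contiguous blocks
print("".join("#" if c else "." for c in cells))
```

No individual cell knows anything beyond its two neighbors, yet the collective settles into structure you can see at a glance. That's the flavor of emergence that makes these systems fascinating, and impossible to run from the top down.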
Seeking simplicity in the face of complexity is simply a faster route to obsolescence.
Rather than figuring out how to fight complexity, we need to figure out how to make it work for our benefit. Rather than seeking to turn a complex system into a simple system, the goal should be to transform the complex system into a complex adaptive system.
Describing the significance and scale of the changes we’re witnessing, Andrew McAfee and Erik Brynjolfsson write in The Second Machine Age:
Computers and other digital advances are doing for mental power—the ability to use our brains to understand and shape our environments—what the steam engine and its descendants did for muscle power.
The scale of the technologically driven changes we’re witnessing is more profound than we even realize. And the oxygen fueling this transformative blaze is information.
While some say that information wants to be free, I like to think that information wants to flow freely. Imagine information as water, and digitization as gravity. The force of digitization pulls information along, often enabling it to run in unpredictable directions, as the information seeks out equilibrium. It must get where it’s going, and you have to try really hard to hold it back.
Airbnb and Uber are both great examples of this phenomenon in action. Uber set free trapped information about who needs a ride and where they are at any given time, and combined that information in real time with who's offering a ride and where they are. As GPS-enabled phones became prevalent, this information was waiting for someone to put it to good use. Airbnb did something similar with places to stay. Who's got a place? Who needs a place? The information was there, it just wasn't flowing very easily to the right place. Each of these companies has proven to be fundamentally destabilizing to the industry it operates in (taxis and hotels, respectively).
In 2010, then-CEO of Google Eric Schmidt famously stated that the world was creating as much information every two days as it had in the entire time from the dawn of civilization up until 2003. Since then, that mind-blowing statistic has only multiplied. We are generating so much data now that we may need to invent a new metric unit (beyond the yottabyte) to adequately quantify the data set.
These two aspects of digital information (the amount of information and the ease with which that information is shared) may be the biggest, and least appreciated, difference between the future and the past.
As McAfee and Brynjolfsson describe it:
This surge in digitization has had two profound consequences: new ways of acquiring knowledge (in other words, of doing science) and higher rates of innovation.
Science is the process of analyzing the observable world, and sorting through the data to separate the known from the unknown. Never before has humanity had so much observable data to sort through.
And this is where the intelligent machines come in. At the same time that we are beginning to generate data sets that are far beyond human comprehension, we are also making leaps forward in the capabilities of artificial intelligence.
I can’t even claim to be a novice in this advanced field, but I do believe that the possibilities shown by projects like Siri, IBM’s Watson, and Google’s self-driving car, are giving us a glimpse of a monumental change. (And if you don’t believe me, refer back to #2: Our inability to predict the future doesn’t make it any less inevitable.)
If you’re responsible for a business or industry, and wondering where the weaknesses or opportunities lie, look closely at the cracks and crevices where information is currently trapped, and help it to flow more freely.
None of these ideas are mine alone. They are built upon the work and great thinking of many other people who are all infinitely more brilliant than I am. In addition to my inspiring colleagues at Undercurrent (including Aaron Dignan, Clay Parker Jones, Bud Caddell, Jordan Husney, and others) here are some books that I consider to be seminal: