Webmergence
Robert Laughlin won the 1998 Nobel Prize in Physics. So when this major figure in the scientific community published a book entitled A Different Universe: Reinventing Physics from the Bottom Down in 2005, it immediately attracted attention. In this book, Robert Laughlin argues that the most fundamental laws of nature are “emergent”: they result from the collective behavior of large agglomerations of matter, without owing anything to the laws that apply to any individual component. For example, the laws that govern the behavior of a sand pile cannot be predicted from any materials-science principle applied to a lone sand grain. In the same way, the behavior of an avalanche – where the surface snow flows as if it were a liquid over the lower snow layer – cannot be understood by modeling individual snow grains; this behavior “emerges” from complex interactions between the moving layer and the layer that supports it.
When I heard about Robert Laughlin’s theory of emergence, I immediately remembered that in 2001, during the first demonstrations of the Ligne de vie, I used to say that, when putting public information systems to work in the medical domain, we would have to take into account that such a wide-scale network would eventually reach the level of complexity where “the whole is more than the sum of its parts”. And the example I usually gave was precisely the sand pile.
This “Proust madeleine”, combined with the awareness that the web has, ever since, been constantly evolving toward more and more interaction between its members (a trend only reinforced by the Web 2.0 wave), immediately made me wonder whether it would make sense to apply the theory of emergence to the laws that govern mass behavior on the Internet.
I believe it probably does and, what is more, there is an example that perfectly illustrates this decoupling between a mass law and the behavior of individuals: the “Long Tail” phenomenon.
The phrase “The Long Tail” was coined by Chris Anderson in an October 2004 Wired Magazine article. It describes the shape of the sales graphs observed when goods are sold on the Internet. Instead of the usual half-bell curve seen in traditional stores, where best sellers account for the bulk of the total, the graph shows a very wide distribution in which rarely sold products represent a larger area than the hits. As an Amazon employee put it, “We sold more books today that didn’t sell at all yesterday than we sold today of all the books that did sell yesterday.”
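To make that shape concrete, here is a small numerical sketch. It assumes a Zipf-like demand curve, a common way of modeling long-tail sales; the catalog size, cutoff, and exponent are invented parameters for illustration, not figures from Anderson’s article.

```python
# Illustrative sketch only: a Zipf-like demand curve with made-up parameters,
# not data from the Wired article or from Amazon.

def zipf_sales(n_products, exponent=1.0, total_sales=1_000_000):
    """Distribute total_sales across n_products following a Zipf law."""
    weights = [1 / (rank ** exponent) for rank in range(1, n_products + 1)]
    norm = sum(weights)
    return [total_sales * w / norm for w in weights]

sales = zipf_sales(n_products=1_000_000)   # a huge online catalog
head = sum(sales[:1000])   # the 1,000 best sellers a physical store might stock
tail = sum(sales[1000:])   # everything else: the "long tail"

print(f"head share: {head / (head + tail):.0%}")
print(f"tail share: {tail / (head + tail):.0%}")
```

With these arbitrary parameters, the titles outside the top 1,000 end up accounting for nearly as much volume as the best sellers combined, which is exactly the shape the Amazon quote describes.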
It might seem sensible to explain the Long Tail by the simple fact that the shelves of traditional stores have a fixed length, so storekeepers must give priority to fast-selling products, whereas “e-shelves” are virtually unlimited in size. But shelf length is not a sufficient explanation, because best sellers still occupy the first pages of any web site while unknown titles are relegated to the back store.
The real explanation of the Long Tail lies in the interaction between Internet users. To prompt additional impulse purchases, when you want to buy something on Amazon and many other web sites, you are shown the products that were also chosen by other customers who bought the same item. This is highly effective: if I like a book and the people who liked it warmly recommend another, totally unknown, book, then I will probably be intrigued enough to buy it, even if I have never heard of it before.
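For a concrete picture of the mechanism, here is a minimal sketch of how a “customers who bought this also bought” list can be derived from purchase histories. It is a toy co-occurrence count, not Amazon’s actual algorithm, and the item names and baskets are invented.

```python
from collections import Counter

# Toy purchase histories: each set holds the items one customer bought.
# Invented data, for illustration only.
baskets = [
    {"bestseller", "obscure_novel"},
    {"bestseller", "obscure_novel", "cookbook"},
    {"bestseller", "travel_guide"},
    {"obscure_novel", "poetry_collection"},
]

def also_bought(item, baskets, top_n=3):
    """Count how often other items appear in baskets that contain `item`."""
    co_counts = Counter()
    for basket in baskets:
        if item in basket:
            co_counts.update(basket - {item})
    return co_counts.most_common(top_n)

# A shopper looking at "bestseller" gets nudged toward titles they may
# never have heard of, simply because other buyers picked them too.
print(also_bought("bestseller", baskets))
# e.g. [('obscure_novel', 2), ('cookbook', 1), ('travel_guide', 1)]
```

The point of the sketch is that no single shopper, and no line of this code, sets out to promote obscure titles; the recommendations simply reflect what other buyers happened to pick together.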
Amazon did not create this “other readers’ recommendations” feature in order to build a Long Tail effect; they just wanted to make their web site more user friendly and trigger impulse purchases. Likewise, Internet users were not deliberately looking for unknown works (who even manages to read all the best sellers, which are, besides, tailored to please the masses?). The Long Tail “emerged” from the multiplicity of interactions within the web community. It is a typical example of what we could call “webmergence”.
Of course, a theory can never be built from a single example (or even from 1,000 examples), but there is undoubtedly something worth digging into here, and we can already foresee the possible consequences of webmergence.
Webmergence states that the laws governing mass behavior on the Internet can be predicted neither from individual behavior nor from technical design. On the contrary, whenever a mechanism brings a sufficient number of people into interaction, we can expect unexpected meta-behaviors to emerge: the laws of the Internet’s physics.
In the medical domain, for example, we can deduce from this that it is pointless to build a huge information system from scratch in order to prevent iatrogenic injuries or redundant medical procedures. What should be done instead is to make it easy to create many systems that can be widely distributed and that allow sufficient interaction between their users, and then support those whose emergent global behaviors go in the desired direction.
Robert Laughlin’s theory declares obsolete the constructivist approach inherited from the industrial revolution. Webmergence could likewise mark the limits of a similar utopia in the far more modern world of information systems.