Originally published in The Journal of New Media & Culture (Winter 2007)
At a recent conference, I overheard one communications professional ask another, “What is our Web 2.0 strategy?”
Despite my background and experience with the Web and new media, I wondered what exactly they meant by a Web 2.0 strategy, and, soon after, decided to register for the official Web 2.0 conference, set to take place November 7-9 in San Francisco, to learn more. I visited the conference Web site and found a quote from Ross Mayfield, “Web 1.0 was commerce. Web 2.0 is people,” as well as an impressive guest roster featuring a mix of voices from both traditional and new media firms. However, when I went to sign up, the $3,000 event was sold out. I then began to search for a comparable event and stumbled upon the Web site for ‘Web 2point1,’ where I learned that the non-profit organization running the site had chosen that particular moniker after being threatened with legal action by O’Reilly Media (which coined the term Web 2.0) and CMP.
I couldn’t help but note the irony that the Web 2.0 conference was cost-prohibitive for most ordinary folks, and that a non-profit had been threatened with legal action by the conference’s organizers for attempting to use the same name. After all, isn’t Web 2.0 supposed to be about participatory media and collaborative development? Isn’t it about people?
What exactly is Web 2.0, and will it replace what we now know as the Web and the way in which we all communicate, as many seem to claim?
It may very well be true that in three thousand years, historians will view the development and widespread adoption of the Internet as a landmark event in the history of human communications. But the claim that this development will provide a ‘new mind for an old species’ (see below) is not only questionable; it is dangerous. The problem with the rhetoric of technological revolutions is that it loudly proclaims, as if in glowing neon letters, that this era is something new, that the past is no more, and that the latest innovation will usher us into a utopian age. What typically follows is that, unwittingly, we find ourselves repeating the same patterns of behavior, waiting for the next revolutionary technology to come around and change the world.
Consider the following:
In the summer of 2005, Kevin Kelly, the founding executive editor of Wired magazine, wrote that hyperlink technology was unleashing a new era of communicative participation “found nowhere else on the planet or in history.” In fact, he heralded the present media landscape as the start of a revolutionary era in human social interactions:
Three thousand years from now, when keen minds review the past, I believe that our ancient time, here at the cusp of the third millennium, will be seen as another such era. In the years roughly coincidental with the Netscape IPO, humans began animating inert objects with tiny slivers of intelligence, connecting them into a global field, and linking their own minds into a single thing. This will be recognized as the largest, most complex, and most surprising event on the planet. Weaving nerves out of glass and radio waves, our species began wiring up all regions, all processes, all facts and notions into a grand network. From this embryonic neural net was born a collaborative interface for our civilization, a sensing, cognitive device with power that exceeded any previous invention. The Machine provided a new way of thinking (perfect search, total recall) and a new mind for an old species. It was the Beginning.
— Kelly (2005)
Kelly’s prophetic proclamation reminded me of another passage I had recently read in Scientific American that hailed a “new organization of society:”
A state of things in which every individual, however secluded, will have at call every other individual in the community, to the saving of no end of social and business complications, of needless goings to and fro, of disappointments, delays, and a countless host of those great and little evils and annoyances which go so far under present conditions to make life laborious and unsatisfactory.
— 1880 Scientific American (cited in Marvin, 1988)
Interestingly, more than 100 years separate these two passages. The Scientific American article was published on February 14, 1880 and focused on the telephone’s impact on society. Today, the notion that the telephone has helped make the machinery of modern life possible is not in question. What is in question is the idea that the telephone has had a revolutionary effect on society, eliminating the ‘evils and annoyances’ that make our life ‘laborious and unsatisfactory.’ When I read this passage from Scientific American, I couldn’t help but wonder how the wide-eyed journalist who wrote it might have felt had he suddenly found himself bombarded by telemarketers all day. Did Kelly consider how Al Qaeda used the “collaborative interface for our civilization” to access information on the structural design of the World Trade Center (Booth & Dunne, 2002) and to plan the attack (US Institute of Peace, 2004)?
Yes, technology changes our lives, but it does not replace or necessarily improve what’s been used in the past, nor does it become part of everyday life without other factors playing a significant role.
Defining Web 2.0
Whereas Kelly is often thought of as the lyrical voice of Web 2.0, it is Tim O’Reilly who has taken a more pragmatic approach to its development and who is credited with coining the phrase Web 2.0 (and with subsequently claiming the term as his possession). His white paper, What is Web 2.0: Design Patterns and Business Models for the Next Generation of Software, is considered one of the defining explanations of Web 2.0 and, not surprisingly, focuses on the technological and market aspects of the Web’s newest trends.
According to O’Reilly, Web 2.0 has a ‘gravitational core’ of standards, and all prominent sites are located at ‘varying distances from the core.’ Web 2.0 applications share six fundamental characteristics:
- The Web as Platform
- Harnessing Collective Intelligence
- Data is the Next Intel Inside
- End of the Software Release Cycle
- Lightweight Programming Models
- Software Above the Level of a Single Device
To define Web 2.0, he offers a series of “old” vs. “new” comparisons to help differentiate between 1.0 and 2.0. His first comparison of Web 1.0 and 2.0 involves what he terms the “standard bearers” of each: Netscape and Google. According to O’Reilly, Netscape intended to use the dominance of its Web browser to corner the market on Web server products, similar to what Microsoft did with the operating system and software applications. “In the end,” he writes, “both web browsers and web servers turned out to be commodities, and value moved ‘up the stack’ to services delivered over the web platform.” According to O’Reilly, Google is fundamentally different from Netscape in that it is delivered as a service rather than a product, and its key value is its data.
Web 2.0 should not seek to define itself in contrast to, or as separate from, Web 1.0
O’Reilly’s metaphor of value moving up a stack seems to contradict the notion of the “old” vs. the “new.” The concept of a stack implies a layered relationship between different technologies. In O’Reilly’s own words, “Google happens in the space between browser and search engine and destination content server, as an enabler or middleman between the user and his or her online experience.” In this statement, O’Reilly acknowledges that Google plays a role in a broader media process that also integrates the Web browser. Without Google, the online experience may not be as dynamic, yet without the Web browser, the online experience would be impossible. Comparing the business practices of Google and Netscape is one thing; comparing their products, as landmark technologies that are representative of their respective eras, seems counterintuitive. For two landmark technologies to truly be compared and contrasted there should be something intrinsically similar about them. And from what I can tell, these two technologies don’t fit that bill.
What’s more, categorizing earlier Web technologies as ‘Web 1.0’ and new Web technologies as ‘Web 2.0’ ignores the manner in which old technologies mesh with new. It also marginalizes much of the powerful thinking from onlookers, activists, engineers and developers who helped to shape the Internet that we have and use today. Finally, defining Web 2.0 as a revolutionary phase in the development of the Internet fails to acknowledge that this new dawn in the history of the Web is really a part of a cycle. Indeed, there are certain cyclical patterns underlying the development of technologies that we easily miss when we place the focus on the recent or present.
For instance, O’Reilly’s claim that the foremost tenet of Web 2.0 is that the ‘Web becomes a platform’ should sound familiar to anyone who knows the history of the computer. As Haddon (1988) explains, the early visions for the computer were shaped in a mainframe paradigm, in which a centralized mainframe would provide machine-driven services delivered to the user through various telecommunications channels. Ultimately, this model was displaced by the personal computer running local applications. Now, in an ironic twist of fate, that old vision for the mainframe resonates with the current vision of ‘Web 2.0.’
Similarly, the components of Web 2.0, or what supposedly makes Web 2.0 different from Web 1.0, are not necessarily new if you take a closer look at the Web’s history. For instance, O’Reilly claims that Web 2.0 harnesses ‘collective intelligence.’ Yet this ‘collective intelligence’ has accompanied the Internet since its inception, symbolized most evidently by the open source movement, which has shaped the Web from the start, as well as by early communities such as the WELL. Those familiar with the Web’s history would also recognize that excessive optimism about the Web’s collaborative nature often overshadows its potential negative consequences, which we’ve seen recently with spamming, cyber-stalking, flaming and the over-commercialization of Web communities.
To take a more recent example, the Economist (2006) recently published a great article on the massively multiplayer online role-playing game (MMORPG) Second Life, which it called “the best example of Web 2.0” because of its focus on user-generated content and its “celebration of creativity.” While reading the article I was struck by the resemblance between the development of MMORPGs and that of multi-user domains (MUDs), which are typically recognized as their progenitors. What initially began as an action-oriented, text-based role-playing game eventually took on a more social form, exemplified by TinyMUD, developed by Jim Aspnes at Carnegie Mellon University, which gave users the ability to code and create new objects and landscapes. Given that, are the denizens of the Web 2.0 MMORPG Second Life somehow more creative than the denizens of the Web 1.0 TinyMUD? The question, which I hope is rhetorical, raises yet another: why are MUDs not mentioned when we talk about Web 2.0 MMORPGs such as Second Life? Are there not things we can learn by looking back to explore the social aspects of MUDs? Take, for example, the academic Sherry Turkle, who is well known for her research on the social impact of MUDs. Referring to Second Life as Web 2.0 while discounting the MUD as ‘Web 1.0’ ignores the potential applications of her work to the new technology, which would be a major mistake.
Web 2.0 needs to place less emphasis on the technological and more on the social
The notion that the development of certain technologies is inevitable, or that a technology’s social influence is inevitable, is an over-simplification at best and most likely a myth. Simply because a technology is available and potentially serves a real purpose does not mean that it is likely to do so, at least not without other factors coming into play. The great British cultural theorist Raymond Williams argued this openness perhaps most effectively. According to Williams, technologies are inherently social products; they are the result of inventions, related developments and even non-technological advances or shifts in society and culture. As Williams (1974) explains, “The invention of television was no single event or series of events. It depended on a complex of inventions and developments in electricity, telegraphy, photography, and motion pictures, and radio.” Furthermore, there were numerous non-technological developments, such as efforts from lobby groups, that shaped what came to be known as television. Similarly, the Internet we see today is the gradual outcome of innovations in network technology, microchips, software development and personal computers, among others, but also of developments in non-technological areas, such as X and Network Neutrality, a key issue that will help determine the eventual shape of the Internet.
Williams’s theory runs counter to the belief that Web 2.0 technologies will inevitably turn the Web into a “global brain,” as O’Reilly explains. Besides being shortsighted, this determinist line of thought is potentially dangerous and marginalizes the role of human agency in the innovation process. The notion that a ‘Machine’ will provide a ‘new way of thinking’ assumes a unidirectional flow of influence, from the technology we use to our behavioral patterns as humans, and neglects the role of social factors in determining the shape of a technology. Clearly, instead of simply glorifying the technology and assuming that it is moving in the right direction, there needs to be a renewed focus on the non-technological elements of Web 2.0.
Web 2.0 should be about the people
In just under 10 years, we have used various terms, from ‘reaping’ to ‘harnessing,’ to define the Web’s technological and social potential. The first substantial work on virtual communities is commonly attributed to Howard Rheingold, in his aptly titled ‘The Virtual Community.’ Four years after Rheingold published his book, John Hagel and Arthur Armstrong of McKinsey & Company responded with ‘Net Gain,’ in which they outlined what they perceived to be the “real” value of online communities: their potential as a commodity (1997). Ironically, within the first several paragraphs, the authors state that, “like every communications network, the Internet is all about establishing and reinforcing connections between people,” before shifting gears toward that commercial potential. As the book description on the McKinsey & Company Web site reads: “Net Gain served as a manifesto for a new generation of competitors seeking to reap the rewards of the on-line economy.”
The only defining characteristic of Web 2.0, as explained by O’Reilly, that is specific to its users is the ‘harnessing of collective intelligence.’ As O’Reilly explains, current Web technologies have allowed companies to “make their mark on the Web” by “harnessing collective intelligence,” but what is the benefit for the people whose intelligence is being harnessed? The use of the word ‘harness’ is limiting and, in a sense, ironic. The verb ‘harness’ derives from a noun originally describing “stable gear consisting of an arrangement of leather straps fitted to a draft animal so that it can be attached to and pull a cart,” according to the Princeton WordNet dictionary. Are we harnessing collective intelligence or embracing it? If we are to listen to the words of Ross Mayfield – “Web 1.0 was commerce. Web 2.0 is people” – then we should embrace it.
In late 2005, Madden and Lenhart (Pew, 2005) reported that 57 percent of teens who use the Internet could be considered content producers in one way or another. In 2006, Fox and Lenhart (Pew, 2006) reported that 12 million American adults keep a Web log. These are significant trends, yet they are neglected in much of the talk about Web 2.0. If Web 2.0 is to hold any water, it must begin to consider the Internet’s social aspects more fully. For example, what is the significance of the Internet for the teens who are currently content producers? How are the technologies classified as Web 2.0 affecting the way we go about our daily lives? These are the types of questions we must be asking.
Conclusion
When we were initially brainstorming ideas for an article on Web 2.0, our chief technology officer, Bob Schmidt, made a valid point: the use of ‘2.0’ is ironic. O’Reilly claims that in Web 2.0, “None of the trappings of the old software industry are present. No scheduled software releases, just continuous improvement.” Well, the 2.0 signifier is itself a relic of the software release cycle, isn’t it?
Do the tenets of Web 2.0 not apply to the offline world?
In a true Web 2.0 world, a non-profit organization shouldn’t have to worry about being threatened with legal action for using the name “Web 2.0.” In a true Web 2.0 world, the name Web 2.0 wouldn’t be trademarked. The proof of whether Web 2.0 represents a new era in communications will be in the pudding. If we are truly linking ourselves into a global brain, and the technology is leading us in that direction, we should see real-world impact on the way we live and behave. If Web 2.0 holds water, we should be able to embrace its principles in the way we work and the way we communicate.
References
Booth, K & Dunne, T (2002), ‘Worlds in Collision’, in Worlds in Collision, Palgrave Macmillan, New York.
Carr, N (2005), ‘The amorality of Web 2.0’, Rough Type, viewed 19 September 2006, <http://www.roughtype.com/archives/2005/10/the_amorality_o.php>.
Corbin, J (2002), Al-Qaeda: In Search of the Terror Network that Threatens the World, Thunder’s Mouth Press/Nation Books, New York.
The Economist (2006), ‘Living a Second Life’, The Economist, 28 April.
Fox, S & Lenhart, A (2006), ‘A portrait of the internet’s new storytellers’, Pew Internet & American Life Project, Washington, DC.
Hagel, J & Armstrong, A (1997), Net Gain, Harvard Business School Press, Boston.
Haddon, L (1988), ‘The Home Computer: The Making of a Consumer Electronic’, Science as Culture, no. 2, pp. 7-51.
Kelly, K (2005), ‘We are the Web’, Wired, August.
Madden, M & Lenhart, A (2005), Family, Friends, and Community, Pew Internet & American Life Project, Washington, DC.
Marvin, C (1988), When Old Technologies Were New: Thinking About Electric Communication in the Late Nineteenth Century, Oxford University Press, Oxford.
McKinsey & Company (1997), ‘Book Description: Net Gain: Expanding Markets Through Virtual Communities’, McKinsey & Company corporate Web site, viewed 18 October 2006, <http://www.mckinsey.com/ideas/books/netgain.asp>.
O’Reilly, T (2005), What is Web 2.0: Design Patterns and Business Models for the Next Generation of Software, O’Reilly Media.
Silverstone, R (1999), Why Study the Media?, Sage Publications, London.
Van Couvering, E (2003), Media Power on the Internet: Towards a Theoretical Framework, Research Seminar Paper, London School of Economics, London.
Wikipedia (2005), ‘Web 2.0’, Wikipedia, viewed 19 September 2006, <http://en.wikipedia.org/w/index.php?title=Web_2.0&dir=prev&action=history>.
Williams, R (1974), Television: Technology and Cultural Form, Fontana/Collins, Glasgow.
United States Institute of Peace (2004), How Modern Terrorism Uses the Internet, United States Institute of Peace, viewed 19 December, 2005, <http://www.usip.org/pubs/specialreports/sr116.html>.