The complexification of the web
Today’s web is not the web of the 1990s – hence the idea of Web 2.0 as articulated by Tim O’Reilly and others. The early emphasis on simple standards – HTTP, static HTML pages – represented a shift away from the 1980s PC trend of locating processing power and bespoke algorithmic complexity on the desktop.
The alacrity with which a standards-based web was adopted was in part a reflection of the exasperation many felt at the complexity and brittleness of local proprietary software configurations running in a Windows environment. One result was the rise of the open-source movement and the LAMP stack (Linux, Apache, MySQL, PHP/Perl/Python). But with increasing network bandwidth, the continued proliferation of often complex web standards for a range of data types and operations on data, and growing user bases for a variety of application types, functionality has again become richer and applications have become thicker and more varied at both client and server ends.
Static pages of HTML have increasingly been supplanted by dynamic pages bringing together diverse resources from multiple locations. Page templates and style rules are often combined with dynamically sourced data and perhaps with advertising material selected according to user behaviour. Documents – if that term can still be considered apt – must therefore increasingly be seen as modular, distributed and context-dependent. Various web protocols and standards provide a framework for accessing software services and specific elements of functionality as opposed simply to data. These web services are intended for incorporation into other data processing frameworks rather than for human consumption.
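The modular assembly described above – a page template combined with dynamically sourced data and behaviourally targeted advertising – can be sketched in a few lines of Python. The template, field names and content here are invented purely for illustration:

```python
from string import Template

# A hypothetical page template: layout is defined separately from
# the data that fills it (all names here are illustrative).
PAGE_TEMPLATE = Template(
    "<html><body>"
    "<h1>$headline</h1>"
    "<p>$story</p>"
    "<aside>$advert</aside>"
    "</body></html>"
)

def assemble_page(headline, story, advert):
    """Combine a static template with dynamically sourced content."""
    return PAGE_TEMPLATE.substitute(
        headline=headline, story=story, advert=advert
    )

# In practice each argument might come from a different server:
# the story from a database, the advert from an ad network.
page = assemble_page(
    headline="Local news",
    story="Content drawn from a database or remote service.",
    advert="Advertising selected according to user behaviour.",
)
print(page)
```

The point of the sketch is that no single file holds the 'document': it exists only once the parts are brought together, which is why the term must be stretched to cover something modular, distributed and context-dependent.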
As the online environment has become more established and useful in everyday life it has been adopted by a widening demographic. Web users have shown themselves willing to adapt to technological developments where they demonstrably serve their needs (e.g. for financial services or entertainment). Social media and the interactive, participative functionality of Web 2.0 have become integral aspects of a media-savvy, platform-plural lifestyle that has connotations of ‘cool’ rather than merely geeky. But accompanying these developments are not unreasonable fears of the disenfranchisement of the technology-averse.
The explosion of content on the web, in the form of corporate and institutional sites, personal pages, blogs, and media-sharing sites, demands new and improved ways of locating relevant content. Google apparently meets many of the search needs of most users, but feed formats such as RSS and associated aggregation tools are increasingly necessary for keeping on top of developments. A further level of delegation in respect of the discovery of relevant information is likely to come about through increasingly automated metadata generation and algorithmic content analysis. (This can be considered complementary to the arguably more labour-intensive data engineering efforts needed to realize visions of a ‘semantic web’.)
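The kind of delegation an RSS aggregator performs can be sketched with Python's standard library alone – parse each subscribed feed and extract the items, rather than visiting the sites by hand. The feed content and URLs below are invented for illustration:

```python
import xml.etree.ElementTree as ET

# A minimal RSS 2.0 fragment (titles and links are invented).
RSS_SAMPLE = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example feed</title>
    <item><title>First post</title><link>http://example.org/1</link></item>
    <item><title>Second post</title><link>http://example.org/2</link></item>
  </channel>
</rss>"""

def feed_items(rss_text):
    """Return (title, link) pairs for every item in an RSS document."""
    root = ET.fromstring(rss_text)
    return [
        (item.findtext("title"), item.findtext("link"))
        for item in root.iter("item")
    ]

for title, link in feed_items(RSS_SAMPLE):
    print(title, link)
```

A real aggregator would fetch many such documents over HTTP and merge their items by date; the essential move, though, is just this machine-readable structure, which lets software rather than the reader do the checking.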
The overall effect of the web’s evolution is to open up new kinds of information space, catering to the diverse needs and interests of multiple kinds of user, as well as new ways of meeting old information needs. This complexification threatens to make simple categorical prognostications and projections look merely simplistic.
(First posted on the KnowledgeCraft wordpress blog on July 11, 2009)