Epistemic Systems

[ cognition – information – knowledge – publishing – science – software ]

Friends de-united? Academics, publishers, and open access

An unfortunate feature of some of the recent debates about open access and the role of publishers, culminating now in the widely publicized boycott of Elsevier journals, has been the way in which the heat liberated has matched or exceeded the light cast. Perhaps it’s inevitable: even rational people feel strongly about these things. My own standpoint on academic publishing matters is, I think, distinct from some of the most heavily represented academic and commercial positions, and could perhaps be characterized as citizen-centred.

As someone who in the past has carried out R&D within a non-academic setting, and who sometimes engages in what I suppose should be called independent research, I’m all for open access. It really is frustrating to go to the website of some journal in order to find that all-important article (quite probably written by someone whose work is supported by one of the taxpayer-funded research councils), only to be greeted by a big fat paywall. The experience soon becomes familiar, however, whenever one seeks journal content from any of an army of publishers large and small, ranging from the generalist and overtly mercantile to specialists affiliated professionally in some way with a particular discipline. At (typically) around £30 per article download (approx. $50), independent research – or even the simple exercise of pure curiosity, or an old-fashioned desire for self-improvement – that connects closely with the current literature is not a pastime for the impecunious.

That said, I’m sceptical that academia, at the collective and institutional level rather than at the level of individual researchers (amongst whom I count many good friends), is invariably the innocent party in all of this. Sometimes in the groves of academe one sees something of the self-interestedness and closed-shop protectionism of a medieval guild, and current debates should be viewed in the context of a publishing model that gives academics near-exclusive control over what gets published. A plausible case could probably be made for saying that one of the journal’s functions has been as a weapon with which to exert control over disciplinary territory. On the academic side of the drawbridge, the thinking often seems to be, roughly, that so long as my institution can pay the subscriptions, paywalls aren’t a problem for me.

Latent in this issue of disciplinary control is the complex topic of peer review. I shall save looking at that for future posts. For now, I want just to observe that academic hostility to publishers is largely a modern phenomenon. SPARC was formed as recently as 1997 – and that development came from the library community rather than directly from researchers. Computers have much to answer for, of course, and for a combination of reasons, which in sum boil down to the fact that when journals were physical objects as well as intellectual ones, their production required the application of tangible, physical skills, such as typesetting and printing. (Yes, a university press was once precisely that. Oxford University Press stopped its own presses only in 1989.) Coordinating the activities of all those specialist trades took time and knowledge that publishers could supply. It took money too, of course. The printed journal had to be paid for, and since it would be accessible only locally at the places to which copies were distributed, it made sense to aggregate all the production costs and recoup them via subscription charges, paid primarily by the institutional libraries that constituted the bulk of customers.

Then came the personal computer, word processing and desktop typesetting. Authors were generally content to limit their efforts at styling their articles to what could be accomplished using basic word processor capabilities. Was that because typesetting turned out to be unexpectedly arcane and surprisingly difficult to do well, or just because publishers wanted to typeset journals their way and didn’t want to have to unpick each author’s attempt at attaining an aesthetically pleasing article? I’m not sure, but I guess the latter. Still, an awareness of DTP probably made a few authors wonder whether they couldn’t do more of what the publisher did. (And in mathematics, TeX and LaTeX actually did put typesetting in the hands of authors.)

In the early 1990s the Web arrived, capitalizing on the infrastructure of the internet. Global information dissemination had never been so easy. People began to ask how essential all that nice styling that publishers do really was. And then: do we really need printed journals? Thus we reach the present day, when on the face of it the case for the publisher has never looked so dubious. Learning that Elsevier makes profits of 40% on STM journals does little to strengthen the case, but when you come to think about it, the issue of profit might be something of a red herring. Suppliers to and participants in the scientific research process other than publishers are rarely castigated for daring to take more in revenue than is needed to cover their costs. That NMR spectrometer? It came from a profitable company. Your reagents supplier of choice? Probably not a big loss maker. And the maker of the laptop on which you’re writing up your results? $7bn net profits in Q4 of 2011, I gather.

So why should publishers’ profits arouse such ire? Presumably because it is felt that now, in the digital era, publishers contribute essentially no value at all. However, if it really were the case that publishers contributed precisely nothing, wouldn’t their profit margins be stratospherically higher than current levels? Alright, it could be that all those thousands of people working for publishers really are just sitting around drinking coffee all day, but if that were the case I suspect the shareholders would have found out by now. No, more likely is that the majority of those publishers’ employees (many of them graduates with a fondness still for their original academic disciplines) are doing substantive jobs that make a real contribution to the dissemination of academic research findings in a form which, with the occasional sorry exception, provides a sound basis for further research. Undoubtedly many of the skills involved are less materially grounded than the old crafts of hot-metal typesetting and printing, and for sure the costs of producing an all-electronic journal are considerably lower than those involved with printed journals. But it would be a mistake to think that journals – even electronic journals – cost nothing to develop, publish, maintain and extend. If profit truly is without honour then maybe we should think seriously about the public sector taking over the job of journal publishing. But a small voice in the back of my head wonders by how much overall publishing costs would actually fall. My suspicion is that the like-for-like publishing bill would end up roughly the same as now – maybe slightly less, perhaps slightly more – albeit with some of the costs redistributed to the point of near-invisibility.

OK, for the sake of argument I’ve adopted a deliberately charitable stance towards the commercial sector. There is that uncomfortable sense in which publishers sell back material to its very creators (amongst others). But I really do think we should keep such issues as the profit levels, seeming monopoly status, or apparently flexible ethical values of any one company separate from the issue of open access. If the regulatory framework within which academic publishers operate now seems shaky or defective then fine, let’s go about fixing it. Open access, however, is a distinct issue relating to a transcending idea: that knowledge is one of the glories of humankind, and the world is enhanced when it is unconfined. Some have suggested that there never was a business model for academic publishing. I don’t believe that. For many years, academics and publishers made a great couple. On the one hand publishing was (as it remains) an intellectually satisfying business, based on meeting academic needs; on the other, there were disciplines to be founded, defended and extended. There was a quid pro quo, in other words. The electronic age undoubtedly presents challenges, but I think those are shared more widely than many suppose. What isn’t clear to me is the degree to which a world in which control over the production and consumption of knowledge lies solely in the hands of academics is better than one in which some of that control is shared with publishers. A different but perhaps more interesting question is, how can we make knowledge creation a more genuinely open and participatory, and less paternalistic, process? We may have to think beyond journals, and of openness to more than merely the outputs of research.

Written by Alex Powell

February 13, 2012 at 3:25 pm

Posted in open access, publishing

The death of the journal?

Currently much of academic publishing is organized around the production of subject-specific journals. But for how much longer? In the online environment the article tends to look like the more ontologically fundamental unit of information. Researchers can use search tools like PubMed to search across multiple journal titles and locate just those articles that are directly relevant to their specific research interests. Why should the journal in which an article appears matter if the article reports exciting and original research on a relevant topic?
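
For the technically minded, here is a minimal sketch of what such cross-journal searching looks like in practice, using NCBI’s public E-utilities interface to PubMed. The query term is purely illustrative; the point is that no journal title ever needs to be named.

```python
# Minimal sketch: searching PubMed across all journals at once via NCBI's
# E-utilities (esearch). The query term below is illustrative only.
import json
import urllib.parse
import urllib.request

BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_search(term, max_results=20):
    """Return PubMed IDs matching `term`, whatever journal published them."""
    params = urllib.parse.urlencode({
        "db": "pubmed",
        "term": term,
        "retmax": max_results,
        "retmode": "json",
    })
    with urllib.request.urlopen(f"{BASE}?{params}") as resp:
        data = json.load(resp)
    return data["esearchresult"]["idlist"]

if __name__ == "__main__":
    # The search is article-centred: relevance, not journal, drives retrieval.
    for pmid in pubmed_search("protein folding kinetics[tiab]"):
        print(pmid)
```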

The journal’s original role was largely about organizing and managing the physical delivery of research reports to readers, and then, as research specialisms developed, journals increasingly fulfilled the additional function of filtering research by subject. It is interesting to note, however, that in science three of the most prestigious journals – Nature, Science and PNAS – remain generalist titles. They provide a narrow horizontal slice through the vertical disciplines of science, which serves to identify – and hence confer credit on the authors of – articles reporting research of the highest quality. This reminds us of the journal’s importance in providing some sort of indication of research quality, which in an online context can inform a decision about whether to download and view a particular article.

It might seem that this quality-indicating function is rather prosaic. Does it really take all the infrastructure of journal publishing to implement it in a way that is adequate to the needs of online users? Most researchers are capable of gauging article quality pretty well, after all. (The viability of the peer review mechanism depends in part on that very fact.) But even if readers are happy to make up their own minds about the quality of a reported piece of research, funders need some assurance that the work they are financing is worthwhile. Established journal quality metrics make this task somewhat easier.

If, in future, authors submit their articles not to journals but simply inject them into online information spaces, then new ways of addressing the issue of quality will have to be found. Nature Publishing Group concluded after small-scale investigations that change to the status quo is not yet indicated. Even so, it does not seem especially fanciful to suppose that an online post-publication article rating system could be developed into an effective quality assessment regime – assuming that issues around self-recommendation and so on could be satisfactorily addressed, of course. There would need to be a community-level desire to make the system work, in terms of reader participation, but that kind of collective spirit is what underpins peer review now.
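
To make the idea concrete, here is a deliberately simple and entirely hypothetical sketch of how such a rating scheme might be structured. Every name, score range and threshold in it is invented for illustration; the point is merely that basic safeguards – against self-recommendation and against thin participation – are straightforward to express.

```python
# Hypothetical sketch of a post-publication article rating scheme.
# All names, score ranges and thresholds are invented for illustration.
from collections import defaultdict
from statistics import mean

class ArticleRatings:
    def __init__(self):
        self.authors = {}                 # article_id -> set of author names
        self.ratings = defaultdict(dict)  # article_id -> {rater: score}

    def register(self, article_id, authors):
        self.authors[article_id] = set(authors)

    def rate(self, article_id, rater, score):
        # Guard against self-recommendation: authors may not rate their own work.
        if rater in self.authors.get(article_id, set()):
            raise ValueError("authors may not rate their own articles")
        if not 1 <= score <= 5:
            raise ValueError("score must be between 1 and 5")
        self.ratings[article_id][rater] = score  # one (latest) vote per rater

    def quality(self, article_id, min_raters=3):
        # Report nothing until enough independent readers have weighed in.
        scores = list(self.ratings[article_id].values())
        return mean(scores) if len(scores) >= min_raters else None
```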

I suspect that over time this is the kind of direction the dissemination of research findings will head in. The journal will become increasingly virtualized and personal, in all respects secondary to the article and the article corpus. But it may take more time to get there than many maintain. The materiality of science means that it has outputs besides articles, and the knowledge it acquires about nature is made manifest in a range of ways – in new structures, datasets, and experimental methods, for example. Articles are vehicles of scientific knowledge about the world, but that knowledge has a non-verbal existence in scientific cognition and practice, and in objects besides scientific publications. (This reflects the notion that cognition about scientific topics is often non-verbal, being instead typically visual or mechanical in some way.) In some other subjects, however, the deployment of words in particular ways is effectively constitutive of the subject. Without the relevant patterns of linguistic logic there is nothing. I am thinking especially of some of the humanities, where particular forms of words are not just about a set of ideas, they actually come very close to being those ideas. And thus in the humanities research life arguably revolves around textual expressions and productions – including, centrally, the journal – rather more than it does in science.

(First published on the KnowledgeCraft WordPress blog on July 13, 2009)

Written by Alex Powell

February 2, 2012 at 11:01 am

The complexification of the web

Today’s web is not the web of the 1990s – hence the idea of Web 2.0 as articulated by Tim O’Reilly and others. The early emphasis on simple standards – HTTP, static HTML pages – represented a shift away from the 1980s PC-related trend towards locating processing power and bespoke algorithmic complexity on the desktop.

The alacrity with which a standards-based web was adopted was in part a reflection of the exasperation many felt at the complexity and brittleness of local proprietary software configurations running in a Windows environment. One result was the rise of the open-source movement and the LAMP paradigm. But with increasing network bandwidth and the continued proliferation of often complex web standards for a range of data types and operations on data (and growing user bases for a variety of application types), functionality has again become richer and applications have become thicker and more varied at both client and server ends.

Static pages of HTML have increasingly been supplanted by dynamic pages bringing together diverse resources from multiple locations. Page templates and style rules are often combined with dynamically sourced data and perhaps with advertising material selected according to user behaviour. Documents – if that term can still be considered apt – must therefore increasingly be seen as modular, distributed and context-dependent. Various web protocols and standards provide a framework for accessing software services and specific elements of functionality as opposed simply to data. These web services are intended for incorporation into other data processing frameworks rather than for human consumption.
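
As a rough illustration of that pattern, consider the sketch below, in which a ‘page’ is assembled on demand from a template plus data fetched from a web service. The service URL is hypothetical; any JSON-returning endpoint would serve equally well, and the same service could just as easily feed another program as a browser.

```python
# Sketch of a document assembled per request from a template and a web
# service. The service URL passed in is hypothetical; any JSON endpoint would do.
import json
import urllib.request
from string import Template

PAGE = Template("<html><body><h1>$title</h1><ul>$items</ul></body></html>")

def render(service_url):
    # Fetch a machine-readable payload, not a human-readable page...
    with urllib.request.urlopen(service_url) as resp:
        data = json.load(resp)
    # ...then merge it into presentation markup at the last moment.
    items = "".join(f"<li>{headline}</li>" for headline in data["headlines"])
    return PAGE.substitute(title=data["title"], items=items)
```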

As the online environment has become more established and useful in everyday life it has been adopted by a widening demographic. Web users have shown themselves willing to adapt to technological developments where they demonstrably serve their needs (e.g. for financial services or entertainment). Social media and the interactive, participative functionality of Web 2.0 have become integral aspects of a media-savvy, platform-plural lifestyle that has connotations of ‘cool’ rather than merely geeky. But accompanying these developments are not unreasonable fears of the disenfranchisement of the technology-averse.

The explosion of content on the web, in the form of corporate and institutional sites, personal pages, blogs, and media-sharing sites, demands new and improved ways of locating relevant content. Google apparently meets many of the search needs of most users, but feed formats such as RSS and associated aggregation tools are increasingly necessary for keeping on top of developments. A further level of delegation in respect of the discovery of relevant information is likely to come about through increasingly automated metadata generation and algorithmic content analysis. (This can be considered complementary to the arguably more labour-intensive data engineering efforts needed to realize visions of a ‘semantic web’.)
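
As a small illustration of feed-based aggregation, here is a sketch using the third-party Python package feedparser; the feed URLs are placeholders.

```python
# Minimal sketch of RSS aggregation with the third-party `feedparser`
# package (pip install feedparser). The feed URLs below are placeholders.
import feedparser

FEEDS = [
    "https://example.org/science/rss.xml",
    "https://example.org/technology/rss.xml",
]

def latest(feeds, per_feed=5):
    """Pull recent items from several feeds into a single reading list."""
    items = []
    for url in feeds:
        parsed = feedparser.parse(url)
        for entry in parsed.entries[:per_feed]:
            items.append((entry.get("title", ""), entry.get("link", "")))
    return items

for title, link in latest(FEEDS):
    print(f"{title}\n  {link}")
```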

The overall effect of the web’s evolution is to open up new kinds of information space, catering to the diverse needs and interests of multiple kinds of user, as well as new ways of meeting old information needs. This complexification threatens to make simple categorical prognostications and projections look merely simplistic.

(First posted on the KnowledgeCraft WordPress blog on July 11, 2009)

Written by Alex Powell

February 2, 2012 at 10:52 am

Posted in web technology