Epistemic Systems

[ cognition – information – knowledge – publishing – science – software ]

Archive for the ‘publishing’ Category

Open scientific communication and peer review: speculations and prognostications

with 5 comments

Much has been written of late about the status of peer review in STM publishing [1, 2, 3, 4], and publishers have begun tentatively exploring alternatives to the traditional peer review model [5, 6]. Rather than just review this work I thought it would be interesting to take seriously what people like Jan Velterop have been saying [7] and consider more speculatively how things might look in a hypothetical future from which pre-publication peer review is largely absent. If this is the future that awaits us, I suspect that it will be at least a decade or two before it comes to pass. By then, I assume, the majority of content will be available on an open access basis. How then will things stand with the scientific research article, individually and in aggregate?

In the medium-term scenario that I shall address shortly, scientific communication will undoubtedly have evolved, but the commonalities with today’s publishing model will remain palpable. I say this by way of justification for not considering in any depth certain more esoteric possibilities, e.g. recording (in real time) every second of activity involving a particular piece of lab equipment or all the work carried out by a particular researcher (headcams, anyone?), or even directly recording a researcher’s brain activity. (How much researcher hostility would there be to such ‘spy-in-the-lab/head’ scenarios?) One day, perhaps, reports of specific lines of research activity will indeed centre on continuous (or linked collections of) semantically structured, digitized time series of data recording experimental and cognitive activities. But for now I’ll stick with the idea that the definitive research record is the scientific article: a post hoc rational reconstruction of the processes that these conceivable but as yet largely non-existent, relatively unmediated datastreams would capture. (One can easily enough imagine how today’s article-centric model could give way by degrees to an activity-recording model. It would begin with datasets being attached to articles in ever-greater quantities, and with increasing connectedness between datasets, until a tipping point was reached at which the (largely non-textual) datasets in some sense outweighed the text itself. This would stimulate a restructuring such that the datasets formed the spine of the scientific record, with chunks of elucidatory and reflective text forming appendages to that spine. Maybe this would be the prelude to robots submitting research reports!)

OK, anyway, human-written articles remain for now. So what might change? For a start, I think that automated textual analysis and semantic processing will play a far bigger role than at present. I shall assume that on submission, as a matter of course, an author’s words will be automatically and extensively dissected, analysed, classified and compared with a variety of other content and corpora. One result will be the demise of the journal, a possibility that I discussed in a previous post – or possibly the proliferation of the journal, albeit as a virtual, personalizable object defined by search criteria. What we need for this, whatever the technical basis for its realization, is a single global scientific article corpus (let’s call it the GSC), with unified, standard mechanisms for addressing its contents.

Suppose I’ve written an article, which I upload/post for publication. (To where? A topic for a later post, perhaps.) Immediately it would be automatically scanned to identify key words and phrases. These are the words and combinations of words that occur more often in what I’ve written than would be expected given their frequency in some corpus sufficiently large to be representative of all scientific disciplines – perhaps in the GSC. (To some extent that’s just a summary of some experiments I did in the late 1990s, and I don’t think they were especially original then. Autonomy launched around the same time, and they were doing far fancier statistical stuff.) This is The Future we’re talking about, of course, and in The Future, as well as the GSC and associated term frequency database (TFDB), there will be a term associations and relations database (TARDB). (Google is working on something like this, it seems [8].) Maybe they could be combined into a single database of term frequency and relational data, but for now I’ll refer to two functionally distinct databases. Term-related data will need to be dynamically updated to reflect the growing contents of the GSC. Once an article’s key terms have been derived, it should be possible to classify it using the TARDB. Note that the TARDB won’t define rigid, exclusively hierarchical term relationships; we know the problems those can occasion. (For example, when we try to set up a system of folders and sub-folders for storing browser bookmarks. Should we add this post under ‘computing’, ‘publishing’, ‘web’ or ‘journal’? What are the ‘right’ hierarchical relationships between those categories?) Instead we’ll need to capture semantic relations between terms by way of a weighted system of tags, say, or weighted links. (Yes, maybe something like semantic nets.) In this way we’ll probably be able to classify content down at the paragraph level, so to some extent we’ll be able automatically to figure out semantic boundaries between different parts of an article. (This part’s about protein folding; this one’s about catalysis; etc.)
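To make this concrete, here’s a minimal sketch of the kind of corpus-relative key-term extraction just described, with the hypothetical TFDB reduced to a small in-memory background word count. A real system would handle multi-word phrases and vastly larger corpora, but the statistical core is little more than this:

```python
# Minimal sketch of corpus-relative key-term extraction. The background
# Counter stands in for the hypothetical TFDB; the score rewards terms that
# occur more often than their GSC-wide frequency would lead us to expect.
import math
import re
from collections import Counter

def tokens(text):
    """Lowercase word tokens; a real system would handle phrases too."""
    return re.findall(r"[a-z]+", text.lower())

def key_terms(article, background_counts, background_total, top_n=10):
    counts = Counter(tokens(article))
    total = sum(counts.values())
    scores = {}
    for term, observed in counts.items():
        # Expected count if the article matched the background distribution
        # (add-one smoothing covers terms absent from the background).
        bg_rate = (background_counts.get(term, 0) + 1) / (background_total + 1)
        expected = bg_rate * total
        scores[term] = observed * math.log(observed / expected)
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

background = Counter(tokens(
    "the of and to in is are results method analysis protein cell study"))
article = "protein folding kinetics depend on chaperone binding and misfolding"
print(key_terms(article, background, sum(background.values())))
```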

Natural language processing (NLP) is developing apace, of course, so extracting key words and phrases on the basis of statistical measures, and using those as a basis for classification and clustering, is probably just the lower limit of what we should be taking for granted. Once we start associating individual words with specific grammatical roles we can do much more – although I suspect that statistics-based approaches will retain great appeal precisely because they ride roughshod over grammatical details and assign chunks of content to broad processing categories. The software complexity required to handle all categories is thus correspondingly low. (So say my highly fallible intuitions!)
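As a hedged illustration of what grammatical information adds, here is a crude noun-phrase extractor built on part-of-speech tags. I use NLTK purely because it’s convenient (its tokenizer and tagger models must be downloaded first via nltk.download); nothing hangs on the choice of toolkit, and a serious system would use a proper chunker or parser:

```python
# Crude sketch: keep runs of adjectives/nouns as candidate noun phrases.
# Requires: nltk.download('punkt'); nltk.download('averaged_perceptron_tagger')
import nltk

def noun_phrases(text):
    tagged = nltk.pos_tag(nltk.word_tokenize(text))
    phrases, current = [], []
    for word, tag in tagged:
        if tag.startswith(("JJ", "NN")):  # adjective or noun
            current.append(word.lower())
        else:
            if len(current) > 1:          # keep multi-word runs only
                phrases.append(" ".join(current))
            current = []
    if len(current) > 1:
        phrases.append(" ".join(current))
    return phrases

print(noun_phrases("Misfolded proteins aggregate when molecular chaperones fail."))
```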

Anyway, when we can easily establish roughly what an article is about, why do we need the journal? If a user wants to find all the articles published recently to do with protein folding, no problem. Likewise if they’d rather survey the field of protein engineering and enzyme catalysis, say. What’s that you say? Quality? Oh I see, you want just the good articles. Well, I suppose that was hoping for too much: we’re going to have to think, after all, about how we deal with the issue of quality in a post-peer-review world. That’s tricky, and I don’t have all the answers. But it’s interesting to think about what we might be able to do to make post-publication peer evaluation work as a reliable article quality assessment mechanism.

We need a system that encourages ‘users’, i.e. readers, to provide useful feedback on published articles. (Something I should have said earlier, too: I’m assuming that all articles that survive certain minimal filters to remove spam and abusive submissions do get published. Those filters might involve automated scanning related to, or forming part of, the key-term identification processes outlined above, or human eyeballs may need to be involved.) Does user feedback need to be substantive, or would a simple system based on numerical ratings suffice? The publishing traditionalist in me says the former, but what if I had the reassurance of knowing that the overall rating an article gained reflected the views of respected authorities on the subject it addresses? Elsewhere [9] I briefly outlined (in the comments) a simple scheme based on different user categories, with the ratings of users being weighted according to their category. (I previously assumed the continued existence of journals and editorial boards. Here I want to assume that in The Future we have done away with those.) The ratings of users who were themselves highly rated authors would count for more than those of non-authors. Given what I have just said about article analysis and classification, we can see that it would not be too difficult to compute an author’s areas of expertise. Perhaps an author’s ratings should be highly weighted, relative to those of non-authors, only in relation to articles related to their fields of expertise, or in proportion to the relatedness of their expertise to the topic of the article in question. (We’d need to devise suitable measures of disciplinary relatedness, perhaps based on citation overlaps as well as the term relationships and associations represented in the TARDB.) To be really useful a hybrid review mechanism would probably be needed, combining simple numerical ratings with provision for making substantive comments. The former would enable users to select articles meeting specific assessment criteria, e.g. find me all articles rated 60% or more on average. If users were categorized according to their ‘rating worth’ as discussed above – with the ratings of well-rated authors being weighted most highly – then users could search just for articles rated highly by well-rated authors. (Raters could of course move up or down the scale according to their publishing history.)
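The arithmetic of such a scheme is simple enough to sketch. Everything here is invented for illustration: the author_weight values, and the relatedness figure, which in The Future would come from the TARDB-based measures of disciplinary relatedness just mentioned:

```python
# Illustrative sketch of expertise-weighted article ratings. Weights and
# relatedness values are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Rating:
    score: float          # e.g. a percentage, 0-100
    author_weight: float  # higher for well-rated authors; 1.0 for non-authors
    relatedness: float    # 0-1 match between rater expertise and article topic

def article_score(ratings):
    """Weighted mean: each rating counts in proportion to the rater's
    standing as an author and their disciplinary relatedness."""
    weights = [r.author_weight * r.relatedness for r in ratings]
    if not any(weights):
        return None
    return sum(r.score * w for r, w in zip(ratings, weights)) / sum(weights)

ratings = [
    Rating(score=80, author_weight=3.0, relatedness=0.9),  # well-rated expert
    Rating(score=40, author_weight=1.0, relatedness=0.2),  # distant non-author
]
print(article_score(ratings))  # dominated by the expert's view: ~77
```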

A rating system like this would depend on the existence of a trustworthy user identification and authentication system, even if the implied requirement for users to log in would be in tension with the aim of encouraging readers to rate articles. Anonymity is another potential problem area, in that article raters will doubtless sometimes be keen for their ratings to remain anonymous. There is no reason why (with a little ingenuity) it would not be possible to ensure anonymity by default, with the possibility of voluntary anonymity-lifting if desired by all participants in the rating process. (It would be important to translate a rater’s user account identifier into a visible public identifier unique to each rated article. Otherwise an author who learned a rater’s identity would be able to recognize that rater whenever their identifier appeared against other articles, and could communicate it to others.) The area of author–rater relations might in fact be one where a role would exist for a trusted third party who was aware of the identities of article raters. This would enable them to assure authors of a rater’s credentials, should authors deem a particular rating suspect or malign in some way, while not revealing the rater’s identity without the rater’s agreement.
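The per-article public identifier is straightforward with a little standard cryptography. A minimal sketch, assuming a secret key held by the publishing system or the trusted third party: the same rater always gets the same alias on a given article, but their aliases on different articles cannot be linked without the key:

```python
# Sketch: derive a stable, per-article rater pseudonym by keyed hashing.
# The key and identifiers are hypothetical; only the key-holder can link
# a pseudonym back to a user account.
import hashlib
import hmac

SERVICE_KEY = b"secret held by the trusted party"  # placeholder

def rater_pseudonym(user_id: str, article_id: str) -> str:
    msg = f"{user_id}:{article_id}".encode()
    return hmac.new(SERVICE_KEY, msg, hashlib.sha256).hexdigest()[:12]

print(rater_pseudonym("user-1234", "article-42"))
print(rater_pseudonym("user-1234", "article-43"))  # unlinkably different
```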

Another issue is how to make it hard to ‘game’ the system. The obvious risk is that a user sets up an account from which to make derogatory assessments of rivals’ work that is distinct from the account they use when submitting their own work. However, the rating weighting scheme can help here, inasmuch as ratings count for more when they come from well-regarded authors. It will thus be important to ensure that authors – well-rated ones especially – are encouraged to rate the work of others, since a malign rater’s views will count for little when they come from a user account not associated with authorship, in comparison with those of a rater who is also a well-regarded author. Assuming an OA model in which authors must meet modest up-front publishing costs (mitigated by knowing that one’s submission is guaranteed to be accepted), it may be that inducements to rate can be offered, in terms of reduced article-processing charges for authors who agree to rate a certain number of articles, say. Not that one would want to discourage constructive critics who happen not to be authors themselves. Perhaps one could allow and encourage authors – again, with a weighting proportional to the quality of the ratings given to their work by others – to rate the comments made by other users. Overall I am optimistic that it is not beyond the wit of man to devise a system that establishes a virtuous dynamic around authorship and criticism, based on a system of article ratings that also allows for substantive comment.

What else might be possible or desirable? One area where it may be possible to use automated intelligence and the resources of the GSC/TFDB/TARDB to improve on existing models and mechanisms falls under the broad heading of assessing novelty and priority, and discerning relationships with existing work. Recently I heard of a researcher who came across a newly published article whose bibliography listed a number of publications that the researcher had cited in their own earlier work – work that had been available online for several years. To their knowledge the works they had cited had not previously been cited by others in the field. The new article did not cite the researcher’s work. It is impossible to know whether the author of the new article was aware of the researcher’s work, and was informed by it at least to the extent that they saw fit to seek out many of the same references. But once possibilities for automated content analysis are exploited more fully, it may become less relevant whether an author is scrupulous in their citation of related work. Publishing systems will be able simply to present links to all the content in the GSC that represents a semantic match with a particular article, and will be able to indicate the order of publication. (When done well, this list of related material would amount to something like a supplementary bibliography.) I mentioned citation overlap earlier, in relation to assessing the relatedness of different research areas. But, as the example above indicates, citation overlap might also provide an additional dimension for the automatic estimation of one aspect of originality.
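The citation-overlap signal, at least, is easy to sketch. Assuming bibliographies normalized to identifiers such as DOIs (the ones below are invented), a simple Jaccard measure captures the overlap; an article whose reference list overlaps heavily with earlier work that it does not itself cite would be a natural candidate for surfacing alongside that work:

```python
# Sketch: Jaccard overlap between two bibliographies (sets of reference IDs).
def citation_overlap(refs_a, refs_b):
    a, b = set(refs_a), set(refs_b)
    return len(a & b) / len(a | b) if (a or b) else 0.0

# Hypothetical reference lists, keyed by DOI.
new_article = {"10.1/aaa", "10.1/bbb", "10.1/ccc", "10.1/ddd"}
earlier_article = {"10.1/aaa", "10.1/bbb", "10.1/ccc", "10.1/eee"}

score = citation_overlap(new_article, earlier_article)
print(f"overlap = {score:.2f}")  # high overlap + no citation link -> flag it
```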

[1] http://www.michaeleisen.org/blog/?p=694

[2] http://gowers.wordpress.com/2011/11/03/a-more-modest-proposal/

[3] http://www.frontiersin.org/Computational_Neuroscience/10.3389/fncom.2011.00055/abstract

[4] http://www.cell.com/trends/ecology-evolution/fulltext/S0169-5347(12)00024-9

[5] http://blogs.nature.com/peer-to-peer/2006/12/report_of_natures_peer_review_trial.html

[6] http://blogs.nature.com/peer-to-peer/

[7] http://theparachute.blogspot.com/2012/01/holy-cow-peer-review.html

[8] http://mashable.com/2012/02/13/google-knowledge-graph-change-search/

[9] http://www.timeshighereducation.co.uk/story.asp?sectioncode=26&storycode=416119&c=1


Written by Alex Powell

March 3, 2012 at 6:13 pm

Friends de-united? Academics, publishers, and open access

with one comment

An unfortunate feature of some of the recent debates about open access and the role of publishers, culminating now in the widely publicized boycott of Elsevier journals, has been the way in which the amount of light cast about has been matched or exceeded by the quantity of heat simultaneously liberated. Perhaps it’s inevitable: even rational people feel strongly about these things. My own standpoint on academic publishing matters is, I think, distinct from some of the most heavily represented academic and commercial positions, and could perhaps be characterized as citizen-centred.

As someone who in the past has carried out R&D within a non-academic setting, and who sometimes engages in what I suppose should be called independent research, I’m all for open access. It really is frustrating to go to the website of some journal in order to find that all-important article (quite probably written by someone whose work is supported by one of the taxpayer-funded research councils), only to be greeted by a big fat paywall. The experience soon becomes familiar: one meets it when seeking journal content from any of an army of publishers, large and small, ranging from the generalist and overtly mercantile to specialists affiliated professionally with particular disciplines. At (typically) around £30 per article download (approx. $50), independent research – or even the simple exercise of pure curiosity, or an old-fashioned desire for self-improvement – that connects closely with the current literature is not a pastime for the impecunious.

That said, I’m sceptical that academia, at the collective and institutional level rather than at the level of individual researchers (amongst whom I count many good friends), is invariably the innocent party in all of this. Sometimes in the groves of academe one sees something of the self-interestedness and closed shop protectionism of a medieval guild, and current debates should be viewed in the context of a publishing model that gives academics near-exclusive control over what gets published. A plausible case could probably be made for saying that one of the journal’s functions has been as a weapon with which to exert control over disciplinary territory. On the academic side of the drawbridge, the thinking often seems to be, roughly, that so long as my institution can pay the subscriptions, paywalls aren’t a problem for me.

Latent in this issue of disciplinary control is the complex topic of peer review. I shall save looking at that for future posts. For now, I want just to observe that academic hostility to publishers is largely a modern phenomenon. SPARC was formed as recently as 1997 – and that development came from the library community rather than directly from researchers. Computers have much to answer for here, for a combination of reasons. In sum, those reasons boil down to the fact that when journals were physical objects as well as intellectual ones, their production required the application of tangible, physical skills, such as typesetting and printing. (Yes, a university press was once precisely that. Oxford University Press stopped its own presses only in 1989.) Coordinating the activities of all those specialist trades took time and knowledge that publishers could supply. It took money too, of course. The printed journal had to be paid for, and since it would be accessible only locally at the places to which copies were distributed, it made sense to aggregate all the production costs and recoup them via subscription charges, paid primarily by the institutional libraries that constituted the bulk of customers.

Then came the personal computer, word processing and desktop typesetting. Authors were generally content to limit their efforts at styling their articles to what could be accomplished using basic word processor capabilities. Was that because typesetting turned out to be unexpectedly arcane and surprisingly difficult to do well, or just because publishers wanted to typeset journals their way and didn’t want to have to unpick each author’s attempt at attaining an aesthetically pleasing article? I’m not sure, but I guess the latter. Still, an awareness of DTP probably made a few authors wonder whether they couldn’t do more of what the publisher did. (And in mathematics, TeX and LaTeX actually did put typesetting in the hands of authors.)

In the early 1990s the Web arrived, capitalizing on the infrastructure of the internet. Global information dissemination had never been so easy. People began to ask how essential, in any case, all that nice styling work done by publishers really was. And then: do we really need printed journals at all? Thus we reach the present day, when on the face of it the case for the publisher has never looked so dubious. Learning that Elsevier makes profits of 40% on STM journals does little to strengthen the case, but when you come to think about it, the issue of profit might be something of a red herring. Suppliers to, and participants in, the scientific research process other than publishers are rarely castigated for daring to take more in revenue than is needed to cover their costs. That NMR spectrometer? It came from a profitable company. Your reagents supplier of choice? Probably not a big loss-maker. And the maker of the laptop on which you’re writing up your results? $7bn net profits in Q4 of 2011, I gather.

So why should publishers’ profits arouse such ire? Presumably because it is felt that now, in the digital era, publishers contribute essentially no value at all. However, if it really were the case that publishers contributed precisely nothing, wouldn’t their profit margins be stratospherically higher than current levels? Alright, it could be that all those thousands of people working for publishers really are just sitting around drinking coffee all day, but if that were the case I suspect the shareholders would have found out by now. No, more likely is that the majority of those publishers’ employees (many of them graduates with a fondness still for their original academic disciplines) are doing substantive jobs that make a real contribution to the dissemination of academic research findings in a form which, with the occasional sorry exception, provides a sound basis for further research. Undoubtedly many of the skills involved are less materially grounded than the old crafts of hot-metal typesetting and printing, and for sure the costs of producing an all-electronic journal are considerably lower than those involved with printed journals. But it would be a mistake to think that journals – even electronic journals – cost nothing to develop, publish, maintain and extend. If profit truly is without honour then maybe we should think seriously about the public sector taking over the job of journal publishing. But a small voice in the back of my head wonders by how much overall publishing costs would actually fall. My suspicion is that the like-for-like publishing bill would end up roughly the same as now – maybe slightly less, perhaps slightly more – albeit with some of the costs redistributed to the point of near-invisibility.

OK, for the sake of argument I’ve adopted a deliberately charitable stance towards the commercial sector. There is that uncomfortable sense in which publishers sell back material to its very creators (amongst others). But I really do think we should keep such issues as the profit levels, seeming monopoly status, or apparently flexible ethical values of any one company separate from the issue of open access. If the regulatory framework within which academic publishers operate now seems shaky or defective then fine, let’s go about fixing it. Open access, however, is a distinct issue relating to a transcending idea: that knowledge is one of the glories of humankind, and the world is enhanced when it is unconfined. Some have suggested that there never was a business model for academic publishing. I don’t believe that. For many years, academics and publishers made a great couple. On the one hand publishing was (as it remains) an intellectually satisfying business, based on meeting academic needs; on the other, there were disciplines to be founded, defended and extended. There was a quid pro quo, in other words. The electronic age undoubtedly presents challenges, but I think those are shared more widely than many suppose. What isn’t clear to me is the degree to which a world in which control over the production and consumption of knowledge lies solely in the hands of academics is better than one in which some of that control is shared with publishers. A different but perhaps more interesting question is, how can we make knowledge creation a more genuinely open and participatory, and less paternalistic, process? We may have to think beyond journals, and of openness to more than merely the outputs of research.

Written by Alex Powell

February 13, 2012 at 3:25 pm

Posted in open access, publishing

The death of the journal?

with one comment

Currently much of academic publishing is organized around the production of subject-specific journals. But for how much longer? In the online environment the article tends to look like the more ontologically fundamental unit of information. Researchers can use search tools like PubMed to search across multiple journal titles and locate just those articles directly relevant to their specific research interests. Why should the journal in which an article appears matter, if the article reports exciting and original research on a relevant topic?

The journal’s original role was largely about organizing and managing the physical delivery of research reports to readers, and then as research specialisms developed journals increasingly fulfilled the additional function of filtering research by subject. It is interesting to note, however, that in science three of the most prestigious journals – Nature, Science and PNAS – remain generalist titles. They provide a narrow horizontal slice through the vertical disciplines of science, which serves to identify – and hence confer credit on the authors of – articles reporting research of the highest quality. This reminds us of the journal’s importance in providing some sort of indication of research quality, which in an online context can inform a decision about whether to download and view a particular article.

It might seem that this quality indicating function is rather prosaic. Does it really take all the infrastructure of journal publishing to implement it in a way that is adequate to the needs of online users? Most researchers are capable of gauging article quality pretty well, after all. (The viability of the peer review mechanism depends in part on that very fact.) But even if readers are happy to make up their own minds about the quality of a reported piece of research, funders need some assurance that the work they are financing is worthwhile. Established journal quality metrics make this task somewhat easier.

If in future authors submit their articles not to journals but simply inject them into online information spaces then new ways of addressing the issue of quality will have to be found. Nature Publishing Group concluded after small-scale investigations that change to the status quo is not yet indicated. Even so, it does not seem especially fanciful to suppose that an online post-publication article rating system could be developed into an effective quality assessment regime – assuming that issues around self-recommendation and so on could be satisfactorily addressed, of course. There would need to be a community-level desire to make the system work, in terms of reader participation, but the relevant collective spirit is what underpins peer review now.

I suspect that over time this is the kind of direction the dissemination of research findings will head in. The journal will become increasingly virtualized and personal, in all respects secondary to the article and the article corpus. But it may take more time to get there than many maintain. The materiality of science means that it has outputs besides articles, and the knowledge it acquires about nature is made manifest in a range of ways – in new structures, datasets, and experimental methods, for example. Articles are vehicles of scientific knowledge about the world, but that knowledge has a non-verbal existence in scientific cognition and practice, and in objects besides scientific publications. (This reflects the notion that cognition about scientific topics is often non-verbal, being instead typically visual or mechanical in some way.) In some other subjects, however, the deployment of words in particular ways is effectively constitutive of the subject. Without the relevant patterns of linguistic logic there is nothing. I am thinking especially of some of the humanities, where particular forms of words are not just about a set of ideas, they actually come very close to being those ideas. And thus in the humanities research life arguably revolves around textual expressions and productions – including, centrally, the journal – rather more than it does in science.

(First published on the KnowledgeCraft wordpress blog on July 13, 2009)

Written by Alex Powell

February 2, 2012 at 11:01 am