Epistemic Systems

[ cognition – information – knowledge – publishing – science – software ]

Open scientific communication and peer review: speculations and prognostications


Much has been written of late about the status of peer review in STM publishing [1, 2, 3, 4], and publishers have begun tentatively exploring alternatives to the traditional peer review model [5, 6]. Rather than just review this work I thought it would be interesting to take seriously what people like Jan Velterop have been saying [7] and consider more speculatively how things might look in a hypothetical future from which pre-publication peer review is largely absent. If this is the future that awaits us, I suspect that it will be at least a decade or two before it comes to pass. By then, I assume, the majority of content will be available on an open access basis. How then will things stand with the scientific research article, individually and in aggregate?

In the medium-term scenario that I shall address shortly, scientific communication will undoubtedly have evolved, but the commonalities with today’s publishing model will remain palpable. I say this by way of justification for not considering in any depth certain more esoteric possibilities, e.g. for recording (in real time) every second of activity involving a particular piece of lab equipment or all the work carried out by a particular researcher (headcams anyone?), or even for directly recording a researcher’s brain activity. (How much researcher hostility would there be to such ‘spy-in-the-lab/head’-type scenarios?) One day perhaps reports of specific lines of research activity will indeed centre on a continuous, or a linked collection of, semantically structured, digitized time series of data recording experimental and cognitive activities.  But for now I’ll stick with the idea that the definitive research record is the scientific article qua post hoc rational reconstruction of the processes to which these conceivable but as yet largely non-existent, relatively unmediated datastreams would relate. (One can imagine easily enough how today’s article-centric model could give way by degrees to an activity recording model. It would begin with datasets being attached to articles in ever-greater quantities, and with increasing connectedness between datasets, until a tipping point was reached where the (largely non-textual) datasets in some sense outweighed the text itself. This would stimulate a restructuring such that the datasets now formed the spine of the scientific record, with chunks of elucidatory and reflective text forming appendages to that spine. Maybe this would be the prelude to robots submitting research reports!)

OK, anyway, human-written articles remain for now. So what might change? For a start, I think that automated textual analysis and semantic processing will play a far bigger role than at present. I shall assume that on submission, as a matter of course, an author’s words will be automatically and extensively dissected, analysed, classified and compared with a variety of other content and corpora. One result will be the demise of the journal, a possibility that I discussed in a previous post – or possibly the proliferation of the journal, albeit as a virtual, personalizable object defined by search criteria. What we need for this, whatever the technical basis for its realization, is a single global scientific article corpus (let’s call it the GSC), with unified, standard mechanisms for addressing its contents.

Suppose I’ve written an article, which I upload/post for publication. (To where? A topic for a later post, perhaps.) Immediately it would be automatically scanned to identify key words and phrases. These are the words and combinations of words that occur more often in what I’ve written than would be expected given their frequency in some corpus sufficiently large to be representative of all scientific disciplines – perhaps in the GSC. (To some extent that’s just a summary of some experiments I did in the late 1990s, and I don’t think they were especially original then. Autonomy launched around the same time, and they were doing far fancier statistical stuff.) This is The Future we’re talking about, of course, and in The Future, as well as the GSC and its associated term frequency database (TFDB), there will be a term associations and relations database (TARDB). (Google is working on something like this, it seems [8].) Maybe they could be combined into a single database of term frequency and relational data, but for now I’ll refer to two functionally distinct databases. Term-related data will need to be dynamically updated to reflect the growing contents of the GSC. Once an article’s key terms have been derived, it should be possible to classify it using the TARDB. Note that this won’t define rigid, exclusively hierarchical term relationships; we know the problems they can occasion. (For example, when we try to set up a system of folders and sub-folders for storing browser bookmarks. Should we add this post under ‘computing’, ‘publishing’, ‘web’ or ‘journal’? What are the ‘right’ hierarchical relationships between those categories?) Instead we’ll need to capture semantic relations between terms by way of a weighted system of tags, say, or weighted links. (Yes, maybe something like semantic nets.) In this way we’ll probably be able to classify content down at the paragraph level, so to some extent we’ll be able automatically to figure out semantic boundaries between different parts of an article. (This part’s about protein folding; this one’s about catalysis; etc.)
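
To make the keyness idea a little more concrete, here is a minimal Python sketch of scoring terms against a background corpus standing in for the GSC/TFDB. The function and variable names are my own invention, and the smoothed log-ratio is just one statistic among several one might use (log-likelihood and chi-squared are common alternatives):

```python
import math
from collections import Counter

def key_terms(doc_tokens, corpus_freqs, corpus_size, top_n=10):
    """Rank terms by how much more often they occur in the document than
    a large reference corpus (the GSC/TFDB of the post) would predict,
    using a smoothed log-ratio as the keyness score."""
    doc_counts = Counter(doc_tokens)
    doc_size = sum(doc_counts.values())
    scores = {}
    for term, count in doc_counts.items():
        doc_rate = count / doc_size
        corpus_rate = (corpus_freqs.get(term, 0) + 0.5) / (corpus_size + 0.5)
        scores[term] = math.log2(doc_rate / corpus_rate)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

# Toy usage: 'folding' and 'kinetics' stand out against a generic background.
doc = "protein folding kinetics of the protein under study".split()
background = Counter({"the": 50000, "of": 30000, "protein": 40, "folding": 5,
                      "kinetics": 10, "under": 2000, "study": 500})
print(key_terms(doc, background, corpus_size=sum(background.values())))
```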

Natural language processing (NLP) is developing apace, of course, so extracting key words and phrases on the basis of statistical measures, and using those as a basis for classification and clustering, is probably just the lower limit of what we should be taking for granted. Once we start associating individual words with specific grammatical roles we can do much more – although I suspect that statistics-based approaches will retain great appeal precisely because they ride roughshod over grammatical details and assign chunks of content to broad processing categories. The software complexity required to handle all categories is thus correspondingly low. (So say my highly fallible intuitions!)
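
By way of a small illustration of the step from pure frequency counting to grammatical roles, here is a sketch using NLTK, one possible toolkit among many; the example sentence and the decision to keep only nouns and adjectives as key-term candidates are assumptions of mine:

```python
import nltk

# One-off model downloads (uncomment on first run):
# nltk.download("punkt"); nltk.download("averaged_perceptron_tagger")

sentence = ("The enzyme catalyses the hydrolysis of the substrate "
            "under mildly acidic conditions.")
tokens = nltk.word_tokenize(sentence)
tagged = nltk.pos_tag(tokens)  # [(word, part-of-speech tag), ...]

# Keep nouns and adjectives as candidate key-term components: one small
# step beyond purely frequency-based extraction.
candidates = [w for w, tag in tagged if tag.startswith(("NN", "JJ"))]
print(candidates)  # e.g. ['enzyme', 'hydrolysis', 'substrate', 'acidic', 'conditions']
```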

Anyway, when we can easily establish roughly what an article is about, why do we need the journal? If a user wants to find all the articles published recently to do with protein folding, no problem. Likewise if they’d rather survey the field of protein engineering and enzyme catalysis, say. What’s that you say? Quality? Oh I see, you want just the good articles. Well, I suppose it was hoping for too much: we’re going to have to think, after all, about how we deal with the issue of quality, post peer review. That’s tricky, and I don’t have all the answers. But it’s interesting to think about what we might be able to do to make post-publication peer evaluation work as a reliable article quality assessment mechanism.

We need a system that encourages ‘users’, i.e. readers, to provide useful feedback on published articles. (Something I should have said earlier too: I’m assuming that all articles that survive certain minimal filters to remove spam and abusive submissions do get published. Those filters might involve automated scanning related to, or as part of, the key terms identification processes outlined above, or human eyeballs may need to be involved.) Does user feedback need to be substantive, or would a simple system based on numerical ratings suffice? The publishing traditionalist in me says the former, but what if I had the reassurance of knowing that the overall rating an article gained reflected the views of respected authorities on the subject addressed by the article? Elsewhere [9] I briefly outlined (in the comments) a simple scheme based on different user categories, with the ratings of users being weighted according to their category. (I previously assumed the continued existence of journals and editorial boards. Here I want to assume that in The Future we have done away with those.) The ratings of users who were themselves highly rated authors would count for more than those of non-authors. Given what I have just said about article analysis and classification, we can see that it would not be too difficult to compute an author’s areas of expertise. Perhaps an author’s ratings should be highly weighted, relative to those of non-authors, only in relation to articles related to their fields of expertise, or in proportion to the relatedness of their expertise to the topic of the article in question. (We’d need to devise suitable measures of disciplinary relatedness, perhaps based on citation overlaps as well as article term relationships and associations as represented in the TARDB.) To be really useful a hybrid review mechanism would probably be needed, combining simple numerical ratings with provision for making substantive comments. The former would enable users to select articles meeting specific assessment criteria, e.g. ‘find me all articles rated 60% or more on average’. If users were categorized according to their ‘rating worth’ as discussed above – with the ratings of well-rated authors being weighted most highly – then users could search just for articles rated highly by well-rated authors. (Raters could of course move up or down the scale according to their publishing history.)
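
Here is a rough Python sketch of how such weighted aggregation might work. The particular formula – author standing multiplied by topical relatedness, with a small floor weight for non-authors – is purely illustrative, and every name in it is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Rating:
    score: float               # the rating given, on a 0-100 scale
    rater_author_score: float  # 0 for non-authors, else mean rating of the rater's own articles
    relatedness: float         # 0-1 match between rater expertise and article topic (via the TARDB)

def weighted_article_score(ratings, non_author_weight=0.1):
    """Aggregate ratings so that well-rated authors in closely related fields
    count for most, while non-author ratings still count a little."""
    total, weight_sum = 0.0, 0.0
    for r in ratings:
        weight = non_author_weight
        if r.rater_author_score > 0:
            weight = max((r.rater_author_score / 100.0) * r.relatedness, non_author_weight)
        total += weight * r.score
        weight_sum += weight
    return total / weight_sum if weight_sum else None

ratings = [Rating(80, rater_author_score=90, relatedness=0.9),  # expert in the field
           Rating(40, rater_author_score=0, relatedness=0.0),   # non-author
           Rating(70, rater_author_score=60, relatedness=0.3)]  # author in a distant field
print(weighted_article_score(ratings))  # roughly 75, dominated by the expert's view
```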

A rating system like this would depend on the existence of a trustworthy user identification and authentication system, even if the implied requirement for users to log in would be in tension with the aim of encouraging readers to rate articles. Anonymity is another potential problem area, in that article raters will doubtless sometimes be keen for their ratings to remain anonymous. There is no reason why (with a little ingenuity) it would not be possible to ensure anonymity by default, with the possibility of voluntary anonymity-lifting if desired by all participants in the rating process. (It would be important to translate a rater’s user account identifier into a visible public identifier that was unique to a particular rated article. Otherwise an author who was made aware of a rater’s identity would be able to recognize that rater whenever their identifier appeared against other articles, and could communicate their identity to others.) The area of author–rater relations might in fact be one where a role would exist for a trusted third party who was aware of the identities of article raters. This would enable them to assure authors of a rater’s credentials, should authors deem a particular rating to be suspect or malign in some way, while not revealing the rater’s identity if the rater did not agree to it.
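
One simple way of realizing the per-article public identifier would be to derive it from the rater’s account identifier and the article identifier using a keyed hash held by the platform (or by the trusted third party). A sketch, with an obviously made-up secret:

```python
import hashlib
import hmac

SERVER_SECRET = b"held only by the platform or trusted third party"  # hypothetical key

def public_rater_id(rater_account_id: str, article_id: str) -> str:
    """Derive a per-article pseudonym for a rater. The same rater gets a
    different public identifier on every article, so unmasking them in one
    rating process reveals nothing about their ratings elsewhere."""
    msg = f"{rater_account_id}:{article_id}".encode()
    return hmac.new(SERVER_SECRET, msg, hashlib.sha256).hexdigest()[:12]

print(public_rater_id("rater-42", "article-A"))  # one pseudonym
print(public_rater_id("rater-42", "article-B"))  # a different pseudonym, same rater
```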

Another issue is how to make it hard to ‘game’ the system. The obvious risk is that a user sets up an account, distinct from the one they use when submitting their own work, from which to make derogatory assessments of rivals’ work. However, the rating weighting scheme can help here, inasmuch as ratings count for more when they come from well-regarded authors. It will thus be important to ensure that authors – well-rated ones especially – are encouraged to rate the work of others, since a malign rater’s views will count for little when they come from a user account not associated with authorship, in comparison with those of a rater who is also a well-regarded author. Assuming an OA model in which authors must meet modest up-front publishing costs (mitigated by knowing that one’s submission is guaranteed to be accepted), it may be that inducements to rate can be offered, in terms of reduced article-processing charges for authors who agree to rate a certain number of articles, say. Not that one would want to discourage constructive critics who happen not to be authors themselves. Perhaps one could allow and encourage authors – again, with a weighting proportional to the quality of the ratings given to their work by others – to rate the comments made by other users. Overall I am optimistic that it is not beyond the wit of man to devise a system that establishes a virtuous dynamic around authorship and criticism, based on a system of article ratings that also allows for substantive comment.
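
By way of illustration only, the rate-for-a-discount inducement could be as simple as the following; the percentages are plucked from the air:

```python
def discounted_apc(base_apc, reviews_promised, discount_per_review=0.05, max_discount=0.5):
    """Each article an author commits to rate knocks a fixed fraction off
    their article-processing charge, up to a cap."""
    discount = min(reviews_promised * discount_per_review, max_discount)
    return round(base_apc * (1 - discount), 2)

print(discounted_apc(1000, reviews_promised=3))   # 850.0
print(discounted_apc(1000, reviews_promised=20))  # capped at 500.0
```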

What else might be possible or desirable? One area where it may be possible to use automated intelligence and the resources of the GSC/TFDB/TARDB to effect improvements on existing models and mechanisms falls under the broad heading of assessment of novelty/priority and the discernment of relationships with existing work. Recently I heard of a researcher who came across a newly published article, the bibliography of which listed a number of publications that the researcher had cited in their own earlier work, which had been available online for several years. To their knowledge the works they had cited had not previously been cited by others in the field. The new article did not cite the researcher’s work. It is impossible to know whether the author of the new article was aware of the researcher’s work, and was informed by it at least to the extent that they saw fit to seek out many of the same references. But once possibilities for automated content analysis are exploited more fully, it may become less relevant whether an author is scrupulous in their citation of related work. Publishing systems will be able simply to present links to all the content in the GSC that represents a semantic match with a particular article, and will be able to indicate the order of publication. (When done well this list of related material would amount to something like a supplementary bibliography.) I mentioned citation overlap earlier, in relation to assessing the relatedness of different research areas. But, as the example above indicates, citation overlap might also represent an additional dimension for the automatic estimation of one aspect of originality.
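
A crude sketch of how such a ‘supplementary bibliography’ might be computed, combining key-term overlap with citation overlap (Jaccard similarity on both, equally weighted, purely for illustration) and reporting the order of publication:

```python
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def related_work(new_article, corpus, threshold=0.3):
    """Rank earlier GSC items against a new article by combining key-term
    overlap with citation overlap, and report publication order."""
    hits = []
    for old in corpus:
        score = 0.5 * jaccard(new_article["terms"], old["terms"]) \
              + 0.5 * jaccard(new_article["refs"], old["refs"])
        if score >= threshold:
            hits.append((round(score, 2), old["id"], old["published"]))
    return sorted(hits, reverse=True)

new = {"id": "X", "published": 2012,
       "terms": {"protein", "folding", "chaperone"}, "refs": {"r1", "r2", "r3", "r4"}}
corpus = [{"id": "Y", "published": 2008,
           "terms": {"protein", "folding", "misfolding"}, "refs": {"r1", "r2", "r3", "r9"}},
          {"id": "Z", "published": 2010,
           "terms": {"catalysis", "enzyme"}, "refs": {"r7"}}]
print(related_work(new, corpus))  # [(0.55, 'Y', 2008)]: Y predates X and overlaps heavily
```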

[1] http://www.michaeleisen.org/blog/?p=694

[2] http://gowers.wordpress.com/2011/11/03/a-more-modest-proposal/

[3] http://www.frontiersin.org/Computational_Neuroscience/10.3389/fncom.2011.00055/abstract

[4] http://www.cell.com/trends/ecology-evolution/fulltext/S0169-5347(12)00024-9

[5] http://blogs.nature.com/peer-to-peer/2006/12/report_of_natures_peer_review_trial.html

[6] http://blogs.nature.com/peer-to-peer/

[7] http://theparachute.blogspot.com/2012/01/holy-cow-peer-review.html

[8] http://mashable.com/2012/02/13/google-knowledge-graph-change-search/

[9] http://www.timeshighereducation.co.uk/story.asp?sectioncode=26&storycode=416119&c=1


Written by Alex Powell

March 3, 2012 at 6:13 pm

5 Responses


  1. Interesting essay. One question: why no name or identity on the article or blog? Prefer anonymity?

    Tim McCormick

    March 4, 2012 at 5:44 pm

    • Thanks Tim. The anonymity is more accidental than deliberate – I come clean about my identity in the first post, but that’s not something people will necessarily see of course. A bit of a redesign is probably in order! Alex (Powell)

      epistemicsystems

      March 4, 2012 at 6:25 pm

  2. […] Open scientific communication and peer review: speculations and prognostications « Epistemic System… Well, I suppose it was hoping for too much: we’re going to have to think, after all, about how we deal with the issue of quality, post peer review. That’s tricky, and I don’t have all the answers. But it’s interesting to think about what we might be able to do to make post-publication peer evaluation work as a reliable article quality assessment mechanism. […]

  3. Hi Alex,

    I’m running http://paperrater.org, where everybody can rate and comment on papers that have appeared on arXiv. The questions you raised in your post are absolutely relevant for any way of moving beyond traditional peer review. The most important ones from my point of view, which is focussed on what we can do right now rather than in some more distant future, are: whether an incentive is needed for users to review other people’s work, and whether we should weight raters/commenters more highly if they can be regarded as experts (how?) or have a record of excellent paper reviewing.

    My current take on this is: many sites, Wikipedia foremost among them, work essentially without incentives. This works because users take pride in their contributions, even if those contributions are not associated with their name in a prominent way. I believe that publicly commenting (in any detailed and informative fashion) on a recent research article is a significant commitment and will make the commenter visible to the relevant community, which in turn raises their reputation in their domain of research.

    I can also imagine weighting these contributions, but I don’t see an easy way of certifying a user (to make sure they are who they claim to be). Even if we could do this, it’s unclear to me how I would assign a weight to such a certified user. I like the idea of sites like http://stackoverflow.com/, where one can rate the comments. At PaperRater.org we allow users to agree with, disagree with, or flag any comment, and I can easily see how these statistics could be turned into a weight for the user who left the comment.

    I guess this will not be the ultimate step into the future you sketched above, but it should be able to help us right now. I’m curious what others think about it.

    Peter Melchior

    March 6, 2012 at 9:20 pm

    • Hi Peter,

      Thanks for the comments – it’s great to see projects like yours pushing things forward in a practical way! I think you’re right about the Wikipedia case: it says much about the willingness of people to contribute to a project they can identify with strongly. I wonder how much its success has depended on it being the only player out there doing what it’s doing. Brand identity and prominence may matter quite a lot. The point you make about the reputational benefits that might accrue around commenting is a good one, and how that might interact with issues to do with anonymity is interesting.

      Another possible way of incentivizing the article rating process with authors (if one is a publisher as well as a rating service provider), it occurs to me, would be to scale publishing times with willingness to rate articles. If an author agreed to review three articles right away and six in the next month, say, then one might agree to publish their article in the shortest possible time. If they were less willing to make a reviewing commitment, one might commit to publishing only within a few weeks. This sort of mechanism would perhaps be easy prey for a competitor service promising no such publication time penalties, however. (The nice thing about writing from a utopian, ‘this is how it will be in the glittering future’ perspective – as to a large extent I was – is that one doesn’t need to worry much about such messy realities/possibilities!) And besides, this mechanism isn’t relevant to the model that you’re trying out.

      Alex

      March 7, 2012 at 10:17 am

