This is the first of our weekly digests of what’s been happening in the blogosphere! These will usually be posted at the end of the week, where possible, but we thought we’d give you an early treat this week. Comments and suggestions are very welcome, and please feel free to let us know about anything you think we may have missed.
Text mining has been a veritable ‘hot topic’ over the past week; the development of appropriate search tools has struggled to keep up with the exponential growth of data, and the scientific community is increasingly looking to text mine the research literature for more comprehensive answers. Heather Piwowar and Peter Murray-Rust, both F1000 Research Advisory Panel members, held important discussions on text mining with some of the biggest publishers in the field, namely Elsevier and Wiley. The upshot by the end of the week was that both Heather and Peter had received positive responses from both publishers. In an email exchange with Peter, Duncan Campbell of Wiley stated that “[they] are keen to enhance the usage of [their] journal content by encouraging text and data mining”, whilst Alicia Wise of Elsevier, in conversation with Heather, was very focused on meeting the needs of text mining projects – not only Heather’s three projects in question, but those of other researchers.
On the subject of text mining, Matt Cockerill from BMC pointed out a technical ‘tweak’ by the NCBI that makes the PubMed Central Open Access Subset even more…well…open! By adding the restriction detailed in his post, you can now search PubMed specifically for articles that are open for redistribution – thus making the PubMed search interface fully available to text miners, and anyone else searching for open access articles available for re-use.
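For those who prefer to query PubMed programmatically rather than through the web interface, the same kind of restricted search can be run against NCBI’s E-utilities `esearch` endpoint. The sketch below is illustrative only: the `"open access"[filter]` term is our own assumption for demonstration, not necessarily the exact restriction Matt describes in his post.

```python
# Minimal sketch: building a PubMed query URL via NCBI E-utilities that
# combines a topic search with an open-access filter term.
# NOTE: the filter string '"open access"[filter]' is an assumed example --
# substitute the exact restriction from Matt Cockerill's post.
from urllib.parse import urlencode

EUTILS_ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def build_open_access_query(topic: str) -> str:
    """Return an esearch URL restricting results to (assumed) OA articles."""
    term = f'{topic} AND "open access"[filter]'  # assumed filter term
    params = {"db": "pubmed", "term": term, "retmode": "json"}
    return EUTILS_ESEARCH + "?" + urlencode(params)

print(build_open_access_query("text mining"))
```

Fetching the resulting URL (e.g. with `urllib.request`) returns a JSON list of matching PubMed IDs, which a text-mining pipeline could then retrieve in bulk.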
Adventures in altmetrics
Martin Fenner, founder of ScienceCard, an online service that automatically collects altmetrics for each member’s publications, penned some more thoughts on the topic. He has now added an ‘activity stream’ to ScienceCard; much like those found on popular social networking sites, the activity stream shows a list of recently posted scholarly works by your ScienceCard ‘friends’, enabling you to like, comment on, or share them in real time – something he believes all altmetrics tools should be capable of.
Martin also points out that although reputation and discovery, the two cornerstones of altmetrics, are equally important, more effort is being put into relating altmetrics to reputation. He therefore reminds us that we must not forget that these metrics can also be used to improve current search tools.
In terms of the inevitable costs incurred in collecting altmetrics, Martin suggests a possible collaborative solution whereby different functions of an altmetrics service need not all be run by the same group. Additionally, he outlines the need for second-order metrics (e.g. the metrics for works citing a particular paper) in the future, and extols the virtues of Twitter as an invaluable tool for the quick dissemination of new and interesting scholarly works.
A report on the Force 11 workshop on the Future of Research Communication, held last August in Germany, was released online. It highlights key problems facing the current state of scholarly publishing, and details how we could utilise the wealth of communication tools available to us in more beneficial ways.
Open peer review experiment
Given our plans for open post-publication peer review, we were particularly interested to see how Alan Cann’s first experiment with open peer review went: he published an article entitled ‘An efficient and effective system for interactive student feedback using Google+ to enhance an institutional virtual learning environment’ on the Leicester Research Archive. He reported that it was viewed 1,175 times in 22 days – a pretty encouraging figure given that, on average, a PLoS ONE article racks up 900 views per year.
F1000Research is an original open science publishing platform for life scientists that offers immediate open access publication, transparent post-publication peer review by invited referees, and full data deposition and sharing. F1000Research accepts all scientifically sound articles, including single findings, case reports, protocols, replications, null/negative results, and more traditional articles.