juliusbeezer + peerreview   203

Mieke Bal: Let’s Abolish the Peer-Review System – Media Theory
When the academy turned “neo-liberal” world-wide, rules were established that have become a “system”, no longer debatable. No consultation, no trial period, revision, or reconsidering. Rules rule, overruling people. One of those rules is the unquestioned requirement that all respectable, serious academic journals and book series have all submissions for publication “peer-reviewed”. This seemed a good idea at the beginning – to get feedback to optimize quality – but became problematic when generalized into a rule. It has become a term, even part of ordinary language, and I have had it thrown at me many times in totally wrong contexts. I would like to offer no fewer than ten arguments, intricately related yet distinguishable, that make the peer-review system (PRS) highly problematic, and, in my view, ready for abolition. Only when the rule is reregulated – stripped of its rule-character – can alternatives be considered that preserve the positive aspects but eliminate the ten objections I am highlighting here.[1]

The peer-review system is deeply wrong, firstly, because it entails a heavy burden on scholars who should spend the little time they have to do their own work.
peerreview  scholarly  sciencepublishing 
6 weeks ago by juliusbeezer
Chernobyl: Consequences of the Catastrophe for People and the Environment - Wikipedia
the Chernobyl disaster, and 2004 reflect 985,000 premature deaths as a result of the radioactivity released. The authors suggest that most of the deaths were in Russia, Belarus and Ukraine, though others occurred worldwide throughout the many countries that were struck by radioactive fallout from Chernobyl.[1] The literature analysis draws on over 1,000 published titles and over 5,000 internet and printed publications, primarily in Slavic languages (i.e. not translated into English), discussing the consequences of the Chernobyl disaster. The authors contend that those publications and papers were written by leading Eastern European authorities and have largely been downplayed or ignored by the IAEA and UNSCEAR...
While Charles agrees with the importance of making eastern research more available in the west, he states that he cannot tell which of the publications referred to by the book would sustain critical peer-review in western scientific literature, and that verifying these sources would require considerable effort. Charles sees the book as representing one end of a spectrum of views, and believes that works from the entire spectrum must be critically evaluated in order to develop an informed opinion.
sciencepublishing  reviews  language  translation  russian  peerreview 
january 2018 by juliusbeezer
Widely used U.S. government database delists cancer journal - Retraction Watch at Retraction Watch
readers who are familiar with the guidelines MEDLINE follows when deselecting journals “can draw their own conclusions.”

Here is some background information from MEDLINE:

Journals may be deselected from MEDLINE for various reasons including, but not limited to, extremely late publication patterns, major changes in the scientific quality or editorial process, and changes in ownership or publishers.

Backus added that since she’s worked with MEDLINE over the past few years, only “a handful” of journals have been removed from the index.

It’s not very many. It’s infrequent.

Oncotarget has been on our radar for some time. Besides a handful of retractions that we’ve covered, we’ve obtained emails that show an editor of the journal, Mikhail Blagosklonny, contacted colleagues of Jeffrey Beall at the University of Colorado Denver who had published in Oncotarget in 2015 after Beall added the journal to his (now inactive) list of possibly predatory publications.
sciencepublishing  reputation  beall  indexing  attention  library  politics  us  peerreview 
november 2017 by juliusbeezer
Oops! - Academia Obscura
Even after sinking hours of labour into it there are bound to be some miner errors.

References to ‘screwed data’ and a ‘screwed distribution’ have not stopped a 2004 paper in the International Journal of Obesity from garnering over 300 citations. Likewise, a group of Japanese researchers concluded: ‘There were no significunt differences in the IAA content of shoots or roots between mycorrhizal and non-mycorrhizal plants’. The paper has racked up 22 citations in spite of the significunt slipup.

An unintentionally honest method appears in another paper, where the authors state: ‘In this study, we have used (insert statistical method here) to compile unique DNA methylation signatures.’

A couple of cringeworthy blunders have drawn the attention of the academic community in recent years. The Gabor scandal started when an internal author note was accidentally included in the final published version of an ecology paper:

Although association preferences documented in our study theoretically could be a consequence of either mating or shoaling preferences in the different female groups investigated (should we cite the crappy Gabor paper here?), shoaling preferences are unlikely drivers of the documented patterns…

The comment was added following peer review during the revision process and unfortunately slipped through the cracks in subsequent rounds of editing.
editing  peerreview  funny 
october 2017 by juliusbeezer
Antediluvian Salad: Breaking Through the 4th Wall: OPEN SCIENCE's Promise of a New Scientific & Spiritual Kingdom
It is the system that is the problem. And it is the system that needs fixing.

It's high time that the modern peer review format goes through such a deconstruction and reconfiguration. Not, as some may wrongly be assuming, by abolishing the peer review process, but by dramatically ameliorating the process of peer review in an exponential way, at the same time dropping the curtain on scientific process and controversy and making both creators and reviewers accountable for their words. Creators will face more levels of scrutiny and questioning, but they will also benefit from exponentially more collaboration and insight. Creators will no longer be held at the mercy of their reviewers, as reviewers will no longer be anonymous and their critiques will be displayed to all. The inherent collaborative and synergistic methods of a truly free and liberal OPEN SCIENCE paradigm shift will dramatically and irrevocably speed up the process of science. Science operating at maximum RPM. Contrary to what many may fear, I am not advocating a free-for-all of self-publishing anarchy; I actually hope to curtail that pitfall. By allowing any and all to submit their idea or work in whatever format or state of finality they choose, all are given a shot and all are subject to online review. Therefore charges of "ivory tower" orthodoxy, academic bias, and in-group/out-group shenanigans get cut off right at the root. The lone-wolf outsider, forever railing at the unfair treatment they suffer from "the establishment", will be a thing of the past. In short, the future of scientific communication as I envisage it will combine the best elements of the peer review process and the social-media, group-sourced immediacy of the "blogging" format, while eschewing the problematic elements inherent in both practices.
sciencepublishing  openscience  peerreview  openaccess 
september 2017 by juliusbeezer
Transparency is superior to trust | chorasimilarity
Publishing, scientific publishing I mean, is simply irrelevant at this point. The strong part of Open Science, the new, original idea it brings forth is validation.

Sci-Hub acted as the great leveler, as concerns scientific publication. No interested reader cares, at this point, if an article is hostage behind a paywall or if the author of the article paid money for nothing to a Gold OA publisher.

Scientific publishing is finished. You have to be realistic about this thing.


Transparency is superior to trust—as long as some relevant person(s) actually exploit(s) the transparency. Look at how long that SSL flaw hung about in Debian, for example: https://pinboard.in/u:juliusbeezer/t:security/t:opensource/
That was all open code, utterly vital to the security of hordes of crucial servers run by the world's top-most geeks, and therefore, every internet user. But the problem sat there for two years, apparently.
That's an extreme example that did get fixed. Transparency is necessary yes, but unless it's actually backed by readers/critics/reviewers/coders/experts actually looking through the windowpane afforded by it, its value is only rhetorical.
It does mean that the guards can guard the guards and we can watch the guards guarding the guards though. Or maybe Mayweather-McGregor.
sciencepublishing  peerreview  openaccess  scihub  science  transparency  dccomment 
august 2017 by juliusbeezer
When Will Climate Change Make the Earth Too Hot For Humans?
We published “The Uninhabitable Earth” on Sunday night, and the response since has been extraordinary — both in volume (it is already the most-read article in New York Magazine’s history) and in kind. Within hours, the article spawned a fleet of commentary across newspapers, magazines, blogs, and Twitter, much of which came from climate scientists and the journalists who cover them.

Some of this conversation has been about the factual basis for various claims that appear in the article. To address those questions, and to give all readers more context for how the article was reported and what further reading is available, we are publishing here a version of the article filled with research annotations. They include quotations from scientists I spoke with throughout the reporting process; citations to scientific papers, articles, and books I drew from; additional research provided by my colleague Julia Mead; and context surrounding some of the more contested claims. Since the article was published, we have made four corrections and adjustments, which are noted in the annotations (as well as at the end of the original version).
journalism  annotation  peerreview  attention  climatechange  sciencepublishing  science 
august 2017 by juliusbeezer
Who is Actually Harmed by Predatory Publishers? | Eve | tripleC: Communication, Capitalism & Critique. Open Access Journal for a Global Sustainable Information Society
In terms of its limitations, peer review is very bad at predictively spotting excellent work, even when conducted by researchers within their own sub-fields (Smith 2006; Eyre-Walker and Stoletzki 2013; Moore et al. 2017). It should also be considered that there are significant differences between peer-review processes in different disciplines (Walker and Rocha da Silva 2015). Peer review is a heterogeneous term that is ill-defined and barely standardised. For instance, in much academic research-book publishing it is not uncommon for contracts to be issued on the basis of a proposal, with a lighter review of the full manuscript. This means that criteria for what constitutes 'excellence' vary across disciplinary boundaries, but also within fields. Although almost every academic has an anecdote about how positive review comments or criticism have helped to improve work, as we have previously shown, when peer review is used as a gatekeeping process there are examples of both false negatives and false positives within this realm (Moore et al. 2017).

For an example of false negatives, consider that Campanario (2009) and Gans and Shepherd (1994) each examined instances of Nobel-prize winning work being rejected from elite journals. Further, Campanario and several others have shown that papers that were originally rejected have gone on to be among the most cited works in particular fields (Campanario 1993; 1996; Campanario and Acedo 2007; Siler et al. 2015). Given that most rejected manuscripts do end up being published elsewhere anyway, this is not surprising (Moore et al. 2017).
peerreview  sciencepublishing  openaccess 
august 2017 by juliusbeezer
It's surprisingly easy to game scientific publishing's most important method — Quartz
On April 20, Tumor Biology announced it was retracting more than 100 papers published between 2012 and 2016 over issues with the peer-review process, amounting to nearly one-fifth of the 450 papers retracted for that reason in the same period, according to US-based blog Retraction Watch.

Many of these retractions, including the most recent round, have been for research from Chinese scientists, who often rely on third-party companies to help translate and submit their work to journals...
Instead, the third-party companies provided fake emails and reviews, according to Springer’s investigation (link in Chinese). The fake reviews came to light through Springer’s investigations into earlier retractions, after which the publisher recommended new practices; in January, when Tumor Biology became part of SAGE, new peer-review practices were put in place.
peerreview  rejecta  sciencepublishing 
may 2017 by juliusbeezer
Is it OK to cite preprints? Yes, yes it is. | Jabberwocky Ecology
Why hasn’t citing unreviewed work caused the wheels to fall off of science? Because citing appropriate work in the proper context is part of our job. There are good preprints and bad preprints, good reports and bad reports, good data and bad data, good software and bad software, and good papers and bad papers. As Belinda Phipson, Casey Green, Dave Harris and Sebastian Raschka point out it is up to us as the people citing research to make professional judgments about what is good science and should be cited. Casey’s take captures my thoughts on this exactly:
citation  peerreview  archiving 
may 2017 by juliusbeezer
What is open peer review? A systematic review - F1000Research
Open pre-review manuscripts are manuscripts that are immediately openly accessible (via the internet) in advance, or in synchrony with, any formal peer review procedures. Subject-specific “preprint servers” like arXiv.org and bioRxiv.org, institutional repositories, catch-all repositories like Zenodo or Figshare and some publisher-hosted repositories (like PeerJ Preprints) allow authors to short-cut the traditional publication process and make their manuscripts immediately available to everyone. This can be used as a complement to a more traditional publication process, with comments invited on preprints and then incorporated into redrafting as the manuscript goes through traditional peer review with a journal. Alternatively, services which overlay peer-review functionalities on repositories can produce functional publication platforms at reduced cost (Boldt, 2011; Perakakis et al., 2010). The mathematics journal Discrete Analysis, for example, is an overlay journal whose primary content is hosted on arXiv (Day, 2015).
peerreview  open  openscience  openaccess 
april 2017 by juliusbeezer
With this new system, scientists never have to write a grant application again | Science | AAAS
In Bollen’s system, scientists no longer have to apply; instead, they all receive an equal share of the funding budget annually—some €30,000 in the Netherlands, and $100,000 in the United States—but they have to donate a fixed percentage to other scientists whose work they respect and find important. “Our system is not based on committees’ judgments, but on the wisdom of the crowd,” Scheffer told the meeting.

Bollen and his colleagues have tested their idea in computer simulations. If scientists allocated 50% of their money to colleagues they cite in their papers, research funds would roughly be distributed the way funding agencies currently do, they showed in a paper last year—but at much lower overhead costs.
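The redistribution rule described above is simple enough to sketch. The toy simulation below is my own illustration, not Bollen's model: the three-scientist citation network and the 50% give-away fraction are made-up assumptions for demonstration.

```python
def fund_round(base_share, give_frac, cites):
    """One funding year in a Bollen-style self-organised system (sketch).

    Every scientist receives the same base_share, keeps (1 - give_frac)
    of it, and splits the donated fraction equally among the colleagues
    they cite. `cites` maps each scientist to the colleagues they cite.
    Returns the resulting budget per scientist.
    """
    # Everyone starts with the retained portion of the equal share.
    budgets = {name: base_share * (1 - give_frac) for name in cites}
    for name, cited in cites.items():
        donation = base_share * give_frac
        for target in cited:
            budgets[target] += donation / len(cited)
    return budgets

# Hypothetical network: A and C both cite B; B cites A.
cites = {"A": ["B"], "B": ["A"], "C": ["B"]}
print(fund_round(100_000, 0.5, cites))  # B, cited twice, ends up with the most
```

Iterated over many years, with citation patterns themselves responding to where the money flows, this is the dynamic whose simulated outcome the article says roughly tracks current agency allocations, at much lower overhead.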
science  finance  peerreview 
april 2017 by juliusbeezer
» Blacklists are technically infeasible, practically unreliable and unethical. Period.
We already have plenty of perfectly good Whitelists. Pubmed listing, WoS listing, Scopus listing, DOAJ listing. If you need to check whether a journal is running traditional peer review at an adequate level, use some combination of these according to your needs. Also ensure there is a mechanism for making a case for exceptions, but use Whitelists not Blacklists by default.

Authors should check with services like ThinkCheckSubmit or Quality Open Access Market if they want data to help them decide whether a journal or publisher is legitimate. But above all scholars should be capable of making that decision for themselves. If we aren’t able to make good decisions on the venue to communicate our work then we do not deserve the label “scholar”.
openaccess  peerreview  archiving  scholarly  sciencepublishing 
february 2017 by juliusbeezer
The high-tech war on science fraud | Science | The Guardian
Statcheck had read some 50,000 published psychology papers and checked the maths behind every statistical result it encountered. In the space of 24 hours, virtually every academic active in the field in the past two decades had received an email from the program, informing them that their work had been reviewed. Nothing like this had ever been seen before: a massive, open, retroactive evaluation of scientific literature, conducted entirely by computer.

Statcheck’s method was relatively simple, more like the mathematical equivalent of a spellchecker than a thoughtful review, but some scientists saw it as a new form of scrutiny and suspicion, portending a future in which the objective authority of peer review would be undermined by unaccountable and uncredentialed critics.
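A minimal sketch of that spellchecker-like idea: extract reported test statistics with a regular expression and recompute the p-value. Statcheck itself is an R package that handles APA-formatted t, F, χ² and other tests; the fragment below is a simplified stand-in that covers only z-tests (whose normal CDF is available in Python's standard library) and runs on an invented sample string.

```python
import math
import re

def check_z_tests(text, tol=0.0005):
    """Scan text for reports like 'z = 2.50, p = .012', recompute the
    two-sided p-value from z, and flag reports whose stated p disagrees
    with the recomputed value beyond a rounding tolerance."""
    pattern = re.compile(r"z\s*=\s*(-?\d+\.\d+),\s*p\s*=\s*(\.\d+)")
    flags = []
    for z_str, p_str in pattern.findall(text):
        z, p_reported = float(z_str), float(p_str)
        # Two-sided p for a standard-normal statistic: erfc(|z| / sqrt(2)).
        p_computed = math.erfc(abs(z) / math.sqrt(2))
        if abs(p_computed - p_reported) > tol:
            flags.append((z, p_reported, round(p_computed, 4)))
    return flags

sample = "Group A outperformed B, z = 2.50, p = .012; also z = 1.20, p = .001."
print(check_z_tests(sample))  # flags only the inconsistent second report
```

The first report survives because .012 matches the recomputed value to within rounding; the second is flagged because z = 1.20 implies p ≈ .23, not .001.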
opendata  psychology  openscience  peerreview 
february 2017 by juliusbeezer
‘You never said my peer review was confidential’ — scientist challenges publisher : Nature News & Comment
Tennant, who now works as communications director at ScienceOpen, an online platform that promotes open-access research, wanted to receive credit for his unpaid peer-review work. With permission from the authors of the paper, he decided to openly post the text of his review on Publons, a platform for sharing reviews.

But his post was turned down. Publons told him that the journal’s publisher, Elsevier, requires reviewers to obtain permission from journal editors before posting a review.

That was not part of the deal — at least, not explicitly — Tennant argues. “I didn’t sign a confidentiality agreement, and I was not aware that I had implicitly agreed to the journal’s policies,”
peerreview  open 
january 2017 by juliusbeezer
PLOS ONE: The Global Burden of Journal Peer Review in the Biomedical Literature: Strong Imbalance in the Collective Enterprise
From 1990 to 2015, the demand for reviews and reviewers was always lower than the supply (Fig 2). In 2015, 1.1 million journal articles were indexed by MEDLINE and we estimated that they required about 9.0 million reviews and 1.8 million reviewers. In contrast, depending on the scenario, the annual supply would be between 10 and 30 million reviews and between 2.1 and 6.4 million reviewers. A substantial proportion of researchers do not contribute to the peer-review effort. In fact, the supply exceeded the demand by 249%, 234%, 64% and 15%, depending on the scenario. The peer-review system in its current state seems to absorb the peer-review demand and be sustainable in terms of volume.
If the peer-review effort were split equally among researchers, it would generate a demand for 1.4 to 4.2 yearly reviews per researcher, depending on the scenario. However, we found a considerable imbalance in the peer-review effort in that 20% of researchers perform 69% to 94% of reviews (Fig 3A). The imbalance translates into the time spent on peer review. In all, 70% to 90% of researchers dedicate 1% or less of their research work-time to peer review (Fig 3B). Among researchers actually contributing to peer review, 5% dedicate 13% or more of their research work-time to peer review. In 2015, we estimated that a total of 63.4 million hours were devoted to peer review, among which 18.9 (30%) million hours were provided by the top 5% contributing reviewers.
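The equal-split figure quoted above is easy to reproduce as a back-of-envelope check. The inputs below are the article's rounded 2015 estimates (about 9.0 million reviews needed; reviewer-pool scenarios of 2.1 and 6.4 million), so the results match the stated 1.4 to 4.2 range only to within rounding.

```python
def equal_split_load(total_reviews_needed, reviewer_pool):
    """Yearly reviews per researcher if the burden were shared equally."""
    return total_reviews_needed / reviewer_pool

# 2015 MEDLINE demand (~9.0M reviews) against the two extreme
# reviewer-pool scenarios (2.1M and 6.4M potential reviewers):
for pool in (2_100_000, 6_400_000):
    print(round(equal_split_load(9_000_000, pool), 1))
```

With these rounded inputs the smallest pool gives about 4.3 reviews per researcher per year and the largest about 1.4, bracketing the article's range.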
peerreview  scholarly  sciencepublishing 
november 2016 by juliusbeezer
We should reward peer reviewers. But how?
Prioritizing speed in the review process is fine if the goal is throughput, but is it good for promoting quality science?

The answer is hardly. Rapid reviews can be shoddy, as Elsevier knows well from a case in one of its own journals last year. And given how many problems readers are identifying on sites like PubPeer once papers are published, does pushing for speed really make sense?

That leaves a final kind of incentive that some have experimented with: Money. “We need to abandon the belief that there is only one peer review market that operates entirely on volunteer labor,”
peerreview  economics  attention 
august 2016 by juliusbeezer
Why getting medical information from Wikipedia isn't always a bad idea
The Wikiversity Journal of Medicine, which was launched in 2014, is hosted directly by the Wikimedia Foundation, the same organisation that hosts Wikipedia. It uses the same software, MediaWiki, which makes editing and processing very easy.

The whole service is free to authors and readers; as with Wikipedia our operating costs are covered by donations from around the world. The Wikiversity Journal of Medicine follows standard international best-practice guidelines for medical journals, drawing from such reputable bodies as the International Committee of Medical Journal Editors.
wikipedia  peerreview  medicine 
june 2016 by juliusbeezer
Data Colada | [44] AsPredicted: Pre-registration made easy
Pre-registering a study consists of leaving a written record of how it will be conducted and analyzed. Very few researchers currently pre-register their studies. Maybe it’s because pre-registering is annoying. Maybe it’s because researchers don’t want to tie their own hands. Or maybe it’s because researchers see no benefit to pre-registering. This post addresses these three possible causes. First, we introduce AsPredicted.org, a new website that makes pre-registration as simple as possible. We then show that pre-registrations don’t actually tie researchers’ hands, they tie reviewers’ hands, providing selfish benefits to authors who pre-register.
science  statistics  peerreview  sciencepublishing  ebm 
december 2015 by juliusbeezer
For the first time! | Journal of Cell Science
here is the thing that bothers me. Often, I begin a paper with an observation that someone has previously described, and show that it applies to the problem we have undertaken. And most of the time either the editors or the reviewers tell me to take that out, as ‘it has already been shown.’ It isn't new. Okay, we have established why new is important, so this makes sense.

Except it doesn't. If you've been paying attention to the front matter in many journals, and to the popular press, you may have noticed that there is a growing concern that research results are not reproducible.
sciencepublishing  peerreview  writing  editing 
december 2015 by juliusbeezer
How non-native English researchers can overcome barriers to academic publishing | Editage Insights
When you started out, how easy or difficult did you find it to write academic articles in English? Did you face any specific challenges? Based on your experiences, would you like to share any tips with our readers?

At first, it was difficult because of the language barrier. I struggled with presenting my experiment methods and research findings in a way that the reviewers would clearly understand. While writing in English, I tended to follow the Chinese writing style and syntax, and as a result my writing was unnatural and sounded “Chinglish.” I realized that to overcome these difficulties, I had to keep reading papers published in the leading journals in my field to gradually improve my vocabulary and learn common expressions in academic writing. I also had to learn to write directly in English and think in English. My first published SCI article marked the formation of my English academic writing style.

There’re a few other things I learned early on. First, the key factor determining whether an academic article will be published is not the writing skills displayed by the author(s) but the contents of the paper. Second, you should ensure that you write a good introduction.
writing  editing  peerreview  china  learning 
november 2015 by juliusbeezer
What are my chances of being Accepted at PeerJ? | PeerJ Blog
as we now have 3 years of submission data, we thought it would be helpful to give people an indication of our current acceptance rate.

All published articles at PeerJ have been peer-reviewed by two external reviewers and then formally accepted by their handling Academic Editor. Right now, if we look at the submissions to the journal, an average of 58% of these submissions will end up as Accepted (having passed through our various checks, adhered to our policies, and been formally peer-reviewed with an ultimate acceptance decision).
peerreview  sciencepublishing 
november 2015 by juliusbeezer
Hypothes.is at Society for Neuroscience | Hypothesis
Researchers clearly saw the value in incorporating Hypothes.is into the scientific workflow, particularly during the peer review process, where the ability to use targeted annotations of particular phrases or sentences was seen as a very valuable means to improve the review process for authors, reviewers and editors alike.

Researchers were also excited by the educational and collaborative opportunities of web annotation, and asked whether one could annotate in groups with their colleagues. I am happy to let everyone know that the private group annotation launched on November 3rd. Thanks to our program in education, and our educational director, Jeremy Dean, Hypothes.is is, in fact, enjoying robust use in the classroom.

But all of the above activities are carried out privately or semi-privately. What about “the Internet, peer reviewed”? This tag line brought people to the booth, but the possibility of putting a public knowledge layer over the scientific literature and related materials both excited and concerned many neuroscientists. Many recognized that our current methods for reporting scientific findings would benefit from an interactive, public layer where questions could be asked and answered and where additional information could be provided. Those that blogged liked the idea of “blogging in place” on articles or news articles that fell into their area of expertise.
peerreview  commenting  archiving 
november 2015 by juliusbeezer
Academia.edu’s peer-review experiments | Dr. Martin Paul Eve | Senior Lecturer in Literature, Technology and Publishing
eminent thinkers on the reform of this system, such as Kathleen Fitzpatrick, have been careful to point out that while a social system of recommendation and review (“peer-to-peer review”) might work better than existing structures, such measures must be carefully designed.

Specifically, I believe that in an academic environment: 1.) any such system should not have a quantified level of engagement specified (if there are not 4 good papers to recommend, then this is a false measure); 2.) such a system should not be a binary “recommend or ignore” but should allow for qualitative discussion and signalling; 3.) any entity implementing such systems should be open and transparent about the ranking measures they are using and how algorithmic processes will sort these recommendations; 4.) trust, reputation and recommendation metrics require the evaluation of reviewers and transparency about this process: “in using a human filtering system, the most important thing to have information about is less the data that is being filtered, than the human filter itself: who is making the decisions, and why. Thus, in a peer-to-peer review system, the critical activity is not the review of the texts being published, but the review of the reviewers” (Fitzpatrick).
peerreview  openaccess 
november 2015 by juliusbeezer
Impact of Social Sciences – What will the scholarly profile page of the future look like? Provision of metadata is enabling experimentation.
What all these Facebook-mimicking services have in common is that all of the information entered in the database of these services, from simple facts about a researcher’s work to whole papers that can be self-archived directly into these services, is owned solely by the commercial enterprises behind them. In this way, these services exemplify the “web 2.0” principle of being free (as in free beer), with the caveat that you cede control over your aggregated profile data. This is not only a matter of data-freedom principles. If you try to harvest large chunks of content from these databases for reuse elsewhere (as undertaken regularly by Google and other search engines), you soon learn that this is not permitted...
With the growing expectations of cultivating one’s own scholarship profile online completely and conveniently, things have become more interesting, and sometimes confusing. The whole area still seems to be in its infancy. A strong indicator of the ongoing development of this ecosystem is the consolidation of freely available metadata streams – besides ORCID, we now have CrossRef’s DOI event tracker pilot as a free source of impact metadata across many scholarly articles. In the area of institutional research information systems, open approaches such as VIVO ontologies and software are constantly gaining greater traction, enabling custom developments and experimentation. So, interesting times ahead!
altmetrics  sciencepublishing  peerreview  scholarly 
november 2015 by juliusbeezer
Impact of Social Sciences – The arXiv cannot replace traditional publishing without addressing the standards of research assessment.
In Mathematics, a period of one year between submission and publication is quite common, while periods of 3-4 years are nothing exceptional. A major reason for those long lead times is the thorough refereeing that is expected...
Because of the long time between submission and publication, the existence of “preprints” or “reports” was standard in the mathematical community...
So the arXiv is not something that came into existence because of the move towards Open Access. It’s more that it was the solution to a practical problem: “if it will take several years before my paper will be published, how do I tell the world about my brilliant work in the meantime?”. Of course, the arXiv is now seen as a prime example of Open Access: it is completely free to search and download all publications. It allows uploading new versions of a paper, while at the same time keeping previous versions accessible...
So could we see a more prominent role of completely open repositories such as the arXiv in the scientific publication process? Maybe. But two main obstacles remain, from my point of view. How do you set up a review process that makes it possible to recognise (top-)quality among the publications in the repositories?
arxiv  repositories  overlay  peerreview  mathematics  sciencepublishing  scholarly 
october 2015 by juliusbeezer
Open peer review 'better quality' than traditional process | Times Higher Education
Open peer review produces better scrutiny of research than traditional methods, according to a new study.

Reviews were found to be of slightly higher quality – around 5 per cent better – when authors could see who had reviewed their papers and these assessments were made available with the published article.

Researchers compared 400 papers in two similar journals: BMC Infectious Diseases, which uses open peer review, and BMC Microbiology, which uses the common “single-blind” process where reviewers know the identity of the author but the author does not know who they are being reviewed by.

Judged using a scorecard of eight criteria, the open reviews were of moderately better quality than the single-blind reviews, according to the paper published in the journal BMJ Open.
peerreview  open  sciencepublishing  openness 
october 2015 by juliusbeezer
Extreme Bias: How Rejection Clouds The Eyes of Researchers | The Scholarly Kitchen
Break the respondents down into authors whose manuscript was accepted (blue) and rejected (red) and you’ll notice a great schism in author responses (Figure 2). Not only did rejected authors believe that the editorial board failed to understand their work, but peer reviewers — supposed experts in their field — failed to understand it as well. In the minds of rejected authors, the editorial board did not properly weigh the reviewers’ comments and ultimately made decisions that were not based on scientific grounds. Not surprisingly, rejected authors were much less likely to believe they would ever submit again to this journal. In contrast, accepted authors were resoundingly supportive of nearly every aspect of the journal.
rejecta  authorship  sciencepublishing  editing  peerreview 
october 2015 by juliusbeezer
Fifth-Grade Science Paper Doesn't Stand Up To Peer Review
Nogroski presented his results before the entire fifth-grade science community Monday, in partial fulfillment of his seventh-period research project. According to the review panel, which convened in the lunchroom Tuesday, "Otters" was fundamentally flawed by Nogroski's failure to identify a significant research gap.

"When Mike said, 'Otters,' I almost puked," said 11-year-old peer examiner Lacey Swain, taking the lettuce out of her sandwich. "Why would you want to spend a whole page talking about otters?"

"It's probably only the dumbest topic in the history of the entire world," 10-year-old Duane LaMott added.
peerreview  funny 
october 2015 by juliusbeezer
Punching down; In defense of PubPeer | PSBLAB
he sides with Hilda Bastian in espousing “the importance of assessing whether commenters are outside their areas of expertise”. This is a classic prat-fall of the entitled. I’ll re-phrase it into plain English – Your opinion only counts if I deem you important enough to have an opinion. Witness this discussion between myself and a senior scientist on PubPeer, in which my scientific credentials were considered as a topic worthy of discussion, instead of the actual data in question. Quite simply, there are no rules regarding who is qualified to comment on science.
peerreview  sciencepublishing 
september 2015 by juliusbeezer
The Winnower | Open Scholarly Publishing
peer review, more broadly construed, takes place every day amongst individuals, in groups, in labs, in classes around the world, and in the form of organized meetings informally referred to as “journal clubs.” These discussions—disinterested reviews—tend to happen post-publication, as scholars of all stripes discuss works relevant to their research with their colleagues. Unfortunately, journal club proceedings, like the other forms of peer review, are very rarely published, if only because of the burdens of publishing, and the lack of incentives to do so. Given that these meetings are potentially of enormous benefit to the community, The Winnower will explore if publishing post-publication peer reviews can be incentivized by elevating peer reviews to the same level as original research, with all the affordances and services of scholarly publications.

As part of our proposal, we will soon be seeking participants willing to commit to making their journal club discussions public, as written reviews.
peerreview  sciencepublishing 
september 2015 by juliusbeezer
I am supporting RIO Journal. I think you should too - Ross Mounce
RIO uses an integrated end-to-end XML-backed publication system for Authoring, Reviewing, Publishing, Hosting, and Archiving called ARPHA. As a publishing geek this excites me greatly as it eliminates the need for typesetting, ensuring a smooth and low-cost publishing process. Reviewers can make comments inline or more generally over the entire manuscript, on the very same document and platform that the authors wrote in, much like Google Docs. This has been successfully tried and tested for years at the Biodiversity Data Journal and is a system now ready for wider-use.
sciencepublishing  openaccess  journals  tools  peerreview  scholarly 
september 2015 by juliusbeezer
Publons — Peer review essentials for the beginning peer...
"As I am reading the manuscript for the first time, I will have a text editor open in which I immediately write down small comments on specific parts of the manuscript, such as a typo in line 15 or an unclear sentence in the introduction. While I go through the paper, I will start to write down more general thoughts as well, such as remarks about the length of the introduction or a misinterpretation of results. After reading the whole paper, I will then re-read the abstract to see if it correctly captured hypothesis, experiments, results and interpretation. At the end of my read-through, I try to structure my peer review into three parts."
peerreview  sciencepublishing  scholarly 
august 2015 by juliusbeezer
Whose problem is the “reproducibility crisis” anyway? | Fumbling towards tenure
a single funky data point out of almost 60 is not a "result," but a...data point...the answer is no, I do not "usually" replicate. Look, I get that in some labs it's super easy to run an experiment in an afternoon for like $5. If this is the situation you're in, by all means replicate away! Knock yourself out, and then give yourself a nice pat on the back. But in the world of mammalian behavioral neuroscience, single experiments can take years and many thousands of dollars. When you finish an experiment, you publish the data, whatever they happen to be. You don't say, let's spend another couple of years and thousands more dollars and do it all again before we tell anyone what we found! So I thought, OK, this guy runs an insect lab, maybe he doesn't know what's involved.
statistics  science  sciencepublishing  peerreview  twitter  scholarly 
august 2015 by juliusbeezer
A Code of Conduct for Peer Reviewers in the Humanities and Social Sciences | Practical Ethics
1. The fact that you disagree with the author’s conclusion is not a reason for advising against publication. Quite the contrary, in fact. You have been selected as a peer reviewer because of your eminence, which means (let’s face it), your conservatism. Accordingly if you think the conclusion is wrong, it is far more likely to generate interest and debate than if you agree with it.

2. A very long review will simply indicate to the editors that you’ve got too much time on your hands. And if you have, that probably indicates that you’re not publishing enough yourself. Accordingly excessive length indicates that you’re not appropriately qualified.
august 2015 by juliusbeezer
Have we reached Peak Megajournal? | Sauropod Vertebra Picture of the Week
2.5 million scholarly articles were published in English-language journals in 2014 (page 6). Björk’s data tells us that only 38 thousand of those were in megajournals — that’s less than 1/65th of all the articles. I find it very hard to believe that 1.5% of the total scholarly article market represents saturation for megajournals.
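The fractions quoted above are easy to check; a minimal sketch, using only the figures given in the excerpt:

```python
# Figures quoted from Björk's data in the excerpt above.
total_articles = 2_500_000     # English-language scholarly articles, 2014
megajournal_articles = 38_000  # of which published in megajournals

share = megajournal_articles / total_articles
print(f"Megajournal share: {share:.3%}")  # ~1.5% of all articles
print(f"As a fraction: less than 1/{total_articles // megajournal_articles}")
```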
openaccess  peerreview  scholarly  sciencepublishing 
june 2015 by juliusbeezer
The Seer of Science Publishing
Tracz is taking aim at science's life force: peer review. "Peer review is sick and collapsing under its own weight," he contends. The biggest problem, he says, is the anonymity granted to reviewers, who are often competing fiercely for priority with authors they are reviewing. "What would be their reason to do it quickly?" Tracz asks. "Why would they not steal" ideas or data?

Anonymous review, Tracz notes, is the primary reason why months pass between submission and publication of findings. "Delayed publishing is criminal; it's nonsensical," he says. "It's an artifact from an irrational, almost religious belief" in the peer-review system.

As an antidote, the heretic in January launched a new venture that has dispensed altogether with anonymous peer review: F1000Research, an online outlet for immediate scholarly publishing. "As soon as we receive a paper, we publish it," after a cursory quality check.
openaccess  sciencepublishing  peerreview  anonymity 
may 2015 by juliusbeezer
The Winnower | DIY Scientific Publishing
We have started to address this by developing an open source RSS reader (a feedly clone) with a plug-in functionality to allow for all the different features, but development has halted there for a while now. So far, the alpha version can sort and filter feeds according to certain keywords and display a page with the most tweeted links, so it’s already better than feedly in that respect, but it is still alpha software. All of the functionalities I want have already been developed somewhere, so we’d only need to leverage it for the scientific literature.

In such a learning service, it would also be of lesser importance if work was traditionally peer-reviewed or not: I can simply adjust for which areas I’d like to only see peer-reviewed research and which publications are close enough that I want to see them before peer-review – I might want to review them myself. In this case, peer-review is as important as I, as a reader, want to make it. Further diminishing the role of traditional peer-review are additional layers of selection and filtering I can implement. For instance, I would be able to select fields where I only want recommended literature to be shown, or cited literature, or only reviews, not primary research. And so forth, there would be many layers of filtering/sorting which I could use flexibly to only see relevant research for breakfast.
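The layered filtering described here can be sketched as composable predicates over feed items. This is a hypothetical illustration only; the field names and the `allow_preprints` policy are invented for the example, not any real API:

```python
# Hypothetical feed items; keys are illustrative, not a real feed schema.
items = [
    {"title": "A", "field": "neuro", "peer_reviewed": True,  "type": "review"},
    {"title": "B", "field": "neuro", "peer_reviewed": False, "type": "primary"},
    {"title": "C", "field": "stats", "peer_reviewed": False, "type": "primary"},
]

# Per-field policy: fields where I also want to see un-reviewed preprints,
# i.e. where peer review is "as important as I, as a reader, want to make it".
allow_preprints = {"stats"}

def visible(item: dict) -> bool:
    """One filtering layer: show reviewed work everywhere, preprints only
    in fields I have opted into."""
    return item["peer_reviewed"] or item["field"] in allow_preprints

print([i["title"] for i in items if visible(i)])  # ['A', 'C']
```

Further layers (recommended-only, reviews-only, most-tweeted) would just be additional predicates combined with `and`.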
peerreview  sciencepublishing  reading 
april 2015 by juliusbeezer
Our Papers Our Way - Enago Blog: Scientific Publication Help
The pilot service was called Your Paper Your Way (YPYW), and it proposed a re-alignment of the submission process, such that formatting and citation requirements did not have to be met until after the paper had been accepted for publication. In operational terms, this meant that:

The initial submission could be made in one file with no specific formatting requirements other than the content should be legible enough for initial refereeing.
References could be in any format, provided the style was consistent.
Editable source files – text, figures, tables – would only be required after acceptance for publication.
Elsevier’s follow-up to the pilot was a survey of 3,958 authors, 70% of whom found the new process to be a positive experience. When the 870 reviewers who had to adjust to the process change were surveyed, no significant difference in reviewer satisfaction was found, although there were several positive comments in reference to the ease of checking figures and tables in the text document rather than separate files, which had been the prior requirement.
sciencepublishing  peerreview  editing  scholarly 
april 2015 by juliusbeezer
[no title]
Dealing with these issues is new not only for us but also for Cambridge University Press, a point that was driven home abundantly in our conversations with senior editors and staff. In book form, "revised editions" are rarely issued with this level of detailed annotation. Standard practice for a traditional print book, our editors quickly pointed out, would be summed up by one quick line on the copyright page of a standard print book: “revised edition: some text has been altered from the original.” Even when there have been meetings with positions drafted and recognized, activities such as these are still new to Cambridge University Press.
openaccess  scholarly  publishing  peerreview  open  openness 
april 2015 by juliusbeezer
Ask The Chefs: How Can We Improve the Article Review and Submission Process? | The Scholarly Kitchen
the only player in this system that drives decisions is the one that invests capital, and that means the publisher. Improving the system has to have a benefit for the publisher or it won’t happen. In this formulation improving the process for authors and reviewers is best understood if it provides a return to the publisher. Will a more efficient system persuade more authors to submit papers to a particular publisher? That’s a reason to invest. Will it reduce costs? That is a reason to invest. But it should be clear that all such improvements are an arms race: when one publisher does this, all the others must follow.
march 2015 by juliusbeezer
Judge tells PubPeer to hand over information about anonymous commenter; site weighing “options” - Retraction Watch
the judge ordered PubPeer to produce “identifying information for that commenter,” said Alexander Abdo of the American Civil Liberties Union...

Abdo told us: We are disappointed with the ruling and are weighing our options for how to continue to fight for the right to anonymity of PubPeer’s commenters.

The case began when Fazlul Sarkar of Wayne State University sued the people who commented anonymously about him on PubPeer, and demanded that PubPeer release their names. Sarkar, who has not been found to have committed research misconduct, claims he lost a lucrative job offer at the University of Mississippi, likely as a result of the posts.
peerreview  commenting  anonymity 
march 2015 by juliusbeezer
Co-operating for gold open access without APCs | Eve | Insights
I take issue with:
"The first component, the OLH Megajournal, is a multi-disciplinary space for any researcher who identifies his or her practice as falling within ‘the humanities’. Although not a ‘megajournal’ in the PLOS-ONE sense of ‘peer-review light’ (in which ‘technical soundness’ becomes the core determinant for admission), this broad space is an area where the approximately 150 researchers who have pledged us articles can submit their new work. Of course, we cannot guarantee that all 150 pledges will be received. We can guarantee that not all of these will pass peer review. The end result, though, at launch, should be a sizeable tranche of initial material across a wide disciplinary spread."
openaccess  dccomment  peerreview 
march 2015 by juliusbeezer
journal quality 1
One of Wouter Gerritsma's five sites that attempt to judge the quality of
journals, Open Access or not. They invite you to contribute your experience
journals  peerreview 
february 2015 by juliusbeezer
"FDA has repeatedly hidden evidence of scientific fraud," says author of new study - Retraction Watch at Retraction Watch
(Full disclosure: Ivan and Charles are colleagues at NYU, and some of their mutual students helped gather the data for the article). We asked Seife about what it was like working with J-school students, and how the process of digging through the documents went:

One of the wonderful things about having a dozen or so bright students is that you can set them loose on a many-hands-light-work sort of assignment. Go out and find fraud, my pretties! *cackle* So all I had to do was point them in the right direction, and data began trickling in.

The tough part was gathering up all the data and validating it. One advantage we had was that we had no illusions that we’d be comprehensive; the redactions were sometimes way too extensive for us to have hope that we’d get everything. That knowledge kept us from spending too much time beating our heads against the wall trying to crack documents that simply wouldn’t be cracked.
science  attention  education  peerreview 
february 2015 by juliusbeezer
The San Francisco Declaration on Research Assessment (DORA)
Journal Impact Factor has a number of well-documented deficiencies as a tool for research assessment. These limitations include: A) citation distributions within journals are highly skewed [1–3]; B) the properties of the Journal Impact Factor are field-specific: it is a composite of multiple, highly diverse article types, including primary research papers and reviews [1, 4]; C) Journal Impact Factors can be manipulated (or “gamed”) by editorial policy [5]; and D) data used to calculate the Journal Impact Factors are neither transparent nor openly available to the public [4, 6, 7].
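Point A, the skew of citation distributions, is worth a worked example: an impact-factor-style metric is essentially a mean, and a few heavily cited articles drag the mean far above what a typical article receives. The citation counts below are invented for illustration:

```python
import statistics

# Hypothetical citation counts for one journal's articles in a window:
# most articles are cited a handful of times, one is cited heavily.
citations = [0, 1, 1, 2, 2, 3, 3, 4, 5, 120]

mean = statistics.mean(citations)      # what an impact-factor-style metric reports
median = statistics.median(citations)  # what a typical article actually receives

print(f"mean = {mean}, median = {median}")  # mean = 14.1, median = 2.5
```

The journal-level mean (14.1) says almost nothing about the individual article (median 2.5), which is exactly DORA's objection to using the JIF for research assessment.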
citation  altmetrics  peerreview  research  sciencepublishing 
february 2015 by juliusbeezer
Prepublication histories and open peer review at The BMJ | The BMJ
Randomised controlled trials conducted at The BMJ since the turn of the millennium found that removing anonymity improved the tone and constructiveness of reviews without detriment to scientific and editorial value. One of the trials also found that telling reviewers that prepublication histories might be posted online did not affect the quality of peer review.
peerreview  open  ebm  science  scholarly 
february 2015 by juliusbeezer
Manuscript submission modelling – my comments in full - Ross Mounce
Some academics have an odd psychological complex around this thing called ‘scooping’. The authors of this paper are clearly strong believers in scooping. I don’t believe in scooping myself – it’s a perverse misunderstanding of good scientific practice. I believe what happens is that someone publishes something interesting; useful data testing a novel hypothesis — then somewhere else another academic goes “oh no, I’ve been scooped!” without realising that even if they’re testing exactly the same hypothesis, their data & method is probably different in some or many respects — independently generated and thus extremely useful to science as a replication even if the conclusions from the data are essentially the same...
All interesting hypotheses should be tested multiple times by independent labs, so REPLICATION IS A GOOD THING.
I suggest the negative psychology around ‘scooping’ in academia has probably arisen in part from the perverse & destructive academic culture of chasing publication in high impact factor journals. Such journals typically will only accept a paper if it is the first to test a particular hypothesis, regardless of the robustness of approach used – hence the nickname ‘glamour publications’ / glam pubs. Worrying about getting scooped is not healthy for science. We should embrace, publish, and value independent replications.
With relevance to the PLOS ONE paper – it’s a fatal flaw in their model that they assumed that ‘scooped’ (replication) papers had negligible value. This is a false assumption
sciencepublishing  scholarly  philosophy  citation  peerreview 
january 2015 by juliusbeezer
Q&A: Psychiatry Faculty Member Shervin Assari, MD, MPH Describes the Peer Review Process Following Recent High Ranking at Publons :: U-M Psychiatry and Depression Center Newsroom
The mission of Publons is “to speed up science by making peer review faster, more efficient, and more effective.” Publons works with reviewers, publishers, universities, and funding agencies to turn peer review into a measurable research output. Publons collects peer-reviewed information from reviewers and from publishers, and produces comprehensive reviewer profiles with publisher-verified peer review contributions that researchers can add to their CV. Publons helps scholars advance their careers by building a portfolio of article critiques and, in turn, helps journals find quality reviewers.
peerreview  publon  reputation  scholarly 
january 2015 by juliusbeezer
PubPeer - A Stronger Post-Publication Culture Is Needed for Better Science
Commenting about commenting, in a world where comments are known as "post-publication peer review", which is "here to stay"(!)
Interesting how unsatisfying it is to read lengthy comments authored by "Peer1" (even the choice of pseudonym is an artistic reveal: one could imagine rejecting all comments written by an author whose pseudonym contains more than four consecutive digits, e.g. peer65000).
peerreview  commenting  anonymity  confidentiality  writing 
january 2015 by juliusbeezer
Harvard-Smithsonian climate change skeptic accused of violating academic disclosure rules - Nation - The Boston Globe
A climate-change skeptic at the Harvard-Smithsonian Center for Astrophysics who has relied on grants from fossil-fuel energy interests apparently failed to disclose financial conflicts of interest in a newly released paper, according to a complaint by a climate watchdog group.

The paper by Harvard-Smithsonian scientist Willie Soon and three other climate-change skeptics contends that the UN panel that tracks global warming uses a flawed methodology to estimate global temperature change. Soon and his co-authors claim to have a simpler, more accurate model that shows the threat of global warming to be exaggerated.

The Chinese journal that published the paper, Science Bulletin, imposes a strict conflict of interest policy on authors, obligating contributors to disclose any received funding, financial interests, honors, or speaking engagements that might affect their work.

In a note at the end of the paper, all four authors claimed no conflicts of interest on the published study.
climatechange  agnotology  conflict_of_interest  peerreview 
january 2015 by juliusbeezer
Hypothes.is Reputation System - Implementation details. - Google Docs
reputation of a user represents our trust in the user. In mathematical terms, we can think of the reputation as the probability of the user telling us a correct statement. If reputation is zero then the user always gives wrong information. If reputation is 1 then the user is always correct. If reputation is 0.5 then the user gives correct information in 50% of cases.

From another point of view, reputation expresses how much useful content a user has contributed. For example, on Stack Overflow, the more good answers I contribute, the more reputation I have. Intuitively, this reputation is proportional to the amount of useful work the user has done.
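The reputation-as-probability idea can be sketched as a running estimate over a user's judged contributions. This is a minimal illustration, not the actual Hypothes.is implementation; the class name and the Laplace-smoothing choice are assumptions of the sketch:

```python
from dataclasses import dataclass

@dataclass
class Reputation:
    """Reputation as the estimated probability that a user's statements
    are correct, per the description above."""
    correct: int = 0
    total: int = 0

    def record(self, was_correct: bool) -> None:
        """Record one judged contribution."""
        self.correct += int(was_correct)
        self.total += 1

    @property
    def score(self) -> float:
        # Laplace smoothing: (correct + 1) / (total + 2), so an empty
        # history yields 0.5 (no information) rather than a hard 0 or 1.
        return (self.correct + 1) / (self.total + 2)

r = Reputation()
for ok in [True, True, False, True]:
    r.record(ok)
print(round(r.score, 3))  # (3 + 1) / (4 + 2) -> 0.667
```

The smoothing also captures the "useful work" reading: the score only approaches 1 as the volume of judged-correct contributions grows.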
reputation  peerreview  internet  web  commenting  dccomment 
january 2015 by juliusbeezer
UCL signs San Francisco Declaration of Research Assessment
UCL has signed the San Francisco Declaration on Research Assessment (DoRA), which acknowledges weaknesses in the use of the Journal Impact Factor (JIF) as a measure of quality, since this measure relates to journals as a whole and not to individual articles. Recognising that research results in outputs other than journal articles, DoRA also attempts to identify new routes to research evaluation.
science  peerreview 
january 2015 by juliusbeezer
Misrepresenting science is almost as bad as fraud: Randy Schekman - Livemint
My own work that led to the Nobel Prize, the first paper was in the PNAS and had very few citations because it was new and no one else was working on this. But the citations grew over time. Measuring the impact factor for that very meaningful paper was useless. What’s happened now is a kind of collusion between these commercial journals and people who calculate this number. This system is broken. I encourage people to think about journals run by scientists and not people who want to sell magazines.
citation  peerreview  sciencepublishing 
january 2015 by juliusbeezer
The pleasure of publishing | eLife
What have we learned after two years of publishing at eLife? The most common complaint from reviewers is that authors are overselling their work. We understand that competition for funding and pages in prestige journals has taught authors to frame their work in the most globally ambitious terms. However, there is a fine line between trying to express in a crisp and compelling manner the contribution made by a manuscript and making claims that are beyond what the manuscript does or could do.
sciencepublishing  peerreview  editing 
january 2015 by juliusbeezer
How to exploit academics | Questo blog non esiste
I have an ingenious idea for a company. My company will be in the business of selling computer games. But, unlike other computer game companies, mine will never have to hire a single programmer, game designer, or graphic artist. Instead I’ll simply find people who know how to make games, and ask them to donate their games to me. Naturally, anyone generous enough to donate a game will immediately relinquish all further rights to it. From then on, I alone will be the copyright-holder, distributor, and collector of royalties.
openaccess  sciencepublishing  peerreview  satire 
january 2015 by juliusbeezer
PubPeer - Prior Publication Productivity, Grant Percentile Ranking, and Topic-Normalized Citation Impact of NHLBI Cardiovascular R01 Grants
"Even after normalizing citation counts, we confirmed a lack of association between peer-review grant percentile ranking and grant citation impact."

Perhaps we should start giving money away randomly.
peerreview  science  sciencepublishing 
december 2014 by juliusbeezer
In the Digital Age, Science Publishing Needs an Upgrade
even the broadest journals don't employ hundreds of specialist editors. Usually no more than a few dozen people, often many years from the inside of a laboratory, are asked to do the impossible: predict the future. Predict whether a manuscript that was just mailed to them is going to be of broad interest and become scientifically important. The simple truth is that they can't possibly know, and trying to predict such impact is an exercise in futility. More important, there is no reason for them to try, because the world should not be cheated of a shred of a new insight, even if seemingly tiny.
sciencepublishing  peerreview  editing  publishing 
december 2014 by juliusbeezer
Retraction Watch is growing, thanks to a $400,000 grant from the MacArthur Foundation - Retraction Watch at Retraction Watch
The goal of the grant — $200,000 per year for two years — is to create a comprehensive and freely available database of retractions, something that doesn’t now exist, as we and others have noted. That, we wrote in our proposal, is

a gap that deprives scholarly publishing of a critical mechanism for self-correction.

While we’re able to cover somewhere around two-thirds of new retractions as they appear, we’ll need more resources to be comprehensive. Here’s more from our proposal:

The main benefit would be that scientists could use it when planning experiments and preparing manuscripts to make sure studies they would like to cite have not been the subject of a retraction, correction, expression of concern or similar action. Retracted studies are often cited as if they were still valid
sciencepublishing  peerreview  editing 
december 2014 by juliusbeezer
Why Scientists Hate Their Journals - Pacific Standard: The Science of Society
a cynical move by the publisher to avoid making more substantial changes that would benefit the scientific community.
sciencepublishing  openaccess  peerreview 
december 2014 by juliusbeezer
For Sale: “Your Name Here” in a Prestigious Science Journal - Scientific American
A quick Internet search uncovers outfits that offer to arrange, for a fee, authorship of papers to be published in peer-reviewed outlets. They seem to cater to researchers looking for a quick and dirty way of getting a publication in a prestigious international scientific journal.

In November Scientific American asked a Chinese-speaking reporter to contact MedChina, which offers dozens of scientific "topics for sale" and scientific journal "article transfer" agreements. Posing as a person shopping for a scientific authorship, the reporter spoke with a MedChina representative who explained that the papers were already more or less accepted to peer-reviewed journals; apparently, all that was needed was a little editing and revising. The price depends, in part, on the impact factor of the target journal and whether the paper is experimental or meta-analytic.
sciencepublishing  peerreview 
december 2014 by juliusbeezer
Are companies selling fake peer reviews to help papers get published? - Retraction Watch at Retraction Watch
we have reported on a number of cases in which authors were able to submit their own peer reviews, using fake email addresses for recommended reviewers. But what seems to be happening now is that companies are offering manuscript preparation services that go as far as submitting fake peer reviews. And that, no surprise, worries publishers.

Here’s COPE’s statement out today:

The Committee on Publication Ethics (COPE) has become aware of systematic, inappropriate attempts to manipulate the peer review processes of several journals across different publishers.
peerreview  sciencepublishing 
december 2014 by juliusbeezer
When peers are not peers and don't know it: The Dunning‐Kruger effect and self‐fulfilling prophecy in peer‐review - Huang - 2013 - BioEssays - Wiley Online Library
The fateful combination of (i) the Dunning-Kruger effect (ignorance of one's own ignorance) with (ii) the nonlinear dynamics of the echo-chamber between reviewers and editors fuels a self-reinforcing collective delusion system that sometimes spirals uncontrollably away from objectivity and truth. Escape from this subconscious meta-ignorance is a formidable challenge but if achieved will help correct a central deficit of the peer-review process that stifles innovation and paradigm shifts.

“Real Knowledge is to know the extent of one's ignorance” – Confucius
peerreview  agnotology 
december 2014 by juliusbeezer
Why correcting the scientific record is hard - FX's blog: musings on chemistry, among other things…
during the course of the MSc project, we stumbled onto some papers that quote incorrect mathematical formulations of the Born conditions, while usually citing the original Born book (in which these expressions are not found). Most of the errors arise from people incorrectly generalizing the "cubic" conditions. We looked a bit more, and found more examples of such errors, in papers between 2007 and 2014. Now, if you find several mistakes in series of related equations, in a dozen papers published in a given field throughout a decade, what do you do?

Over the course of a few days, we wrote a short paper,
peerreview  sciencepublishing 
december 2014 by juliusbeezer
Open-access megajournals reduce the peer-review burden | Sauropod Vertebra Picture of the Week
It’s an open secret that nearly every paper eventually gets published somewhere. Under the old regime, the usual approach is to “work down the ladder”, submitting the same paper repeatedly to progressively less prestigious journals until it reached one that was prepared to publish work of the supplied level of sexiness. As a result, many papers go through four, five or more rounds of peer-review before finally finding a home.
openaccess  peerreview 
november 2014 by juliusbeezer
An anonymity problem | Letters | Times Higher Education
There is good evidence – from a 2010 study in the British Medical Journal and from others that have employed open peer review for many years – that an open peer review process does not decrease the quality of the referee report but does make the report more constructive on all sides (author, editor – if there is one – and reader). This is supported by what we have found on F1000Research, an open science publishing platform, where we use a transparent process with immediate publication, fully transparent post-publication peer review, and open data. We have had no legal difficulties with any of our invited peer review reports or with comments.
peerreview  open 
november 2014 by juliusbeezer
What researchers think about the peer-review process - Editors' Update - Your network for knowledge
Most researchers – 70 percent – are happy with the current peer-review process; a satisfaction rate higher than those recorded in similar 2007 and 2009 surveys. When asked if peer review helps scientific communication, 83 percent of those we surveyed agreed, with comments such as, "I have had reviews that were very insightful. When researchers get their nose caught in the lab book, we cannot see the forest through the trees. Having a peer look at your science helps expand the overall view". (Researcher in Environmental Science, Switzerland, aged 36-45.)

However, there is room for improvement; a third of researchers believe that peer review could be enhanced.
peerreview  sciencepublishing 
november 2014 by juliusbeezer
Publons — Reviewer rewards recipients for July-September
The current reviewer rewards period runs from 1 October to 31 December 2014. The three reviewers with the most Publons merit during this period will receive the rewards package (which will be announced closer to the end of the rewards period).
publon  peerreview  sciencepublishing  reviews 
october 2014 by juliusbeezer
PubPeer - Prior Publication Productivity, Grant Percentile Ranking, and Topic-Normalized Citation Impact of NHLBI Cardiovascular R01 Grants
"Even after normalizing citation counts, we confirmed a lack of association between peer-review grant percentile ranking and grant citation impact."

Perhaps we should start giving money away randomly.
commenting  peerreview  science  finance 
september 2014 by juliusbeezer
Scientist threatening to sue PubPeer claims he lost a job offer because of comments | Retraction Watch
Last month, PubPeer announced that a scientist had threatened to sue the site for defamation. At the time, all PubPeer would say was that the “prospective plaintiff” is “a US researcher” who was “aggrieved at the treatment his papers are getting on our site.”

We understand that some comments have been removed from PubPeer.
peerreview  sciencepublishing  commenting  law 
september 2014 by juliusbeezer
PubPeer: Pathologist Threatening to Sue Users | The Scientist Magazine®
There are dozens of threads posted to PubPeer discussing Sarkar’s work. In an e-mail to The Scientist, Roumel wrote: “I am concerned about many posts and may begin legal action against anonymous commenter(s). That may lead me to subpoena information from PubPeer. As to whether I have independent grounds to sue PubPeer itself, while I do not rule that out in the future, I don’t have sufficient facts and law on my side to do that right now.”
peerreview  law 
september 2014 by juliusbeezer
Open Science Collaboration Blog · How anonymous peer review fails to do its job and damages science.
Signal detection theory tells us that reducing the number of false positives inevitably leads to an increase in the rate of false negatives. I want to draw attention here to the fact that the cost of false negatives is both invisible and potentially very high. It is invisible, obviously, because we never get to see the good work that was rejected for the wrong reasons. And the cost is high, because it removes not only good papers from our scientific discourse, but also entire scientists.
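
The trade-off the excerpt appeals to can be made concrete. If reviewer judgements are noisy quality scores and a paper is accepted whenever its score clears a threshold, then tightening the threshold necessarily trades false positives for false negatives. A minimal Python sketch, where the two normal distributions and their means are illustrative assumptions, not figures from the post:

```python
from statistics import NormalDist

# Assume noisy reviewer scores: "bad" and "good" papers produce
# overlapping normal distributions (means chosen for illustration).
bad = NormalDist(mu=0.0, sigma=1.0)    # papers that should be rejected
good = NormalDist(mu=1.5, sigma=1.0)   # papers that should be accepted

def error_rates(threshold: float) -> tuple[float, float]:
    """Accept a paper iff its score exceeds `threshold`.
    Returns (false_positive_rate, false_negative_rate)."""
    fp = 1.0 - bad.cdf(threshold)   # bad papers that slip through
    fn = good.cdf(threshold)        # good papers wrongly rejected
    return fp, fn

# A stricter threshold cuts false positives but inflates false negatives:
lenient_fp, lenient_fn = error_rates(0.5)
strict_fp, strict_fn = error_rates(2.0)
assert strict_fp < lenient_fp and strict_fn > lenient_fn
```

No threshold escapes the trade-off while the two distributions overlap; selective journals simply sit at the strict end, where the invisible false-negative cost is largest.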
peerreview  sciencepublishing  dccomment 
september 2014 by juliusbeezer
What should editors do when referees disagree? | Dynamic Ecology
Journal referees often disagree. Referee disagreements can be challenging for editors to handle. How should editors deal with them?

One common approach, especially among editors at selective journals, is to just reject the paper. That is, anything other than unanimous approval or near-approval of the referees is fatal. This is the path of least resistance for editors. It’s usually justified on the grounds that there are lots of good, or potentially-good, papers to choose from and so decisions have to be made somehow
peerreview  sciencepublishing 
september 2014 by juliusbeezer
A case of open manuscript review - blue_and_black
I published an article with a BMC series journal a couple of years ago. It was a long review process (whether because finding reviewers who are willing to sign their reports is harder, or because the volume of submissions to the journal at that time was too high, or it was just a matter of bad luck, I do not know). But what I know is that after the initial aggravation of the delayed review process, I ended up receiving the best reviews ever in my life – constructive, positive and kind in attitude. I felt "supported" by two excellent mentors and it felt really good.
peerreview  openness  sciencepublishing 
august 2014 by juliusbeezer
RajLab: Is academia really broken? Or just really hard?
Take peer review of papers. Colossal waste of time, I agree. Personally, the best system I can envision is one where everyone publishes their work in PLOS ONE or equivalent with non-anonymous review (or, probably better, no review), then “editors” just trawl through that and publish their own “best of” lists. I’m sure you have a favorite vision for publishing, too, and I’m guessing it doesn’t look much like the current system–and I applaud people working to change this system. In the end, though, I anticipate that even if my system was adopted, everyone (including me) would still be complaining about how so and so hot content aggregator is not paying attention to their own particular groundbreaking results they put up on bioRxiv. The bottom line is that we are all competing for the limited attentions of our fellow scientists, and everyone thinks their own work is more important than it probably is, and they will inevitably be bummed when their work is not recognized for being the unique and beautiful snowflake that they are so sure it is. Groundbreaking, visionary papers will still typically be under-recognized at the time precisely because they are breaking new ground. Most papers will still be ignored. Fashionable and trendy papers will still be popular for the same reason that fashionable clothes are–because, umm, that’s the definition of fashion. Politics will still play a role in what people pay attention to. We can do pre-publication review, post-publication review, no review, more review, alt-metrics, old-metrics, whatever: these underlying truths will remain.
sciencepublishing  scholarly  peerreview  attention 
august 2014 by juliusbeezer
Presubmittal peer review for high-impact research | Open Scholar C.I.C.
Researchers who plan to submit to very-high-impact journals perhaps stand to benefit most from LIBRE review. LIBRE provides an environment where pre-submission peer review can take place in the spirit of collegiality, without the pressures of deadlines, nagging reminders from the editorial office, conflicts of interest and gaps in specialized knowledge, all of which can make reviewing a challenge. Feedback provided spontaneously by researchers with expertise in the subject is more likely to be constructive and helpful than feedback provided anonymously and perhaps grudgingly, out of a sense of duty or obligation, possibly by reviewers who lack the required expertise but are unable to admit this to themselves or to the editor.
july 2014 by juliusbeezer
Major award from the Alfred P. Sloan Foundation | Hypothes.is
Hypothes.is is a 501(c) not-for-profit working to develop an open source solution supporting annotation of web documents, building on top of the Open Knowledge Foundation’s Annotator project, and contributing to and utilizing the Open Annotation standard.
peerreview  commenting  sciencepublishing  arxiv  openstandards  opensource 
july 2014 by juliusbeezer
Is Wikipedia’s medical content really 90% wrong? | The Cochrane Collaboration
Wikipedia has well established guidelines for what counts as a suitable medical source: we recommend the use of meta-analyses or systematic reviews published in well-respected journals from the last 3-5 years; position statements of national or internationally recognized medical bodies; or major textbooks. Is Wikipedia a perfect source? No, but other studies suggest that its quality is broadly similar to (and sometimes better than) many respected sources.
wikipedia  medicine  peerreview 
june 2014 by juliusbeezer