MicrowebOrg + openscience (73)

Does Open Access Fail Because of the Impact Factor? – Alexander Grossmann / Matthias Andrasch
Alexander Grossmann is Professor of Publishing Management and Dean of Studies for Book and Media Production at the HTWK in Leipzig

... How absurd the whole thing is struck me almost 20 years ago, when I first sat on a hiring committee. Absurd because even then, a tally sheet served as a key criterion for the appointment decision, divided into the number of publications in “A and B journals” and “others”. What does that say about the applicant? What does it say about the quality of their research? What does it say about how other colleagues judge this work? Little. Or rather nothing. Absolutely nothing!

As far as I can judge, and as I hear in conversations with quite a few colleagues and researchers, little has changed about this dubious principle, especially where appointments are concerned in this country. Except that today nobody keeps a tally sheet anymore; they build an Excel spreadsheet instead. That sounds sarcastic, but it captures the core of the problem. By now, only a few refinements have been added: one counts only the publications of the last five years, or asks applicants to select their five most important works. Or one calculates so-called “impact points”, as I recently read in a job posting from the Charité. These refinements make nothing better. They merely cement a perverted system.

To follow this, one has to know that the so-called “Journal Impact Factor” is a computational construct from the 1950s, originally meant to determine how many citations all articles published in a journal in a given year received over the two following years. The higher the number, the more citations – on average – an article in that journal received. And the higher the Impact Factor, the more prestige, and the more likely you are playing in the Champions League. It sounds complicated, but the formula itself is quite simple. Except that most scientists do not properly understand it.
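
For reference, the standard two-year formula (the textbook definition, not taken from the interview itself) can be written compactly as:

```latex
% Two-year Journal Impact Factor of a journal for year Y
% C_Y = citations received in year Y by items the journal published in Y-1 and Y-2
% N_{Y-1}, N_{Y-2} = number of citable items the journal published in those two years
\[
\mathrm{JIF}_Y = \frac{C_Y}{N_{Y-1} + N_{Y-2}}
\]
```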

Many people assume that a high Impact Factor automatically means many citations for every article in that journal. That is wrong, because it is an average, possibly taken over hundreds or sometimes thousands of individual articles that appeared in the journal in one year. Analyses of individual top journals show that only a very few articles are cited really often, while the majority lie below the average. Almost ten percent were cited only once or not at all, even though they appeared in one of the most renowned journals in the world, with its supposedly high Impact Factor. I could therefore have five papers in one of these Champions League journals without a single colleague having found any of them relevant enough to cite. And yet I would sit at the top of the hiring committee's tally sheet.
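
A small illustration of that skew, with made-up numbers rather than data from any real journal: a couple of highly cited papers can pull the average far above what a typical article achieves.

```python
import statistics

# Hypothetical citation counts for ten articles from one journal year
citations = [210, 45, 8, 5, 3, 2, 1, 1, 0, 0]

print(statistics.mean(citations))    # 27.5 -> the impact-factor-style average
print(statistics.median(citations))  # 2.5  -> what a typical article actually gets
```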

Why, then, does faith in the Impact Factor persist so unbroken? My suspicion: because very few scientists have the time to really look into it, and if they voiced their doubts they would have the publishers against them, along with those libraries that have championed the Impact Factor for years as an important, supposedly objective instrument. So I do not blame my fellow researchers for not exposing themselves publicly. But at least in the committees of their own universities and research institutions, all those who have recognized that things cannot go on like this should push for us to use alternative ways of assessing and evaluating research and researchers.

Ways that have likewise existed for years. Today, for almost any article that has appeared in a journal, I can say fairly precisely how often it has been cited by other scientists: once, or 500 times in a year. On Google Scholar, for example, I find this information immediately.

Or I look at how often a given article has been mentioned on social networks or discussed in blogs. These are called alternative metrics, or altmetrics.

One example: on ScienceOpen, the network I founded in 2013 as a Berlin start-up together with my partner Tibor Tscheke from Boston, we currently provide both the citation count and this altmetrics figure for 34 million articles, updated daily. Open, free, and at no cost. A researcher anywhere in the world can thus check how the latest scientific results are linked to other papers, and so determine the impact – that is, the scientific relevance – of research individually and in context.
openaccess  openscience  mikrobuch:add:universitaet 
28 days ago by MicrowebOrg
How Does One “Open” Science? Questions of Value in Biological Research – Science, Technology, & Human Values – Nadine Levin, Sabina Leonelli, 2017
In the past decade, the Open Science movement has emerged as a champion of scientific progress, emphasizing its ability to foster transparency, equality, and innovation through openness. This paper critically examined not only how openness is negotiated by researchers but also how politics enters the negotiation. Because openness must be accomplished rather than being automatically secured, this examination highlights how particular work is required to make certain things open in certain ways and to certain people. Openness—whether it involves disclosure, dissemination, sharing, or reuse—comes in degrees and varieties. Like shadows, openness is the result of nuanced encounters between light and darkness, whose visible results reflect both the obstructions and specificities of each setting. Again like shadows, openness is inherently positional and relational and is subject to dramatic qualitative shifts depending on the characteristics of the locations involved or the personal relationships between the individuals and groups involved. Whether this is explicitly acknowledged or not, openness entails judgments about what counts as a valuable research output or practice, such that particular enactments of openness lead to the endorsement of some things as valuable, and others as not. It is not just a question of what should be made open but also about how particular instantiations of openness value some forms of care and labor over others.

Taken together, these cases show how openness, as a process and practice, is not in itself positive or negative, but rather its role and implications constantly shift across institutional settings and research networks, and in relation to given resources and priorities. Examining openness as a mode of valuation becomes increasingly important in the context of Open Science policies, where particular forms of openness are frozen and embedded in specific social norms, economic structures, and political reasoning. When openness is codified in policy, it not only enacts particular things as open and closed but also performs certain values, such as defining some research outputs and practices as more valuable than others. Although Open Science policies benefit society in numerous ways, they also carry assumptions about what, who, when, and how openness should occur (Whyte and Pryor 2011). These policies promote normative understandings of the economic and sociocultural significance of the processes and products of research whereby value is often stripped from outputs like data, software, and databases, leading these entities to remain in the shadows, unacknowledged (or, in the case of data, acknowledged in ways that obscure the labor and care necessary to effectively disseminate these outputs as valuable in and of themselves, rather than as evidential props for claims made in journal publications).
openscience  mikrobuch:add:universitaet:open 
4 weeks ago by MicrowebOrg
Most research I publish will be wrong. And I’m OK with that
reproducibility crisis:
-- constructively “wrong” science
-- crap science

When first exposed to journal papers, students have two reactions: (1) I don’t understand a word of this, please can I have my textbook back? (Answer: no, because the textbook is wrong.) And (2) it’s published, so it must be true. We train them that this is nonsense. We train them in how to think about papers, and in how papers are, in all probability, wrong. How not to defer to authority, but to be constructively sceptical – to ask of a paper basic questions: “Do the claims follow from the actual results?”; “What could have been done better?”; “What could be done next?”.

And not because we think the authors of the paper are lying to us. Or crap. But because science is hard. Experiments are hard. There are many uncontrolled, often unknown variables.

Perhaps because I’m a computational scientist – building models of neurons and bits of brain using maths and computer code – I have a different view: I expect research to be wrong. Because I expect my research to be wrong. Before I’ve even started it. (Incidentally, this is why the computational researchers in any field – systems biology, neuroscience, evolution, etc – are such a miserable bunch).

Models are wrong from the outset. A model tries to capture some essence of an interesting problem – how neurons turn their inputs into outputs, for example. A model does not capture every detail of the system it tries to emulate – to do so would be folly, as this would be like creating a map of a country by building a perfect scale model of its every bump, nook, and cranny. Pretty to look at; useless for navigation. So models are wrong by design: they leave out the detail. They aim to be useful.

That number is wrong. We know it’s wrong. We don’t know how much it is wrong. If a little bit wrong, then the effect of decreased risk is still there. If a lot wrong, then the effect is not there.

Here’s the important bit: it doesn’t matter what the statistical test says. The test tells us that the number is likely to fall into some range, given the amount of data we have, and our assumptions about the data. Which include, among other things, that the data were measured correctly (if not, then the number is, of course, wrong). And that we used the right way of simplifying the data down to simple numbers (if we didn’t, then the number has no relationship with reality). And, most importantly, that there is actually, truly an effect in the real world. That it actually exists. If it doesn’t exist, it doesn’t matter how great our statistics are.
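
As a concrete, hypothetical illustration of “some range, given the amount of data we have, and our assumptions about the data”: a 95% confidence interval for a mean, under the usual normality assumption (the measurements below are invented).

```python
import math
import statistics

# Hypothetical measurements from a single study
data = [4.1, 3.8, 5.0, 4.6, 3.9, 4.4, 4.8, 4.2]

mean = statistics.mean(data)
sem = statistics.stdev(data) / math.sqrt(len(data))  # standard error of the mean

# 95% interval assuming approximate normality (z = 1.96); if the data were
# measured wrongly, or the effect does not exist, this range is meaningless
low, high = mean - 1.96 * sem, mean + 1.96 * sem
print(f"mean = {mean:.2f}, 95% CI = [{low:.2f}, {high:.2f}]")
```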

The mundane truth is that much of what cannot be reproduced is down to the original studies either using crap statistics or being unlucky (and, in some cases, to the inability of the original authors to understand their own results). No cheating, no faking data, no outright lying.

Every paper is an idea – it says “hey look at this, this could be cool”. Not “This is The Truth.”
openscience  mertonian 
8 weeks ago by MicrowebOrg
Confessions of an Open Access Advocate – Leslie Chan | OCSDNET
!!

Self-criticism:
-- OA drives the surveillance and datafication/point-scoring culture of the universities
-- OA with its naively Mertonian opening-up worsens the situation for non-core groups (women etc.)

In my opinion, Open Access is just one small issue in a much larger debate. The conversation is mostly focused on research output and who has access to that research. One great thing that has come out of Open Access, though, is an exploration of alternative ways for communicating research, aside from a traditional, published journal article. That has been quite interesting.

On the other hand, I find that Open Science has become a much more useful narrative. Open Science challenges the entire research process to become more open: including the production of the research question, methodologies, through to data collection, publication and dissemination. In that way, it is easier to look at who is participating in these processes of knowledge production and what kind of power they have in a given context. It allows us to be more cognisant of how power is prevalent in systems of knowledge production, and allows us to think of ways to democratise these processes – to make them more collaborative and equitable.

Given your criticisms of Open Access, do you feel that Open Science could be headed in a similar direction?

Unfortunately, yes.

Although Open Science is a relatively new concept, we see, more often than not, that those with power in processes of knowledge production are able to take advantage of these types of discourses and use them to their advantage. We are seeing that the framing of competitiveness in knowledge production and knowledge-as-an-economic-engine is reiterated in Open Science narratives. For instance, it has become popular to hear people say “data is the new oil.” The idea is that data can be used to create knowledge that can be used for economic benefit. Of course, this is generally only true for those with the power to access and manipulate this data. Therefore, the idea of ‘extractive research’ has not really improved within discussions of open science. If anything, it has become more in line with a neoliberal agenda, in many ways.
openaccess  openscience  mikrobuch:add:universitaet  mikrobuch:add:open 
8 weeks ago by MicrowebOrg
About ScienceOpen
belongs to Alexander Grossmann et al.
Question: why should I trust them?
Who guarantees sustainability?
openscience 
july 2017 by MicrowebOrg
non-specialist summaries -- Making your science work for society - ScienceOpen Blog
We’ve already had over 170 great authors writing non-specialist summaries since making the announcement. By integrating this into our research engine, we are seeing those articles gaining a huge boost in popularity! These authors have also added extra keywords and thumbnails to their articles to make them more visible and discoverable on ScienceOpen.
openscience  plaindeutsch 
july 2017 by MicrowebOrg
Prof. Dr. Isabella Peters — Arbeitsgruppe Web Science
Research focus areas:
Social media and Web 2.0 (especially user-generated content)
Science 2.0
Scholarly communication on the social web
Altmetrics
Knowledge representation
Information retrieval

Dissertation "Folksonomies in Wissensrepräsentation und Information Retrieval" (2009)
25.4.2014, Antrittsvorlesung Unweaving the Web: Web Science an ZBW und CAU, CAU Kiel.
web:science  openscience 
july 2017 by MicrowebOrg
Open Science: So That Knowledge Becomes Free – Stifterverband / Wikimedia / VW Stiftung
Open Access, Open Data or Open Source: a number of key terms circulate around the topic of “free knowledge”. With their fellowship programme, Wikimedia, the Stifterverband and the VolkswagenStiftung want to give the topic of “Open Science” a powerful push in Germany.

Open Science means making the scientific process – from data collection to the publication of results – openly accessible, traceable and usable. Scientists can thus learn from the transparent methodological approach of other researchers. They can enrich their analyses with other researchers' data. Or they can use open teaching and learning materials that can be adapted to the target groups of their own teaching.

The goal of Open Science is to improve research results and to deploy research funding more efficiently. Open Science is thus an important element in safeguarding good scientific practice. In addition, openness and transparency are meant to improve the transfer of knowledge into society, business and politics.
openscience  mikrobuch:uni20:buch  stifterverband 
july 2017 by MicrowebOrg
ScienceOpen (Alexander Grossmann)
Nice & free: You can export citation information of single articles or a whole bunch of publications on ScienceOpen as EndNote, BibTeX, RIS.
openscience  mikrobuch:uni20:buch 
july 2017 by MicrowebOrg
Tim Berners-Lee – In the Shadow of the WWW – bild der wissenschaft
At the time, Berners-Lee was working at CERN, the European particle physics laboratory in Geneva, and had the idea of creating a communications network for physicists. "CERN is a wonderful organization," Tim Berners-Lee wrote in March 1989 in his first proposal. Probably nowhere else on earth could the World Wide Web have come into being. For physicists, the European institute on the Swiss-French border near Geneva is above all a gigantic microscope for the search for the smallest building blocks of matter – far too large and expensive for any single country to finance and operate.

Most researchers work in their home countries and come to Geneva only for a few months for their experiments. There, in turn, they are welded together into large international teams, often of 100 researchers or more, constantly tinkering with the measuring instruments or the computer programs. Yet of course everyone wants to be first with their discovery, even if it is only abstract basic research. More than 7,000 physicists in more than 120 countries are thus in constant contact with CERN.

In short: a world without hierarchies, in constant flux, driven by competition but also by the pressure to cooperate, with an almost chaotic coming and going, where the information in other people's heads and computers is crucial for everyone involved. Tim Berners-Lee quickly grasped how much this mini-world of physicists resembles the large, real world on our globe – in the tension between cooperation and competition, order and chaos, human and machine, constant change and the preservation of information. "CERN today has some of the problems the world will face in a few years," he wrote in his 1989 concept paper.

Berners-Lee did not even try to bring order to the information chaos; he used tools that are themselves chaotic – and thereby made the chaos manageable. His starting point was a program he had once written for himself years earlier, to store names, addresses and information about people he met or who were recommended to him. This is exactly how the World Wide Web works today: every word, every image element, every symbol can be a link to any other piece of information stored on the Web, no matter whether it is a single sentence or an entire library. From any point in the World Wide Web (WWW), it is possible – without asking the other side – to set a link to any other information resource.
web:history  cern  mertonian:norms  openscience 
july 2017 by MicrowebOrg
The Publication-System Disaster // Laborjournal online: Rethinking – Stephan Feller
the widely ignored truth that, barely twenty years after they leave active research, over 95 percent of scientists – together with their publications – are already completely forgotten.

Nevertheless, more and more researchers invest more and more time in polishing their current citation counts, inflating their h-indices (the Hirsch factor, which measures scientific “antler size”) and other supposed markers of esteem, and in accumulating the most varied forms of dubious “activity points” – as if they were immortals who could only ever be stopped by decapitation or a silver bullet to the heart.
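
For readers unfamiliar with the metric being mocked here: the h-index is simply the largest h such that a researcher has h papers with at least h citations each. A minimal sketch, with invented citation counts:

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# -> 3: the top three papers each have >= 3 citations,
#    but the fourth has fewer than 4
print(h_index([25, 8, 5, 3, 3, 1, 0]))
```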

Increasingly, scientific committees use these often flawed estimates as surrogates with which they try to predict who will be maximally productive in the future. In this way they then select candidates for massively underfunded positions at evaluation-rattled institutes – while often seeming to want to avoid, at any cost, having to actually read the applicants' publications thoroughly.

More and more “resource” articles are being published, in the hope that somebody, somewhere, someday will find some meaning in these towering piles of error-riddled chaos. Many GWAS analyses (genome-wide association studies) are a good example.

More and more authors crowd into publications with ever-growing portfolios of methods, until not a single one of them is able to fully grasp the entire content and actual significance of the articles that carry their own names.

At the same time, some articles try to impress with a 120-page supplement, turning the preparation of journal clubs into weeks-long torture sessions. Still other authors try to redefine the least publishable unit in order to maximize the frequency of their paper effusions (and with it impact points of whatever kind).

More than 30,000 scientific journals – among them many run by rather shady figures evidently headquartered in Spamalot Castle – pester us incessantly, bombarding us almost daily with junk mail begging for manuscript fodder. On top of that, in certain countries scientific creativity seems to manifest itself mainly in the invention of ever new forms of peer-review and editor fraud, along with various flavours of fraudulently obtained authorship.

Scientists spend endless hours as journal reviewers and editors; they then have to buy overpriced journal subscriptions, or pay a lot of money to download individual articles, sometimes even their own (!) – while apparently very greedy publishers rake in astonishing profits.

The scientific societies and funders, however, evidently do not want to be bothered with bringing scholarly publishing back into the hands of science – which might help to rein in counterproductive excesses in scholarly publishing (for example, the “review mania” used to boost impact factors), since these are driven largely by non-scientific, commercial interests. The scientific community urgently needs to publish fewer papers, not more! Nobody, and I mean REALLY NOBODY, has the time to read all the nonsense that is unleashed on humanity every single day, even within a single specialty.

These days, many of our universities are run like bad copies of corporations. Not least for this reason, administrative staff often hold permanent positions, while many scientists must constantly fight to get their fixed-term contracts extended one more time. That usually happens only if they generate enough overhead money – which in the end is used to fund even more administrative staff.
openscience  mikrobuch:uni20:buch 
july 2017 by MicrowebOrg
The Ed Techie | Open Science Laboratory 2017 Martin Weller
Exams etc., 2017:
I’d like to contrast this ed tech rapture approach with a more pragmatic one. I am a big fan of the Open Science Laboratory at the OU. They do really neat things like the virtual microscope, virtual field trips and live lab demonstrations with interactive elements. All of these really help students, and they’ve done enough research to find what the benefits are, how they can develop them, and what combination with other media works best. They are, in short, useful. No-one pitches ed tech like this as an end of education as we know it. They are focused on students’ needs, have evidence of impact, and are in use now without reference to an imagined future.
openuniversity  openscience  martinweller  ou 
june 2017 by MicrowebOrg
Swipe right on the new ‘Tinder for preprints’ app | Science | AAAS
If you’re tired of swiping left and right to approve or reject the faces of other people, try something else: rating scientific papers. A web application inspired by the dating app Tinder lets you make snap judgments about preprints—papers published online before peer review—simply by swiping left, right, up, or down.

Papr brands itself as “Tinder for preprints” and is almost as superficial as the matchmaker: For now, you only get to see abstracts, not the full papers, and you have to rate them in one of four categories: “exciting and probable,” “exciting and questionable,” “boring and probable,” or “boring and questionable.” (On desktop computers, you don’t swipe but drag the abstract.) The endless stream of abstracts comes from the preprint server bioRxiv.

Papr co-creator Jeff Leek, a biostatistician at the Johns Hopkins Bloomberg School of Public Health in Baltimore, Maryland, released an earlier version of Papr late last year but only started publicizing the app on social media earlier this month, after his colleagues added a few more features, including a recommendation engine that suggests studies based on your preferences, an option to download your ratings along with links to the full preprints on bioRxiv, and suggestions for Twitter users with similar tastes to yours.
openaccess  openscience 
june 2017 by MicrowebOrg
Plain-language Summaries of Research: Something for everyone | eLife
Who reads plain-language summaries?
When the British Psychological Society (BPS) started its Research Digest email newsletter back in 2003, its aim was to summarize new psychology research for 16–18 year old school students. "However, we quickly came to realize that we were reaching a much wider audience," says its editor, Christian Jarrett. "For many years now we've been writing for the general public, as well as students, researchers and journalists."

Something similar has happened at Astrobites, a website that publishes plain-language summaries of astrophysics papers on the ArXiv preprint server. At first the site was aimed primarily at undergraduate students studying astrophysics but, again, it has attracted other readers. "Our audience turns out to be approximately equal parts undergraduates, graduate students, professional researchers, and interested members of the broader public," says Nathan Sanders, who helped to launch the site in 2010, when he was a graduate student at Harvard University (Sanders, 2013).

Some journals and other organizations employ writers to produce plain-language summaries. To help their writers, Annals of the Rheumatic Diseases and eLife both ask the authors of research papers to answer a set of questions about their work in plain language. At eLife we have found that involving the original authors in this way can save time later because they tend to make fewer changes when they check the draft summary.
plainlanguage  openscience 
june 2017 by MicrowebOrg
Plain-language Summaries of Research: Writing for different readers | eLife
Peter Rodgers
Feature Article, Mar 15, 2017
Cite as: eLife 2017;6:e25408 doi: 10.7554/eLife.25408

We have to make an effort to communicate with readers outside the research community; we have to speak to pupils and teachers, to healthcare professionals and patients (and their families), to anyone and everyone who is interested in science and research. And we have to speak to them in their language, in the language of the news media and Wikipedia. We have to speak to them in plain language, not in the formal and formulaic prose found in most research papers; and we have to use verbs, not nouns, and to avoid words like characterization and facilitation that – while much loved and used by scientists – can stop a sentence or article dead in its tracks.

As explained in "An inside guide to eLife digests", we have been publishing plain-language summaries of eLife papers, called digests, since the journal was launched in 2012 (King et al., 2017). These summaries are typically between about 250 and 400 words long and appear immediately below the abstract (and at the top of the second page in the PDF version).

The aim of the digest is threefold – to describe the background to the paper, to summarize the main findings, and to briefly discuss what might happen next – in language that an interested or motivated reader can understand. While eLife digests are primarily intended for readers outside the research community, a recent survey suggested that they are widely read by other researchers. And plain-language summaries can also be useful to authors when, for example, they need to explain their work in non-technical terms when applying for a fellowship or faculty position.

The area of science with perhaps the greatest need for clear and accurate information about current research is medical research and, as described in "The value of a healthy relationship", medical charities and patient groups are very active in this arena (Kuehn, 2017). Some charities require researchers to include plain-language summaries with applications for funding, and others include patient representatives in the panels that evaluate funding applications.

There is nothing new in expecting academic researchers to communicate with the public. ((Put too briefly: what is actually demanded is the capacity for DISCOURSE.))

There is actually nothing new in expecting academic researchers to communicate with the public: the founding document of the American Association of University Professors, published in 1915, states that one of the roles of an academic is to "impart the results of their own and their fellow-specialist's investigations and reflection, both to students and to the general public" (Sugimoto, 2016). And the challenge of communicating complex subject matter to a general audience is not unique to science. Last year, for example, Jonathan Fullwood of the Bank of England compared the readability of written outputs from five different sources: he found that reports and speeches from his employer and other banks were the least readable, and that political speeches were the most readable. The reason, he wrote, is that "those writing in the financial industry tend to use long words. They put those words in long sentences. And those sentences in long paragraphs" (Fullwood, 2016).
mikrobuch:open  openuniversity  openscience  plainlanguage 
june 2017 by MicrowebOrg
Open Science Manifesto | OCSDNET
Network-based collaboration is allowing us to re-imagine scientific research as a more open and collaborative process. This is inspiring many open science initiatives:

Open access is encouraging scientists to share their data and research online.

The open-source hardware movement is promoting the independent design of technology.

And citizen science projects are inviting the general public to help scientists collect large amounts of data.

However, this model is not making science a more inclusive and representative practice.

Many scientists around the world continue to be underrepresented and excluded from scientific research

New technologies continue to exclude those with limited digital rights.

And citizens rarely get to shape the research agenda.

This model of open science is not challenging the core values of science. Instead it is reproducing and amplifying global inequalities in scientific research, defeating its purpose of making science more open.

We need to ask ourselves which values have been absent from existing discussions.

To whom does knowledge belong?

Whose voice counts in science?

And how can we increase people’s participation and agency in scientific production?

At OCSDNet, we engaged in a participatory consultation with scientists, development practitioners and activists from 26 countries in Latin America, Africa, the Middle East and Asia to understand the values at the core of open science in development.

What we learned is that there is not one right way to do open science. It requires constant negotiation and reflection, and the process will always differ by context.

But we also found a set of seven values and principles at the core of their vision for a more inclusive open science in development.

At OCSDNet, we propose that Open and Collaborative Science…

Principle 1: Enables a knowledge commons where every individual has the means to decide how their knowledge is governed and managed to address their needs

Principle 2: It recognizes cognitive justice, the need for diverse understandings of knowledge making to co-exist in scientific production

Principle 3: It practices situated openness by addressing the ways in which context, power and inequality condition scientific research

Principle 4: It advocates for every individual’s right to research and enables different forms of participation at all stages of the research process.

Principle 5: It fosters equitable collaboration between scientists and social actors and cultivates co-creation and social innovation in society

Principle 6: It incentivizes inclusive infrastructures that empower people of all abilities to make and use accessible open-source technologies.

And finally, open and collaborative science:

Principle 7: strives to use knowledge as a pathway to sustainable development, equipping every individual to improve the well-being of our society and planet
openscience  mikrobuch:open 
june 2017 by MicrowebOrg
Re-envisioning a future in scholarly communication
!! A complete blueprint.
Decentralized, blockchain-like; Mertonian ideals

promoting behavior that is antithetical to the norms of science (Merton 1942; Mitroff 1974; Anderson, Martinson, and De Vries 2007)
I thoroughly believe that the Mertonian norms in science (Merton 1942) align perfectly with transparency in science (see also Hartgerink and Wicherts 2016) and that a scholarly communications system can be built on this framework in a sustainable manner.

At OpenCon2016, Brewster Kahle mentioned that the scholarly system should be “locked open”. After this talk I spent much time thinking about how this could be done. Using a decentralized and distributed system, as I tried to conceptualize throughout this article, is my initial attempt at realizing a “locked open” system that benefits not just the scholarly community, but also those that aim to generate value from these outputs. By shifting from a knowledge commodification to a commodification of how that knowledge is consumed, free access and reuse becomes beneficial to all parties. This is key to create a sustainable eco-system where scholars and companies can cooperate instead of compete, as we currently do.
openscience  mikrobuch:uni20:buch  blockchain 
may 2017 by MicrowebOrg
Open Science / Peter Broks [Blog] – Literacy of the Present
A history of “science” in the popular/public sphere.
QUOTE (diss., 1988):

"The public understanding of science should not be about more information, like shouting English at foreigners. What is needed is not greater understanding of science as a product, but greater involvement in science as process. Scientific practice needs to be democratized. The popularization of science should not only share knowledge, but the power that goes with it."

Broks, P., 1988, Science and the popular press: a cultural analysis of British family magazines 1890-1914. PhD thesis, University of Lancaster.
peterbroks  openscience  kulturtechnik  culturalstudies  mikrobuch:quote 
may 2017 by MicrowebOrg
The end of Academia.edu: how business takes over, again | diggit magazine
Academia.edu used to be a refreshing and very useful platform for academics and students all over the world. That has ended with academia.edu introducing the ‘paid search’.

When Academia.edu was launched, it was a welcome change in the academic publishing world. That world was and is dominated by the so-called ‘important journal’. The business model of these journals rested on the support of the national governments pushing academic researchers to publish in those journals if they want an academic career. That combination was a very profitable business model for those journals: academics are paid by the governments (and thus by the tax-payers), they don’t get paid by the journals for their publication. The reviewers and the editors of these journals, again academics, are not paid for their job either. Moreover, when you submit your research to such a journal, you transfer your copyright to the publisher – for free. After which, the publisher asks a very high price for that ‘quality product’. This is a business model based on free labor and making research unavailable for students or researchers who don’t have a subscription.

The academic production of knowledge should not be used to make profit, but to improve society

This publishing practice was the direct cause of the success of Academia.edu. That social network site for academics allowed academics to build a network, to upload their papers, books and projects, and to search for new content. All for free. Over the last five years, Academia has been a useful educational tool. Lecturers suggest that their students make a profile so that they have access to an enormous amount of papers and other content. It showed the democratic and educational potential of a social network site.

The problem with Academia.edu is that it is a commercial enterprise. It is not created to serve the common good – diffusing knowledge. It is also not created to serve democratic ideals, but to make money. And like almost all such ‘user-generated content sites’ they start as dot.communism but almost overnight turn into dot.capitalism, to paraphrase Van Dijk. The first signs of that shift in the case of Academia.edu were visible when they introduced ‘the premium account’ saying: ‘Academia Premium is for people who want powerful extra features on Academia.’

The lessons to be drawn from this, are the same ones that Siva Vaidhyanathan listed when talking about the Google Books projects. The academic production of knowledge should not be used to make profit, but to improve society. Academic knowledge is, or at least should be a common. The fact that academic knowledge is now part of the ‘for profit’ business can only be understood as the failing of the state and the dominance of neoliberalism. The market destroys academia and the only way to change that is to set up our own platforms. Platforms that only have one goal: to give that knowledge back to society. Fortunately, in a way, this policy shift in Academia.edu now opens a space for new platforms offering genuinely open access for a community of scholars around the world, craving to read and discuss each others' findings, but increasingly constrained by insane paywalls.
openscience 
april 2017 by MicrowebOrg
Fostering open science to research using a taxonomy and an eLearning portal
The term "Open Science" is recently widely used, but it is still unclear to many research stakeholders - funders, policy makers, researchers, administrators, librarians and repository managers - how Open Science can be achieved. FOSTER (Facilitate Open Science Training for European Research) is a European Commission funded project, which is developing an e-learning portal to support the training of a wide range of stakeholders in Open Science and related areas. In 2014 the FOSTER project co-funded 28 training activities in Open Science, which include more than 110 events, while in 2015 the project has supported 24 community training events in 18 countries. In this paper, we describe the FOSTER approach in structuring the Open Science domain for educational purposes, present the functionality of the FOSTER training portal and discuss its use and potential for training the key stakeholders using self-learning and blended-learning methods.
openscience 
april 2017 by MicrowebOrg
What is open peer review? A systematic review - F1000Research
The platform itself, with a SUBMIT button.

F1000Research: an Open Science publishing platform
-- Offers immediate publication & transparent refereeing
-- Avoids editorial bias
-- Ensures inclusion of all source data (where relevant)
openaccess  openscience  openuniversity 
april 2017 by MicrowebOrg
The Seven Deadly Sins of Psychology: A Manifesto for Reforming the Culture of Scientific Practice – Chris Chambers (Kindle edition, Amazon.de)
In this unflinchingly candid manifesto, Chris Chambers draws on his own experiences as a working scientist to reveal a dark side to psychology that few of us ever see. Using the seven deadly sins as a metaphor, he shows how practitioners are vulnerable to powerful biases that undercut the scientific method, how they routinely torture data until it produces outcomes that can be published in prestigious journals, and how studies are much less reliable than advertised. He reveals how a culture of secrecy denies the public and other researchers access to the results of psychology experiments, how fraudulent academics can operate with impunity, and how an obsession with bean counting creates perverse incentives for academics. Left unchecked, these problems threaten the very future of psychology as a science—but help is here.

Outlining a core set of best practices that can be applied across the sciences, Chambers demonstrates how all these sins can be corrected by embracing open science, an emerging philosophy that seeks to make research and its outcomes as transparent as possible.
openscience 
april 2017 by MicrowebOrg
How to start an Open Science revolution! An interview with patient advocate, Graham Steel. – ScienceOpen Blog
When did you first hear about open access/data/science? What were your initial thoughts?

In order, I first heard about open access late 2006, open science the following year and then open data. My initial thoughts were that all these entities were much needed and refreshing alternatives to all that I had seen or read about such topics up until then, i.e., closed access, prohibitive paywalls, “data not shown” etc.

You’re what some people call a ‘Patient Advocate’ – what is that, and what’s the story there?

The terms Patient Advocate and Patient Advocacy broadly speaking can mean a number of things. By definition, “Patient advocacy is an area of lay specialization in health care concerned with advocacy for patients, survivors, and carers”. For myself personally, this began in 2001 and mainly concerned bereaved relatives and then patients and their family members. See here for further details.

You relentlessly campaign for various aspects of open science – what drives you in this?

By way of background, I would say with certainty that during the period of around 2008 – 2011, the (sadly now deceased) social media aggregator site Friendfeed was the space in which the foundations for a lot of my current thinking were set out. Prior to that, having already been primed with open access and open data, that’s pretty much where open science really took off in earnest. Science, and indeed research, in the open is without question the way forward for all.

What do you think the biggest impediments to open research are? How can we collectively combat or overcome them?

First and foremost has to be Journal Impact Factor (JIF). This is despite an abundance of evidence which over the years has shown that this is a highly flawed metric. I would encourage academics to make enquiries within their Institutions to take a pledge and sign the San Francisco Declaration on Research Assessment, DORA. Secondly, as mentioned earlier, embrace the fact that it takes very little effort these days to get a preprint of your work archived on the web.

I would encourage academics to make enquiries within their Institutions to take a pledge and sign the San Francisco Declaration on Research Assessment, DORA

What tools or platforms would you recommend to researchers looking to get into open science?

There are so many these days, where does one start? The best resource out there at present (I am not alone in this view) is Innovations in Scholarly Communication (now available in seven languages) created by Bianca Kramer and Jeroen Bosman. Also see https://innoscholcomm.silk.co/ which is super awesome.

Where do you see the future of scholarly communication? What steps are needed to get there? Whose responsibility do you think it is to lead this change?

I don’t have the answers to those myself. As of the time of writing, I would highly recommend Open Science Framework. I am moving more and more in the direction of advocating preprints for any paper with optionally, publication in journals later.
openaccess  openscience 
march 2017 by MicrowebOrg
[no title]
Those in the humanities often champion collaboration and the open exchange of ideas, but you wouldn't necessarily know that when you look at the venues they use to share their work. Hybrid Pedagogy seeks to challenge not only how humanists teach, but also how they publish. The six-year-old online journal pursues humanistic values by embracing an editorial process well-established in the sciences: open peer review.

Hybrid Pedagogy’s editorial process is uncommonly inclusive. Whereas many academic journals prize selectivity, Hybrid Pedagogy accepts the vast majority of submissions—about 70 percent, according to the current editor—with the expectation that authors and reviewers work hand-in-glove to revise essays. The resulting articles are short (by academic standards), visually engaging, widely circulated, and more personal and political than those in traditional academic publications.

While print journals embrace open peer review—as is the case with STEM journals such as Atmospheric Chemistry & Physics and PeerJ—the pairing of open peer review with web technology enables new editorial approaches. In a Views article for Inside Higher Ed, Alex Mueller, associate professor of English at the University of Massachusetts in Boston, wrote that combined with open access, open peer review can support new forms of scholarly inquiry.

Such methods have long proliferated in the sciences. For years, physicists have used arxiv.org, the physics pre-print repository, to perform pre-publication review, said Cheryl Ball, editor of the web-text journal Kairos: A Journal of Rhetoric, Technology, and Pedagogy and an associate professor at West Virginia University.
openscience  open:peerreview  bnbuch:openscience 
march 2017 by MicrowebOrg
Experiment – Open Doctoral Project – Submitted Version – Submitted
The thesis was guided by the Open Science demand that comprehensive access to the entire process of scientific knowledge production – including all data and information generated during the creation, evaluation and communication of the scientific findings – be available at all times. The complete text, the literature used, and the results of the empirical work were published promptly and at every stage on the website http://offene-doktorarbeit.de.
openscience  christianheise  offenedoktorarbeit 
march 2017 by MicrowebOrg
About – Open Science Commons
The Open Science Commons (OSC) is a new approach to sharing and governing advanced digital services, scientific instruments, data, knowledge and expertise that enables researchers to collaborate more easily and be more productive.

Within the OSC, researchers from all disciplines will have easy, integrated and open access to the advanced digital services, scientific instruments, data, knowledge and expertise they need to collaborate and achieve excellence in science, research and innovation.

Using Open Science as a guideline and applying the Commons as a management principle will bring numerous benefits for the research community, and society at large.

More about the benefits of the Open Science Commons to research

The Open Science Commons builds on the idea of an e-Infrastructure Commons, first proposed in a White Paper published in 2013 by the European e-Infrastructure Reflection Group (e-IRG).

The Open Science Commons relies on four pillars, representing a wide range of groups, providers and community types:

Data. The data that is the subject matter for research. It should be dealt with according to the principles of open access and open science, while maintaining trust and privacy for researchers.
e-Infrastructures. The technology and technical services supporting researchers, building towards integrated services and interoperable infrastructures across Europe and the world.
Scientific instruments. The equipment and collaborations which generate scientific data, from small-scale lab machines to global collaborations around massive facilities.
Knowledge. The human networks, understanding and material capturing skills and experience required to carry out open science using the three other pillars.
openscience 
february 2017 by MicrowebOrg
Our Mission | ORCID
A unique identifier for researchers

ORCID’s vision is a world where all who participate in research, scholarship, and innovation are uniquely identified and connected to their contributions across disciplines, borders, and time.
Our mission

ORCID provides an identifier for individuals to use with their name as they engage in research, scholarship, and innovation activities. We provide open tools that enable transparent and trustworthy connections between researchers, their contributions, and affiliations. We provide this service to help people find information and to simplify reporting and analysis.
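
An ORCID iD is a 16-character identifier (for example 0000-0002-1825-0097, the sample iD from ORCID's own documentation) whose last character is a check digit computed with the ISO 7064 MOD 11-2 scheme that ORCID documents publicly. A minimal validation sketch of that scheme:

```python
def orcid_check_char(base_digits: str) -> str:
    """ISO 7064 MOD 11-2 check character for the first 15 digits of an ORCID iD."""
    total = 0
    for ch in base_digits:
        total = (total + int(ch)) * 2
    result = (12 - total % 11) % 11
    return "X" if result == 10 else str(result)

# Sample iD from ORCID's documentation; the computed check char must match
orcid = "0000-0002-1825-0097"
digits = orcid.replace("-", "")
print(orcid_check_char(digits[:15]) == digits[15])  # True
```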
openscience 
february 2017 by MicrowebOrg
Project description • OJS-de.net
The goal of the project is to facilitate, expand and secure for the long term the electronic publication of scholarly journals at German universities on the basis of OJS. The project covers software adaptation, a needs analysis, the building of an OJS network, and increasing the visibility of OJS journals.
openscience  openjournal 
february 2017 by MicrowebOrg
Humanities Commons – Open access, open source, open to all
Yes, members can create multiple WP sites (for conferences, journals, courses, etc.) on HC. We have plugins for SlideShare, Soundcloud, etc.

Welcome to Humanities Commons, the sharing and collaboration network for people working in and around the humanities. Discover the latest open-access scholarship and teaching materials, make interdisciplinary connections, build a WordPress Web site, and increase the impact of your work by sharing it in the repository.

Not just articles and monographs: Upload your course materials, white papers, conference papers, code, digital projects—these can have an impact too!
openscience  digitalhumanities  opensyllabus 
february 2017 by MicrowebOrg
Self Journals, Open Peer-review!
Michaël Bon, Michael Taylor, Gary S. McDowell
Novel processes and metrics for a scientific evaluation rooted in the principles of science
Version 1 Released on 26 January 2017 under Creative Commons Attribution 4.0 International License

We propose an implementation of our evaluation system with the platform “the Self-Journals of Science” (www.sjscience.org)

In this system of value creation, scientific recognition is artificially turned into a resource of predetermined scarcity for which scholars have to compete. In one camp, members of the scientific community must compete for limited space in a few “top” journals, which can impede the natural unrestricted progress of science by disincentivizing open research and collaboration. In the other camp, a low number of editors must also contend with each other for exclusive content to increase the reputation of their journal, a process that can have strong negative effects on scientific output and on the research enterprise as a whole. Although many scholars wear both hats – being authors and journal editors at the same time – here we do not identify the problem in individual agents but rather in the roles themselves and the power relationship between them. Thus, we argue that it is not only the kind of value that is promoted by the current system that is questionable (journal prestige and ‘impact’, as in impact factor): more importantly, it is the way the system produces value and how its implicit asymmetric power structure is detrimental to scientific progress.

In the current publishing environment, since scientists are competing for the same limited resources, relations between peers can become inherently conflictive. For instance, scientists working on the same topic may tend to avoid each other for as long as possible so as not to be scooped by a competitor, whereas collectively it is likely that they would have benefited most from mutual interaction during the early research stages. The most worrying consequence of peers' diverging interests is that debating becomes socially difficult –if not impossible– in the context of a journal. The rejection and downgrade of an article to a lower-ranked journal can be a direct consequence of a scientific disagreement that few people would openly take responsibility for, to avoid reprisals. While the reliability of science comes from its verifiability, today it is being validated by a process which lacks this very property. Journal's peer-review is not a community-wide debate but a gatekeeping process tied to the local policy of an editorial board

While peer-trial still dominates the mainstream, there are strong signs that the scientific community is actively engaged in a more continuous process of validation. Browsing websites such as PubPeer or Publons (where “post-print peer-review” is possible) makes it clear that, although articles are improved with respect to initial submission, the discussion process continues long after publication and that the evolution of articles is a more dynamic construct [14]. This is at odds with the world of undisclosed email dialogues between authors and editors, and reviewers and editors during the peer-trial process.

In this section, we present a definition of scientific value and describe the open and community-wide processes required to capture it. These processes maintain symmetry in the creation of scientific value and fulfil what we consider the minimal expectations from any desirable alternative evaluation system, which are:

to promote scientific quality.
to provide incentives to authors, reviewers and evaluators.
to promote academic collaboration instead of competition.
to be able to develop in parallel to current journal publication practices (as long as these remain essential for funding and career advancement).
to propose article-level metrics that are easy to calculate and interpret.
to be verifiable and hard to game.

A prototype of an evaluation system driven by these processes is implemented in “the Self-Journals of Science” (SJS, www.sjscience.org): an open, free and multidisciplinary platform that empowers scientists to achieve the creation of scientific value. SJS is a horizontal environment for scientific assessment and communication and is technically governed by an international organisation of volunteer research scholars whose membership is free and open to the entire scientific community.

We have defined scientific peer-review as the community-wide debate through which scientists aim to agree on the validity of a scientific item.

In our system, peer-review is an open and horizontal (i.e. a non-authoritative and unmediated) debate between peers where “open” means transparent (i.e., signed), open access (i.e., reviewer assessments are made public), non-exclusive (i.e., open to all scholars), and open in time (i.e. immediate but also continuous). This brings a new ethic to publishing [31]: the goal of peer-review is not to provide a one-time certification expressed in the form of a binary decision of accept or reject as per the traditional mode of publishing, rather it is to scientifically debate the validity of an article with the aim of reaching an observable and stable degree of consensus. Here, reviews are no longer authoritative mandates to revise an article, but elements of a debate where peers are also equals. The influence of a review over an article is based on its relevance or its ability to rally collective opinion, and on an open context where authors cannot afford to let relevant criticism go unanswered.

The validity of an article is captured by a transparent and community-wide vote between two options: “this article has reached scientific standards” or “this article still needs revisions”.

Self-Journals. In our alternative evaluation system we introduce the concept of self-journals as a way for scientists to properly express their judgement regarding an article's importance for a specific field. A self-journal is a novel means of personal scientific communication; it can be thought of as a scholarly journal attached to each individual scientist that works on the curation of any scientific item available on the Web via hyperlinks (and not on appropriation of articles following a submission process).

A self-journal is released in structured issues, which are collections of articles around a certain topic. Every issue has its own title and an editorial providing an introduction for the community, and must contain a minimum number of articles (in our implementation, we set this minimum to 4). The curator has the possibility to provide personal comments on each article that has been curated in the issue (for concrete examples, please check the first issue of the self-journal of Sanli Faez, Konrad Hinsen or Michaël Bon). The consistency of the selection of articles and the relevance of the personal comments determine the scientific added value of each self-journal issue. Every scientist can curate their own self-journal, through which they can release as many issues on as many topics as they please. Curators can take advantage of self-journals to review a field, present a promising way to develop it, offer a comprehensive collection of their own research, host the proceedings of a workshop or a journal club, or popularize scientific ideas and discoveries, etc. A self-journal reflects the scientific vision of its curator and bears his or her signature. Interested readers can freely subscribe to a self-journal and get notified whenever a new issue is released.

An ecosystem of self-journals offers a way to quantify the importance of an article, primarily by the number of its curators.
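
As a rough illustration, this curator-count metric could be computed in a few lines of Python (a minimal sketch, not SJS's actual code; the article URLs and curator names are made up):

    # Sketch: approximate an article's importance by the number of distinct
    # self-journals (curators) that have curated it.
    from collections import defaultdict

    # Hypothetical (curator, curated article URL) pairs taken from
    # released self-journal issues.
    curation_events = [
        ("curator_a", "https://example.org/article/1"),
        ("curator_b", "https://example.org/article/1"),
        ("curator_b", "https://example.org/article/2"),
    ]

    curators_per_article = defaultdict(set)
    for curator, article in curation_events:
        curators_per_article[article].add(curator)

    for article, curators in sorted(curators_per_article.items()):
        print(article, len(curators))  # article-level metric: distinct curators

Counting distinct curators rather than raw curation events keeps the metric easy to interpret and harder to game through repeated self-curation.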

Incentives in the absence of official recognition by institutions and funders. Self-journals have their own rationale. Firstly, they are a means of personal scientific communication that allows curators to elaborate an individual vision of science at whatever depth they see fit. A self-journal therefore provides a great scientific service to its readers by providing a level of consistency in the interpretation and analysis of scientific output. In return, this service benefits curators, who increase their visibility and influence over what is disseminated to the community. Self-journals give new freedom and scope to the editing process, since curation, as proposed here, applies to any research work with an Internet reference. In other words, a mechanism is provided that allows scientists to fully express an aspect of their individual worth that is absent from the current system, and to build a reputation accordingly.

A response is also provided to the problem of the decreasing visibility of authors, articles and reviewers as the number of scientists and scientific works on the Web grows. Each issue of a self-journal acts as a pole of attraction that is likely to have a minimum audience: the authors whose articles have been curated can be notified about what is being said about their work, and may want to follow the curator. Moreover, on a platform like SJS, where the ecosystem of self-journals is well integrated, interest in a particular article can guide readers to self-journal issues where it has been uniquely commented on and contextualized in relation to other articles.

We wish to emphasize that the value of a particular self-journal issue lies not so much in the intrinsic value of the selected articles as in the specific comments and collective perspective given to them.
openscience  selfjournal 
february 2017 by MicrowebOrg
bjoern.brembs.blog » Open Science: Too much talk, too little action
!! I got involved in Open Science more than 10 years ago. Trying to document the point when it all started for me, I found posts about funding all over my blog, but the first blog posts on publishing were from 2005/2006, the announcement of me joining the editorial board of newly founded PLoS ONE late 2006 and my first post on the impact factor in 2007. That year also saw my first post on how our funding and publishing system may contribute to scientific misconduct.

In an interview on the occasion of PLoS ONE’s ten-year anniversary, PLoS mentioned that they thought the publishing landscape had changed a lot in these ten years. I replied that, looking back ten years, not a whole lot had actually changed:

Publishing is still dominated by the main publishers which keep increasing their profit margins, sucking the public teat dry
Most of our work is still behind paywalls
You won’t get a job unless you publish in high-ranking journals.
Higher ranking journals still publish less reliable science, contributing to potential replication issues
The increase in the number of journals is still exponential
Libraries are still told by their faculty that subscriptions are important
The digital functionality of our literature is still laughable
There are no institutional solutions to sustainably archive and make accessible our narratives other than text, or our code or our data

The only difference in the last few years really lies in the fraction of available articles, but that remains a small minority, less than 30% total.

So the work that still needs to be done is exactly the same as it was at the time Stevan Harnad published his “Subversive Proposal”, 23 years ago: getting rid of paywalls. This goal won't be reached until all institutions have stopped renewing their subscriptions. As I don't know of a single institution without any subscriptions, that task remains just as big now as it was 23 years ago. Noticeable progress has been made only at the margins and, potentially, in people's heads. Indeed, by now only a few scholars haven't heard of “Open Access”, yet apparently without grasping the issues, as my librarian colleagues keep reminding me that their faculty believe open access has already been achieved because they can access everything from the computer in their institute.

There can be no dispute that a lot more people are now talking about these issues. Given perhaps another 23 or 50 years, there may even be some tangible effects down the road – as long as one assumes some sort of exponential curve kicking in at some point fairly soon. It sure feels as if such a curve may be about to bend upwards. With the number of Open Science events growing, the invitations to talk have multiplied recently.

open text, data, code:

We’ve already started by making some of our experiments publish their raw data automatically by default. This will be expanded to cover as many of our experiments as technically feasible. To this end, we have started to work with our library to mirror the scientific data folders of our hard drives onto the library and to provide each project with a persistent identifier whenever we evaluate and visualize the data. We will also implement a copy of our GitHub repository as well as our Sourceforge code in our library, such that all of our code will be archived and accessible right here, but can be pushed to whatever new technology arises for code-sharing and development. Ideally, we’ll find a way to automatically upload all our manuscripts to our publication server with whatever authoring system we are going to choose (we are testing several of them right now). Once all three projects are concluded, all our text, data and code will not only be open by default, it will also be archived, backed up and citable at the point of origin with a public institution that I hope should be likely to survive any corporation.
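
As a sketch of what such archiving-by-default could look like (the repository endpoint, token and response field below are hypothetical placeholders, not a real institutional API):

    # Hedged sketch: checksum a project's raw-data folder and register it
    # with an institutional repository that mints a persistent identifier.
    # REPO_URL, TOKEN and the "identifier" response field are assumptions.
    import hashlib
    import pathlib

    import requests

    REPO_URL = "https://repository.example-university.edu/api/deposits"  # hypothetical
    TOKEN = "..."  # placeholder institutional API token

    def archive_folder(folder: str) -> str:
        manifest = []
        for f in sorted(pathlib.Path(folder).rglob("*")):
            if f.is_file():
                digest = hashlib.sha256(f.read_bytes()).hexdigest()
                manifest.append({"path": str(f), "sha256": digest})
        r = requests.post(
            REPO_URL,
            headers={"Authorization": "Bearer " + TOKEN},
            json={"files": manifest},
        )
        r.raise_for_status()
        return r.json()["identifier"]  # assumed to hold the minted identifier

    # archive_folder("data/2017-01-experiment")  # hypothetical project folder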
openscience 
february 2017 by MicrowebOrg
Zenodo - Research. Shared.
Zenodo in a nutshell

Research. Shared. — all research outputs from across all fields of research are welcome! Sciences and Humanities, really!
Citeable. Discoverable. — uploads get a Digital Object Identifier (DOI) to make them easily and uniquely citeable.
Communities — create and curate your own community for a workshop, project, department, journal, into which you can accept or reject uploads. Your own complete digital repository!
Funding — identify grants, integrated in reporting lines for research funded by the European Commission via OpenAIRE.
Flexible licensing — because not everything is under Creative Commons.
Safe — your research output is stored safely for the future in the same cloud infrastructure as CERN's own LHC research data.

all research outputs from across all fields of research are welcome! Zenodo accepts any file format as well as both positive and negative results. We choose to promote peer-reviewed openly accessible research, and we curate the uploads posted on the front-page.
Citeable. Discoverable. — be found!

Zenodo assigns all publicly available uploads a Digital Object Identifier (DOI) to make the upload easily and uniquely citeable. Zenodo further supports harvesting of all content via the OAI-PMH protocol.
Community Collections — create your own repository

Zenodo allows you to create your own collection and accept or reject uploads submitted to it. Creating a space for your next workshop or project has never been easier. Plus, everything is citeable and discoverable!
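
For illustration, a scripted deposit following Zenodo's documented REST workflow (create a deposition, upload into its file bucket, attach metadata, publish) might look roughly like this; the token and file name are placeholders, and exact field names should be checked against the current API docs:

    # Sketch of a Zenodo deposit via its REST API; hedged, not official code.
    import requests

    BASE = "https://zenodo.org/api"
    params = {"access_token": "YOUR_TOKEN"}  # placeholder

    # 1. Create an empty deposition.
    dep = requests.post(BASE + "/deposit/depositions", params=params, json={}).json()

    # 2. Upload a file into the deposition's bucket.
    with open("results.csv", "rb") as fh:
        requests.put(dep["links"]["bucket"] + "/results.csv", data=fh, params=params)

    # 3. Attach minimal metadata.
    meta = {"metadata": {"title": "Example dataset",
                         "upload_type": "dataset",
                         "description": "Raw data for an example analysis.",
                         "creators": [{"name": "Doe, Jane"}]}}
    requests.put(BASE + "/deposit/depositions/%s" % dep["id"], params=params, json=meta)

    # 4. Publish: Zenodo mints the DOI at this step.
    pub = requests.post(BASE + "/deposit/depositions/%s/actions/publish" % dep["id"],
                        params=params).json()
    print(pub["doi"])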

twitter 1/2017
Thomas Robitaille @astrofrog
Just uploaded 33Gb of data to @ZENODO_ORG in 20 minutes. Mind blown!
openscience 
january 2017 by MicrowebOrg
Der ‚goldene Weg‘ zu Open Science – scilog
What is unique about OLH is its funding model. In many conversations with libraries in 2013 and 2014 we realised that many were willing to help us set up a different mode of publishing, one that is not profit-driven and is more sustainable for humanities disciplines than the APC model. So we introduced the “Library Partnership Subsidy” (LPS). Instead of charging libraries through a subscription model, the institutions supporting us pay into a cost pool, from which we finance the infrastructure for our publishing platform and cover production costs such as copy-editing, typesetting, digital archiving, etc. When we launched the system in September 2015, almost 100 libraries from the USA, the UK and Europe had already pledged their support.

Originally we wanted to set up a so-called megajournal, publishing a large number of articles from all humanities disciplines. At the same time, a series of different overlay journals was to be built on top of it, allowing readers to organise the published material within individual research fields. Although the megajournal still exists, it is now only one of a whole range of journals on our platform.

In our ongoing conversations with a number of academic editors it became clear to us that humanities scholars do not want to give up their attachment to a particular journal or brand, nor to the research communities those journals have built up over the years. So if we could not convince the majority of researchers to give up this attachment and strike out on an entirely new path with the OLH megajournal, we could perhaps persuade them to join the OLH platform indirectly via their journals – that is, to move whole communities, rather than individual researchers, towards Open Access.

We therefore made it possible for journals to join OLH and enjoy the advantages of our technological innovations and our APC-free model without having to give up their name or their editorial independence.

At the moment we are working on establishing a twice-yearly submission process for journals. Since the launch of our platform in September 2015, many of our colleagues appear to want to push this project forward much faster than we had originally assumed. There is a very dynamic movement in support of the OLH model. And we are currently working with partners in Europe, especially in the Netherlands, who are very interested in setting up open libraries for other fields, such as mathematics or engineering.

Several studies show that the basic principles of open-access publishing enjoy overwhelming support, especially in the humanities. When it comes to practice, however, researchers in the humanities are more hesitant than those in other disciplines.

One of the main motivations for founding OLH was the realisation that academic hierarchies are becoming ever stronger. There are many people who actively conduct research but have no permanent position or work on a contract basis. Without the benefits of a permanent university post, they often have no access to paywalled publications of research results. Graduates, too, frequently lose access to scholarly material once they no longer have a university account, which prevents them from continuing their studies outside the university.

We also recognise that there are other parts of society, such as NGOs, professional associations or even politicians, who need access to academic research for professional reasons. When all these groups are denied access to scholarly publications, society as a whole is the poorer for it.
openaccess  geiwi  openscience 
january 2017 by MicrowebOrg
Wissenskommunismus und Wissenskapitalismus
Communism, universalism, disinterestedness and organized skepticism
merton  openscience  grassmuck 
january 2017 by MicrowebOrg
bjoern.brembs.blog » So your institute went cold turkey on publisher X. What now?
With the start of the new year 2017, about 60 universities and other research institutions in Germany are set to lose subscription access to one of the main STEM publishers, Elsevier. The reason: negotiations between the DEAL consortium (600 institutions in total) and the publisher. In the run-up to these negotiations, all members of the consortium were urged not to renew their individual subscriptions …
bjoernbrembs  openaccess  openscience 
december 2016 by MicrowebOrg
What is the future of Open Education?
From Open Education to Open Science

Fifteen years ago MIT took a big leap by introducing OpenCourseWare. In the intervening years, many universities have followed in their footsteps in the world of Open Education.

In 2007 the Delft University of Technology launched their OpenCourseWare website. In 2010 we shared our course materials through iTunesU, and in 2013 we joined edX to publish open MOOCs.

The first ten years were mostly focused on the creation of more open resources. Over the last five years, the focus has shifted towards adoption. We are concentrating on the move from Open Educational Resources (OER) to Open Educational Practice (OEP).

The US converged towards a specific part of OER, Open Textbooks, and has had a lot of success with this strong focus on cost savings for students. In Europe the focus is diverging towards open science, which is a much broader process of opening up universities.
OpenScience

Often OpenScience is defined as the combination of Open Source, Open Data, Open Access, Open Education and more. More importantly, it is the movement to make scientific research, data, and dissemination (including education) accessible to all levels of an inquiring society, amateur or professional (Wikipedia, 2016). OpenScience is much more a change in behavior than the adoption of a tool. For the European Commission, OpenScience, along with Open Innovation and Open to the World, is a priority for the next couple of years (European Commission, 2016).

European Commission (2016). Open innovation, Open Science, open to the world. A vision for Europe. Brussels: European Commission, Directorate-General for Research and Innovation. ISBN: 978-92-79-57346-0 DOI: 10.2777/061652. Available at: http://bookshop.europa.eu/en/open-innovation-open-science-open-to-the-world-pbKI0416263/
openscience  mikrobuch:uni20  oer:star5 
november 2016 by MicrowebOrg
Hypothesis | The Internet, peer reviewed. | Hypothesis
Our mission is to bring a new layer to the web. Use Hypothesis to discuss, collaborate, organize your research, or take personal notes.

Open Scholarly Annotation
jonudell  hypothes_is  mikrobuch:open  openscience 
november 2016 by MicrowebOrg
Blogsterben |
Perhaps this is only true of my personal environment, but my assumption, based on my (unsystematic) observation, is this: blogging as a scientist – reporting on things one considers worth sharing, posting first ideas or interesting finds, and publicly reflecting on what moves you – no longer seems to be in fashion, apart from exceptions such as the “opening” of new blogs like Tobias's ;-). Blogs by individual scientists tend to be shut down, lie abandoned, or are reduced to links and reposted content without any (noteworthy) commentary of their own. Where are the opinions, the positions, the criticism? And what are the reasons for this blog die-off? No time (any more), because it is needed for grant applications and administration? No immediate payoff for one's own work, without which nothing gets done any more? Fear of communications departments that do not like to see communicative energy flowing anywhere but into the organisation's PR? Or even worry that university leaderships might take offence at the publicly expressed opinions of their scientists?
openscience 
october 2016 by MicrowebOrg
» Speculation: Sociality and “soundness” ((vs. excellence – soundness (“Triftigkeit”) as a social effect)). Is this the connection across disciplines?
Björn Brembs:
Isn't it ironic that, more than a decade after social media arrived from outside scholarship, the scholars who were among the earliest adopters of social media for scholarship are re-discovering just how social scholarship is? :-)

At least for me, these insights are somewhere between "well, duh" and "this should have been on everybody's minds at least 15 years ago!"

NEYLON:

what it was that distinguishes the qualities of the concept of “soundness” from “excellence”. Are they both merely empty and local terms, or is there something different about “proper scholarly practice” that we can use to help us?

At the same time I’ve been on a bit of a run reading some very different perspectives on the philosophy of knowledge (with an emphasis on science). I started with Fleck’s Genesis and Development of a Scientific Fact, followed up with Latour’s Politics of Nature and Shapin and Schaffer’s Leviathan and the Air-Pump, and currently am combining E O Wilson’s Consilience with Ravetz’s Scientific Knowledge and its Social Problems. Barbara Herrnstein Smith’s Contingencies of Value and Belief and Resistance are also in the mix. Books I haven’t read – at least not beyond skimming through – include key works by Merton, Kuhn, Foucault, Collins and others, but I feel like I’m getting a taste of the great divide of the 20th century.

I actually see more in common across these books than divides them. What every serious study of how science works agrees on is the importance of social and community processes in validating claims.

In the Excellence pre-print we argued that “excellence” was an empty term, at best determined by a local opinion about what matters. But the obvious criticism of our suggesting “soundness” as an alternative is that soundness is equally locally determined and socially constructed: soundness in computational science is different to soundness in literature studies, or experimental science or theoretical physics. This is true, but misses the point. There is an argument to be made that soundness is a quality of the process by which an output is created, whereas “excellence” is a quality of the output itself. If that argument is accepted alongside the idea that the important part of the scholarly process is social, then we have a potential way to audit the idea of soundness proposed by any given community.

If the key to scholarly work is the social process of community validation then it follows that “sound research” follows processes that make the outputs social. Or to be more precise, sound research processes create outputs that have social affordances that support the processes of the relevant communities. Sharing data, rather than keeping it hidden, means an existing object has new social affordances. Subjecting work to peer review is to engage in a process that creates social affordances of particular types.

“More social” on its own is clearly not enough. There is a question here of more social for whom? And the answer to that is going to be some variant of “the relevant scholarly community”. We can’t avoid the centrality of social construction, because scholarship is a social activity, undertaken by people, within networks of power and resource relationships.
openscience  soundness  bjoernbrembs 
october 2016 by MicrowebOrg
Science in the Open (blog) » About
I currently have a position as Professor of Research Communications at the Centre for Culture and Technology at Curtin University (AUS)
openscience  openscholarship  openaccess 
october 2016 by MicrowebOrg
#Siggenthesen – Merkur
Siggen Theses on scholarly publishing in the digital age

Digital publishing enables better working and discovery processes in science and scholarship. For structural reasons, this potential is currently still far too blocked. We want that to change, and therefore put the following theses up for discussion:

1

Digital publishing needs reliable structures instead of fixed-term projects. #Siggenthesen #1

Innovations in digital publication formats for science, developed in pilot and isolated projects, need to be securely transferred into permanent infrastructures that span institutions and disciplines, in order to deliver sustainable and competitive services in the interest of the scholarly community. We call on funding bodies and political actors, as well as publishers and libraries, to face up to this responsibility and to implement corresponding funding and integration concepts concretely and immediately within the existing research system. A systemic shift towards digital publishing can only be achieved through a reliable offering of excellent services.
mikrobuch:uni20  openaccess  openscience 
october 2016 by MicrowebOrg
A Simple Explanation for the Replication Crisis in Science · Simply Statistics
My primary contention here is:
The replication crisis in science is concentrated in areas where (1) there is a tradition of controlled experimentation and
(2) there is relatively little basic theory underpinning the field.

-- because good theory + diverse, uncertain observation (astronomy) = confirmation through replication
-- weak theory + diverse, uncertain observation (epidemiology)
-- good theory + controlled experiment: particle physics (the model case)
-- weak theory + controlled experiment



Astronomy and Epidemiology

What do the fields of astronomy and epidemiology have in common? You might think nothing. Those two departments are often not even on the same campus at most universities! However, they have at least one common element, which is that the things they study are generally reluctant to be controlled by human beings. As a result, both astronomers and epidemiologists rely heavily on one tool: the observational study. Much has been written about observational studies of late, and I’ll spare you the literature search by saying that the bottom line is they can’t be trusted (particularly observational studies that have not been pre-registered!).

But that’s fine—we have a method for dealing with things we don’t trust: It’s called replication.

My understanding is that astronomers have a similar mentality: no single study will result in anyone believing something new about the universe. Rather, findings need to be replicated using different approaches, instruments, etc.

The key point here is that in both astronomy and epidemiology expectations are low with respect to individual studies. It’s difficult to have a replication crisis when nobody believes the findings in the first place. Investigators have a culture of distrusting individual one-off findings until they have been replicated numerous times. In my own area of research, the idea that ambient air pollution causes health problems was difficult to believe for decades, until we started seeing the same associations appear in numerous studies conducted all around the world. It’s hard to imagine any single study “proving” that connection, no matter how well it was conducted.
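
A quick worked example of why this culture of distrust plus replication works: if a reported effect is in fact null, the chance that several independent studies all cross the conventional significance threshold shrinks geometrically (assuming independence and no publication bias):

    # Probability that k independent studies of a true-null effect all
    # reach p < alpha, assuming independence and no publication bias.
    alpha = 0.05
    for k in range(1, 5):
        print("%d concordant studies: %g" % (k, alpha ** k))
    # 1 study:   0.05
    # 2 studies: 0.0025
    # 4 studies: 6.25e-06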

One large category of methods includes the controlled experiment. Controlled experiments come in a variety of forms, whether they are laboratory experiments on cells or randomized clinical trials with humans, all of them involve intentional manipulation of some factor by the investigator in order to observe how such manipulation affects an outcome. In clinical medicine and the social sciences, controlled experiments are considered the “gold standard” of evidence. Meta-analyses and literature summaries generally weight publications with controlled experiments more highly than other approaches like observational studies.

PREDICTION: physics vs. medicine
whether a field has a strong basic theoretical foundation. The idea here is that some fields, like say physics, have a strong set of basic theories whose predictions have been consistently validated over time. Other fields, like medicine, lack even the most rudimentary theories that can be used to make basic predictions.

We need to stop thinking that any single study is definitive or confirmatory, no matter if it was a controlled experiment or not. Science is always a cumulative business, and the value of a given study should be understood in the context of what came before it.

>> psychology experiments:
can be treated as an “admitted hypothesis” (a stand-in for theory) to play with, but NOT as proof.
openscience  replicationcrisis 
october 2016 by MicrowebOrg
Felix Schönbrodt's blog
The Reproducibility Project: Psychology was published last week, and it was another blow to the overall credibility of the current research system’s output.

Some interpretations of the results were in a “Hey, it’s all fine; nothing to see here; let’s just do business as usual” style. Without going into details about the “universal hidden moderator hypothesis” (see Sanjay’s blog for a reply) or “The results can easily explained by regression to the mean” (see Moritz’ and Uli’s reply): I do not share these optimistic views, and I do not want to do “business as usual”.

What makes me much more optimistic about the state of our profession than unfalsifiable post-hoc “explanations” is that there has been considerable progress towards an open science, such as the TOP guidelines for transparency and openness in scientific journals, the introduction of registered reports, or the introduction of the open science badges (Psych Science has increased sharing of data and materials from near zero to 38% in 1.5 years, simply by awarding the badges). And all of this happened within the last 3 years!

Beyond these already beneficial changes, we asked ourselves: what can we do at the personal and local department level to make more published research true?

A first reaction was the foundation of our local Open Science Committee (more about this soon).

Own Research

Open Data: Whenever possible, we publish, for every first-authored empirical publication, all raw data which are necessary to reproduce the reported results on a reliable repository with high data persistence standards (such as the Open Science Framework).

Reproducible scripts: For every first authored empirical publication we publish reproducible data analysis scripts, and, where applicable, reproducible code for simulations or computational modeling.

We provide (and follow) the “21-word solution” in every empirical publication: “We report how we determined our sample size, all data exclusions (if any), all manipulations, and all measures in the study.” If necessary, this statement is adjusted to ensure that it is accurate.

As co-authors we try to convince the respective first authors to act accordingly.
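
A minimal skeleton of what a reproducible analysis script in the spirit of these commitments could look like (the file name and the analysis step are placeholders; the point is that everything from raw data to reported value reruns deterministically):

    # Sketch of a reproducible analysis script; "raw_data.csv" stands in
    # for the raw data as deposited in the public repository.
    import csv
    import platform
    import random

    random.seed(20151001)  # pin every stochastic step
    print("Python", platform.python_version())  # record the environment

    with open("raw_data.csv", newline="") as fh:
        rows = list(csv.DictReader(fh))

    # ... analysis steps, each rerunnable from the published raw data ...
    print("sample size: n =", len(rows))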
openscience 
october 2016 by MicrowebOrg
The 20% Statistician: Improving Your Statistical Inferences Coursera course
Improving Your Statistical Inferences Coursera course


I’m really excited to be able to announce my “Improving Your Statistical Inferences” Coursera course. It’s a free massive open online course (MOOC) consisting of 22 videos, 10 assignments, 7 weekly exams, and a final exam. All course materials are freely available, and you can start whenever you want.

In this course, I try to teach all the stuff I wish I had learned when I was a student. It ranges from the basics (e.g., how to interpret p-values, what likelihoods and Bayesian statistics are, how to control error rates or calculate effect sizes) to what I think should also be the basics (e.g., equivalence testing, the positive predictive value, sequential analyses, p-curve analysis, open science). The hands-on assignments will make sure you don’t just hear about these things, but know how to use them.

My hope is that busy scholars who want to learn about these things now have a convenient and efficient way to do so. I’ve taught many workshops, but there is only so much you can teach in one or two days.
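
One of the course topics, the positive predictive value, has a closed form that is easy to check by hand: the probability that a “significant” finding is actually true, given the prior probability that a tested hypothesis is true, the test's power, and its alpha level:

    # Positive predictive value of a significant result (standard Bayes rule).
    def ppv(prior, power, alpha=0.05):
        true_pos = power * prior          # true effects correctly detected
        false_pos = alpha * (1 - prior)   # null effects crossing the threshold
        return true_pos / (true_pos + false_pos)

    print(round(ppv(prior=0.10, power=0.80), 2))  # 0.64
    print(round(ppv(prior=0.10, power=0.35), 2))  # 0.44: with underpowered
                                                  # studies, PPV nears a coin flip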
mooc  statisticalthought  coursera  openscience 
october 2016 by MicrowebOrg
OS Committee (LMU 2015) Felix Schönbrodt's blog
Crisis of psychological research
60% not replicable
transparency is necessary for GOOD research
Open Data ((like Rogoff's data!! story))

The committee’s mission and goals include:

Monitor the international developments in the area of open science and communicate them to the department.
Organize workshops that teach skills for open science (e.g., How do I write a good pre-registration? What practical steps are necessary for Open Data? How can I apply for the Open Science badges?, How to do an advanced power analysis, What are Registered Reports?).
Develop concrete suggestions concerning tenure-track criteria, hiring criteria, PhD supervision and grading, teaching, curricula, etc.
Channel the discussion concerning standards of research quality and transparency in the department. Even if we share the same scientific values, the implementations might differ between research areas. A medium-term goal of the committee is to explore in what way a department-wide consensus can be established concerning certain points of open science.

The OSC developed some first suggestions about appropriate actions that could be taken in response to the replication crisis at the level of our department. We focused on five topics:

Supervision and grading of dissertations
Voluntary public commitments to research transparency and quality standards (this also includes supervision of PhDs and coauthorships)
Criteria for hiring decisions
Criteria for tenure track decisions
How to allocate the department’s money without setting incentives for p-hacking
openscience 
october 2016 by MicrowebOrg
Barcamp Science 2.0: Open Science in der Praxis? Einfach anfangen! | Leibniz-Forschungsverbund Science 2.0
Leibniz Research Alliance Science 2.0

Open, innovative and inquisitive – that is how both the participants and the sessions of the second Barcamp Science 2.0, held on the occasion of the third Science 2.0 Conference in Cologne, could be summed up. Under the motto “Putting Science 2.0 and Open Science into Practice”, ideas were exchanged and plenty of material for further discussion was collected.

How can Open Science be practised (sustainably), and what infrastructure needs to be in place for it?

Infrastructure for Open Science [Pad] [Podcast]
Practicalities of data sharing [Pad] [Podcast]
Data formats for Open Science [Pad]
Package Management for research projects [Pad]

How can researchers be convinced of Open Science? Which advantages and disadvantages need to be considered? What matters to researchers, and which incentives are interesting for them?

Incentives for Open Science [Pad] [Podcast]
Teaching Open Science [Pad] [Podcast]
Is Open Science bad for Science? [Pad]

Which tools are being used, and what are the best-practice examples? How can existing tools be used to process large volumes of data?

Tools for Open Science [Pad] [Podcast]
Jupyter Notebooks [Pad]
Wikipedia & Wikidata as a workbench [Pad] [Podcast]
Analyzing scholarly tweets [Pad]
Structuring research and publications [Pad] [Podcast]

What needs to be considered for Open Access publications and peer review? What effects (positive and negative) does SciHub have?

Peer Review [Pad]
Preregistration for publications [Pad] [Podcast]
SciHub good or bad [Pad] [Podcast]
openscience 
october 2016 by MicrowebOrg
Towards Open Science: The Case for a Decentralized Autonomous Academic Endorsement System | Zenodo
The current system of scholarly communication is based on tradition, and does not correspond to the requirements of modern research.

The dissemination of scientific results is mostly done in the form of conventional articles in scientific journals, and has not evolved with research practice.

In this paper, we propose a system of academic endorsement based on blockchain technology that is decoupled from the publication process, which will allow expeditious appraisal of all kinds of scientific output in a transparent manner without relying on any central authority.
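
As a generic illustration of the core idea (not the protocol proposed in the paper): committing each endorsement to the hash of the previous record makes the history tamper-evident without any central registry. A minimal sketch, with a made-up DOI:

    # Hash-chained endorsement records: generic illustration only.
    import hashlib
    import json

    def make_endorsement(prev_hash, endorser, target_doi):
        record = {"prev": prev_hash, "endorser": endorser, "target": target_doi}
        payload = json.dumps(record, sort_keys=True).encode()
        record["hash"] = hashlib.sha256(payload).hexdigest()
        return record

    GENESIS = "0" * 64
    e1 = make_endorsement(GENESIS, "scholar_a", "10.1234/example.doi")
    e2 = make_endorsement(e1["hash"], "scholar_b", "10.1234/example.doi")

    # Verification: recompute each hash and check the chain links.
    for prev, e in [(GENESIS, e1), (e1["hash"], e2)]:
        body = {k: e[k] for k in ("prev", "endorser", "target")}
        assert e["prev"] == prev
        assert e["hash"] == hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()

Any attempt to alter an earlier endorsement changes its hash and breaks every later link, which is what lets appraisal records be checked without relying on a central authority.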
openscience  blockchain 
september 2016 by MicrowebOrg
open science
According to studies, quality in top-ranking journals is no better; the evidence suggests it is, if anything, worse.
(Results are overstated, etc.)
bjoernbrembs  openscience 
september 2016 by MicrowebOrg
chem-bla-ics: Doing science has just gotten even harder
A second realization is that few scientists understand, or want to understand, copyright law. The result is hundreds of scholarly databases which do not define who owns the data, nor under what conditions you are allowed to reuse it, or share, or reshare, or modify it. Yet scientists do. So not only do these databases often not specify the copyright/license/waiver (CLW) information, they certainly don't really tell you how they populated their database, e.g. how much they copied from other websites under the assumption that knowledge is free. Sadly, database content is not. Often you don't even need to wonder about it, as it is evident, or even proudly stated, that they used data from another database. Did they ask permission for that? Can you easily look that up? Because, following the argument quoted below, you are now only allowed to link to such a database once you have figured out whether its data is legally published. And believe me, that is not cheap.

Combine that, and you have this recipe for disaster.
Furthermore, when hyperlinks are posted for profit, it may be expected that the person who posted such a link should carry out the checks necessary to ensure that the work concerned is not illegally published.

A community that knows these issues very well, is the open source community. Therefore, you will find a project like Debian to be really picky about licensing: if it is not specified, they won't have it. This is what is going to happen to data too.
openscience 
september 2016 by MicrowebOrg
Academic Torrents
We are a community-maintained distributed repository for datasets and scientific knowledge

Welcome to Academic Torrents!
Making 15.47TB of research data available.

We've designed a distributed system for sharing enormous datasets - for researchers, by researchers. The result is a scalable, secure, and fault-tolerant repository for data, with blazing fast download speeds. Contact us at contact@academictorrents.com.
openscience 
august 2016 by MicrowebOrg
hcommons.org (humanities platform)
Connect with Fellow Humanists
Support Open Access to Research
Humanities Commons Is Open and Not-for-Profit
Brought to you by a consortium of trusted not-for-profit organizations

Explore new modes of scholarship as you share, find, and create your own digital projects. 
Publish your work and increase its visibility with a professional profile and Web site.
Join groups focused on a research or teaching topic, event, or advocacy project—or create your own.

Humanities Commons will help you . . .

Humanities Commons includes a library-quality open-access repository for interdisciplinary scholarship called CORE. The first of its kind, CORE allows users to preserve their research and increase its audience by sharing across disciplinary, institutional, and geographic boundaries.

Humanities Commons is designed to serve the unique needs of humanists as they engage in teaching and research that benefits the larger community. Unlike other social and academic networks, Humanities Commons is entirely open access, open source, and not-for-profit. It is focused on providing a space to discuss, share, and store cutting-edge research and innovative pedagogy—not on generating profits from users' intellectual and personal data.

Use Humanities Commons to . . .

Host an online conference or continue the conversation after the event.

Store and share your articles, syllabi, data sets, and presentations in a library-quality digital repository.

Connect and collaborate with others who work in the humanities.
Humanities Commons, a project spearheaded by the Modern Language Association (MLA), links online community spaces for the MLA; College Art Association; Association for Jewish Studies; and the Association for Slavic, East European, and Eurasian Studies.

These partners have collaborated to create Humanities Commons—a crossdisciplinary hub for anyone interested in humanities research and scholarship. As other not-for-profit humanities organizations join the partnership, Humanities Commons will grow even larger. 
Humanities Commons is funded by a generous grant from the Andrew W. Mellon Foundation. Recognizing the need for an online professional network for—and by—humanists, the Mellon Foundation supported the development of Commons sites for partner societies and the shared identity-management system that connects these sites and their users to a larger Humanities Commons network.

Brought to you by a consortium of not-for-profit humanities organizations, Humanities Commons is an open-access and open-source digital platform.
ANYONE, anywhere will be able to create a free account and participate in this vibrant intellectual community. 
If you work in the humanities, Humanities Commons is your space for collaborating with colleagues across disciplines, sharing teaching tools, and building a professional profile. With your free account, you can create a Web site, engage in community discussions, and more. 
mikrobuch:uni20:open  openaccess  openscience  hcommons  digitalhumanities 
august 2016 by MicrowebOrg
Project Jupyter | Home
The Jupyter Notebook is a web application that allows you to create and share documents that contain live code, equations, visualizations and explanatory text. Uses include: data cleaning and transformation, numerical simulation, statistical modeling, machine learning and much more.
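
A minimal example of the kind of cell such a notebook interleaves with prose and equations (assuming numpy and matplotlib are installed):

    # Live code plus an inline visualization, as it would appear in one cell.
    import matplotlib.pyplot as plt
    import numpy as np

    x = np.linspace(0, 2 * np.pi, 200)
    plt.plot(x, np.sin(x), label="sin(x)")
    plt.legend()
    plt.show()  # rendered inline beneath the cell in the notebook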
openscience 
june 2016 by MicrowebOrg
Open Science | Telepolis
"Der Vorwurf, meine Doktorarbeit sei ein Plagiat, ist abstrus. Ich bin gerne bereit zu prüfen, ob bei über 1.200 Fußnoten und 475 Seiten vereinzelt Fußnoten nicht oder nicht korrekt gesetzt sein sollten und würde dies bei einer Neuauflage berücksichtigen."1

This was how former defence minister Karl-Theodor zu Guttenberg answered the question of whether parts of his dissertation might have been copied from other works without his having made this sufficiently clear.

Do you see what I mean? The research results are served up in bite-sized form, but the processes that lead to these results remain in the dark. Outsiders are handed a finished product but cannot trace how it came about, or which ideas were pursued and then discarded in its making. Above all, they do not see which problems had to be solved along the way, which mistakes were made and which lessons were drawn from them. Such things are also part of science. If you leave them out, a completely false picture emerges. And then we wonder why people do not understand what Mr zu Guttenberg did that was so bad.

From 1.0 to 2.0

But the internet in particular now puts a remedy in our hands. We gain a return channel, and that changes a great deal. In the simplest case, scientists can blog about topics from their field and answer questions from interested readers. A rapid, direct exchange becomes possible, yet this by no means exhausts the potential. There is still room.

And all of these examples I would label Open Science 2.0. It is not about presenting finished content, but about creating, checking and improving that content through researchers, practitioners and enthusiastic amateurs. Those who participate in the development of knowledge understand much better what science actually is and means. Conversely, researchers may be more likely to keep their feet on the ground and to regain the view of the whole that may have been lost through specialisation.

Former chancellor Helmut Schmidt, at any rate, takes the view that science is "a search for knowledge committed to social responsibility" and must concern itself with the great problems of humanity, such as overpopulation, climate change, the globalisation of the economy or worldwide military build-up. This calls for the cooperation of many: independent experts as well as affected amateurs.
guttenberg  plagiat  schavan  openscience 
may 2016 by MicrowebOrg
Coko Foundation
Despite fundamental shifts in how humans use technology in research, mass communication and popular media, we are still publishing like it’s 1999. At the Collaborative Knowledge Foundation, we’ve set our sights on transforming the research communication sector by building shared infrastructure that will improve what we publish and increase the integrity and speed of the process.
openscience  uni2.0:avantgarde 
april 2016 by MicrowebOrg
Konrad Förstner
Currently, I am the head of the bioinformatics group at the Core Unit Systems Medicine, University of Würzburg, Germany. I am advocating openness of source code, science, data, education, content - basically everything - and am an active member of the Open Science group of Open Knowledge.
uni2.0:avantgarde  openscience 
april 2016 by MicrowebOrg
Schwerpunktinitiative "Digitale Information": Start
The priority initiative "Digitale Information" is a joint initiative of the Alliance of German Science Organisations to improve the provision of information in research and teaching.

With this initiative, the science organisations pursue the goal of

making digital publications, research data and source materials available as comprehensively and openly as possible, thereby also ensuring their reusability in other research contexts,

via Helmholtz, Kiel
openscience 
april 2016 by MicrowebOrg
Positionspapier „Research data at your fingertips“ - 2015_Positionspapier_AG_Forschungsdaten.pdf
I. Vision 2025: "Research data at your fingertips"

Researchers of all disciplines can access all research data simply, quickly and without great effort, in order to do research at the highest level and achieve excellent results. They can work together with others and preserve their research results safely. Research data are available in a form that enables and facilitates research across disciplinary as well as national boundaries.

The publication of research data and software enhances scientific reputation. Researchers are supported in collecting, generating, recording and managing their data. Easy-to-use digital infrastructures as well as scientific and technical information specialists support the complete research cycle.
openscience 
april 2016 by MicrowebOrg
Kiel Thilo Paul-Stüve - Open Data // Welcome - GEOMAR
Welcome to the Data Management Portal for Kiel Marine Sciences hosted at GEOMAR
openscience 
april 2016 by MicrowebOrg
Offene Wissenschaft > Open Knowledge Foundation Deutschland
The term Open Science bundles strategies and practices that all aim to exploit the opportunities of digitisation consistently, in order to make every component of the scientific process openly accessible and reusable via the internet. This is meant to open up new possibilities for science, society and business in dealing with scientific knowledge.

The German-speaking OKF working group "Open Science"

For the field of science, a German-speaking Open Science working group constituted itself on 16 July 2014 during the OKFestival in Berlin. The group's aim is to connect people active in opening up science and research (Open Science) and to develop legally sound framework conditions for publishing research results. In addition, the working group is to coordinate collaboration with other international Open Science groups and act as a point of contact on Open Science for researchers, institutes, civil society, business and politics.
openscience 
april 2016 by MicrowebOrg
Helmholtz Open Science: Newsletter 49 vom 12.06.2014
The term Open Science, however, also encompasses the opening of the entire scientific process in the sense of an "intelligent openness" (Boulton, G. et al. 2012: Science as an open enterprise. London: Royal Society).

The Helmholtz Association promotes Open Science - that is, open access to scientific knowledge, its verifiability and reusability, and its transfer into society - and thereby continues a process that began in 2003 with the initial signing of the "Berlin Declaration on Open Access to Knowledge in the Sciences and Humanities".

OPEN ACCESS to:
- publications
- data
- software/algorithms
openscience 
april 2016 by MicrowebOrg
Offene Wissenschaft – Wikipedia
In the 1990s, the concept of 'public science' ('Öffentliche Wissenschaft') was newly and decisively coined for the German-speaking world by the sociologist and cultural scholar Caroline Y. Robertson-von Trotha. In her opening speeches at the Karlsruher Gespräche of 1997 and 1998 she outlined a notion of 'public science' as a synonym for interdisciplinary, dialogue-based science communication.[3][4] She subsequently embedded the concept in its historical and sociological context[5][6] and in 2012 carried out the first of several analyses "in the mirror of Web 2.0 culture"[7].[8] At the same time, as founding director of the ZAK in Karlsruhe, she also institutionalised her conception of 'public science in theory and practice': alongside research and teaching, it forms one of the three equal pillars on which the centre rests.[9][10]
openscience 
april 2016 by MicrowebOrg
