This article describes what appear to be the typical strategies and mechanisms used to distribute knowledge (or ‘exchange information’) across and between academic communities. An oversimplified ‘cartoon stereotype’ of these activities would not be very useful, so some correction to this newcomer’s view is expected and would be appreciated. As a result, this page is likely to be updated rather than archived.
Sharing
Academics formally share information in two main ways: conference presentations and written documents in journals. Face-to-face discussions, letters and, more recently, emails and blogs provide the usual informal routes, but the wider community typically accepts only work (summarised in ‘papers’) that has been approved by qualified experts for inclusion in a conference or a journal.
These papers are typically explanations of a piece of work and its results, or reviews (“metastudies” or “literature reviews”) of a number of such explanations, summarising the current findings on a topic. Unlike the wiki, then, and typical web pages, knowledge is ‘pushed’ as small individual updates rather than ‘pulled’ as a lookup to reference pages.
Journal Peer Review
Journal publication was originally an opportunity to present findings for rigorous debate and analysis; only in the last few decades has expert review become a standard way to assess acceptability. As there are few if any other checks on how well the work has been carried out, academics have come to rely on this ‘peer review’ as a kind of quality assurance, a way of setting a ‘basic standard’.
In practice, two or three qualified experts are identified by the editor and tasked, in their spare time and usually with no reward, with reading a paper, marking it as suitable or not, and suggesting corrections and changes where necessary. (Which results in this…) And that is, essentially, it. Journals may provide a set of guidelines for the reviewers, e.g. “How to critically appraise an article” (Nature).
As Fiona Godlee said way back in 2000: “peer review is very slow, takes up a lot of academic time, is highly selective, is very variable, arbitrary, prone to bias, open to abuse, patchy at detecting important methodological defects, and almost useless at detecting fraud and misconduct.”
“Peer review” is sometimes extended to post-publication: other scientists examine the published work and so expose any issues with it. This should encourage the authors to provide good evidence and good arguments. Published papers are held to have value because the authors face such expert debate (the ‘intellectual scrum’) that their reputations would be at risk if they did not make sure the work was properly done.
But the idea that hordes of scientists pick over every published paper is an unsupported myth. In fact, the scale of systematic scientific carelessness and even deliberate fraud (see later article…) shows that the pressures to be thorough are not sufficient to cope with ordinary human failures, let alone deliberate subversion.
In principle, anyone can submit a paper to a journal to be considered for publication. In practice, few papers will be seriously considered unless they come from an authoritative source: an established researcher in the community, or an acolyte of one. Similarly, the ‘PhD gateway’ into research communities hinders experienced staff from other research disciplines, expert enterprises and commerce from entering them. The result is a set of relatively small, isolated communities of individuals, steeped in their own ways, who are considered suitable to author journal-reviewed papers or qualified to review them.
Conference Proceedings
Conferences are opportunities to present concepts for discussion: a time to air ideas and results that would be broadly interesting, but are not yet settled enough to be published in a journal.
In principle the ideas can then be dissected, improved and matured by large audiences. In practice there is usually only time for a few questions after a presentation; the questions and answers are not structured as a rigorous debate, nor chaired to ensure the answers match the questions, and they are rarely if ever formally recorded. This lack of rigour is recognised within academia: conference proceedings do not carry the same cachet as journal publications.
Assessing
The Literature Review
The Literature Review is a common activity at the start of a piece of work, intended to establish what is currently known across the academic communities.
Because the review of individual papers is quite ‘light’, the literature review’s authors must examine each article that appears relevant and trace any assertions back to the original experiments – if any – to see that the claims are borne out by suitable tests. In the age of the internet this is only somewhat laborious, as long as the papers are online and available.
Replication
“Replication” is the ultimate check that the findings of an article are indeed true. Essentially, independent researchers run the experiment again and publish their results. Because this publication is as lightly reviewed as the original article, more replications are required until there are enough of them to be convincing either way.
This can be quite tricky when the experiments are expensive, or the physics inconvenient. For example, experiments at CERN typically cannot be replicated elsewhere, as there is no comparable equipment anywhere else. And experiments that record historical data – seismic activity, sea levels, objects in the night sky – cannot be repeated.