Edith Cowan University in Perth, Australia holds an annual security conference, secau (which is not pronounced ‘sea cow’, apparently). I contributed this very dully-titled paper (must work on my titling skilz…); abstract as follows:

“To make good decisions, we need to be suitably informed. ‘Good’ and ‘Suitably’ in this case depend on the informational needs of the decision and the mechanisms of getting the information to the decision maker in time.

The trade-offs in qualities, quantities, timeliness, impacts on other activities, and so on are infamously wickedly complex, and usually buried in a clutter of special circumstances, personality characteristics, environments unsuitable for study, and so on.
Decision-making systems can be explored using case studies and exercises, but these are limited by the expense and time of using real people. A virtual simulator for large scale networks of communities can provide systems to examine that are not otherwise possible, while bearing in mind that simulators only partially reflect real systems.

This paper describes a design for such a simulator framework that can be implemented on an ordinary desktop computer. We intend to use it to exercise and explore various ‘knowledge distribution strategies’ in order to understand and suggest information communication mechanisms for investigation in the real world, without expecting it to be complete enough to be prescriptive.

We focus on military collaborations as suitably ‘eXtreme’ environments to exercise these communication mechanisms. Topics for further investigation include isolation, turnover and resilience.”

PDF here
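As a purely illustrative aside, the sort of experiment the simulator is meant to support can be sketched as a toy agent-based model: agents on a sparse network hold ‘picture components’ and push a few of them to neighbours each tick, and we then measure how complete the shared picture becomes for a given communication spend. This is a hypothetical sketch, not the framework described in the paper; every name and parameter here (NUM_AGENTS, BANDWIDTH, the random push strategy, and so on) is an assumption chosen only for illustration.

```python
import random

# Toy illustration only: agents on a sparse network hold "picture components"
# (facts) and forward a limited number of them each tick, so we can count how
# picture completeness trades off against communication cost.

NUM_AGENTS = 50        # size of the community (assumed)
LINKS_PER_AGENT = 3    # sparse, 'poor' communications network (assumed)
FACTS = 20             # picture components to be shared (assumed)
BANDWIDTH = 2          # facts an agent may send per tick (assumed)
TICKS = 30

random.seed(1)

# Random sparse network of who can talk to whom.
neighbours = {a: random.sample([b for b in range(NUM_AGENTS) if b != a],
                               LINKS_PER_AGENT)
              for a in range(NUM_AGENTS)}

# Each fact starts out known to exactly one agent.
knowledge = {a: set() for a in range(NUM_AGENTS)}
for fact in range(FACTS):
    knowledge[random.randrange(NUM_AGENTS)].add(fact)

messages_sent = 0
for tick in range(TICKS):
    updates = []
    for a in range(NUM_AGENTS):
        # Strategy: push a few randomly chosen known facts to one neighbour.
        if not knowledge[a]:
            continue
        target = random.choice(neighbours[a])
        payload = random.sample(sorted(knowledge[a]),
                                min(BANDWIDTH, len(knowledge[a])))
        updates.append((target, payload))
        messages_sent += 1
    for target, payload in updates:
        knowledge[target].update(payload)

completeness = sum(len(k) for k in knowledge.values()) / (NUM_AGENTS * FACTS)
print(f"average picture completeness: {completeness:.0%}, "
      f"messages sent: {messages_sent}")
```

Swapping in different push/pull strategies and comparing completeness against messages sent is the kind of trade-off exploration the abstract describes, though of course a real framework would need far richer models of information value, timeliness and attention.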


Or why it’s not good to try and know everything:

“Small co-located teams can share ‘Common Operating Pictures’ to enable high levels of cooperation and backup amongst the team members. However, when teams are large or distributed across poor communications networks, sharing operating pictures with all members of the team can consume so much communication resource and individual attention that it reduces the team’s effectiveness in accomplishing its task.

This paper introduces the costs and benefits of distributing picture components, and ways of encouraging the communications to be appropriate and useful: to build sufficiently good common pictures amongst suitable team members for them to function. It follows that a ‘single picture of the truth’ is rarely desirable even when technically possible, and this paper shows why that is so.

Building suitably distributed, shared and incomplete pictures requires sophisticated evaluation and management of information – information that may be real world observations or assessments of them. Understanding how such information needs are expressed and fulfilled in distributed teams should transfer across several information management and information exploitation disciplines.”

PDF here

It’s a bit of a brain dump, and only a conference paper (presented at ECIME 2011 in Italy); I’m trying to focus on smaller pieces. On the other hand, knowledge distribution is a topic that is properly dealt with holistically. Probably. I also feel I need to watch my language: going back to it, it reads too stuffily.

With thanks again to John for many beer-fuelled discussions, my parents for proofing, and Graham for supervising.

The House of Commons Science & Technology Select Committee are holding an inquiry into Peer Review.

Like previous investigations, they focus on peer review as a vehicle for quality assurance and scientific discourse, rather than starting with what they want and working backwards. As peer review occurs after the work has been done, it simply cannot be used to assure the quality of the work, although it may be used to assure the quality of the published paper.

Instead, the government could develop the quality controls already being introduced by academic institutions and use these to assure the quality we would like to see in studies used to inform policy.

Today I submitted a short paper to the inquiry saying this in more detail:

MartinHillForSTC3

(Word doc, about 3 pages in large type, with a few minor language corrections from the submitted document)

(This is a duplicate post with GruntledEngineer; I’m having an identity crisis.)

This article describes what appear to be the typical strategies and mechanisms for distributing knowledge (or ‘exchanging information’) across and between academic communities. An oversimplified ‘cartoon stereotype’ of the activities will not be very useful, so some correction to this newcomer’s view is expected and would be appreciated. As a result, this page is likely to be updated rather than archived.

Sharing

Academics formally share information using two main methods: conference presentations and written documents in journals. Face-to-face discussions, letters and, more recently, emails and blogs provide the usual informal routes, but the wider community typically recognises only work (summarised in ‘papers’) that has been accepted by qualified experts for inclusion in a conference or a journal.

These papers are typically explanations of a piece of work and its results, or reviews (“metastudies” or “literature reviews”) of a number of those explanations that summarise the current findings on a topic. Unlike a wiki, then, or typical internet pages, knowledge is ‘pushed’ via small individual updates rather than ‘pulled’ as a lookup to reference pages.

Journal Peer Review

Journal publication was originally an opportunity to present findings for rigorous debate and analysis, and only in the last few decades has expert review become a standard way to assess acceptability. As there are few, if any, other checks on how well the work has been carried out, academics have come to rely on this ‘peer review’ as a kind of quality assurance, a way of setting a ‘basic standard’.

In practice, two or three qualified experts are identified by the editor and tasked, in their spare time and usually with no reward, with reading a paper, marking it as suitable or not, and suggesting corrections and changes where necessary. (Which results in this…) And that is, essentially, it. Journals may provide a set of guidelines for the reviewers, e.g. “How to critically appraise an article” (Nature magazine).

As Fiona Godlee said way back in 2000: “peer review is very slow, takes up a lot of academic time, is highly selective, is very variable, arbitrary, prone to bias, open to abuse, patchy at detecting important methodological defects, and almost useless at detecting fraud and misconduct.”

“Peer review” is sometimes extended to post-publication: other scientists will examine the published work and so expose any issues with it. This should encourage the authors to provide good evidence and good arguments. Published papers are held to have value because the authors face such expert debate (the ‘intellectual scrum’) that their reputations will be at risk if they do not make sure the work was properly done.

But the idea that hordes of scientists pick over every published paper is an unsupported myth. In fact, the scale of systematic scientific carelessness and even deliberate fraud (see later article…) shows that the pressures to be thorough are not sufficient to cope with ordinary human failures, let alone deliberate subversion.

In principle, anyone can submit a paper to a journal to be considered for publication. In practice, few papers will be seriously considered unless they come from an authoritative source: an established researcher in the community, or an acolyte of one. Similarly, the ‘PhD gateway’ qualification to research communities hinders experienced staff from other research disciplines, expert enterprises and commerce from entering such communities. The result is a set of relatively small, isolated communities of individuals steeped in their own ways, who are considered suitable to author journal-reviewed papers or qualified to review them.

Conference Proceedings

Conferences are opportunities to present concepts for discussion: a time to throw out concepts and results that would be broadly interesting, but are not yet settled enough to be published in a journal.

In principle, the ideas can then be dissected by large audiences, improved on and matured. In practice there is usually only time for a few questions after a presentation; the questions and answers are not usually structured as a rigorous debate or chaired to ensure answers match the questions, and they are rarely, if ever, formally recorded. This lack of rigour is recognised within academia: conference proceedings do not carry the same cachet as journal publications.

Assessing

The Literature Review

The Literature Review is a common activity at the start of a piece of work, intended to establish what is currently known across the academic communities.

Because the review of individual papers is quite ‘light’, literature review authors must examine each article that appears relevant and trace any assertions back to the underlying experiment (if any) to see that the statements are borne out by suitable tests. In the age of the internet this is only somewhat laborious, as long as the papers are online and available.

Replication

“Replication” is the ultimate check that the findings of an article are indeed true: essentially, some independent researchers run the experiment again and publish their results. Because this publication is as lightly reviewed as the original article, more replications are required until there are enough of them to be convincing either way.

This can be quite tricky when the experiments are expensive, or the physics inconvenient. For example, experiments at CERN typically cannot be replicated elsewhere, as there is no other equipment like it. And experiments that measure historical data such as seismic activity, sea levels and objects in the night sky cannot be repeated.