27 Mar 2017

Bruce's Modelling Gripes, No.1: Unclear or confused modelling purpose

OK, maybe I am just becoming a grumpy ol' man, but I thought I would start a series about tendencies in my own field that I think are bad :-), so here goes...

Modelling Gripe No.1: Unclear or confused modelling purpose

A model is a tool. Tools are useful for a particular purpose. To justify a new model one has to show it is good for its intended purpose.  Some modelling purposes include: prediction, explanation, analogy, theory exploration, illustration... others are listed by Epstein [1]. Even if a model is good for more than one purpose, it needs to be justified separately for each purpose claimed.

So here are 3 common confusions of purpose:


1.    Understanding Theory or Analogy -> Explanation. Once one has immersed oneself in a model, there is a danger that, to its author, the world starts to look like the model. The temptation is then to jump straight to an explanation of something in the world. A model can provide a way of looking at some phenomena, but the mere fact that one can view the phenomena in a particular way does not make that view a good explanation of them.

2.    Explanation -> Prediction. A model that establishes an explanation traces a (complex) set of causal steps from the model set-up to outcomes that compare well with observed data. It is thus tempting to suggest that the model can be used to predict such data. However, establishing that a model is good for prediction requires testing it against unknown data many times – this goes way beyond what is needed to establish a candidate explanation for some phenomena (see the sketch after this list).

3.    Illustration -> Understanding Theory. A neat illustration of an idea suggests a mechanism. The temptation is thus to treat a model designed as an illustration or playful exploration as sufficient for the purpose of Understanding Theory. Understanding Theory involves the extensive testing of code to check its behaviour and any assumptions. An illustration, however suggestive, is not that rigorous. For example, it may be that an illustrated process only appears under very particular circumstances, or it may be that the outcomes were due to aspects of the model that were thought to be unimportant. The work to rule out these kinds of possibility is what differentiates using a model as an illustration from modelling for Understanding Theory.
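To make the distinction in point 2 more concrete, here is a minimal sketch (mine, not drawn from any particular model) of the extra work a predictive claim requires: a calibrated model that reproduces the data it was fitted to only supports a candidate explanation, whereas a predictive claim needs repeated checks against data the model never saw. The function names and the synthetic "observed" series below are hypothetical stand-ins, written in Python.

    import numpy as np

    rng = np.random.default_rng(0)

    def make_observed_series(n_steps):
        """Hypothetical stand-in for real observed data (here just a noisy trend)."""
        return 0.5 * np.arange(n_steps) + 2.0 + rng.normal(0, 0.5, n_steps)

    def run_model(params, n_steps):
        """Hypothetical stand-in for a simulation run with the given parameters."""
        return params["a"] * np.arange(n_steps) + params["b"]

    def fit_parameters(series):
        """Calibrate the model against one known series (the data being 'explained')."""
        a, b = np.polyfit(np.arange(len(series)), series, 1)
        return {"a": a, "b": b}

    known = make_observed_series(100)   # the data the explanation is built on
    params = fit_parameters(known)

    # Explanation-level check: does the calibrated model reproduce the known data?
    explanation_error = np.mean((run_model(params, len(known)) - known) ** 2)

    # Prediction-level check: repeated comparison against series never used in calibration.
    prediction_errors = [
        np.mean((run_model(params, 100) - make_observed_series(100)) ** 2)
        for _ in range(20)
    ]

    print(f"fit to known data:        {explanation_error:.3f}")
    print(f"mean out-of-sample error: {np.mean(prediction_errors):.3f}")

Reproducing the calibration data (the first number) is the easy part; it is the repeated out-of-sample comparisons (the second number) that would have to hold up before a predictive claim is justified.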

Unfortunately, many authors are not clear in their papers about exactly which purpose they are justifying their model for. Maybe they have not thought about it, maybe they are confused, or maybe they are just being sloppy (e.g. assuming that because it is good for one purpose it is good for another).


6 Mar 2017

The Post-Truth Drift -- why it is partly the fault of Science (a short essay)

In a time when public discourse is said to have entered a "post-truth" era and the reputation of "experts" as a group is questioned, it is easy to blame others for the predicament that scientists find themselves in (e.g. politicians, journalists, big business interests etc.). However, I argue that a substantial part of the blame must fall on ourselves, the scientists -- that we have (collectively), more than anyone, knocked away the pedestal on which we stood.

Firstly, scientists have increasingly allowed their work to be prematurely publicised - "announcing" breakthroughs with the first indicative results (or even before). It is, of course, understandable that scientists should believe in their own research, but it should be part of the discipline that we do not claim more than we have proved. Partly this is due to funding and institutional pressure to quickly claim impact and progress (I remember an EU funding call that asked for "fundamental theoretical breakthroughs" and "policy impact" in the same project), but again it is part of the job to resist these pressures. More fundamentally, the basis for academic reputation has changed from cautious work to being first with new theories -- from collective to individual achievement.

The result of this over-hyping of results is that science loses its reputation for caution and reliability. This has been particularly stark in some of the "softer" sciences, such as nutrition or economics. All the clever mathematics in the world did not stop economists missing the last economic collapse - their lack of empirical foundations coming back to bite them. In the case of nutrition, a series of discoveries has been announced before their full complexity was understood.

However, this goes a lot further than the softer sciences. The recent crisis of reproducibility in many fields indicates that publication has overtaken caution even for results that are not publicised outside their own field. There is an imbalance in these fields: not enough people replicating and checking work, and too many racing to discover things first. This is evident in fields where many researchers propose or talk about abstract theories and few do the more concrete work, such as empirical measurement. Reputation should follow when an idea or model empirically checks out, and not before.

Measuring academic reputation by citation-based indices reinforces this deleterious trend - one can get many citations for proposing an attractive or controversial idea, regardless of whether one was right. If we reward academics for their popularity with their peers rather than for whether they were right, that will affect the kind of academics we attract into the profession. Many fields are dominated by cliques who cite each other and (consciously or unconsciously) determine the methodological norms.

All fields need some methodological norms; however, these norms can come about in ways that are independent of their success or reliability. Papers that grab a lot of attention can be more influential in this respect than those that turned out to be right. All fields seem to adjust their standards of success to ensure that the field, as a whole, can demonstrate progress and hence justify itself. When faced with highly complex phenomena, this can lead to a dilution of the criteria of success until they are achievable. In my field, abstract simulations without strong empirical foundations that merely provide a way of thinking about issues gain more attention than they should and, more worryingly, are then advertised as able to perform "what if" analyses of policy interventions (implying that their results will somehow correspond to reality). In economics, prediction rather than structural realism was declared to be the aim of modelling, but this has weakened to predicting already-known out-of-sample data.

If all this weakening of criteria were purely internal, and scientists were ultra-careful not to deceive others into thinking their results were reliable, this would not be so bad. However, whether deliberately or otherwise, far too often funders, policy makers and the public are left with the impression that declared findings are more solidly based than they are. This is exacerbated by the grant funding process, where people who promise great results and impact are funded and more realistic proposals are rejected. If one gets a grant based upon such promises, there is then pressure to justify outcomes that fall short of them, and to use language that obscures this.

Finally, when scientific advice and the policy world meet there is often fundamental misunderstanding, and this is partly the fault of the academics. In the wish for relevance and "impact", academics can be pressured into not being completely honest, and into providing policy makers with what they want regardless of whether this is justified by the science. One trouble with this interface is that there is no clear line of responsibility -- if the advice from the scientists conflicts with the judgement of the policy makers, what are the policy makers to do? If they trust some complex process that they do not understand, they are effectively delegating some of their responsibility; if they only trust it when it agrees with their intuitions, then this selection bias ensures that support, rather than critique, of decisions gets diffused.

The blurring of political and scientific debate that results from this incautious entry into policy debate has shifted the discussion from one about method and the reliability of results to a confrontation between alternative results. Classically, science has not debated results; rather, scientists have critiqued each other's methods. If there are conflicting results, this will not be resolved by debate but by further research. Alternative ideas should be tolerated until there is enough evidence to adjudicate between them. The competition should be in terms of sounder method, not in terms of which theory is better on any other grounds. Instead, scientific debate has become conflated with political debate, where, rightly, different ideas are contrasted and argued about.

There are some positive sides for science to the "post-truth" tendencies. The disconnected "ivory tower" school of research is rightly criticised. Whilst what academics do should not be constrained, what they use public money for has to be. The automatic deference that academics used to receive from the public has also largely disappeared, meaning that results from scientists will be more readily questioned for their unconscious biases and for the meaning of their claims. To some extent the profession has become more porous, with a wider range of people participating in the process of science - it is not just professors or boffins anymore.

Ultimately, maintaining academic and research standards is the job of the academics themselves. The institutions they work in, the funders of research and the current governmental priorities mean that the other actors involved will have different priorities. Universities compete in terms of the frameworks that government sets (REF, TEF etc.) or league tables constructed on simplistic indices. Funders of research are under pressure to claim that the research they fund has immediate and significant impact and publicity. It is only the academics themselves who can resist these pressures and so maintain their own long-term reputations for independence and reliability. If we do not have these things, why should the public carry on financing us? What will people think of our science in 50 or 100 years' time? Let the longer view prevail.