Follow up from the AAPG workshop, Oct 2021.
At the recent AAPG online workshop I generated a little controversy by stating that “models are not science”. Unavoidably I irritated some technical workers who have sat behind a screen doing modelling, since I had demoted their status as “scientists”. Sorry, but it is true.
Now, I have to declare what I mean by “models”, as the term has a broad colloquial meaning, so I need to be a little more precise.
Model:
- Something which accurately resembles or represents something else, esp. on a small scale; a person or thing that is the likeness of another
- A simplified or idealized description or conception of a particular system, situation, or process, often in mathematical terms, that is put forward as a basis for theoretical or empirical understanding, or for calculations, predictions, etc.; a conceptual or mental representation of something.
This second usage is the one we are concerned with. An idealised description, for theoretical or empirical understanding and predictions. Note that the model is not challenged by the deductive process of its application.
However, there is also the related word Theory:
- A set of hypotheses related by logical or mathematical arguments to explain and predict a wide variety of connected phenomena in general terms. E.g. the theory of relativity
This term differs from “hypothesis”: only after a hypothesis has survived some tests of falsifiability does it become a theory. Geological theories on the history of a region can be complex inter-linked subjects, and I tend to use the word framework to describe the current assembled bunch of linked and tested ideas.
So the question is, does your technical work count as a model or a theory? Is it a proposed set of ideas based on assumptions, analogues and selected expectations applied to your area, or has it been tested and survived?
This is Karl Popper’s demarcation problem in science, separating what he classified as pseudoscience from real science. Popper noted that any theory which can accommodate nearly any new observation becomes untestable and therefore is not a science (in his book he used the examples “scientific socialism” and psychoanalysis as untestable pseudoscience, but relativity was falsifiable and therefore a science).
The boundary is often not clear and we must look out for transgressions.
Firstly, what is a test? It cannot be one of the study’s own inputs/assumptions, or another model, however gratifying it is to see that an increase in assumption A produces the expected change in product B; that is an unavoidable bias stemming from the design of the model. A test must be an independent, nature-based observation. If palaeontology records progressively shallower faunas up-section followed by a missing section, does lithofacies show increasing sandiness and a corresponding break at the same point as the biostratigraphic hiatus? Is this reflected on seismic? Was the same character seen by an independent worker on an adjacent well? These are very simple examples of tests.
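The logic of such a cross-check can be sketched in a few lines. This is purely illustrative: the data-set names, depths and tolerance below are invented, not from any real well.

```python
# Illustrative sketch only: names, depths and tolerance are invented.
# Three independent data sets each pick the depth (m) of a suspected hiatus;
# if the picks agree within tolerance, the hypothesis survives this test.

def picks_agree(picks, tolerance_m):
    """Return True if all independent picks fall within tolerance of each other."""
    depths = list(picks.values())
    return max(depths) - min(depths) <= tolerance_m

independent_picks = {
    "biostratigraphy": 1512.0,  # shallowing faunas truncated here
    "lithofacies":     1508.0,  # abrupt increase in sandiness
    "seismic":         1518.0,  # reflector termination
}

if picks_agree(independent_picks, tolerance_m=15.0):
    print("Hiatus hypothesis survives this test")
else:
    print("Picks disagree: re-examine the model")
```

The point is not the code but the discipline: each pick must come from an independent, nature-based observation, not from the model being tested.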
An important point: it is usually the small links in a framework that can be tested most rigorously, rather than the whole construct, and this testing must be done and documented. This is why I like the word framework: something built of a web of tightly cross-linked observations, from which the whole theory rises. Geohistory plots are more sophisticated tests, as multiple observations are laid out graphically, and stratigraphy, burial indices (vitrinite reflectance etc.), palaeobathymetry, lithofacies and rates of sedimentation must all balance. Such a plot must not imply a complex tectonic subsidence curve that does not fit structural or seismic observations. A geohistory plot is a mini-theory, testing multiple observational hypotheses at a site.
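The balancing act behind a geohistory plot can be sketched numerically. Below is a heavily simplified Airy-isostatic backstripping calculation of the kind found in standard basin-analysis texts, ignoring decompaction and eustatic corrections; the densities and the well history are invented for illustration.

```python
# Hedged sketch: Airy-isostatic backstripping without decompaction or
# eustatic correction. All numbers below are invented for illustration.

RHO_MANTLE = 3300.0  # kg/m^3
RHO_WATER = 1000.0
RHO_SED = 2300.0     # assumed mean sediment density

def tectonic_subsidence(sediment_thickness_m, palaeo_water_depth_m):
    """Water-loaded (tectonic) subsidence from sediment thickness + palaeobathymetry."""
    unloading = (RHO_MANTLE - RHO_SED) / (RHO_MANTLE - RHO_WATER)
    return sediment_thickness_m * unloading + palaeo_water_depth_m

# Hypothetical well: (age Ma, cumulative sediment thickness m, palaeo water depth m)
history = [(20, 500.0, 50.0), (10, 1200.0, 100.0), (0, 2000.0, 30.0)]
for age, thickness, water_depth in history:
    y = tectonic_subsidence(thickness, water_depth)
    print(f"{age:>3} Ma: tectonic subsidence ~ {y:.0f} m")
```

If the subsidence curve implied by such points disagrees with structural or seismic observations, the geohistory plot has failed a test, and something in the inputs, or the paradigm behind them, needs re-examination.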
Staying on the right side of the demarcation boundary
Testing the components of a framework of knowledge gives it credibility, and prevents a science from drifting into being an authority-based dogma. So why do we ever drift away from this ideal approach? I think this bias comes from two sources:
- People fall in love with their models. Models are most important in non-experimental sciences, and as it happens these same sciences have to rely more on proxy measurements. For example in geology, evolutionary stage is a proxy for age, vitrinite reflectance a measure of maximum burial, and many more. A model, such as a eustatic sea-level curve of binary rise/fall that you can apply to describe and date your sedimentary packages, or a computer program that generates neat graphics of basin form, is the beginning of a process of making sense of these messy proxies. This process requires much labour, and is a puzzle that rewards in stages of “solving” (e.g. correcting vitrinite depths to TVD in a deviated well, explaining the appearance of a level of carbonates in a eustatic model, and so on). The more steps and time invested in this process, the less likely the worker will stop and ask “why am I doing this?”, “this isn’t going where I expect”, and especially “shall I undo months of work with a reality check that uses more of this messy proxy data?” There is too much effort and perhaps reputation already invested in the interim products. This is the sunk-cost effect well known to psychologists (the longer you’ve been in a relationship, the harder it is to break up).
- Managers love models. Especially models that can claim to be cutting-edge. Deductive models have fairly predictable workflows. Projects can be broken down into stages of work, each a step towards a relatively predictable output, in a relatively predictable time. This makes budgeting easier. However, this is exactly what we do not want at the moment in SE Asia. Everyone at the AAPG October workshop knew the old paradigms were failing** and we need to replace them. For this we must innovate. But designing a project to innovate is risky, as the chances of project failure and of genuine innovation are roughly equal. Only the most technically confident managers can guide such a project, and it will almost certainly not follow a pre-planned schedule. Most managers hate this project uncertainty and career risk and, given the choice, will opt for high-tech models instead.
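As an aside, the “correcting vitrinite to TVD in a deviated well” step mentioned above is itself a small, testable calculation. Here is a minimal sketch using the simple average-angle method (real workflows normally use minimum curvature); the survey stations and sample depth are invented.

```python
# Hedged sketch: measured depth (MD) to true vertical depth (TVD) by the
# average-angle method. Survey stations below are invented for illustration.

import math

def tvd_from_survey(stations):
    """stations: (md_m, inclination_deg) pairs, sorted by MD, starting at surface.
    Each MD increment contributes delta_md * cos(mean inclination)."""
    results = []
    prev_md, prev_inc = stations[0]
    tvd = 0.0
    results.append((prev_md, tvd))
    for md, inc in stations[1:]:
        mean_inc = math.radians((prev_inc + inc) / 2.0)
        tvd += (md - prev_md) * math.cos(mean_inc)
        results.append((md, tvd))
        prev_md, prev_inc = md, inc
    return results

# Hypothetical well: vertical to 500 m MD, building to 30 degrees by 800 m MD.
survey = [(0, 0.0), (500, 0.0), (800, 30.0), (1200, 30.0)]
md, tvd = tvd_from_survey(survey)[-1]
print(f"Vitrinite sample at MD {md} m sits at TVD ~ {tvd:.0f} m")  # ~1136 m
```

Even a bookkeeping step like this is a link in the framework that can, and should, be checked against independent data rather than taken on trust.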
Kuhn’s extraordinary research
I am digressing a little to put model-driven (=deductive) and inductive approaches in the context of what is happening now in South East Asian geological studies. Thomas Kuhn recognised that a period of what he termed “extraordinary research” was required for a science to undergo a paradigm shift. Now, this may well involve computers, and it might involve other technology, but most of all it is a change of mind-set. At the beginning the workers will not know where their new direction will ultimately take them. The process has to be nimbly managed, as it will be inefficient, and probably involve a small group of over-dedicated workers with a vague new idea, who are willing to turn around from blind-alleys and re-start again, and again. This is evidence-based induction (or abduction for the philosophical purists) which, above all, requires experience of the qualities of all the messy proxy data.
So in conclusion…
Let me slightly modify my opening statement that “models are not science”. The purely deductive application of models is still not science, because the underlying assumptions are the accepted paradigm, and these are not challenged by the deductive application. However, some model-based work does include a little testing of observations and is in transition towards a science, as long as we document the tests and results (especially the failures: these are the clues that we may be using the wrong paradigm). At the current stage of geological research in SE Asia, however, we have clearly illustrated much uncertainty and many unknowns, and this forces us to abandon the old deductive processes and investigate inductively. We need a big emphasis on clear science to finish the paradigm shift, or we are left with wishy-washy descriptions that we know are inadequate, and not worthy of investment.
** Old stratigraphic models could not predict the time and geographic distribution of the Batu Raja/Subis/Gomantong limestones, the Luconia limestone, the horribly named Ngimbang Limestone and others. The new paradigm does, and more.
References
Kuhn, T.S., 1962. The structure of scientific revolutions. Chicago & London: Phoenix Books, The University of Chicago Press.
Popper, K., 1963. Conjectures and Refutations: the growth of scientific knowledge. London: Routledge and Kegan Paul.