Icaran Feathers: A Trail of Scientific Problems

by Shaun Terry

Note: Upon review, if I had it to do over again, I might have mentioned something in the title about a proposed affective turn.

I. Introduction

In this paper, I define concepts that I view as important for considering some especially troublesome problems in science.

II. Do intentions matter?

One might conclude that intentionality generally matters less than outcomes do. That someone did not intend harm is not necessarily helpful: if someone hits us in the head, apologies alone will not diminish the size of the welt. But some cases are more complicated.

If someone is viewed as having intended to hurt someone, the intention in itself might be harmful. If someone intentionally hits us in the head, it might upset us that they wanted to harm us. By demonstrating intent to harm, the person succeeds, leaving us with at least two kinds of injury.

Turning to science, intentions can influence science’s understood role. If science were presumed to have the intention of discovering things about the world, then some people might claim to know particular things about the world and, in some cases, might even claim to know which things they know. Perhaps they would be wrong to think as much, but regardless, the idea can make a difference in how science is performed and how science is interpreted.

III. Rhetoric matters because context matters

Popper claims that Marx’s work is unfalsifiable. He might base this claim on the idea that any attempt to falsify Marxism could be met with the reply that the would-be falsifier fell victim to false consciousness. Popper’s point has been taken seriously, though it remains contested. What it tells us is that the framework we use to observe a phenomenon influences our interpretation of that phenomenon.

Cartwright, on p. 138 of “Ceteris Paribus Laws and the Socio-economic Machine,” gives us a framework for trying to understand science:

We aim in science to discover the natures of things; we try to find out what capacities they have and in what circumstances and in what ways these capacities can be harnessed to produce predictable behaviours.

Science, in Cartwright’s view, seems to make ontological claims: “something is” or “something else is not.” One might (rightly or wrongly) view experiments in science as different from experiments in everyday life, in that everyday discoveries simply tell us how we might achieve personal goals. The standard could be said to differ because the goal seems to differ.

In our daily lives, we try to establish how to produce particular outcomes, often coming to expect that what has produced preferable outcomes in the past will continue to do so in the future. Generally, we do not feel the need to know why a method works; at some point, we simply accept that it does. Science is different.

I claim that science has, as a primary aim, to tell us what the world is. Achieving this goal seems to require conceiving of a means to prove what is in the world. To achieve this, science must do more than tell us anecdotally that something can produce a particular outcome; if that were its aim, it would be of no distinct use to us. Instead, science tries to do the seemingly impossible: to make reliable predictions about the world. How we do this is complicated.

Popper views the means by which we make predictions as an unimportant consideration, but let us inspect that. We sometimes use OLS regressions to help explain the world: if one variable moves, we expect another to move. However, we sometimes observe correlations that seem to lack explanatory power. In some cases, we assert some understanding of how things could and could not work in order to suppose that some of these correlations are meaningless. To understand what is in the world, we find it important to demonstrate how things work as they do; otherwise, predictions do not tell us much. A correlation may reflect a causation, a reverse causation, or the effects of some other underlying cause.
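To make this concrete, here is a minimal sketch (in Python; the variables x, y, and z are invented for illustration) of how an OLS regression can report a strong association between two variables even though neither causes the other, because both respond to an unmodeled common cause:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# A hidden common cause z drives both x and y; x does not cause y.
z = rng.normal(size=n)
x = 2.0 * z + rng.normal(size=n)
y = -1.5 * z + rng.normal(size=n)

# OLS of y on x (a column of ones provides the intercept).
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"slope of y on x: {beta[1]:.3f}")  # clearly nonzero

# Including the common cause z in the regression removes the association.
Xz = np.column_stack([np.ones(n), x, z])
beta_z, *_ = np.linalg.lstsq(Xz, y, rcond=None)
print(f"slope of y on x, controlling for z: {beta_z[1]:.3f}")  # near zero
```

The regression alone cannot distinguish these cases; only a supposition about how things could and could not work tells us which variables belong in the model.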

Some people may claim that science is like everyday life: we know that we might be wrong, but as long as our way is the most effective one we know of to accomplish our goals, it is sufficient until it begins to fail in a significant way. I would argue that this instrumentalist approach has problems, though.

In our daily lives, we do not generally presume that our method for doing something is necessarily the best method; we often leave a good amount of room for doubt, and people observing us do not assume that we must be doing things by the best method. Some scientists may have doubts, but what we can be more certain of is that policymakers and the general public view science’s methods and conclusions differently than they view people’s everyday behaviors. This can complicate things.

When someone successfully crosses the street, they are not held to broad scrutiny; they are not accountable to the public. What scientists do, by contrast, can be fraught with broad political and ethical questions.

Views on science likely affect how science is done. A young student may be drawn to science by the claim that science helps to tell us what the world is and is not. It may be that science is the most effective means of proving ontological claims, but perhaps such bold claims should invite equally bold scrutiny. Because science is viewed as having ontological implications, it is sometimes viewed as using superior methods. If some among the scientific community feel that science can tell us unequivocally what is in the world, then they may be inclined to believe its claims and to protect its methods.

The methods that science has used all seem to have run into the problem of induction, yet we keep trying to find new ways to redeem them.

Someone familiar with induction and with popular methods in science, but unfamiliar with the debate over science’s methods, might well describe those methods as inductive. Popper, Kuhn, Lakatos, and many others have tried to address concerns over science’s methods, not by suggesting changes, but by changing how we view those methods. Why?

IV. The Inductive Problem

In our daily lives, we are often skeptical of change, especially when proposed changes could alter outcomes that are important to us. If someone says to stop farming the old way and to adopt new technology, we are likely to want some amount of evidence that this is a good idea before we make the change. Different people may require different amounts and kinds of evidence, but the person who is quick to make changes can seem hasty and reckless, while the person who waits to take a full account of the outcome is often viewed as wise. One might think of the old Chinese proverb about the farmer who runs into a seemingly bad situation, followed by a seemingly good one, followed by a bad one, and so on, all the while saying, “We’ll see” to every claim about his change of fortune. Consistency and reliability are often valorized over inconsistency and volatility.

If wrongly making a change in an attempt to improve the situation could be viewed as a Type I error, and failing to make a change that could improve the situation could be viewed as a Type II error, then the Type II error has advantages in the short term. One who commits a Type II error can still be confident in the short-term outcome; one who commits a Type I error might lose the farm.
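A minimal simulation sketch (Python; the yields, sample sizes, and decision rule are all hypothetical) illustrates the trade-off: the more evidence the farmer demands before switching methods, the rarer the Type I error and the more common the Type II error:

```python
import numpy as np

rng = np.random.default_rng(1)
trials, n = 10_000, 20          # simulated seasons, trial plots per season
old_mean, sigma = 100.0, 15.0   # established yield and its variability

def adoption_rate(new_true_mean, margin):
    """Fraction of seasons in which the farmer adopts the new method:
    adopt only if the trial plots beat the old yield by `margin` on average."""
    samples = rng.normal(new_true_mean, sigma, size=(trials, n))
    return np.mean(samples.mean(axis=1) > old_mean + margin)

for margin in (0.0, 5.0, 10.0):
    type1 = adoption_rate(100.0, margin)      # new method is in fact no better
    type2 = 1 - adoption_rate(110.0, margin)  # new method is in fact better
    print(f"margin={margin:>4}: Type I ~ {type1:.2f}, Type II ~ {type2:.2f}")
```

Raising the margin from 0 to 10 drives the Type I rate from roughly one half toward zero while driving the Type II rate from near zero toward one half; demanding more evidence is exactly the conservatism the proverb valorizes.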

Put another way, if someone needs to provide food for their family, then farming in the proven way seems like a responsible thing to do. If they become convinced that there is a better way to farm, then they should adopt the change, but perhaps not until they have sufficiently accounted for the possible risks. The longer one waits, the more information one is likely to gain.

This seems to me the best available explanation for our seeming obsession with reframing the scientific method, even though our basic ways of thinking about how to perform science tend to follow long-established patterns. Most scientists are not farmers, and science certainly does not seem much like farming, but humans often seem to operate on the basis that Type II errors are generally more acceptable than Type I errors, and scientists are humans. Some aspects of human psychology seem to be universal, as discussed above. While scientists are aware of both kinds of errors, it seems reasonable that their humanness may still leave them vulnerable to a bias against Type I errors. After all, not all scientists are philosophers of science.

Falsificationism tries to diminish these concerns by saying that we design experiments for the sake of trying to negate theses, but the virtue in this seems only to be that we pay more attention to proposed black swans than we otherwise might. Falsificationism tells us to exhaust all means of finding a falsification of a thesis in order to test it, but when do we decide that a falsification is a falsification? When we find what looks like a falsification, we want to verify it. We might run experiments to try to reproduce apparent falsifications, or run experiments seeking similar results in different contexts, hoping to accrue a sufficient number of what look like falsifications. While not every case may look exactly this way, does this not present the same kind of problem that we encounter in induction?
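To see the regress concretely, here is a minimal sketch (Python; the replication probabilities and the “k of n” decision rule are hypothetical) of deciding when an apparent falsification counts. Whatever rule we pick, it generalizes from finitely many observations, which is exactly the inductive move:

```python
import numpy as np

rng = np.random.default_rng(2)

def declare_falsified(p_replicate, n=10, k=7, trials=10_000):
    """Fraction of cases in which we declare the theory falsified, under the
    rule: declare falsification once k of n replication attempts also show
    the anomaly. Each attempt shows it with probability p_replicate."""
    anomalies = rng.random((trials, n)) < p_replicate
    return np.mean(anomalies.sum(axis=1) >= k)

print(declare_falsified(p_replicate=0.9))  # genuine anomaly: almost always declared
print(declare_falsified(p_replicate=0.3))  # mere artifact: occasionally declared anyway
```

No choice of k and n removes the possibility of error; it only relocates it.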

The problem with induction seems to be that the information we have is incomplete, so we cannot rely on it. The problem with Falsificationism seems to be that we cannot be sure that a falsification counts as a falsification. In other words, the information we have is incomplete, so we cannot rely on it. In either case, our thesis is always vulnerable to negation by unknown information, and this is the problem of the black swan.

I believe that there is no reason to treat a falsification as reliable other than convenience, just as the information we get through induction seems to be treated as reliable only for the sake of convenience. One attempts to falsify based on the convenient assumption that Falsificationism works, much as one induces based on the faulty but convenient assumption that induction works. In either case, one assumes the ability to generalize for no real reason other than to be able to generalize.

In what we have read, the problem seems to remain unsolved. Someone might claim that science is merely instrumentalist, but the problem remains: science plays a role in all our lives, and mistakes in science matter enough that if we treat science as though it were infallible, the implications can be grave.

I see value in valorizing the everyday wise man who says “We’ll see” for a time, making himself more susceptible to Type II errors than to Type I errors. But treating science as though it can avoid the problems of induction seems to risk licensing people to conduct science without properly problematizing it. Instead, we might recognize that there is an inductive problem in the way we do science, consider what the necessary threshold for making dramatic changes should be, and assume that science owes more to the public than individuals owe to one another. Perhaps, at every step, we ought to be as clear as possible about what we are and are not accomplishing in science.

V. Trouble with the Positive-Normative Distinction

Hume thinks that what we ought to do comes from our sentiments. As I demonstrated in an earlier paper, our decisions are often made subconsciously, leaving our conscious minds to rationalize them afterward. We sometimes seem to be fooled into thinking that we have more agency than we actually do. Thus, we can notice that decision-making is at least sometimes governed more by the reality that we rightly or wrongly perceive around us than by our conscious minds.

When our accepted reality changes, our subconscious minds may reach different decisions than they would have reached under the previously accepted version of reality. The new decision outcome is not one that is chosen consciously, but one that is determined by the shift in perceived reality. This does not address why Hume wanted to separate is from ought, or positive from normative, which appears to be an important point.

Hume thought that perceptible reality should be considered separate from deductive logic; his point in saying so was to argue against the use of induction. In some sense, however, the separation of the sensorial world from deductive logic is merely an application of the mind-body problem, and it does not take into account contemporary understandings of human minds.

The mind seems to conceive both of the sensorial world and of logic, even if conceptions of reality are nearly universally accepted as reflections of ontological truths. Hume, on the other hand, argued that logical reasoning was separate from reality as he perceived it. It is worth noting, however, that both kinds of thinking are constructed by our minds; every observation of reality and every logical argument comes into existence through our minds, leaving each kind of thought subject to the mind’s possible distortions. The point here is not to claim that observations of what we think of as reality are equivalent to logical reasoning; that would appear to me absurd. Instead, I mean to point out that the two share something: what people perceive to be reality is not necessarily what reality is, and people do not necessarily make logical decisions about what they ought to do by the means they sometimes think they do. Put another way, our perceptions and our decision-making both seem to be contingent, subject to more than external realities and conscious decision-making. This can complicate the positive-normative distinction.

We often seem to be confused about what constitutes the positive; we might never fully understand it. The normative seems to be governed, at least to some degree, by subconscious thinking. If what we ought to do is not always determined by conscious decision-making, and if changes in our accepted reality affect our subconscious minds, then shifts in our understanding of the world can necessarily imply changes in what we decide we ought to do. Further, it seems that we do not always have the sort of agency that might keep this perceptual shift from influencing our decisions.

I do not believe this to imply that induction is like deduction, nor do I necessarily believe that there is a logically deductive argument for what ought to be done. What I argue is that the positive and the normative are not necessarily what we think they are. It seems reasonable to suggest that changes in what we perceive the world to be (what is) could necessarily imply changes in what we think we ought to do. In other words, perhaps the is implies the ought.

Earlier, I claimed that intentions sometimes seem to matter more than at other times. Intention might not matter when determining whether is sometimes implies ought: it might not matter whether an implication is intended, and it might not matter why the is question arises. What matters here seems only to be whether the results of experiments sometimes tell us what we ought to do. The mechanism by which this happens does not seem to uncomplicate matters. If what we find to be true somehow, but necessarily, leads us to different choices than we would otherwise make, then the positive has implications for the normative, regardless of intention.

It is not simply a matter of people being able to assimilate new information and apply values to situations in ways that account for new information. Instead, new information changes how we act, regardless of conscious thoughts. We have no control. Is seems to imply ought.

VI. Conclusion

What if there is bias in science, if the inductive problem is persistent, and if the positive-normative distinction is problematic? I doubt that I agree with Hume on everything, but insofar as Hume pushed back against some Enlightenment trends, I support much of his message. I bring this up because I think it helps in considering how to react to these problems.

If we think it problematic that science and reason are viewed as virtuous while sentimentality is not, then perhaps we miss something that we are continuing to learn in the cognitive sciences: decision-making may not always be as logical, or as conscious, as some people might believe. If that is the case, then perhaps there is value in cultivating a healthy understanding of our sentiments in order to avoid some of these problems.

It seems reasonable enough to imagine that someone could go into science wanting to explain the world, thinking of science as especially fruitful for unequivocally answering important questions. This might lead them to protect science, despite its problems. If someone’s methods run into the inductive problem, they might not be inclined to hedge their findings; after all, the methods are normal, and we know that they work. Scientific findings might then be overstated. We might base our faith in science on the fact that we accept some scientific findings as facts, shifting our accepted reality. This, then, might necessarily lead to changes in behavior. What a mess.

This might all seem highly contingent, and I would argue that it is, but then, I would argue that everything else might be, as well.
