Humanistic Realism in Science: A Defense of the Irrational

by Shaun Terry

“Our motives and even our purely scientific ideals, including the ideal of a disinterested search for truth, are deeply anchored in extra-scientific and, in part, in religious evaluations. Thus the ‘objective’ or the ‘value-free’ scientist is hardly the ideal scientist. Without passion we can achieve nothing – certainly not in pure science. The phrase ‘the passion for the truth’ is no mere metaphor.” – Karl Popper, “The Logic of the Social Sciences” (p. 97)

“And thus in the last analysis Economics does depend, if not for its existence, at least for its significance, on an ultimate valuation – the affirmation that rationality and the ability to choose with knowledge is desirable. If irrationality, if the surrender to the blind force of external stimuli and uncoordinated impulse at every moment is a good to be preferred above all others, then it is true the raison d’être of Economics disappears. . . The revolt against reason is essentially a revolt against life itself.” – Lionel Robbins, The Nature and Significance of Economic Science (pp. 157-158)

 

Karl Popper and Lionel Robbins want science to be very sciencey. Science, in their view, should be about cold calculations of what the world around us is. It would seem that an objective view of the world could help us to make good decisions.

However, I do not believe that any objective view of anything can be grasped by humans; I do not believe that humans are rational in any meaningful way (except to say that humans can sometimes be more rational than some humans sometimes are); and I believe that the distinction between the positive and the normative is basically a false one. I will explain all of this throughout this paper.

Essentially, Popper defines science as those behaviors that test theories. Scientific theories, in Popper’s view, are those that are falsifiable, so theories that do not seem to be at all falsifiable do not qualify as science. According to Popper, behaviors that do not aim to test, for the sake of falsification, are not scientific.

Robbins defines economic science as that which positively describes behaviors that result from scarcity. In Robbins’s view, economic science should concern itself only with providing an account of any such behavior. His view of science, here, is to quantify the relevant information and do nothing more, i.e. economic science is a positive, as opposed to a normative, endeavor.

What they seem to share is their Humean devotion to separating the normative from the positive. Robbins is more explicit on the subject, while Popper’s position, I believe, must be partly inferred. When Popper points out cases of pseudoscience (Freud, Marx), what he seems to take issue with is advocacy for theories in spite of the empirical realities they face. These pseudosciences seem interested in defending their theories out of normative concerns instead of allowing falsification to lead us to better theories. Positive science may be brutal, but perhaps we must simply allow it to run its course.

I think that Popper would agree that science, being purely about falsifying theories, is to be kept separate from policy decisions and other normative concerns, even if science can inform the answers to those normative questions. Pseudoscience is the exceptional case: it runs into problems because it often allows the normative to corrupt concerns for the positive. Robbins seems to be a bit clearer on the subject: positive economics does not deal with normative questions, except to provide data with which to make decisions.

In the quote above, Robbins, using a logic that seems to be falling more and more out of fashion, rationalizes the importance of economics as an academic field. It seems that academia is growing increasingly sensitive to the lived experiences of various people, especially the experiences of those among historically underrepresented and misrepresented groups, but Robbins is from an earlier time. Enlightenment and post-Enlightenment thinking has put a good deal of focus on the positive virtues of rationality. But as we have learned more about the dangers that can come from ignoring the irrational, and about the virtues of cognitive processes that are not consciously performed, academia seems to have largely shifted toward greater respect for our irrational selves. For this reason and others, what Robbins said may strike at least some of us as problematic.

Robbins may have defined as rational any act which helps us to achieve our aims, but a closer look at this definition reveals problems. If we act in a way that is incidentally helpful to us, is that rational? For an extreme example, if someone attempts suicide, only to become famous in doing so, leading them to incredible wealth and to getting all the mental health care that they need, and if this results in their happiness and improved health, is that rational or irrational? Someone might argue that the definition, then, would be improved by having it speak to intentionality.

Rationality is often viewed as being a conscious process — one that speaks to people’s intentions. But humans do many things that solve immediate problems and ignore more lasting ones. Often, immediate problems are not as important as more lasting problems, but we often sacrifice the long-term for the short-term, anyway. In this way, people often cause predictable problems by not fully considering the consequences of what they do. That is to say that they are not always fully intentional. To that point, it is also true that we may misunderstand problems or misunderstand our broader situations. This can come from distortions that arise from psychological traumas, socializations, and other effects on our brains that alter our judgements. This all speaks to part of the very problem in defining “rational.”

The way the word “rational” is often used takes Robbins’s definition and accounts for a good deal more. We could argue over what Robbins means by “uncoordinated impulse,” but I do not think it would do any good. Simply, no creature could survive were its thoughts and actions uncoordinated. Any distinction between rational and irrational needs “rationality” to incorporate something more than the fact that actions are coordinated and assist in survival. Otherwise, the distinction is empty: irrationality, so conceived, ensures its own demise, which means no surviving creature could ever be irrational, and a distinction in which one side exists and the other cannot is a false one. If “rational” is to be a word, it must mean more than what Robbins suggests.

The interesting point, then, is that Robbins seems to frame his argument so as to admit that there actually is an irrational basis for economics: the value judgement that allows us to determine that rationality is good. Of course Robbins contradicts himself here, but I sense that he is basically admitting as much and that he is okay with it.

The problem appears significant, though. The decision to value rationality is necessarily an irrational one. What makes anything better than anything else? We might presume that it is simply what has been evolutionarily advantageous to us, and in some senses, logical processes seem to have served us well.

If we can tentatively accept that rationality should be defined as intentional thought, and if we can believe in any science at all, then humans are mostly irrational. Prefrontal (or meta-) cognition seems to make up very little of what we do, and this kind of thinking is what we might think of as rational. Popper’s rationality principle faces similar issues to Robbins’s definition of rationality.

The quote from Popper suggests an understanding of something that seems clear to me: there is no reason to test anything at all if the results are not important to us, and such import relies on irrational valuations. Humans basically agree on the most fundamental moral questions: violence is wrong, theft is wrong, dishonesty is wrong, playing dubstep in public is wrong, etc., but why? What makes violence, theft, or dishonesty (or dubstep) wrong? Frankly, I can see two ways to view the moral codes upon which we universally agree: 1) we simply feel them; 2) they can be explained as what has most effectively led to the survival of humans.

Essentially, at least some of our universal sense of morality — that is, that on which we most surely seem to agree — seems to be a function of pro-social, and otherwise adaptive, behavioral patterns. These ethical matters may become complicated based on a number of factors, but we seem to have evolved in such a way as to instinctively agree on some questions of how we should behave.

Humans generally have common goals and desires, including that we should try to help one another survive and that we should try to help one another obtain what is needed in order to survive. Our happiness or unhappiness can have serious implications for our individual survival, and generally, we instinctively aim to be happy and to help one another be happy as a means of our own survival.

I believe that Popper’s quote speaks to this understanding. We recognize common goals and attitudes, so we work toward achieving those things. There must be things that motivate us to do science, and if we did not agree on what many of those things were, then I do not believe that science would get done, especially not in the highly effective ways it often seems to get done.

The fact that some things motivate us necessarily requires that we make distinctions; otherwise, we would simply wander around doing random things at random times in random ways. However, and to repeat, it seems that these motivations could only ever be based on irrational value judgements. To whatever degree humans can be “rational,” it is likely only: 1) in relation to other human and non-human beings; 2) in terms of fulfilling irrational desires, which is what Robbins’s conception of economics speaks to.

This seems to be the basic problem that Popper and Robbins share here. They both argue that something irrational must drive scientific thinking, but they both seem to prefer a hard line between positive and normative concerns. These thoughts seem incongruous. Positive science aims to describe what the world is, and if we are to distinguish between positive and normative, then it seems reasonable enough that we should be able to do so according to whether the statement or question at hand is imbued with irrational thought. Positive science is that which sees the world clearly, with no concern for anything other than the visible truth. Normative science — or art, if you prefer — concerns itself with irrational human ideals. This brings up several possible problems that I can see.

First, if irrational concerns motivate our scientific endeavors, then we are likely only ever to conduct scientific behaviors that reflect irrational concerns. How do we make a positive account of the world if we limit our scope? This may seem like a minor point, but the kinds of problems that science aims to solve are often complicated ones. If we systematically limit our understanding of the world, then we are likely to miss a lot of answers and to misunderstand a good deal of the world. Here, normative concerns inform the methodology of positive science so as to distort the positive scientific picture in particular ways.

This concern can lead to asymmetrical effects in terms of whom it affects and how. So long as scientists must be motivated by irrational concerns, science will be driven by the concerns that scientists and funders have, rather than the concerns that all humans have. This can make a difference because some kinds of people may be over- or underrepresented in the scientific community. We might not be surprised if solutions arrived at through science turn out to be more helpful to wealthy, Western, white, male people, if they make up a greater proportion of scientists and funders than do non-wealthy, non-Western, non-white, non-male people. It is likely true that relevant questions go unasked by scientists, and this can be quite problematic. In this case, the normative concerns of scientists are not necessarily consistent with the normative concerns of non-scientists.

Science is not a singular, fixed, transparent, static, easily assessed thing. I bring this up because scientific processes can be vulnerable to criticism. Some experiments do not work, and theories generally run into problems. We do not necessarily have a means of making the most useful choice about how to resolve these things. There are longstanding practices in which we are fairly confident, and there are cutting-edge practices that might be riskier. Even reasonable scientific questions may produce results that depend on operational choices, and those choices can be resolved arbitrarily or be vulnerable to personal biases. The ways by which we conduct science rely on choices, and not all of those choices need be the most rational ones, so here, normative concerns affect positive outcomes. But even at the point of interpreting findings, there could be problems.

When reporting research, there is often an interpretive phase of what we might describe as the “positive scientific” findings. In my view, this phase can be crucial. Even a reader who is an expert in the field may be affected by the interpretation given by the interpreter. Those who are not relevant experts would seem to be subject to even greater influence from the interpretation given. The interpretation is both important and sensitive. If we all have irrational concerns, then we can expect that, outside the view of our conscious minds, our irrational concerns are likely to affect much of what we do. In fact, this is basically what the term “bias” means, does it not? Here, the normative concerns of the interpreter of the data affect the understanding of positive results, possibly leading to differences in how future positive scientific endeavors are undertaken.

The reporting of scientific findings can have implications for how we behave. In class, we have talked about the commodity options market, and we have talked about Sorosian reflexivity. In science, we could think of clear, simple examples, as in when scientists tell us what makes us healthy and happy. If scientists find that exercise, meditation, or undertaking the Communist Revolution™ makes us healthiest and/or happiest, then it would seem wise to do those things. In such a case, it would seem clear that the positive becomes the normative. If a scientist tells us that eating meat causes global warming that will eventually kill us, it would be wise to stop eating as much meat; there is here a blurring of lines. Why would the scientist choose to study whether or not meat production causes global warming? And do the findings not tell me what I should do?

In the purest sense, scientific findings do not always explicitly dictate suggested behaviors, but clearly, the subject matter is chosen with irrational human concerns in mind; simply, the whole point of the scientific endeavor is often to suggest human behaviors. Other cases are less clear on the matter, but still, I would argue that the problem tends to persist.

If we think of a less clear example, it may help to better illustrate how problematic the distinction between positive and normative can be. If someone were to study some basic question in chemistry or astrophysics, each of these still can tell us a good deal about how humans should behave, if we can assume that humans mean to optimize their health and happiness. How chemicals are composed or how they react with one another can be very useful information for humans. How stars behave could also have a dramatic impact on the decisions that humans make.

With that in mind, is there ever really a distinction between positive and normative? Why learn about the world? We have to have reasons for doing what we do, but contrary to what the word “reason” might imply, this word very often speaks to feelings, evolutionary impulses, irrational motivations.

In this way, perhaps it is a bit ironic that earlier philosophers referred to the normative as “art.” After all, when scientists do their work for the least normative reasons, it may be due to some aesthetic concerns: “I find Math’s potential elegance exciting,” “I chose marine biology because fish are so beautiful,” etc.

Moreover, the more we learn about the world, the more we think about the implications of those things. It is hard to imagine a scientific question that would not tell us something about how we might behave. Perhaps this is at least partly because the questions we tend to ask can also tell us about who we are and what is important to us. To combine this point about the blurry line that distinguishes positive from normative with the point I made earlier about who scientists are: if our understanding of what makes humans healthiest is influenced by the biases of scientists and/or funders, then could we perhaps create conditions that would make happiest and healthiest the groups from which scientists and/or funders come, and not necessarily all humans? I see this as potentially a big problem, and in this light, the question of why some diseases are treated and not others, and of whom those diseases affect, takes on new potential meaning.

In the end, what scientists say about what the world is might tell us something about how we should behave, but this is a complicated thing. Scientists do important work and make important discoveries, but it is not as simple as describing scientists as objective observers who tell us what the world is and make things better for us. In fact, scientists have feelings and biases, too, and for some people, scientists actually make the world worse. Further, unmitigated valorization of cold, rational treatments of the world around us does a good amount of damage. This could be redeemed if people could ever be truly rational and if it would be good for them to be so. I am not sure that either is true, and I suggest that if we could acknowledge this, we could do science in a more honest and, perhaps, more productive way.
