Topic I. Role of Science in a Democracy
Course Overview
The focus of this course is on the errors humans tend to make, and the approaches scientific methodology has given us (and that we are still developing) to prevent, or at least minimize, those errors.
- What is the role of scientific expertise in a democracy? Where does the authority of science come from? Where does the authority of democratic decision-making come from?
- Topic 1 introduces the course, beginning with the question of when science or scientific reasoning is relevant. We begin by distinguishing facts from values, and use this distinction to consider the affordances and limitations of scientific inquiry and expertise for personal and political decision-making. These opening questions will return at the end of the semester, when we consider processes for effectively integrating epistocratic (expertise-based) and democratic input, in light of the capacities and limitations of scientific practice and of ordinary citizens examined throughout the course.
- Science is based on the assumption that we all share a public reality, one that is at least in part knowable. We will contrast theories of truth as correspondence vs. truth as coherence, and consider underdetermination and social factors in science. Despite many limitations, science is effective primarily because it is self-correcting: it involves a constant critique of the reliability and validity of our measures, and of the reality of the entities they seek to measure.
- It has been suggested that our trust in "a reality out there" is often strengthened by actively interacting with the passively perceived world (banging the table in front of us with our hand). Our "direct" experience of a scientist's reality is expanding further: from the unaided human senses to the human armed with instruments. The novel instruments that are now with us constantly (e.g., GPS, camera) allow us to interactively explore parts of the world that until recently were inaccessible, or accessible only passively through expensive technology or images made by scientists. This interactive experience of previously inaccessible aspects of the world, now revealed by technology, is broadening our sense of what counts as "real." We carry with us a growing range of interactive tools, these days primarily in the compact form of our smartphones.
- We examine sources of statistical uncertainty/error (which can be averaged down) and systematic uncertainty (which can’t). We also connect these concepts to related terminology (jargon) from other fields: precision vs. accuracy, variance vs. bias, and reproducibility vs. validity.
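A minimal simulation sketch (not part of the course materials; all numbers are invented for illustration) of why averaging more measurements shrinks statistical error but leaves a systematic offset untouched:

```python
import numpy as np

rng = np.random.default_rng(0)
true_value = 10.0
systematic_offset = 0.5   # e.g., a miscalibrated instrument: same shift every time
statistical_sigma = 2.0   # random scatter, different for every measurement

for n in (10, 100, 10_000):
    # Each measurement = truth + fixed bias + fresh random noise.
    data = true_value + systematic_offset + rng.normal(0.0, statistical_sigma, n)
    stat_error = statistical_sigma / np.sqrt(n)   # shrinks as 1/sqrt(N)
    print(f"N={n:6d}  mean={data.mean():6.3f}  "
          f"statistical error={stat_error:.3f}  bias={data.mean() - true_value:+.3f}")
# The statistical error vanishes with more data, but the mean converges
# to 10.5, not 10.0: no amount of averaging removes the systematic offset.
```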
- “Scientific optimism” is a rarely-discussed feature of the culture of science, a kind of psychological trick/technique to keep focused on a problem much longer than the usual attention span. Scientists adopt a can-do attitude, and convince themselves that the problem is solvable. This is an antidote and a contrast to almost all of the other skeptical, self-doubting aspects of scientific culture. With this “scientific optimism,” scientists can successfully take on problems that take years or even decades to solve, with hundreds of steps and iterations involved in developing techniques, inventing technologies, collecting and analyzing data.
- There is a history of problems becoming solvable once word goes round that another group somewhere in the world has solved them. Belief that a problem is solvable makes it worth sticking with long enough to solve it. Scientific optimism can thus be seen as an intentional self-delusion that a problem is solvable. (In the end, of course, this optimism must be weighed against the cost of working on a problem that turns out not to be solvable given our current capabilities and knowledge; nevertheless, it has proven useful in overcoming the human tendency to give up too soon.)
- One consequence of scientific optimism is that one approaches group problem-solving with an eye to enlarging the pie of resources, rather than fighting over scraps in a game assumed to be zero-sum.
- In casual conversation, we use the word "causation" in many ways. Sometimes when we say "x causes y," we're pointing out what or who is responsible for y. Sometimes we're pointing out something about the causal mechanism or process that leads to y. In this class, the focus is on yet a third reason we use causal language: to identify the "levers" in the world that we can push or pull to bring things about. If we want to generate good policies, for example, it's important to know what some intervention will bring about, and it's this sense of causation that is most relevant here. Moreover, science often identifies causal relationships in this sense well before the mechanism behind them is understood, so it is valuable to have a definition of causation that captures this aspect of scientific advance.
- In this class, we will examine causal relationships using variables, interventions, and randomized controlled trials.
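As a toy illustration of why intervention matters, the sketch below simulates a hidden confounder that inflates the observational treatment-outcome difference, while randomizing the treatment recovers the true effect. The scenario and all numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
true_effect = 1.0   # what the treatment actually does to the outcome

# Observational data: a hidden confounder (say, overall health) drives
# both who chooses the treatment and how the outcome turns out.
health = rng.normal(size=n)
chose_treatment = (health + rng.normal(size=n)) > 0      # healthier people opt in
outcome = true_effect * chose_treatment + health + rng.normal(size=n)
naive = outcome[chose_treatment].mean() - outcome[~chose_treatment].mean()

# RCT: a coin flip sets the treatment, severing the confounder's influence
# on who gets treated (the "intervention" on the treatment variable).
assigned = rng.random(n) < 0.5
outcome_rct = true_effect * assigned + health + rng.normal(size=n)
rct = outcome_rct[assigned].mean() - outcome_rct[~assigned].mean()

print(f"observational difference: {naive:+.2f}   (confounded, misleading)")
print(f"RCT difference:           {rct:+.2f}   (close to the true effect, {true_effect})")
```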
- In the previous class we began our discussion of causality, distinguishing it from mere association and considering ideal kinds of evidence for causality, when we can run randomized controlled trials. However, in many cases, it is not possible to run RCTs to test causal hypotheses, for ethical or practical reasons. In this class, we consider other forms of evidence for causality, which cannot individually be as conclusive as RCTs but together can still present compelling evidence for causal theories.
- It is often important to distinguish claims of singular causation, where A caused B, from claims of general causation, where variable X tends to affect variable Y. RCTs can only provide evidence of general causation, which might inform our understanding of particular instances of singular causation but cannot establish them with certainty. Both general and singular causation are subjects of scientific investigation. For example, whether Zika causes microcephaly is a question of general causation, while whether an asteroid caused the mass extinction of the dinosaurs is a question of singular causation. We also distinguish productive vs. dependent causation, and their implications for responsibility in legal and moral dilemmas like the Trolley Problem.
- Defining causal relationships using "variables" and "interventions": how we can say that this particular thing caused that particular thing; connections with the Trolley Problem and legal responsibility; and causal relations in observational sciences (e.g., paleontology, cosmology), where experiments are generally not possible.
- What does a scientist mean by "signal" and "noise"? We humans are always hunting for signal in noise; that is, we are looking for regularities, causal relationships, and communications (the signal) amidst various distractions, both random and intentional (the noise). Scientists have developed a variety of ways to do this, including "filters," both technological and conceptual.
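A small illustrative sketch (invented data, not course material) of a technological filter: a simple moving average suppresses fast random noise while preserving a slow underlying signal:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 10.0, 1000)
signal = np.sin(2 * np.pi * 0.5 * t)            # a slow regularity: the "signal"
noisy = signal + rng.normal(0.0, 1.0, t.size)   # buried in random "noise"

# A moving-average filter: fast random fluctuations average out,
# while the slowly varying signal survives.
window = 50
smoothed = np.convolve(noisy, np.ones(window) / window, mode="same")

print(f"RMS error, raw:      {np.sqrt(np.mean((noisy - signal) ** 2)):.2f}")
print(f"RMS error, filtered: {np.sqrt(np.mean((smoothed - signal) ** 2)):.2f}")
```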
- Humans are so good at finding signals in noise that sometimes they do so even when there is no signal. Many techniques of science, and much of statistics, are aimed at avoiding fooling yourself this way. A further problem is that we often aren't aware of how much noise we have searched through when we believe we have found a signal (the "Look Elsewhere Effect"). For example, we tend to think coincidences are meaningful. Statistics was invented in large part to deal with the problem of distinguishing real signal from noise fluctuations that look like signal.
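The Look Elsewhere Effect can be seen in a toy simulation (all numbers invented): test enough samples of pure noise and some will cross the conventional p < 0.05 bar by chance alone:

```python
import numpy as np

rng = np.random.default_rng(3)

# 200 "studies" of pure noise: there is no real effect anywhere.
n_tests, n_samples = 200, 50
false_hits = 0
for _ in range(n_tests):
    sample = rng.normal(0.0, 1.0, n_samples)          # null is true: mean = 0
    z = sample.mean() / (sample.std(ddof=1) / np.sqrt(n_samples))
    if abs(z) > 1.96:                                 # "significant at p < 0.05"
        false_hits += 1

print(f"{false_hits} of {n_tests} pure-noise searches look 'significant'")
# Expect ~10: search through enough noise and apparent signals are guaranteed.
```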
- It is often necessary to make decisions or judgments under conditions of uncertainty. When this happens, two kinds of errors are possible: we might think that something is present when it is not (a Type 1, or false positive, error), or we might think that something is absent when it is present (a Type 2, or false negative, error). In different contexts, these two types of errors may come with different costs. When one kind of error is worse than the other, it is prudent to err on the side of making the less bad error. Sometimes it even makes sense to make one kind of error quite often in order to avoid making the other kind. For example, even though the large majority of tumors are benign, it makes sense to get tumors biopsied, because if you have a cancerous tumor and assume it is benign (a false negative), it can kill you.
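A schematic expected-cost calculation for the biopsy example; the probabilities and costs below are made-up illustrative numbers on an arbitrary scale, not medical data:

```python
# Expected-cost reasoning for the biopsy example (all numbers invented).
p_cancer = 0.05                # prior: the large majority of tumors are benign
cost_biopsy = 1.0              # the expense/discomfort of the procedure
cost_missed_cancer = 1000.0    # the cost of a false negative: untreated cancer

expected_cost_if_biopsy = cost_biopsy
expected_cost_if_skip = p_cancer * cost_missed_cancer

print(f"expected cost, always biopsy: {expected_cost_if_biopsy:.1f}")
print(f"expected cost, never biopsy:  {expected_cost_if_skip:.1f}")
# 1.0 vs 50.0: tolerating many "unnecessary" biopsies (false positives)
# is rational when the rare false negative is catastrophic.
```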
- An important element of the culture of science is the use of “tentative” propositions, often quantified. These can be as confident as 99.99999%—you would bet your life on it—but it would still be understood to be held as a proposition which could be wrong. This makes it psychologically easier for a scientist to be open to being wrong—and to look actively for ways they might have gotten it wrong. This cultural understanding of the importance of recognizing and reporting one's credence level leads to insistence on including error bars on graphs: a data point is completely meaningless without an error bar.
- Most people's confidence estimates are wrong in characteristic ways: high confidence tends to be overestimated and low confidence tends to be underestimated. Calibration can be trained closer to accuracy, most effectively with repeated, unambiguous, and immediate feedback. One problem that arises from poor calibration is that juries often use witnesses' confidence to gauge the likelihood that they are correct, which often yields poor results because the witnesses themselves are poorly calibrated.
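One simple way to check calibration is to bucket judgments by stated confidence and compare each bucket's claimed confidence with its actual accuracy. The sketch below uses synthetic data constructed to show the characteristic overconfidence pattern:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5000
stated = rng.uniform(0.5, 1.0, n)            # confidence each judge reports
# Synthetic miscalibration: true accuracy lags stated confidence at the high end.
true_accuracy = 0.5 + 0.7 * (stated - 0.5)
correct = rng.random(n) < true_accuracy      # whether each judgment was right

for lo in np.arange(0.5, 1.0, 0.1):
    in_bucket = (stated >= lo) & (stated < lo + 0.1)
    print(f"stated {lo:.1f}-{lo + 0.1:.1f}: "
          f"actual accuracy {correct[in_bucket].mean():.2f}")
# Well-calibrated judges would match each bucket's stated confidence;
# here the 0.9-1.0 bucket is right only ~81% of the time.
```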
- A useful piece of scientific jargon is to speak of orders of understanding or orders of explanation. A "zeroth order" or "first order" explanation/cause is a major cause/factor, as opposed to a "second order" or "third order" explanation/cause, which would be a real cause with a smaller effect size. This is useful because explanations for how things/actors in the world behave can often be parsed into a primary explanation, a secondary less important cause, a third-order, still less significant cause, and so on.
- Physicists train their students in doing “Fermi problems,” back-of-the-envelope estimates of quantities that arise in physics problems and in life. This is useful as an approach to performing “sanity checks” of claims in the world and of your own ideas, beliefs, and inventions. Checking numbers with quick Fermi estimates may be even more important in a world in which it is difficult to evaluate the credibility of numbers available by Googling.
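A classic Fermi problem, worked as commented arithmetic (every number is a deliberately rough assumption; the point is the order of magnitude, not the final digit):

```python
# How many piano tuners work in Chicago? (Fermi's classic example.)
# Every number is a rough, defensible guess; errors tend to partially cancel.
population = 3_000_000                 # people in Chicago, roughly
households = population / 2.5          # ~2.5 people per household
pianos = households * 0.05             # ~1 household in 20 has a piano
tunings_per_year = pianos * 1          # each piano tuned ~once a year

tunings_per_tuner = 4 * 250            # ~4 tunings/day, ~250 working days/year
tuners = tunings_per_year / tunings_per_tuner

print(f"~{tuners:.0f} piano tuners")   # ~60: the right order of magnitude
```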
- Here we will explore the most widespread heuristics and biases that psychologists of judgment and decision-making have discovered in everyday reasoning: the availability, representativeness, and anchoring heuristics, and biases like optimism bias, hindsight bias, and status quo bias. Many examples are drawn from Daniel Kahneman's book, Thinking, Fast and Slow.
- Science has a particularly bad track record when it comes to studies of human sub-populations for the purpose of setting policy—particularly when groups in power study groups out of power. We should be aware of this, and wary of misusing science in such a way as to perpetuate injustice.
- Distinguishing pathological science, pseudo-science, fraudulent science, poorly done science, and good science that happens to get the wrong answer (which, for results reported at a p = 0.05 significance threshold, should happen by chance about 1 time in 20 when there is no real effect). What do practicing scientists do when they try to judge a paper in a field or sub-field outside their immediate area of expertise?
- This class explores confirmation bias in the search for and assessment of evidence. In particular, we consider the ways that people tend to seek out and think about evidence in such a way as to reinforce their existing opinions, rather than testing them against new information or alternative views.
- Science is not a single "scientific method" (as often taught in school), but is better characterized as an ever-evolving collection of tricks and techniques to compensate for our mental (and, occasionally, physical) failings, build on our strengths, and, in particular, help us avoid fooling ourselves. These techniques must constantly be re-invented as we develop new ways to study and explain the world. In the last few decades we have entered a period in which most scientific analyses are complicated enough to require significant debugging before a result is clear. This has exposed another way we sometimes fool ourselves: the tendency to look for bugs and problems with a measurement only when the result surprises us. Where previously we recognized the need for "double blind" experimentation in medical studies, some fields of science have now started introducing blind analysis, in which the results are not seen during the development and debugging of the analysis, and there is a commitment to publish the results, however they turn out, once the analysis is "un-blinded" and the results interpreted.
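One common blinding technique in physics is to add a hidden offset to the result while the analysis is developed, removing it only after the pipeline is frozen. The sketch below is a minimal illustration of that idea, with invented data:

```python
import numpy as np

rng = np.random.default_rng(5)
data = rng.normal(loc=3.2, scale=1.0, size=1000)   # the raw measurements

# Blinding: a secret offset is added so that, while the analysis is being
# developed and debugged, no one can tell whether the result is "surprising."
secret = np.random.default_rng(12345).uniform(-10.0, 10.0)
blinded = data + secret

# ... all development, debugging, and cuts happen on `blinded` ...
blinded_result = blinded.mean()

# Only after the analysis is frozen (and publication committed to)
# is the offset removed:
final_result = blinded_result - secret
print(f"unblinded measurement: {final_result:.2f}")
```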
- Sometimes groups of people reach better conclusions than people working independently, and sometimes they reach worse conclusions. There are features of group reasoning that can help, and features of group reasoning that can hurt. Here we explore how to avoid the pitfalls of group reasoning and to maximize the benefits.
- Often people think that science is necessarily reductionist, but in fact we can observe many patterns that are emergent, i.e., visible only at higher levels of organization. That is, some phenomena are only describable in terms of higher-level, non-reductionist patterns. In emergent phenomena, complex patterns (like organisms with emotions) can emerge from surprisingly simple sets of rules (like natural selection). Humans often mistake emergent phenomena as either magically inexplicable or intentionally planned by some conductor/choreographer/director, especially if one is not aware that causal explanations can depend on emergence. The internet in general, and social media in particular, are relatively untested domains in which new sets of rules (algorithms that choose what to show, likes, etc.) are being tried out. These new rules give rise to unintended emergent phenomena, such as the propagation of misinformation. Social media seems to foster conspiracy theories by connecting people with similar views and exacerbating confirmation bias; at the same time, emergent phenomena of this new online social world may seem so choreographed that they give rise to new conspiracy theories. These two patterns may exacerbate the historically documented tendency of people to believe in false conspiracy theories, by interpreting surprising emergent patterns as deliberate and by communicating with others who agree. On the other hand, it is also possible that the digital revolution makes actual conspiracies easier, as the internet facilitates communication, and therefore coordination, across distances.
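Conway's Game of Life is a standard illustration (not drawn from the course) of how lifelike, seemingly choreographed patterns emerge from two simple local rules that no individual cell "knows about":

```python
import numpy as np

rng = np.random.default_rng(6)
grid = rng.random((20, 20)) < 0.3   # a random initial "soup" of live cells

def step(g):
    # Count each cell's eight neighbors (with wrap-around edges).
    n = sum(np.roll(np.roll(g, i, axis=0), j, axis=1)
            for i in (-1, 0, 1) for j in (-1, 0, 1) if (i, j) != (0, 0))
    # Rule 1: a live cell survives with exactly 2 or 3 live neighbors.
    # Rule 2: a dead cell comes alive with exactly 3 live neighbors.
    return (g & ((n == 2) | (n == 3))) | (~g & (n == 3))

for _ in range(50):
    grid = step(grid)
print(f"{int(grid.sum())} live cells after 50 generations of two local rules")
# Gliders, blinkers, and stable blocks emerge with no designer and no
# cell-level plan: the "choreography" is entirely emergent.
```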
- Reason by itself, without the arational elements of values, goals, priorities, principles, preferences, fears, desires, and ambitions, does not yield decisions: decision-making requires weaving the rational together with all of these arational elements that get humans to approach problems in the first place. Consequently, we must look for, study, and develop principled approaches to coordinating all these elements appropriately in our decision-making processes. Without such scaffolds, rationality is frequently what gets neglected. In the following classes, we explore some of the techniques that have been used to scaffold this kind of principled decision-making. None of these existing approaches accomplishes everything we would like. Nonetheless, they offer examples of techniques that we can recombine creatively with further new ideas and approaches to allow us to make better decisions in groups, appropriately applying rationality to achieve the complex goals of the relevant communities. We begin by exploring the desiderata that optimal decision-making processes should fulfill.
- The Denver Bullet Study offers one approach to integrating facts and values in a controversial real-world problem, drawing facts from a set of experts, gauging the values of different stakeholders, and bringing these together for a final decision.
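A schematic sketch of this kind of facts/values separation, with entirely invented options, ratings, and weights (not the study's actual data or method): experts supply factual ratings, stakeholders supply value weights, and the two are combined only at the end:

```python
import numpy as np

options = ["round A", "round B", "round C"]   # hypothetical ammunition options
dimensions = ["stopping power", "injury severity", "bystander risk"]

# Experts: factual 0-10 ratings, one row per option (invented values).
facts = np.array([[7.0, 4.0, 3.0],
                  [9.0, 8.0, 6.0],
                  [6.0, 3.0, 1.0]])

# Stakeholders: value weights over the dimensions; a negative weight
# means that dimension counts against an option.
weights = np.array([0.5, -0.3, -0.2])

scores = facts @ weights
for option, score in zip(options, scores):
    print(f"{option}: {score:+.1f}")
# The decision separates the empirical question (the ratings) from the
# value question (the weights): changing values, not facts, can change
# which option wins.
```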
- In the two classes of this week, we try out an approach to decision-making based on representative random samples, using a panel of experts to answer questions generated by small deliberative groups, each with a moderator. We also have a presentation from the professor who developed this technique, describing its use around the world. One example topic we have used is fracking for natural gas.
- Here, we explore scenario planning, a technique for systematically considering possible futures. This is valuable for planning because we often do not know exactly what the future will look like, and need to plan for multiple contingencies.
- Students design their own decision-making processes, utilizing their favorite aspects of the processes we have discussed.
- Adversarial vs. Inquisitorial Modes of Truth-Seeking