
    • OVERVIEW

      • When is science relevant? The many uses of a scientific approach.  
      • What is the role of scientific expertise in a democracy? Where does the authority of science come from? Where does the authority of democratic decision-making come from?  
      • Topic 1 introduces the course, beginning with the question of when science or scientific reasoning is relevant. We begin by distinguishing facts from values, and use this distinction to consider the affordances and limitations of scientific inquiry and expertise for personal and political decision-making. These opening questions return at the end of the semester, when we consider processes for effectively integrating epistocratic (expertise-based) and democratic input, in light of the capacities and limitations of scientific practice and of ordinary citizens examined throughout the course.
      • Addressing the Question: How should we use science to make better decisions?
      • Relevant to: Philosophical Underpinnings
    • TOPIC RESOURCES

    • EXAMPLES

      • Exemplary Quotes
        • “But the efficacy of wearing a bicycle helmet is a simple factual question that we should be able to get an answer for.”   
        • “It was embarrassing to discover how often my choice in the grocery store was determined by something irrelevant to the actual contents of the item.”      
      • Cautionary Quotes: Mistakes, Misconceptions, & Misunderstandings
        • The commonest difficulty is distinguishing facts from values in fraught real-world contexts, when not reminded to do so. E.g., students' deference to scientists on a question is predicted by the ideological fraughtness of the topic, not by whether it is a question of fact or value.
        • “What could science possibly have to tell us about love!”     
        • "Science can't tell us anything about happiness, because people don't agree about what makes us happy, anyway."  
        • "I think democracy is always better, because people know what they want and everyone deserves to try to get what they want. Everyone has their own facts, and people should be allowed to pursue their vision of the world without interference from scientists."    
        • "I think we should always just defer to experts on everything, because experts know what's best for everyone and regular people don't have time to learn that much or think that hard anyway."    
        • "Shall I refuse my dinner because I do not fully understand the process of digestion?" - Oliver Heaviside
        • Mistaking "claim of fact" for "claim of value" due to lack of evidence or debate around the topic.
    • LEARNING GOALS

      • A. ATTITUDES
      • B. CONCEPT ACQUISITION
        • Facts vs. Values
          • a. Facts: Objectively true claims about reality. Everything that is the case. What is, descriptively, including spatial relations, causal relations, attributes of objects, etc.
          • b. Values: What is of value, important, or of worth. Oughts, shoulds, etc.
        • Democracy vs. Epistocracy
          • a. Democracy: A system of government wherein a society’s citizens have more or less equal input into policies.
          • b. Epistocracy: A system of government wherein a particular subset of a society—privileged by their education or other markers of expertise—decides policies.
        • Scientific expertise has utility for political decision-making.  
        • Social and behavioral aspects of the world can be approached scientifically and, therefore, have relevant experts.  
      • C. CONCEPT APPLICATION
    • CLASS ELEMENTS

      • Suggested Readings & Reading Questions
          • Reading Question: What are the major limitations of democracy? What are its affordances (that is, what functions or goals can it serve well)?
      • Clicker Questions
        • You're on a hike in Marin with your friends and you black out. Next thing you know, you're in an ambulance racing towards the hospital. When you arrive at the hospital, the doctors tell you that one of two things is wrong with you, and they aren't certain which. Either (1.) You are going to die in the next few hours unless you agree to an invasive, dangerous heart surgery, or (2.) You'll probably be fine with some medication, and there's plenty of time to get more tests over the next few days. How do you decide whether to get the heart surgery or take the medication?
          • a. Go with the judgment of the most experienced doctors at the hospital.
          • b. Get everyone in the hospital, patients, visitors, doctors, nurses, janitors, and your worried friends, to vote on 1 or 2, and go with the majority vote.
        • You're the school president at a new school, and you are in charge of setting up the process to choose a school mascot. Students have already proposed a list of twenty possibilities. Which is the best way to make this choice?
          • a. Invite in an external historian of school mascots to choose, since they are an expert.
          • b. Ask the teachers and administrators to vote, since they know more than the students.
          • c. Give one vote token to every person in the school, teachers and students, and go with the most popular.
        • Discuss: What's the difference between questions 1 (what to do about a heart attack) and 2 (how to choose a school mascot)?
        • If you had to choose between living in a pure democracy or a pure epistocracy, which would you choose?
          • A. Democracy
          • B. Epistocracy
      • Discussion Questions
        • Why did you give the answer you did to the clicker question (better to live in a pure democracy or a pure epistocracy)? What are the costs of your choice?
          1. In what ways is the current US political system epistocratic and democratic?
          2. If you could create a government from scratch, to what extent and in what way would it be epistocratic vs. democratic? Why?
          • How would your system come to a decision on a healthcare plan?
          3. Utopias:
          • A. Imagine a utopia in which democracy functioned optimally (as well as it conceivably could). What would such a society be like? How is it different from our society? How hard would it be to bring about such a society?
          • B. Now imagine a utopia in which epistocracy functioned optimally. What would this society be like? How realistic is it?
          • C. Would you prefer to live in the democratic utopia or the epistocratic utopia? Why?
          4. How is disagreement about matters of fact different from disagreement about matters of value? How does disagreement about each typically play out? Why are they different?
      • Class Exercises
      • Practice Problems
        • Which of the following is a statement of facts and which of value? Discuss.
          • Socrates is mortal. (F)
          • Everyone should learn a little logic. (V)
          • America is the greatest country on earth. (V, unless "great" is interpreted to mean "powerful," in which case F could be argued.)
          • Dogs have four legs. (F)
          • Trout are the best fish. (V)
          • Everybody loves trout. (F, albeit false)
          • Opera is valuable to society. (V)
          • Soap operas affect social norms. (F)
          • Democracy is the worst system of government, except for all the others. (V)
          • Capital punishment is morally permissible. (V)
          • Capital punishment is an effective deterrent for crime. (F, albeit debatable)
          • We do not protect the oceans well enough. (V, although very easy to argue on the basis of self-interest)
      • Homework
        • Pre-Course Survey
    • Data Science Applications:
      • Hard to figure out the data science elements
    • OVERVIEW

      • Science is grounded in belief in a common, shared reality with some degree of regularity.
      • Science is based on the assumption that we all share a public reality. We will contrast theories of truth as correspondence vs. truth as coherence, and consider underdetermination and social factors in science. Despite many limitations, science is effective primarily because it is self-correcting; that is, it involves a constant critique of the reliability and validity of our measures and of the reality of the entities they seek to measure. Science assumes (or has come to believe in) an objective reality that we all share, which is at least in part knowable.
      • Addressing the Question: Why is Science Effective?
        • The Reality Assumption
        • Science as Self-Correcting
    • TOPIC RESOURCES

    • EXAMPLES

      • Introductory Examples
        • When scientists first developed thermometers, several different substances were used. The problem was, these substances had different rates of expansion, yielding different ways to quantify "temperature." For example, water, alcohol, and mercury expand at different rates: if you set up thermometers with "0 degrees" equalized, each of the substances will hit "100 degrees" at a different temperature. How, then, do we know which kind of thermometer to use? Is the temperature "really" 100 degrees when a mercury thermometer says so, or when a water thermometer says so?
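        • The thermometer puzzle above can be made concrete with a small sketch (the two fluids and their expansion curves are invented for illustration): both thermometers are calibrated to agree at 0 and 100 degrees, yet their readings disagree in between.

```python
# Sketch of the thermometer problem (hypothetical fluids; numbers invented).
# Both thermometers are calibrated so that 0 and 100 on their scales match,
# but their fluids expand differently in between, so mid-range readings differ.

def reading(true_temp, nonlinearity):
    """Scale reading a fluid's expansion defines for a true temperature in [0, 100].

    nonlinearity = 0 models a perfectly linear fluid; a positive value bows
    the expansion curve while keeping both calibration points fixed.
    """
    x = true_temp / 100.0
    expansion = x + nonlinearity * x * (1 - x)  # 0 at x=0, 1 at x=1 regardless
    return 100.0 * expansion

mercury_like = reading(50, nonlinearity=0.00)  # 50.0
water_like = reading(50, nonlinearity=0.08)    # ~52: same "true" temperature,
                                               # different reading
print(mercury_like, water_like)
```

Without an independent standard, nothing in the instruments themselves says which mid-range reading is the "real" temperature; that is the calibration puzzle.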
      • Exemplary Quotes
        • “Science means, first of all, a certain dispassionate method. To suppose that it means a certain set of results that one should pin one’s faith upon and hug forever is sadly to mistake its genius, and degrades the scientific body to the status of a sect.” – William James, “What Psychical Research Has Accomplished,” in Will to Believe
        • “They seem to think that anybody’s opinion is as good as anybody else’s on this matter where there is only one reality out there. It may be hard to figure out, but it’s still there anyway.”
        • “Either the earth is going to warm by >4 degrees over the next 50 years because of human-added greenhouse gasses or not—whether or not the proponents on each side of the debate are biased! ‘Nature always bats last.’”
      • Cautionary Quotes: Mistakes, Misconceptions, & Misunderstandings
        • The commonest mistakes are: (a.) taking the logs of the science-raft for "ideals" rather than claims, and (b.) not grasping just how science is self-correcting.
        • “Well, I just happen to think that if you punish people whenever they misread a word they will learn to read much faster—and most people agree with me. So...”
        • "Science is just another religion, no better and no worse than any other. They use textbooks as their scripture, and scientists are their priests. You should choose whichever authority feels most right to you or stick with the authority you were raised with, because there's no other way to choose between them."
    • LEARNING GOALS

    • CLASS ELEMENTS

      • Suggested Readings & Reading Questions
      • Clicker Questions
        • ‘Autism’ should be defined:
          • a. Operationally, in terms of observable symptoms. There could be different, equally good definitions.
          • b. Conventionally, by arbitrarily fastening onto one set of symptoms.  
          • c. There’s a real phenomenon out there, ‘autism’, but room for continuous progress in defining it.
        • Suppose there is such a condition as autism 'out there.' Who should be deciding whether it's a disorder or merely part of unproblematic neurodiversity?
          • a. Psychiatrists
          • b. People diagnosed with autism spectrum disorder
          • c. Everyone
          • d. Some other group
      • Discussion Questions
        • If every belief  were truly just as good as any other, what implications would that have for...
          • A. How we should reason about what to believe?  
          • B.  How we should reason about what to do?  
          • C. Human communication?  
          • D. What could be meant when someone calls a claim “true”?  
        • If there were not a shared reality, what would that mean for...
          • science?  
          • group decision-making?
          • communication?
        • Can you think of other epistemic frameworks that, like science, are self-correcting? If so, how are they similar? How different?  
        • What differentiates science from a religion? Describe two elements of science that are not true of religion.
        • Suppose there is scientific consensus on an issue, but you have an intuition that runs against that scientific consensus. Imagine you are obliged to advocate one side or the other (at least provisionally). Which side do you advocate, and why?
      • Class Exercises
      • Homework
        • Why wouldn't it be possible to throw out all our beliefs and start completely from scratch?
        • We have mental representations of all kinds of entities that we have never observed with our naked senses, like microbes, the rings around Jupiter, and black holes. How are our representations of these things different from our representations of directly observed entities like kittens and mangoes?
    • OVERVIEW

      • Science uses both our direct senses and a variety of instruments to extend our ability to observe phenomena. We trust our instruments for the same reasons we trust our senses: interactive exploration and comparison.
      • It has been suggested that our trust in "a reality out there" is often strengthened by actively interacting with the passively perceived world (banging the table in front of us with our hand). Our “direct” experience of a scientist’s reality is expanding further: from the human senses to the human armed with instruments. The novel instruments that are now with us constantly (e.g., GPS, camera) allow us to interactively explore parts of the world that until recently were inaccessible, or accessible only passively, through expensive technology or images made by scientists. This interactive experience of previously inaccessible aspects of the world, now revealed by technology, is broadening our sense of what counts as "real." We carry with us a growing range of interactive tools, these days primarily in the compact form of our smartphones.
      • Addressing the Question: Why is Science Effective?
        • Validation of Instruments
          • Conception
          • Techniques
          • Challenges
    • TOPIC RESOURCES

    • EXAMPLES

      • Exemplary Quotes
        • "It sure helped public health and medicine once we realized there were things affecting our health that were just a little too small to see. I wonder if we could have figured that out without the invention of the microscope. I guess we might have just thought there were more invisible entities out there."
        • “It didn’t occur to anybody that there was such a clear periodic pattern in the populations of those wolves and rabbits until somebody just started writing down every sighting—we’re pretty bad at estimating and remembering times between occasional events.”
        • “It’s amazing to first see a slow motion picture of a violin string making a note—wouldn’t it be great if our eyes and brains were fast enough to do this?”
        • “Do you think that modern technology offering us more different vantage points fundamentally changes our position on any practical questions?   For example, does it make a difference that in relatively recent decades we have gotten used to seeing the earth as a whole from space?”
      • Cautionary Quotes: Mistakes, Misconceptions, & Misunderstandings
        • Students tend to overextend the category of "scientific instruments to enable wider observation" to appeals to authority like books and Google and to tools that add to rather than clarify observations, like PokemonGo and psychedelics.
        • Interactive exploration posed a particular challenge for students. Perhaps because of this, many students had difficulty proposing a way to test an instrument.
        • “We can't really know anything about other galaxies, because we can only see them through fancy instruments and we can never know if the instruments are telling us the truth.”
        • Students continued to be confused about interactive exploration and often did not incorporate the validation of instruments into their understanding.
    • LEARNING GOALS

      • A. ATTITUDES
        • Place appropriate trust in instruments where direct observation is not possible (or is less precise/accurate).   
        • Our raw senses yield a rough representation of reality, sufficient for most people to have a firm belief in a shared objective reality at that level of description. This can be further expanded and refined by systematic observation and instrumentation.
      • B. CONCEPT ACQUISITION
        • An instrument can have greater precision and accuracy than direct observation or the instruments used to test and calibrate it.  
        • Validity of a Measure: Whether the measure yields information about the target entity out in the world, given that the target is something real (i.e., the concept is valid).
        • Challenges in validating the use of an instrument:  
        • Techniques for validating instruments:  
          • a. Interactive exploration: Testing an instrument by changing the thing it is measuring in ways you know through other means, and seeing if the instrument recognizes the changes appropriately (e.g. does driving increase a car's odometer; see how singing higher and lower notes affects a sound spectrograph; sprayable electrons in Hacking reading).  
          • b. Comparison of multiple instruments (e.g. thermometers)  
          • c. Comparison to direct observation (e.g. naked sight compared to sight with a magnifying glass)  
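        • Technique a. above can be sketched in a few lines of code; the "instrument" here is a made-up noisy scale (not a real device API): we change the measured quantity in a way we know through other means, then check whether the readings track the change.

```python
import random

# Sketch of interactive exploration with a made-up instrument (not a real API):
# intervene on the measured quantity in a way we know through other means,
# then check whether the instrument's readings track the intervention.
random.seed(2)

class NoisyScale:
    """Hypothetical scale: reports the true weight plus a little random noise."""

    def __init__(self):
        self._true_weight = 0.0

    def add(self, grams):
        # The intervention: we know (by other means) how much we put on the scale.
        self._true_weight += grams

    def read(self):
        return self._true_weight + random.gauss(0, 0.1)

scale = NoisyScale()
before = scale.read()
scale.add(100.0)  # we KNOW we added 100 g
after = scale.read()

# A valid instrument should register roughly the change we made:
print(after - before)  # close to 100
```

This is the logic of the odometer and spectrograph examples: confidence in the instrument comes from its readings responding appropriately to interventions we control.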
      • C. CONCEPT APPLICATION
    • CLASS ELEMENTS

      • Suggested Readings & Reading Questions
      • Clicker Questions
        • Which statement best captures your stance:
          • A. I strongly feel the pull of the arguments about limits of our understanding of reality.
          • B. I strongly feel their pull, but I think there is a little room for our understanding of reality to be essentially correct some of the time.
          • C. I strongly feel their pull, but I am still somehow pretty sure our basic sense of reality is generally right in many of the most important ways.
          • D. I’m strongly confident of our ever-growing capacity to capture reality, almost all of it.
      • Discussion Questions
        • How do we know we can rely on our senses? How do we know when we can't?  
        • How can we observe a thing that we can never perceive directly with our senses?  
        • Describe an entity which you believe exists for which you have only very indirect evidence. Why do you believe it exists? Is there anything that might convince you it did not exist? Is there anything that might convince you that although something like it does exist, it has quite different properties than you had thought? Possible answers: electrons, quarks, black holes, dark matter, souls, God, right and wrong, soulmates, etc .  
        • Suppose you were working for an extremely eccentric art collector. He asks you to measure the beauty of each item in his collection in a systematic way which is not reducible simply to your opinion. How would you go about operationalizing beauty?  
      • Class Exercises
        • CO2 Meter
          • Requires a Windows Computer
          • Blow into CO2 meter to demonstrate that CO2 in the room can be "interactively explored."
        • Diffraction-grating glasses
        • iPhone app provides interactive sound spectrogram (Spectogram Pro). Use slide whistle, stringed instrument, whistle (interactive exploration: high vs. low), difference in timbre between male & female voices. What differences does the spectrograph instrument show between these sounds? How do these differences map onto differences you can hear? What does the spectrograph show that you can't know just by listening? Do you believe that what the spectrograph shows that you can't hear is real? Why or why not?
        • iPhone app (Vernier Video Physics) shows quantitative analysis of slices of time after videotaping the movement of a tossed ball (falling and bouncing).
        • A contrasting example (i.e., without much of the interactivity needed to make the "reality" evident): CO2 concentration in the room is measured over the course of the class, and the resulting graph of CO2 over time is shown on the projector. The graph should indicate increasing levels of CO2 throughout the class, as the students filling the room fill it with CO2.
        • Give instructions to find level function on iPhone (this is unlabeled). Ask students to figure out what it does. Discuss how they know that's what it does.
      • Practice Problems
        • You have three meat thermometers, and they all give different temperatures for your holiday turkey. How would you go about deciding which thermometer to trust?
        • You get a fancy new telescope for looking at stars too distant to see with the naked eye. How would you go about testing if it is showing you real celestial bodies, and not just artifacts of the telescope?
        • Play some more with the spectrogram app on your phones. How do you know it is really showing you sounds, and not responding to something else internal or in the environment? Hint: interactive exploration!
      • Homework
        • What is the “fundamental philosophical puzzle” that faced scientists trying to calibrate thermometers in the eighteenth century? Explain the key point in your own words. (Based on the reading by Chang: “Spirit, Air and Quicksilver” chapter from Inventing Temperature)
    • Data Science Applications:
      • In what situations do you not have direct contact with reality?
      • How valid are your measures for your target entities?
    • OVERVIEW

      • This topic explores the sources of error and uncertainty in data.
      • We examine sources of statistical uncertainty/error (which can be averaged down) and systematic uncertainty (which can’t). We also connect these concepts to related terminology (jargon) from other fields: precision vs. accuracy, variance vs. bias, and reproducibility vs. validity. 
      • Addressing the Question: How confident should we be?
        • Statistical Uncertainty
        • Systematic Uncertainty
    • TOPIC RESOURCES

    • EXAMPLES

      • Exemplary Quotes
        • "It won't do us any good to average lots and lots of test subjects' heights together if our tape measure got shrunk in the wash!"
        • "If we estimate the effect of a drug on weight by randomly assigning people to take the drug vs. not take it and then measuring their weight after a year, we could subtract the average weight loss of drug-takers vs. non-drug-takers to get the effect size of the drug on weight loss. But the people know if they're taking a drug for weight loss, so there could be a placebo effect creating a systematic bias. So the better way to do the experiment is to give the control group sugar pills. Then we can be more confident that any weight loss is due to the drug, and not a systematic bias created by the placebo effect."
      • Cautionary Quotes: Mistakes, Misconceptions, & Misunderstandings
        • Difficulty distinguishing degrees of statistical error or uncertainty
          • "We can't say one instrument has more statistical uncertainty than the other, because everything has statistical uncertainty."
        • Failing to Notice Possibility of Statistical Error
          • "I know that everyone likes chocolate, because I asked three of my friends and all of them said they like chocolate."   
          • "Scientists ran a randomized controlled study with 30 people and four conditions, and the results were statistically significant, so we know the drug works." 
        • Failing to Notice Possibility of Systematic Error
          • “There’s no way that X [fill in with name of candidate] could win the election!  Everyone that I know is voting for Y [fill in with name of other candidate].”
          • "We surveyed over a hundred thousand people, so our study is definitely an accurate picture of how Americans think about science."
          • "We asked students to share if they were sexually active with a show of hands, and almost no one raised their hand. So we know that it's rare at our school."
    • LEARNING GOALS

      • A. ATTITUDES
      • B. CONCEPT ACQUISITION
        • Statistical Uncertainty/Error: Differences between reality and our measurement on the basis of random imprecisions.
          • a. All measurements have a certain amount of variance: differences between repeated measurements due to error and/or genuine variation in the sample. These differences will not all go in the same direction.
          • b. Statistical uncertainty can be reduced by averaging a larger amount of data.  
        • Systematic Uncertainty/Error: Differences between reality and our measurement that skew our results in one direction.
          • a. Such measurements will show a consistent bias, that is, a consistent deviation from reality in one direction.  
          • b. Systematic uncertainty cannot be reduced by averaging a larger amount of data.  
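        • The two definitions above can be illustrated with a short simulation (all numbers invented): averaging more readings shrinks the statistical scatter, but the average converges to the true value plus the bias, not to the true value itself.

```python
import random

# Sketch (all numbers invented): repeated measurements of a true value with
# random noise (statistical error) plus a constant offset (systematic error).
random.seed(0)
TRUE_VALUE = 10.0
BIAS = 0.5  # hypothetical miscalibration: every reading comes out 0.5 too high

def measure(n):
    """Average n readings, each with Gaussian noise and the same constant bias."""
    readings = [TRUE_VALUE + BIAS + random.gauss(0, 1.0) for _ in range(n)]
    return sum(readings) / n

# Averaging more data shrinks the statistical scatter around the average...
print(measure(10), measure(100_000))
# ...but the average converges to TRUE_VALUE + BIAS (10.5), not TRUE_VALUE (10.0):
# no amount of averaging removes the systematic error.
```

This is the shrunken-tape-measure point from the exemplary quotes: more data helps only with the random part of the error.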
      • C. CONCEPT APPLICATION
    • CLASS ELEMENTS

      • Suggested Readings
      • Clicker Questions
        • Sara is running a mile, which is a four-lap race around a track. Her coach wants to time how fast she runs the third lap of the four-lap race. As Sara goes past the line on the track that marks the start of each lap, her coach presses a button on a stopwatch to start a timer. When Sara goes past that line again, her coach presses the button again to stop the stopwatch. Which of the following will be a noticeable source of uncertainty in the coach’s measurement of how long it took Sara to run the lap?
          • A. Primarily statistical uncertainty
          • B. Primarily systematic uncertainty
          • C. Both statistical and systematic uncertainty
          • D. Neither statistical nor systematic uncertainty
        • You have heard that a cat can run up to 30 miles per hour and alligators can move at 11 miles per hour on land, and you want to know how fast you can move. Your friend has the fully automatic electronic timing system that is used for the Olympics, and you use it to time how fast you can run 100 meters at full speed. Your friend fires a starter’s pistol in the air to start your sprint, and the timer automatically starts at the sound of the pistol. The timer automatically stops when you cross a beam of light shining across the finish line. Which of the following will be a noticeable source of uncertainty when you compare your speed to the Olympic record?
          • A. Primarily statistical uncertainty
          • B. Primarily systematic uncertainty
          • C. Both statistical and systematic uncertainty
          • D. Neither statistical nor systematic uncertainty
        • Vote for a candidate for the Berkeley School Board Director
          • A. Judy Appel
          • B. Norma Harrison
          • C. Tracy Hollander
          • D. Jane Shelton
          • E. Beatrice Cutler
      • Discussion Questions
        • How could we find out if there were a systematic bias in our senses?
        • Consider the problem of measuring your class' understanding of types of uncertainty. What is a possible source of systematic uncertainty in measuring each individual's true understanding? What is a possible source of statistical uncertainty? How could both be reduced? (Hint: a larger number of more varied problems on quizzes.) What would be the cost of reducing these uncertainties in this case? (Hint: too much time spent on quizzes.)
      • Class Exercises
        • Students line up in “human histograms,” demonstrating statistical dispersion and systematic bias.
        • Small group discussions, clicker-questions, and exercises to have students identify the statistical versus the systematic uncertainties in a number of scenarios. 
      • Practice Problems
      • Homework Questions
        • It’s a very close race between two presidential candidates. You have access to two polls, both of which show the race to be very close, but which predict different winners. Poll A consists of 50,000 college students. Poll B consists of a sample of 50 people drawn from the U.S. census. How much do you trust the result of each poll, and why? Based just on the results of these two polls, which candidate do you think is more likely to become president: the one winning Poll A, the one winning Poll B, or is there not enough information to decide?
        • Suppose that, after going through all of the processes described in his article to estimate the uncertainty in the election, Nate Silver's election forecasting model had indicated that there was a 99.9% chance that Obama would win the election (instead of an 83.7% chance). Would you be 99.9% sure Obama would win? Why or why not? Relate your answer to the Pengra & Dillman reading on the two types of uncertainty.
        • One way to gather data about how people might respond to a product or idea is to conduct a “focus group,” in which several participants share their impressions in a group conversation. Based on the reading, why is or isn’t this a good way to obtain reliable information?
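        • The contrast in the polling homework question above can be explored with a quick simulation (the support percentages are invented): a huge but unrepresentative sample has tiny statistical error yet a large systematic sampling bias, while a small representative sample is unbiased but noisy.

```python
import random

# Sketch (support percentages invented): Poll A is huge but drawn from a
# subpopulation with different views (systematic sampling bias); Poll B is
# small but representative (statistical error only).
random.seed(1)
TRUE_SUPPORT = 0.48     # hypothetical overall support for candidate X
STUDENT_SUPPORT = 0.56  # hypothetical support among college students only

def poll(n, p):
    """Fraction of n simulated respondents who support X, each with probability p."""
    return sum(random.random() < p for _ in range(n)) / n

poll_a = poll(50_000, STUDENT_SUPPORT)  # tiny statistical error, big bias
poll_b = poll(50, TRUE_SUPPORT)         # unbiased, but noisy
print(poll_a, poll_b)
```

Running this repeatedly makes the point vividly: Poll A lands in nearly the same (wrong) place every time, while Poll B scatters widely around the right answer.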
    • OVERVIEW

      • Without scientific optimism (the idea that science is necessarily iterative, and that if we as scientists keep looking we will eventually gain insights), scientists would have discovered far less than they have.
      • “Scientific optimism” is a rarely-discussed feature of the culture of science, a kind of psychological trick/technique to keep focused on a problem much longer than the usual attention span. Scientists adopt a can-do attitude, and convince themselves that the problem is solvable. This is an antidote and a contrast to almost all of the other skeptical, self-doubting aspects of scientific culture. With this “scientific optimism,” scientists can successfully take on problems that take years or even decades to solve, with hundreds of steps and iterations involved in developing techniques, inventing technologies, collecting and analyzing data.
      • There is a history of problems becoming solvable once the news goes round that another group somewhere in the world has solved it. Belief that a problem is solvable makes it worth sticking to it long enough to solve it. Scientific optimism can be seen as an intentional self-delusion that a problem is solvable. (In the end, of course, this scientific optimism must be weighed against the cost of working on a problem that turns out not to be solvable given our current capabilities/knowledge, but nevertheless it has proven useful in overcoming the human tendency to give up too soon.)  
      • One consequence of scientific optimism is that one approaches group problem-solving with an eye to enlarging the pie of resources, rather than fighting over scraps in a game assumed to be zero-sum.
      • Addressing the Question: Why is Science Effective?
        • Scientific Optimism & Creativity
    • TOPIC RESOURCES

    • EXAMPLES

      • Exemplary Quotes
        • “I think we’re getting frustrated too quickly, and giving up too easily on each possible approach. Imagine that we had just heard that the other team had gotten this to work—we would be wracking our brains for weeks trying to figure out how they did it, not just the hour-and-a-half we just tried. This is a really hard problem, and we have to expect that it’s going to take a while to get some approaches to solving it.”
        • “I know she seems a little overly optimistic, but when I talked to her over lunch I realized that she is just trying to develop a “can-do” spirit so that we will all have the chance to try to solve the problem.”
        • “We’re capable, we know all the people we need to figure this out, and we’ve solved comparably difficult problems before... so one way or the other we’re going to find out how to make this work.”
      • Cautionary Quotes: Mistakes, Misconceptions, & Misunderstandings
        • “Many people have tried to solve this problem of increasing illiteracy and failed, so we shouldn’t throw more good money after bad — some problems are just intractable.”
        • "Scientists have been trying to figure out what dark matter is for decades, and we still basically have no idea. We'll probably never know, so it's not worth working on."  
    • LEARNING GOALS

      • A. ATTITUDES
      • B. CONCEPT ACQUISITION
        • Scientific Optimism: An attitude of optimism that persistence and iteration on a difficult scientific problem will eventually pay off with interesting insights into your problem.  
        • Skeptical/Gatekeeping Function: Science is in the business of rigorously testing claims against experience, rather than merely accepting them.  
        • Discovery/Innovation Function: Science is in the business of generating new theories for how to explain the world. This is both difficult (requires resources, uncertain success) and important (need to make decisions, wouldn’t have anything to “gatekeep” if new scientific ideas weren’t being generated). 
        • Omnivorous Science: Constantly learning new techniques, exposure to a variety of hypotheses & theories, interdisciplinary discussion, etc.  Important to progress because there are payoffs for learning novel experimental/technological/theoretical techniques and questions/problems from many domains of science, even beyond one's home discipline. 
        • Zero Sum Games vs. Enlarging the Pie: A can-do spirit goes along with optimism that problems can be solved by enlarging the pie, not just redistributing zero-sum goods.
      • C. CONCEPT APPLICATION
    • CLASS ELEMENTS

      • Suggested Readings & Reading Questions
      • Clicker Questions
        • What is the second-longest that you have ever spent trying to solve a problem/puzzle?
      • Discussion Questions
        • What are the potential costs of insufficient scientific optimism?  
        • What are the potential costs of excessive scientific optimism?  
        • Consider how science works to gradually advance our understanding of the world. Describe how one feature of the scientific process could be usefully applied to policy-making (e.g. scientific optimism, peer review, etc.). Make sure to explain how your suggestions could improve policy-making processes.
      • Class Exercises
        • Spinning cylinders: A challenging puzzle (involving spinning a piece of plastic tubing, with markings on it) is presented to the students. The experimental conditions end up giving experiential demonstration of the usefulness of scientific optimism.  
    • OVERVIEW

      • An introduction to the scientific approach to determining causal relationships.
      • In casual conversations, we use the word “causation” in many ways. Sometimes when we say “x causes y,” we’re pointing out what or who is responsible for y. Sometimes we’re pointing out something about the causal mechanism or process that leads to y. In this class, the focus is on yet a third reason we use causal language: to identify the “levers” in the world that we can push or pull to bring things about. If we want to generate good policies, for example, it’s important to know what some intervention will bring about, and it’s this sense of causation that is most relevant here. Moreover, science often proceeds by identifying causal relationships in the sense defined below well before the mechanism behind them is understood. It is therefore valuable to have a definition of causation that captures this aspect of scientific advance.
      • In this class, we will examine causal relationships using variables, interventions, and randomized controlled trials.
      • Addressing the Question: How do we find out how things work?
        • Correlation vs. Causation
        • Randomized Controlled Trials
    • TOPIC RESOURCES

    • EXAMPLES

      • Exemplary Quotes
        • “Let’s think causally here.  There are lots of words and concepts that we’re getting confused by here, but let’s remember that right now all we care about is what is causing what.”
        • “What if causation goes the other way, or there’s a common cause?  We’re getting all upset about the violence on television causing the violence in the streets because they seem to go up and down together in prevalence, but how do we know that it isn’t the other way around, or that they aren’t both being caused by some third factor?  Maybe we can look at the timing of one with respect to the other?  Or could we possibly control one of the factors by itself and see what happens?”
        • “Even if we don’t know how, this seems to work.  I know it seems crazy that you can fix this educational problem of delayed reading simply by feeding cereal to the kids every morning, but this was a pretty impressive randomized controlled trial so it’s hard to come up with another explanation.”
      • Cautionary Quotes: Mistakes, Misconceptions, & Misunderstandings
        • Misconceptions of Induction
          • "Aspirin is no better than a placebo. I used to give my sister bread pills when she asked me to get her aspirin, and she always said how much better they made her feel and never noticed the difference."
          • "Our analyses of 833 diverse middle schoolers found that most of them learned better with hands-on activities. But we can't conclude anything about children who weren't in our study, because we didn't collect any data about them."
          • "Eighty percent of people who took the drug got better. But the drug didn't really work, because a quarter of people who took the placebo got better even though they didn't take the drug, and twenty percent of people who took the drug didn't get any better at all."  
          • "RCTs cannot give sufficient evidence for causation that salt causes heart disease, because there might be other factors at play."
        • Misconceptions of Control Condition
          • "We should give the experimental drug to all eight hundred people in a study, instead of giving it to just half of them, because we want to maximize our sample size."
          • "If you give a drug to 500 people with a disease, and 80% of them get better, then we know the drug works."
          • Students incorrectly assumed that confounding variables cannot be accounted for and did not consider that randomized assignment with a sufficient sample size can cancel out statistical biases.
          • "It’s impossible to know if lack of sunlight causes myopia because there are just too many other things that could be interacting with eyesight, such as genetics, to be able to know for sure it’s connected to sunlight."#jc
          • "It’s impossible to know… because you’d have to test someone as a baby before their eyes have been exposed to sunlight." Related: "You’d have to test them throughout their whole life to find out how lack of sunlight impacted their vision."#jc
        • Misconceptions of Randomization
    • LEARNING GOALS

      • A. ATTITUDES
      • B. CONCEPT ACQUISITION
        • Correlation is insufficient to demonstrate causation because other causal structures also produce correlation: reverse causation (Y causes X), a common cause (some Z causes both X and Y), or mere chance. 
        • Randomized Controlled Trial (RCT): An attempt to identify causal relations by randomly assigning subjects into two groups and then performing an experimental intervention on the subjects in one of the groups.   
          • a. Experimental Intervention: The act of an experimenter changing one variable in a possible causal network. 
          • b. Randomized Assignment: Given sufficient sample size, randomized assignment rules out confounds by  distributing variation randomly between the two groups, thereby avoiding systematic differences except as the result of the intended intervention. 
          • c. Control Condition: Comparison of an experimental to a control condition is necessary in order to distinguish effect of intervention from changes that would have occurred without the intervention.  
          • d. Sampling: A study of a well-chosen sample can tell you something about the population (through induction), especially if it was selected in such a way as to avoid any systematic differences between the sample and the rest of the population. It is often difficult or even impossible to capture a perfectly representative sample, so scientists do the best they can. For example, many psychology studies are done with college students because they are accessible, but such samples differ systematically from the general population. Inferences from samples to a larger population need to take such differences into account.
        • Causation: X causes Y if and only if X and Y are correlated under interventions on X.   
          • a. This is a technical notion, which overlaps with but is slightly different from everyday usage. For example, everyday usage of the word “cause” can be influenced by moral considerations, the complexity of a causal mechanism, and/or the nature of the mechanism. We typically don’t say that the big bang "caused" this sequence of letters, or that the presence of oxygen caused the forest fire, etc, but scientifically they are part of the causal history of those phenomena. 
          • b. There can be other evidence for causation, even when actually performing an intervention is not feasible. However, saying something is a cause implies that there is in principle a relationship under an intervention.  
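        • The logic of concepts (b) and (c) above can be sketched in a short simulation. Everything below is hypothetical: a hidden "health" variable confounds the observational comparison, while coin-flip assignment recovers the drug's true effect (+10 recovery points in this toy model).

```python
import random

random.seed(0)  # reproducible illustration

def simulate(n=10000):
    """Hypothetical drug whose true effect is +10 recovery points."""
    # Observational data: healthier people self-select into treatment.
    obs_treated, obs_untreated = [], []
    for _ in range(n):
        health = random.gauss(50, 10)             # hidden confound
        takes_drug = health > 50                   # self-selection
        recovery = health + (10 if takes_drug else 0) + random.gauss(0, 5)
        (obs_treated if takes_drug else obs_untreated).append(recovery)

    # RCT: a coin flip, not health, decides who is treated.
    rct_treated, rct_untreated = [], []
    for _ in range(n):
        health = random.gauss(50, 10)
        takes_drug = random.random() < 0.5         # randomized assignment
        recovery = health + (10 if takes_drug else 0) + random.gauss(0, 5)
        (rct_treated if takes_drug else rct_untreated).append(recovery)

    mean = lambda xs: sum(xs) / len(xs)
    return (mean(obs_treated) - mean(obs_untreated),
            mean(rct_treated) - mean(rct_untreated))

obs_effect, rct_effect = simulate()
print(f"observational estimate of effect: {obs_effect:.1f}")  # inflated by confound
print(f"RCT estimate of effect:           {rct_effect:.1f}")  # close to true +10
```

Randomization severs the link between health and treatment, so (given sufficient sample size) the two groups differ systematically only in the intervention itself.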
      • C. CONCEPT APPLICATION
    • CLASS ELEMENTS

      • Suggested Readings & Reading Questions
      • Clicker Questions
        • Suppose there’s an epidemic of Lyn’s disease and a new drug is proposed as a treatment. 100 (or 10,000) patients with the disease are given the drug, and 79 (or 8,700) of them recover. Does this result:
          • A. Give no info about the presence or absence of a causal link
          • B. Establish that the treatment makes no difference.
          • C. Tentatively confirm the efficacy of the treatment, though more evidence may be needed.
          • D. Demonstrate conclusively the existence of an effect.
      • Discussion Questions
        • Why are humans so interested in the causal structure of the world? Hint: It's essential for controlling one's environment.
        • Why are false beliefs about causality so common? E.g. most superstitions are about causality, like astrology, spells, curses, the feeling that you can sometimes will a traffic light to change, etc.
        • Why does randomized assignment to conditions matter in an RCT?
        • Why do RCTs need control conditions?
      • Class Exercises
        • Online Exercise: Causality Lab   
        • Online Exercise: Try five practice rounds at Guess the Correlation. Then refresh and test your average error based on the next five rounds. http://guessthecorrelation.com/  
      • Homework Question
        • Try guessing correlations at http://guessthecorrelation.com/. Once you guess one within .10 of the right answer, screenshot it and upload it.
        • If, among our population of zoo animals, the frequency of ear infections has a -.8 correlation with size, can we infer that ear infections cause creatures to get smaller? Why or why not?
      • Practice Problems
        • Suppose you want to find out whether pumpkins grow bigger when they are given mineral-rich water from a special well. Describe all essential features of a randomized controlled trial that would allow you to determine with reasonable confidence whether or not this is the case.
    • OVERVIEW  

      • Building on Correlation and Causation, we examine how to collect evidence for causality in more difficult cases.
      • In the previous class we began our discussion of causality, distinguishing it from mere association and considering ideal kinds of evidence for causality, when we can run randomized controlled trials. However, in many cases, it is not possible to run RCTs to test causal hypotheses, for ethical or practical reasons. In this class, we consider other forms of evidence for causality, which cannot individually be as conclusive as RCTs but together can still present compelling evidence for causal theories.
      • Addressing the Question: How do we find out how things work?
        • Non-RCT evidence for causality
    • TOPIC RESOURCES

    • EXAMPLES  

      • Exemplary Quotes
        • “Ok, I agree that ‘correlation doesn’t prove causation’ in general, but in a case like this where we have lots of other kinds of evidence it sure gives us a pretty strong guess about causation.”
        • “There is an answer to this causal question.  Just because we can’t ethically do a randomized controlled study with these patients, it doesn’t mean that we can’t make progress establishing the causal link between these treatment options and the outcome.   After all, we have pretty good evidence that the energy from the sun is caused by nuclear fusion and we haven’t done any randomized controlled experiments!”
        • "One hundred years after that, French chemist Antoine Lavoisier used a device called an “ice calorimeter” to gauge the energy burn from animals —like guinea pigs — in cages by watching how quickly ice or snow around the cages melted. This research suggested that the heat and gases respired by animals, including humans, related to the energy they burn."
      • Cautionary Quotes: Mistakes, Misconceptions, & Misunderstandings
    • LEARNING GOALS  

      • A. ATTITUDES
        • Appreciate that we can sometimes get very good evidence for a causal hypothesis, even in the absence of decisive RCTs.  
        • Be wary of potential confounds in apparent evidence for causality.  
      • B. CONCEPT ACQUISITION
        • In many cases it is not possible to conduct a true RCT to test causality, for practical or ethical reasons.
        • There are non-RCT forms of evidence for causal hypotheses, which are less conclusive than RCTs but together can offer strong evidence for causation. These include:
          • a. Prior plausibility: Can a plausible mechanism be constructed, or is there some other basis for interpreting the current evidence in terms of one causal structure over another, such as data from other studies?  
          • b. Temporality/temporal sequence: Did the hypothesized cause precede the effect? 
          • c. Dose-response curve: Do the quantities of the hypothesized cause correlate with the quantity, severity, or frequency of the hypothesized effect across ranges?  
          • d. Consistency across contexts: Does the correlation appear across diverse contexts? 
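        • Criterion (c), the dose-response curve, can be made concrete with a small computation. The records below are invented for illustration; the sketch just checks whether larger doses line up with larger effects.

```python
# Invented observational records: (daily dose in mg, drop in symptom score)
records = [(0, 1.0), (10, 2.5), (10, 3.0), (20, 4.5),
           (20, 5.0), (40, 8.0), (40, 9.5), (80, 16.0)]

doses = [d for d, _ in records]
drops = [r for _, r in records]

def pearson(xs, ys):
    """Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

r = pearson(doses, drops)
print(f"dose-response correlation: r = {r:.2f}")  # near 1: quantity tracks quantity
```

A strong dose-response relationship is still only one strand of evidence; a confound that scales with dose would produce the same pattern, which is why these criteria work together rather than alone.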
      • C. CONCEPT APPLICATION
    • CLASS ELEMENTS  

      • Suggested Readings & Reading Questions
      • Clicker Questions
      • Discussion Questions
        • Brainstorm in small groups three causal links that you are pretty confident are real without RCT evidence. Why are you confident about these causal links? Hint: Think about causal links in everyday life, like allergies or breaking a glass.
      • Practice Problems
        • Observing that major rainstorms tend to develop on the edge of massive cold fronts, scientists conjecture that the cold fronts cause the storms. Give an alternative hypothesis that could explain the data. Then say what evidence could help rule out the alternative hypothesis, and how convincing it would be.
          • Alternative Hypothesis:
          • Distinguishing Experiment:
            • http://ww2010.atmos.uiuc.edu/(Gh)/guides/mtr/af/frnts/cfrnt/prcp.rxml
      • Class Exercises
    • Data Science Applications
      • A complex dataset where they needed to plot time sequence to make sense of it?
    • OVERVIEW

      • Distinguishing singular causation (A caused B) from general causation (X tends to cause Y).
      • It is often important to distinguish claims of singular causation, where A caused B, from claims of general causation, where variable X tends to affect variable Y. RCTs can only provide evidence of general causation, which might inform our understanding of particular instances of singular causation but cannot allow us to conclude causality with certainty. Both general and singular causation are subjects of scientific investigation. For example, whether Zika causes microcephaly is a question of general causation, while whether an asteroid caused the mass extinction of the dinosaurs is a question of singular causation. We also distinguish productive vs. dependent causation, and their implications for responsibility in legal and moral dilemmas like the Trolley Problem.
      • Defining causal relationships using "variables" and "interventions." How we say that this particular thing caused that particular thing. Connections with the Trolley Problem and legal responsibility. Causal relations in observational sciences (e.g., paleontology, cosmology) where experiments are not generally possible. 
      • Addressing the Question: How do we find out how things work?
        • Singular vs. General Causation
        • Productive vs. Dependent Causation
    • TOPIC RESOURCES

    • EXAMPLES

    • LEARNING GOALS

      • A. ATTITUDES
      • B. CONCEPT ACQUISITION
        • Multiple Causation: Any given effect may be brought about by a complex combination of many causes (which may interact with each other), with varying degrees of influence on the outcome. (see Topic XIII).  
        • Singular Causation: A causal relation between specific events — i.e., Event A caused Event B.  
        • General Causation: A causal relation between variables — i.e., X causes Y.  
        • Causation as Production: There is a spatiotemporally connected series of causal connections between two events or event types (i.e., the kind of causation people have in mind when they say there is no action at a distance; e.g., commission).  
        • Causation as Dependence: If X hadn’t happened, Y would not have happened (counterfactual dependence, e.g. omission, double prevention [prevention of a prevention]). 
        • Decision-making involves not only assessment of the outcome, but also the agent’s causal role in the production of the outcome (i.e., omission vs. commission, e.g. trolley dilemma).  
      • C. CONCEPT APPLICATION
    • CLASS ELEMENTS

      • Suggested Readings & Reading Questions
        • No Reading
      • Clicker Questions
        • After a pilot study in which we gave Vertuzi to 100 patients and found 72 of them recovered, we repeat the trial with 100,000 patients, and this time we find that over 82,000 of them recover. Does this result:
          • A. Give no information about the presence or absence of a causal link
          • B. Establish that the treatment makes no difference
          • C. Tentatively confirm the efficacy of the treatment, though more evidence may be needed
          • D. Demonstrate conclusively the existence of a link
        • We are morally assessable only to the extent that what we are assessed for depends on factors under our control. Two people ought not to be morally assessed differently if the only other differences between them are due to factors beyond their control.
          • A. Agree
          • B. Disagree
        • X causes Y implies that ‘If there was an intervention on X, there would have been a difference to whether Y happened’
          • A. Yes
          • B. No
      • Discussion Questions
      • Practice Problems
        • For each of the following, say whether it is a case of singular causation or general causation; productive causation or dependent causation; or whether there is no causal link. Scenario: A child is in the street, and a car is coming.
          • A dog jumps into the street and pushes the child out of the way of the car. The child is unhurt, but the dog is killed.
            • A. Car comes --> Dog dies. Singular, Productive
            • B. Child runs --> Dog dies. Singular, Dependent
            • C. Car comes --> Child survives. No causal link
          • Working dogs are very helpful to humans. Police dogs sniff bombs and drugs, seeing eye dogs help blind humans get around, and herding dogs keep sheep or goats from wandering off or getting eaten. During the recent fires in Santa Rosa, a Great Pyrenees called Odin refused to leave his seven goats to get into the car when the fires were coming. His human family had to leave him behind, sure that he and his goats would perish. When they returned, their home and the land around it were in ashes. But Odin came running up. His paws were burnt, but he had not lost a single goat. Somehow two fawns had joined his herd, and he was protecting them, too. News Story
            • A. Herding dogs --> Safety of goats. General, Productive
            • B. Santa Rosa Fires --> Fawns join Odin's flock. Singular, Dependent.
            • C. Fire --> Getting burnt. General, Productive.
            • D. Santa Rosa fires --> Odin's paws burnt. Singular, Productive.
      • Class Exercises
        • Online exercise: Causality Lab
        • Discuss sand resonance demo: https://www.youtube.com/watch?v=wvJAgrUBF4w&t=70s
    • Data Science Applications
      • ?? philosophical underpinnings ???
    • OVERVIEW

      • The challenges of finding the information we want amidst messy data.
      • What does a scientist mean by “signal” and “noise”? We humans are always hunting for signal in noise; that is, we are looking for regularities, causal relationships, and communications (the signal) amidst various distractions, both random and intentional (the noise). Scientists have developed a variety of ways to do this, including “filters” both technological and conceptual. 
      • Addressing the Question: How confident should we be?
        • Signal vs. Noise
    • TOPIC RESOURCES

    • EXAMPLES

      • Introductory Examples
        • Detecting fish jumps on a lake on a day when the wind is causing waves.
        • Getting the words of a radio personality through static.
        • Hearing your conversational partner at a party where lots of conversations are happening.
        • Figuring out if there's a meaningful difference between the control condition and experimental condition in an RCT.
        • Finding the facts on a topic where there's a lot of disinformation floating around.
        • Two astrophysics examples are dramatic: the detection of gravitational waves by LIGO, where a real signal was extracted from overwhelming noise, and the retracted detection of a planet around a pulsar, where the apparent signal was an artifact and no planet was really there.
      • Exemplary Quotes
        • “It’s really hard to see the effect since there are so many other issues going on that act as noise, but there really appears to be a remarkable correlation between a young child’s ability to defer gratification and later successes in life.”
        • “The problem is that nowadays we are inundated with stories about every scary crime that happens anywhere in the world, so this “noise” confuses us and we can’t see the striking “signal” that crime in our country has gone down dramatically in the past three decades.”
        • "Any signal can count as noise, just like any noise can be considered as signal; it depends on what you're trying to see."
      • Cautionary Quotes: Mistakes, Misconceptions, & Misunderstandings
        • "Psychologists can never learn anything from surveys, because people don't pay close enough attention."
        • A few students thought that noise is only noise because it’s not what one is looking for, completely leaving out the other aspect of noise: that it distracts from perceiving the signal.
        • Some students also thought that something is only a signal because it carries information, and so they identified two signals for the second part of the question which was supposed to emphasize a shift in what the signal was based on the scientists’ focus.
        • It might be worth also going over whether or not noise has to be random.
    • LEARNING GOALS

      • A. ATTITUDES
      • B. CONCEPT ACQUISITION
        • Signal: Aspects of observations or stimuli that provide useful information about the target of interest, as opposed to noise.  
        • Noise: The aspects of observations or stimuli that distract from, dilute, or get confused with the signal, and that do not themselves provide useful information about the target of interest.   
        • Observations/stimuli subject to confusion between signal and noise include communication, measurements, descriptions, etc.   
        • Signal-to-Noise Ratio: The strength of the signal relative to the strength of the noise in a given context. Obtaining meaningful information from the world requires distinguishing signal from noise. Therefore, human cognition (both scientific and otherwise) relies on techniques and tools to suppress noise and/or amplify signal (i.e., increase the signal-to-noise ratio).    
        • It is possible to design filters to increase the signal-to-noise ratio, if you know where the noise is going to appear.  
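        • One classic noise-suppressing filter is simple averaging: if the noise is random and independent across repeated measurements, averaging n of them boosts the signal-to-noise ratio by a factor of √n. A minimal simulation, with hypothetical signal and noise levels:

```python
import math
import random

random.seed(2)  # reproducible illustration

TRUE_SIGNAL = 1.0   # hypothetical signal strength
NOISE_SD = 5.0      # noise five times stronger than the signal

def snr_after_averaging(n_repeats, trials=2000):
    """Empirical SNR of the mean of n_repeats noisy measurements."""
    estimates = []
    for _ in range(trials):
        total = sum(TRUE_SIGNAL + random.gauss(0, NOISE_SD)
                    for _ in range(n_repeats))
        estimates.append(total / n_repeats)
    mean = sum(estimates) / trials
    sd = math.sqrt(sum((e - mean) ** 2 for e in estimates) / trials)
    return mean / sd  # signal strength over remaining noise spread

for n in (1, 16, 256):
    print(f"averaging {n:3d} measurements: SNR ≈ {snr_after_averaging(n):.2f}")
```

This is why repeated measurement is itself a filter: the signal adds up coherently while independent noise partially cancels.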
      • C. CONCEPT APPLICATION
    • CLASS ELEMENTS

      • Suggested Readings & Reading Questions
      • Clicker Questions
        • Students identify “noise” and “signal” in different situations (the same event can be “signal” or “noise” depending on context).
        • You are watching a movie on TV. Which of the following is “noise” to you as a movie-watcher?
          • A. A loud crash as a brick flies in the window into the room where the hero is standing.
          • B. An emergency announcement interrupting the movie to warn of an approaching wildfire.
          • C. The pea-soup fog in a scene where the hero is feeling his way through the desolate woods.
          • D. A dramatic political speech by the hero making an important point about democracy.
          • E. All of the above.
        • Same items as above, but now: Which of the following is “noise” from the point of view of the hero?
      • Discussion Questions
      • Class Exercises
        • Playing a sound with Morse code signal hidden in static. Demonstrate how our ear/brain is highly developed to find the signal.  
        • Students write down a short phrase that they proceed, by stages, to hide in more and more noise (random substituted letters). Show the concept of “signal-to-noise ratio” as a way to quantify at what point they can no longer recognize the message (the signal).  
        • Play the game Telephone with loud music on, and with silence. In which does the message change the least? (i.e., in which case does the "noise" make the signal harder to keep track of?).
        • Visual version: Handwrite a sentence in pencil. One at a time, each student copies what they think it says, then adds three lines somewhere in the text that alter the letters. By the end, it should be almost impossible to read. Do this with a random sentence, and with a famous sentence (e.g. “We the People of the United States, in Order to form a more perfect Union, establish Justice, insure domestic Tranquility, provide for the common defence, promote the general Welfare, and secure the Blessings of Liberty to ourselves and our Posterity, do ordain and establish this Constitution for the United States of America”). The famous sentence should be easier to read because it is familiar; we can more easily recognize the pattern.  Contrast with a nonsense line with strange words, e.g. some lines by Lewis Carroll students are not likely to recognize.  Demonstrates that it’s harder to detect a pattern when it’s different from the patterns we’re most used to finding.
      • Practice Problems
        • When using iTunes “shuffle” feature, each song is played only once. However, if you turn shuffle on and off, the order is reshuffled each time, independently of the shuffle before. Consider the following user complaint:
          • i swear i hear the same bands and songs over and over on shuffle. i've got over 4000 songs on my iphone and there's bands on there i never hear while there's bands and songs i hear every single day. https://forums.macrumors.com/threads/is-shuffle-really-random.1133358/
        • Is the recurrence of certain bands and the exclusion of others proof that shuffle isn’t random? Why or why not?
        • Originally, within any particular shuffle, you were just as likely to get songs in one order as any other. So if you had 101 songs on the playlist, and the first one was from Dark Side of the Moon, a given different song from Dark Side of the Moon would have a 1/100 chance of being played next. When users got multiple songs off the same album in a row, they complained that shuffle wasn’t really creating a random order. Were they correct to complain? That is, did they have reason to think the order wasn’t random? Why or why not?
        • In response to user complaints, Apple changed the shuffle algorithm to make it less likely you’d hear two songs from the same album in a row. So if you had 101 songs on your playlist and the first one was from Dark Side of the Moon, a given different song from Dark Side of the Moon would have significantly less than a 1/100 chance of being played next. Suppose users complained that the new order (which is still what Apple uses) wasn’t random. Would they have been correct to complain? That is, did they have reason to think the order wasn’t random? Why or why not?
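        • A quick simulation suggests why the original complaints were misguided. With hypothetical albums of ten songs each, a truly uniform shuffle almost always produces at least one same-album pair in a row:

```python
import random

random.seed(3)  # reproducible illustration

def has_back_to_back_album(n_albums=10, songs_per_album=10):
    """Uniformly shuffle a playlist; check for consecutive same-album songs."""
    playlist = [album for album in range(n_albums)
                for _ in range(songs_per_album)]
    random.shuffle(playlist)
    return any(a == b for a, b in zip(playlist, playlist[1:]))

trials = 10000
hits = sum(has_back_to_back_album() for _ in range(trials))
print(f"shuffles with a same-album pair in a row: {hits / trials:.1%}")
```

Streaks and clusters are exactly what uniform randomness looks like; an order with no clusters at all would itself be evidence of non-randomness.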
      • Homework
        • Give a new example of a signal that you might be trying to detect (that is, an example not mentioned in the reading). Give an example of noise that might interfere with your detection of this signal. Explain why this is noise. Finally, explain something you could do to minimize the effect of this noise on your signal.
    • OVERVIEW

      • We often mistake noise for signal; how do we minimize these mistakes, given that signal and noise are not always easy to tell apart?
      • Humans are so good at finding signals in noise that sometimes they do so even when there is no signal. Many techniques of science, and much of statistics, are aimed at avoiding fooling yourself in this way. A further problem is that when we believe we have found a signal, we often aren’t aware of how much noise we searched through to find it (the “Look Elsewhere Effect”). For example, we tend to think coincidences are meaningful. Statistics was invented primarily to deal with the problem of distinguishing real signal from noise fluctuations that merely look like signal.
      • Addressing the Question: How confident should we be?
        • False pattern detection
        • Guarding against false pattern detection
    • TOPIC RESOURCES

    • EXAMPLES

      • Introductory Examples
        • Higgs boson: Is this peak real or is it just a statistical fluctuation?
        • Pulsar that people thought might be extraterrestrials.
        • Animals in cloud shapes.
        • Running into an acquaintance while traveling on the other side of the world from where you both live.
        • Running into someone when you were just thinking about them the previous day.
        • Dreaming an event and then something similar happening within the next month.
        • Finding that two people in your class share a birthday, and thinking it indicates a special connection.
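        • The birthday example can be checked directly: assuming 365 equally likely birthdays, the chance that some pair in a class shares one is surprisingly high, so a match indicates no special connection.

```python
def p_shared_birthday(n):
    """P(at least two of n people share a birthday), 365 equally likely days."""
    p_all_distinct = 1.0
    for i in range(n):
        p_all_distinct *= (365 - i) / 365   # next person avoids all prior birthdays
    return 1 - p_all_distinct

for n in (10, 23, 40):
    print(f"{n:2d} people: {p_shared_birthday(n):.1%} chance of a shared birthday")
# 23 people already gives better-than-even odds
```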
      • Exemplary Quotes
        • "I know it seems super meaningful that we ran into each other in Australia, when neither of us live in Australia, but I guess the chances of running into someone you know at some point, if you travel a lot and know a lot of people, are pretty high."
        • "There are many many cases of people making insanely correct predictions, so many that some people are convinced clairvoyance is real. But there are many more cases of people making totally wrong predictions. So it's probably just noise; with enough predictions, someone will be correct by luck."
        • If I ace a test every time I wear a certain t-shirt, then over time I might begin to attribute my success to the t-shirt and continue wearing it, deeming it "lucky". This could easily just be coincidence, but since I notice this pattern of correlation I am likely to keep up the pattern until it fails me.
      • Cautionary Quotes: Mistakes, Misconceptions, & Misunderstandings
        • Students had great difficulty recognizing the Look Elsewhere Effect in sets of studies, in part because they struggled with the conceptual underpinnings of statistical significance.
        • "If you draw lines correctly between the stars, you can make out a message from the aliens. You just have to know which stars to connect to see the message."  
    • LEARNING GOALS

      • A. ATTITUDES
      • B. CONCEPT ACQUISITION
        • People are (evolutionarily?) disposed to over-perceive signal (i.e., noise often gets misinterpreted as signal), perhaps because the cost of missing real signal (false negatives) is typically higher than the cost of mistaking noise for signal (false positives).   
        • People tend to see any regularity as a pattern (i.e., to see more signal than there is), even when the "pattern" occurs by chance (i.e., is pure noise). People underestimate how often randomness produces apparent patterns, and so over-perceive spurious signal: events that are merely coincidental are much more common than most people expect.  
        • Gambler’s fallacy: Expecting that streaks will be broken, such that future results will “average out” earlier ones, even when all trials are independent. 
        • Hot-hand fallacy: Expecting that streaks will continue, even when all trials are independent.  
        • Look Elsewhere Effect: Even if there is a low probability of pure noise passing a given threshold for signal, if we look at enough noise some of it will pass that threshold by chance. I.e., even if the probability of a false positive in any given instance is low, the more times you try (the more questions you ask, measures you take, or studies you run without statistical correction), the higher the probability of getting at least one false positive.  
        • Statistical Significance: How unlikely a given set of results would be if the null hypothesis were true (i.e. if the hypothesized effect did not actually exist).  
        • [Technical term: P-value: the probability of obtaining a result at least as extreme as the one observed, purely through random noise, if in fact the null hypothesis is true (i.e., if the hypothesized effect does not exist).]  
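The Look Elsewhere Effect can be made concrete with a little arithmetic. The sketch below is a hypothetical illustration, not tied to any particular study: it assumes each test has a 5% false-positive rate under the null and that tests are independent.

```python
def p_at_least_one_false_positive(n_tests, alpha=0.05):
    """Chance that pure noise passes the signal threshold at least once,
    given n_tests independent tests and a per-test false-positive rate alpha."""
    return 1 - (1 - alpha) ** n_tests

# One test: the false-positive risk is just alpha.
print(round(p_at_least_one_false_positive(1), 2))    # 0.05
# Twenty uncorrected tests: a spurious "signal" is more likely than not.
print(round(p_at_least_one_false_positive(20), 2))   # 0.64
print(round(p_at_least_one_false_positive(100), 2))  # 0.99
```

This is the arithmetic behind the xkcd "green jelly beans" strip used as a practice problem below: run twenty comparisons at the 5% level and one of them will probably "succeed" by chance.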
      • C. CONCEPT APPLICATION
    • CLASS ELEMENTS

      • Suggested Readings & Reading Questions
      • Clicker Questions
        • Students identify Look Elsewhere Effect mistakes in various scenarios (including medical studies). 
        • A friend tells you that, while flipping coins, she got ten heads in a row. This is, of course, a surprising result. In which situation would it be most surprising? 
          • A. She is the only person flipping a coin, and she flipped it only ten times.
          • B. There are many people flipping coins, and everyone flips ten times.
          • C. There are many people flipping coins, and each person conducts 100 flips.
          • D. She is the only person flipping a coin, and she conducts 100 flips.
      • Discussion Questions
        • Students discuss Look Elsewhere Effect mistakes in various scenarios, with Clicker Questions.
        • Listen to excerpt from Radiolab, "A Very Lucky Wind" (first part of their episode on Stochasticity) and discuss.
      • Practice Problems
        • Discuss: https://xkcd.com/882/
      • Class Exercises
        • Professor leaves the room and students write down two lists of 40 coin-toss results ("heads, tails, tails, heads..."): the first generated by students sequentially calling out "heads" or "tails," trying to simulate random coin flips, and the second by actually flipping coins. The professor returns and must guess which list is truly random and which is simulated.
        • Stock-picking activity. Students guess whether each of six fictional stocks will rise or fall. The instructor determines whether each stock rises or falls by flipping a coin, then asks whether anyone got all six right. Typically at least one student has, illustrating that with enough guessers, someone will be right by luck alone.
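The stock-picking activity can be checked with the same kind of arithmetic. This sketch assumes students guess independently and each of the six picks is right with probability 1/2:

```python
def p_someone_perfect(n_students, n_picks=6):
    """Chance that at least one of n_students guesses all n_picks
    coin-determined outcomes correctly, assuming independent guesses."""
    p_one_perfect = 0.5 ** n_picks          # 1/64 for six picks
    return 1 - (1 - p_one_perfect) ** n_students

print(round(p_someone_perfect(30), 2))   # 0.38 -- even a small class has a real shot
print(round(p_someone_perfect(150), 2))  # 0.91 -- in a large lecture, near-certain
```

Repeating the activity for a few class sessions makes a perfect run all but inevitable, which is the point of the exercise.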
      • Homework Questions
        • Describe a case not discussed in class where someone (or many people) see signal where in fact there is only noise.
    • OVERVIEW

      • Considering the relative costs of each possible mistake helps us make better decisions under conditions of uncertainty, when we cannot eliminate the possibility of a mistake either way.
      • It is often necessary to make decisions or judgments under conditions of uncertainty. When this happens, two kinds of errors are possible: we might think that something is present when it is not (a Type 1, or false positive, error), or we might think that something is absent when it is present (a Type 2, or false negative, error). In different contexts, these two types of error may come with different costs. When one kind of error is worse than the other, it is prudent to err on the side of the less bad error. Sometimes it even makes sense to commit one kind of error quite often in order to avoid the other kind. For example, even though the large majority of tumors are benign, it makes sense to get tumors biopsied, because if you do have a cancerous tumor and assume it is benign (a false negative), it can kill you.
      • Addressing the Question: How confident should we be?
        • False Positive Errors
        • False Negative Errors
        • Balancing risks under conditions of uncertainty
    • TOPIC RESOURCES

    • EXAMPLES  

      • Exemplary Quotes
        • "The oncoming asteroid has only a 1% chance of hitting Earth. But if it does, life on Earth will be destroyed. It'll be expensive to stop the asteroid, but the risk is bad enough it's worth it."
        • "It's true that sometimes seatbelts cause deaths, when people get stuck in them and can't get out. But they more often save lives, so it is prudent to wear your seatbelt whenever you drive."
        • "I asked them to imagine that they faced a choice between two types of radiation therapy for early-stage breast cancer. The first treatment would leave them with a 15% chance of local recurrence and a 10% chance of moderate or severe breast fibrosis. The second treatment would leave them with only an 8% chance of local recurrence but a 30% chance of moderate or severe fibrosis. The radiation oncologists raised their hands in almost equal numbers for the two treatments. Some believed the higher risk of fibrosis was unacceptable, given the treatability of most local recurrences, whereas others believed the trauma of recurrence outweighed the discomfort of fibrosis. But sometimes physicians' values differ in important ways from those of many patients. When such value judgments are incorporated into professional treatment guidelines, without any explicit acknowledgment that a reasonable patient might choose an alternative course of treatment, they take potential choices away from patients." https://www.nejm.org/doi/full/10.1056/NEJMp1504245
      • Cautionary Quotes: Mistakes, Misconceptions, & Misunderstandings
        • The commonest error for these concepts is simply failing to notice that they are relevant considerations. When cued, students tend to be good at applying them.
        • "We don't have to prepare for the hurricane they're forecasting might hit, because most of the time they say a hurricane might come, it turns out to be not that bad."
    • LEARNING GOALS  

      • A. ATTITUDES
        • Given some degree of uncertainty, appreciate that different kinds of errors come with different costs, such that in some cases it is worthwhile to presume the less likely alternative because the error you risk is less costly. 
      • B. CONCEPT ACQUISITION
        • False Positive/Type I Errors: A test yields a positive result, but in fact the condition is not present.  
        • False Negative/Type II Errors: A test yields a negative result, but in fact the condition is present.  
        • There is always the possibility of a trade-off—for a given test, one can reduce the risk of false positives by increasing the risk of false negatives, and vice-versa.   
        • Good decision-making under uncertainty involves having sufficient signal (an adequate test) and setting your threshold appropriately for the relative costs of false positives and false negatives.     
        • In some classification cases, like pornography identification or graduate school admissions, there may be no "truth of the matter," so there are no strictly false positives or false negatives, although a threshold must still be set.  
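The trade-off among these concepts can be demonstrated with a small simulation. This is a hypothetical test, not a model of any real diagnostic: condition-absent scores cluster near 0, condition-present scores near 2, and moving the decision threshold trades one error type for the other.

```python
import random

random.seed(0)

# Hypothetical diagnostic scores with overlap between the two groups.
absent  = [random.gauss(0.0, 1.0) for _ in range(100_000)]
present = [random.gauss(2.0, 1.0) for _ in range(100_000)]

def error_rates(threshold):
    """Call the test 'positive' whenever the score exceeds threshold."""
    false_positive = sum(x > threshold for x in absent) / len(absent)
    false_negative = sum(x <= threshold for x in present) / len(present)
    return false_positive, false_negative

for t in (0.5, 1.0, 1.5):
    fp, fn = error_rates(t)
    print(f"threshold {t}: false positives {fp:.2f}, false negatives {fn:.2f}")
```

Raising the threshold always lowers the false-positive rate and raises the false-negative rate; which threshold is right depends on the relative costs of the two errors, not on the test alone.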
      • C. CONCEPT APPLICATION
    • CLASS ELEMENTS  

      • Suggested Readings & Reading Questions
      • Clicker Questions
      • Discussion Questions
        • In the Stanley Kubrick film Dr. Strangelove, set during the Cold War, a rogue American general sends the code ordering planes to drop nuclear bombs on Russia because he wants to start a nuclear war. One of the pilots believes this could only have happened if Russia had struck first. Under these conditions (leaving aside the Doomsday Machine), what are the costs of a false positive (believing Russia struck first when it didn't), and what are the costs of a false negative (believing it is some kind of mistake when in fact Russia did strike first)?
      • Class Exercises
      • Practice Problems
        • Suppose you have very bad migraines. Your doctors tell you migraines can be sparked by a lot of different things, which vary for different people: most kinds of food, coffee, stress, and/or bright sunlight. You want to find out what is causing your migraines, so you cut your diet to potatoes and then slowly add one food back at a time, waiting a few days between each addition to see if you get a migraine. The whole thing is stressful, and you’re so hungry and grumpy you lose your sunglasses on Monday. On Thursday, you eat chocolate again for the first time. That afternoon, you get a relatively small migraine. You are very sad, because chocolate is your very favorite food.
          • a.) Should you wait a few days and try eating a small amount of chocolate again, or should you ban it from your cupboards? Why?
          • b.) What if your migraines came with sudden, intense, lasting vertigo, and you work in skyscraper construction and can’t take off work?
      • Homework
        • Recount an instance in which you thought you made a right diagnosis of a situation, but then found out that you were wrong. Explain whether it was an instance of a false positive or a false negative. What makes it a false positive/negative as compared to a false negative/positive? Which type of error do you think would have been better to make in this situation, and why?
    • Data Science Applications
      • Some kind of risk analysis task
    • OVERVIEW

      • Using meta-judgments of the likelihood that your best judgment is right— how confident you are—enables decisions that take uncertainty into account.
      • An important element of the culture of science is the use of “tentative” propositions, often quantified. These can be as confident as 99.99999%—you would bet your life on it—but it would still be understood to be held as a proposition which could be wrong. This makes it psychologically easier for a scientist to be open to being wrong—and to look actively for ways they might have gotten it wrong. This cultural understanding of the importance of recognizing and reporting one's credence level leads to insistence on including error bars on graphs: a data point is completely meaningless without an error bar.  
      • Addressing the Question: How confident should we be?
        • Credence/Confidence Levels
        • Calibration of Credence/Confidence Levels
    • TOPIC RESOURCES

    • EXAMPLES

      • Introductory Examples
        • Weather forecasts
        • Using polls to predict elections
        • You might decide your credence level that your crush will say yes if you ask them to the dance, and use that to decide whether to ask or not.
        • Credence levels predicting the probabilities of natural disasters within specific time frames (earthquakes, floods, wildfires etc.), and using these to make decisions about disaster preparation.
        • Use credence levels about getting into various colleges to decide what to use as a safety school.
        • Using credence levels about passing tests to decide whether to study more.
        • Saul's story of a physicist who cancelled a lecture five minutes in because the presenter wasn't sure how his error bars were calculated.
      • Exemplary Quotes
        • "I'm 95% confident that this battery is not going to explode. But anything more than a 1% chance of our robot exploding is too risky, so we shouldn't use this battery until we are more confident it won't explode."
        • “Uncertainty, in the presence of vivid hopes and fears, is painful, but must be endured if we wish to live without the support of comforting fairy tales. It is not good either to forget the questions that philosophy asks, or to persuade ourselves that we have found indubitable answers to them. To teach how to live without certainty, and yet without being paralyzed by hesitation, is perhaps the chief thing that philosophy, in our age, can still do for those who study it.” -Bertrand Russell, History of Western Philosophy, p. xiv
      • Cautionary Quotes: Mistakes, Misconceptions, & Misunderstandings
        • Some students mistake "confidence" for the colloquial sense of "high confidence."
        • "Dr. Ryan, are you absolutely certain? We can't authorize spending hundreds of millions of dollars sending a fleet to Patagonia unless we're completely certain about the outcome."
    • LEARNING GOALS

      • A. ATTITUDES
        • Recognize that every proposition comes with a degree of uncertainty.  
          • e.g., be unimpressed by statements made with 100% confidence or by measurements reported without error bars or confidence intervals; avoid seeking definitive answers when only probabilistic information is available; recognize that probabilistic information is better than no information.
        • Value and defend scientific expressions of uncertainty.   
      • B. CONCEPT ACQUISITION
        • Credence: level of confidence that a claim is true, from 0 to 1.    
        • Confidence: essentially a synonym for credence, as in “level of confidence”; not the colloquial meaning, “the state of having a lot of confidence.”  
        • Accuracy: How frequently one is correct; proximity to a true value.  
        • Calibration: How closely confidence and accuracy correspond; that is, how accurate a person or system is at estimating the probability that they are correct.  
        • Because every proposition comes with a degree of uncertainty:
          • a. Partial and probabilistic information still has value.  
          • b. Back-up plans are important because no information is absolutely certain.  
          • c. It is important to invest in calibrating where you are more and less likely to be right, as opposed to being overinvested in being “right.”  
          • d. Scientific culture primarily uses a language of probabilities, not certain facts.  
          • e. Even correctly-done science will obtain incorrect results some of the time.  
      • C. CONCEPT APPLICATION
    • CLASS ELEMENTS

      • Suggested Readings & Reading Questions
      • Clicker Questions
        • If a President put forward a new policy proposal on health-care reform, which statement would make you feel more confident?
          • A. “The policy that I’m putting forward is the right policy for America. I guarantee that it is what’s best for the country.”
          • B. “I think that the policy I’m putting forward is the one that is most likely to be the right policy for America. There is no guarantee that it will work -- in fact, I give it only a 75% chance -- but the alternatives that have been presented are all much less likely to succeed.”
        • If you cross Oxford Street after class next Thursday, what is the likelihood that you will get hit by a car?
          • A. about 1 in 1,000 (one in a thousand)
          • B. about 1 in 100,000 (one in a hundred thousand)
          • C. about 1 in 10,000,000 (one in ten million)
          • D. about 1 in 1,000,000,000 (one in a billion)
          • E. about 1 in 100,000,000,000 (one in a hundred billion)
        • How large would the risk of getting hit by a car when crossing Oxford St. need to be in order to affect your plans?
          • A. about 1 in 1,000 (one in a thousand)
          • B. about 1 in 100,000 (one in a hundred thousand)
          • C. about 1 in 10,000,000 (one in ten million)
          • D. about 1 in 1,000,000,000 (one in a billion)
          • E. about 1 in 100,000,000,000 (one in a hundred billion)
      • Discussion Questions
        • Do you have any beliefs for which you have less than 1.0 credence but do not know how to do without?
        • Discuss any current controversial topic, but for each statement anyone makes, they have to give it a credence level.
      • Class Exercises
        • Students guess the answers to 10 binary questions and write a credence level for each; they then see the answers and calculate their calibration.
        • Many examples given from recent science presentations.  
        • An arbitrary topic is chosen for small-group discussion (e.g., “Does testing in the schools help or hurt education?”), but during the discussion students must state a credence level (a number between 0 and 100%) after every statement they make that could have a credence level associated with it.
    • Data Science Applications
      • If we could get some system to institute credence-level toggles for each statement, we could gather a lot of data to test for calibration and see whether it helps.
    • OVERVIEW

      • It is important to check the calibration of credence levels; that is, how good one's judgments are about how likely each of one's claims is to be right.
      • Most people’s confidence estimates are wrong in characteristic ways: accuracy tends to be over-estimated at high confidence levels and under-estimated at low confidence levels. Calibration can be trained to be more accurate, most effectively with repeated, unambiguous, and immediate feedback. One problem that arises from poor calibration: juries often use witnesses’ confidence to gauge the likelihood that they are correct, but because witnesses are often poorly calibrated, this yields poor results. 
      • Addressing the Question: How confident should we be?
        • The Value of Uncertain Information
        • Epistemic Caution
    • TOPIC RESOURCES

    • EXAMPLES

      • Exemplary Quotes
        • "Weather forecasters often seem wrong, but they only give probabilities, and their probabilities are really well calibrated. So we should trust weather forecasters, but remember that a 90% chance of rain also means a 10% chance of no rain."
        • "Being well calibrated does not require always predicting the correct outcome but requires being able to predict how often one will be wrong."
      • Cautionary Quotes: Mistakes, Misconceptions, & Misunderstandings
        • Some confusion between confidence, calibration, and accuracy.
        • The temptation to rely on confidence over calibration is sometimes hard to resist.
        • "He seems super confident, and she said she was only 85% sure, so we should trust him over her."
    • LEARNING GOALS

      • A. ATTITUDES
      • B. CONCEPT ACQUISITION
        • Confidence Interval: A range within which a true value of interest lies with a specified probability.  
          • Most commonly a 95% confidence interval, constructed so that there is a 95% chance the interval contains the true value (and a 5% chance it does not).
        • Error Bars: Marks on a graph showing the range of likely true values around an observed value; typically a 95% confidence interval, or the observed value +/- the standard error or standard deviation. 
        • Scientific culture at its best reinforces the importance of uncertainty by offering respect and career advancement on the basis of calibration as well as accuracy. Attaching the ego to calibration as well as accuracy discourages scientists from being overly attached to their ideas being “right,” encouraging them to prioritize truth over having been right. 
        • People (including many experts) tend to over-estimate their accuracy at high confidence levels (and under-estimate it at low-confidence levels).     
        • People often use a source’s confidence as a cue to credibility, but appropriately discount confidence when they have evidence of poor calibration.   
      • C. CONCEPT APPLICATION
    • CLASS ELEMENTS

      • Suggested Readings & Reading Questions
      • Clicker Questions
      • Discussion Questions
        • What are the costs of underestimating one's credence level at a low level of confidence?
        • What are the costs of overestimating one's credence level at a high level of confidence?
        • What are some ordinary-life scenarios in which talk of confidence intervals might be useful? Hint: When deciding whether or not to stay in a job & wait for a raise, when financial planning, when betting, etc.
      • Class Exercises
        • Credence-calibration questionnaires to show students’ calibration, and training exercises.   

      • Homework Questions
        • Make ten predictions about ten events you expect to happen within the next week (total of 10, not 100!). Write your credence level for each one: that is, how confident you are it will happen within the next week, 1-100. After the week has passed, mark the ones that happened and calculate your calibration score. How well did you do? Did you predict some kinds of events better than others?
        • What’s a subject in which you have to make repeated predictions in everyday life? It could be academic, social, or personal. (Some possible examples: how you’ll do on tests, what grade you’ll get on papers, whether you will enjoy a new course, whether you’ll get jobs or grants you apply for, whether someone will respond to your message on a dating site, whether what you’re wearing will be appropriate for an event, etc.)
          • A.  How accurate are your predictions? What makes it difficult to be more accurate?
          • B. Is there any way that you could make your predictions more accurate? Why don’t you do this? (You may have a good reason; sometimes increased accuracy requires more time and effort than it’s worth.)
          • C.  How well calibrated is your confidence in your predictions? Do you tend to be overconfident or underconfident? Why?
          • D. Is there any way that you could make your confidence in your predictions better calibrated?
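The calibration score the homework above asks for can be computed mechanically. A minimal sketch; the binning scheme and the (credence, happened) input format are our own assumptions, not a standard API:

```python
def calibration_table(predictions):
    """predictions: list of (stated_credence, happened) pairs,
    with credence in [0, 1] and happened a bool.
    Bins predictions by stated credence (nearest 10%) and reports
    (observed frequency, count) for each bin."""
    bins = {}
    for credence, happened in predictions:
        key = round(credence, 1)
        bins.setdefault(key, []).append(happened)
    return {k: (sum(v) / len(v), len(v)) for k, v in sorted(bins.items())}

# Five predictions stated at 80% confidence, four of which came true:
preds = [(0.8, True), (0.8, True), (0.8, True), (0.8, True), (0.8, False)]
print(calibration_table(preds))  # {0.8: (0.8, 5)} -- perfectly calibrated at 80%
```

Good calibration means the observed frequency in each bin tracks the bin's stated credence, not that every individual prediction comes true.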
    • OVERVIEW

      • Because each event and/or phenomenon has many causal factors, it is often important to distinguish which factors affect it the most and which factors play a smaller role.
      • A useful piece of scientific jargon is to speak of orders of understanding or orders of explanation. A “zeroth order” or “first order” explanation/cause is a major cause or factor, as opposed to a “second order” or “third order” explanation/cause, which is a real cause with a smaller effect size. This is useful because explanations for how things and actors in the world behave can often be parsed into a primary explanation, a secondary, less important cause, a third, still less significant cause, and so on.
      • Addressing the Question: How do we find out how the world works?
        • Orders of Explanation
    • TOPIC RESOURCES

    • EXAMPLES

      • Exemplary Quotes
        • “Ok, at first glance this dramatic increase in breast cancer in Korea seems like an intractably complicated problem, but maybe there is one aspect like diet or environmental change that is the primary driver.  If we can identify it first, then we can look for the next most important cause.”
        • “It turned out that the prices of these stocks were, to first order, being determined by the buy/sell orders of just a few major pension funds. After that, the second order effect was the automatic buying and selling from the index funds. In fact, the small investors that we thought were important barely affected the prices of these stocks at all, a third or fourth order effect at best.” 
        • “It used to be that the annual population of predator bears was the first order determinant of the annual salmon population, but nowadays the bears are a second order effect, and the fishing industry is the primary determinant.”
      • Cautionary Quotes: Mistakes, Misconceptions, & Misunderstandings
    • LEARNING GOALS

      • A. ATTITUDES
      • B. CONCEPT ACQUISITION
        • Multiple Causation: Any given effect may be brought about by a complex combination of many causes (which may interact with each other), with varying degrees of influence on the outcome.  
        • Orders of Understanding: When there are multiple causes of a given outcome, it is often the case that some causes are much more impactful than others. In these cases, we draw a rough qualitative distinction between the cause(s) with the greatest impact for a given effect (first order cause/explanation), the causes with somewhat less impact (second order), and the much less influential causes (third and higher order).  
        • Effect Size: The magnitude of one factor’s influence on an outcome (e.g., how much being overweight affects health is the effect size of obesity on health). 
      • C. CONCEPT APPLICATION
    • CLASS ELEMENTS

      • Suggested Readings & Reading Questions
      • Clicker Questions
        • Students practice (individually and in small groups) identifying first-, second-, and third-order causes/explanations in a variety of scenarios.
        • The final such question for this class concerns parsing some of the biggest drivers of government spending: specifically, the relative “orders” of the contributions to total government spending from education, incarceration, and social security.
      • Discussion Questions
        • In small groups, students practice identifying first-, second-, and third-order causes/explanations in a variety of scenarios.
        • What do you think are the first order, second order, and third order causes of decreasing rates of marriage among Americans? How might you go about testing your hypotheses about this?
      • Practice Problems
      • Class Exercises
      • Homework
        • Asimov writes: "[W]hen people thought the earth was flat, they were wrong. When people thought the earth was spherical, they were wrong. But if you think that thinking the earth is spherical is just as wrong as thinking the earth is flat, then your view is wronger than both of them put together." What does he mean by this? How does it relate to the concept of “orders of approximation?” [Hint: we are asking about the general idea; the flat vs spherical earth is just an example.]
    • OVERVIEW

      • Estimating quantities based on what we know.
      • Physicists train their students in doing “Fermi problems,” back-of-the-envelope estimates of quantities that arise in physics problems and in life. This is useful as an approach to performing “sanity checks” of claims in the world and of your own ideas, beliefs, and inventions. Checking numbers with quick Fermi estimates may be even more important in a world in which it is difficult to evaluate the credibility of numbers available by Googling. 
      • Addressing the Question: How do we find out how the world works?
        • Estimating quantities
    • TOPIC RESOURCES

    • EXAMPLES

      • Exemplary Quotes
        • “He’s suggesting that the Federal budget deficit is due to the money we spend on job training programs. But that’s ridiculous! Even if every single person out of work -- let’s imagine that it is 10% of the working-age population (say 10 million people out of work) -- went to a job training program that cost as much as a year of college at a good university (say, $40,000), that would cost 400 billion dollars. Hmm... well that’s not quite as small as I expected, but it’s still not trillions of dollars, and furthermore I am sure we aren’t spending that much on each person for job training. Let’s see, can I estimate that cost per person in some more realistic way than using college costs...”
      • Cautionary Quotes: Mistakes, Misconceptions, & Misunderstandings
        • "I don't know how many pianos there are in Chicago, so I can't estimate how many piano tuners are currently working there."
        • "Keyla asked me how many firecrackers were shot last night. I tried googling it, but couldn't find out. I guess we'll never know now, will we?"
    • LEARNING GOALS

    • CLASS ELEMENTS

      • Suggested Readings & Reading Questions
        • No readings
      • Homework
        • The purpose of today’s homework assignment is to give you some practice with the process of Fermi estimation which was outlined in the Santos reading. Make Fermi estimates on Question 1 and Question 2, below, following the steps: (i) identify what quantities to multiply together, (ii) make rough estimates of each of those quantities, (iii) do the math to get your answer, and (iv) state your answer clearly. Except for step iv, you do not need to write in complete sentences. Your answer can be formatted just like the sample answer to the sample question given below.
          • Question 1: How much money do you spend on coffee (or other warm beverage, if you are not a coffee drinker) in a year?
          • Question 2: If you took all of the household garbage in the US generated in a year and spread it out in the San Francisco Bay, what percentage of the surface of the bay would be covered in garbage?
          • Sample question: What is the total length of fingernail growth that one could achieve in a lifetime? Answer to sample question:
            • i) Quantities to multiply together: (length grown per clipping) x (# of clippings per week) x (# of weeks per lifetime).
            • ii) Estimates: length grown per clipping: .002 meters; # of clippings per week: 1; # of weeks per lifetime: 4160.
            • iii) Do the math: (.002) x (1) x (4160) = 8.3 meters.
            • iv) Answer: I estimate that a person’s fingernails grow about 8.3 meters (27 feet) in a lifetime.
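The sample answer is just a product of rough factors, which makes it easy to encode. A sketch (the 4160 weeks corresponds to roughly an 80-year lifetime; every number here is a deliberately rough estimate):

```python
# Fermi estimate: total fingernail growth in a lifetime.
length_per_clipping_m = 0.002        # ~2 mm grown between clippings
clippings_per_week    = 1
weeks_per_lifetime    = 52 * 80      # ~80-year lifetime, 4160 weeks

total_m = length_per_clipping_m * clippings_per_week * weeks_per_lifetime
print(round(total_m, 1))  # 8.3
```

The point is not the exact number but that each factor is individually easy to sanity-check, so the final estimate is unlikely to be off by more than a small factor.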
      • Clicker Questions
        • How many pounds of food were thrown out (sent to landfills or incinerators) in the United States last year? 
          • A. Less than 100 million pounds
          • B. Between 100 million pounds and 1 billion pounds
          • C. Between 1 billion pounds and 10 billion pounds
          • D. Between 10 billion pounds and 100 billion pounds
          • E. More than 100 billion pounds 
        • Which of these three does the government spend the most on (including federal, state, and local government spending)?  The second most?
          • A. Most on Education, then Incarceration, then Social Security
          • B. Most on Education, then Social Security, then Incarceration  
          • C. Most on Incarceration, then Education, then Social Security 
          • D. Most on Incarceration, then Social Security, then Education 
          • E. Most on Social Security, then Education, then Incarceration 
          • F. Most on Social Security, then Incarceration, then Education 
      • Discussion Questions
      • Class Exercises
        • Together with the whole class, the professor shows how to develop Fermi-problem estimates of a given quantity, e.g. the amount per year that Americans spend on gas for personal transportation.  
        • In small groups, students work on several Fermi problems to develop facility with the approach.
        • In small groups, students use Fermi estimates to re-think the first-order, second-order, etc. parsing of US government spending on education, incarceration, and social security (following up the final activity from Topic XIIV, Orders of Understanding). The students frequently reach completely different orderings than they did in the previous class—and come within ~20% of the actual amounts spent.   
    • OVERVIEW

      • Some of the psychological biases that make our probability judgments go awry.
      • Here we will explore the most widespread heuristics and biases that psychologists of judgment and decision-making have discovered in everyday reasoning: the availability, representativeness, and anchoring heuristics, and biases like optimism bias, hindsight bias, and status quo bias. Many examples are drawn from Daniel Kahneman's book, Thinking, Fast and Slow.
      • Addressing the Question: How can we avoid going wrong?
        • Reasoning Biases
          • Availability Heuristic
          • Representativeness Heuristic
          • Anchoring Heuristic
          • Base Rate Neglect
    • TOPIC RESOURCES

    • EXAMPLES

      • Exemplary Quotes
        • Representativeness Heuristic
          • "He's a great speaker for a mathematician, and mathematicians are not usually good speakers. Maybe he's done some theater, too. But most mathematicians have not done theater, so it's also possible he's just really good at public speaking."
        • Availability Heuristic
          • "When asked whether lightning or sharks are responsible for the most human deaths, most tend to answer sharks since sharks are often portrayed in fiction or documentaries as violent animals when  in reality only 19 shark attacks are recorded each year in the United States versus 51 for lightning strikes."
        • Anchoring Heuristic
      • Cautionary Quotes: Mistakes, Misconceptions, & Misunderstandings
        • Representativeness Heuristic
          • "He's a great speaker for a mathematician, and mathematicians are not usually good speakers. People who do theater are good speakers. So he must be a mathematician who does drama."
        • Availability Heuristic
        • Anchoring Heuristic
          • "My strategy to buy souvenir goods when on holiday trips is to ask the vendor for the price and negotiate my way down 30% from the initial etiquette price"
    • LEARNING GOALS

      • A. ATTITUDES
        • Value processes of discovering and correcting errors, recognizing that formal statistics and logic are needed to counteract various types of (frequently unrecognized) errors, despite high confidence. Be aware of the pitfalls of human reasoning.  
      • B. CONCEPT ACQUISITION
        • Availability heuristic: Cases in which people use how readily something comes to mind as a proxy for an estimate of its probability. 
        • Representativeness heuristic: Cases in which how representative something is of a category or outcome is used as a proxy for evaluating how likely the category membership or outcome is (not taking base rates into account). 
        • Anchoring heuristic: Cases in which an estimate is made by anchoring on a provided (and potentially irrelevant) number and then adjusting, typically insufficiently.   
        • Base rate neglect: People frequently overlook the importance of base rates when calculating the probability of an event based on probabilities that seem more relevant to the specific case.   
        • Base rates: The base frequency of a given attribute in a whole population.   
          • a. Bayesian reasoning: Systematically combining information regarding new evidence with prior beliefs to determine the probability of a hypothesis.
          • b. Bayes Rule: $$P(A|B) = \frac{P(B|A)\,P(A)}{P(B)}$$, where $$A$$ and $$B$$ are events and $$P(B) \neq 0$$.
            • The formal statistical rule for applying Bayesian reasoning. When updating beliefs, final credence level should be influenced by initial credence level and the strength of the new evidence.
        • Peak-End Rule: The tendency to remember the peak, or highlight, and the very end of an experience, and to take them as more representative of the experience as a whole than they really are.
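        • As a concrete illustration of Bayes Rule and base rates, the Python sketch below computes the probability of having a disease given a positive test. The prevalence, sensitivity, and false-positive numbers are illustrative assumptions, not course data:

```python
# Bayes Rule: P(A|B) = P(B|A) * P(A) / P(B).
# Illustrative numbers (assumptions): a disease with a 1% base rate, a test
# with 90% sensitivity and a 9% false-positive rate.
p_disease = 0.01                      # base rate (prior)
p_pos_given_disease = 0.90            # sensitivity
p_pos_given_healthy = 0.09            # false-positive rate

# Total probability of a positive test, P(B):
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))

# Posterior: probability of disease given a positive test.
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
print(f"P(disease | positive test) = {p_disease_given_pos:.2f}")  # ~0.09
```

Despite the positive test, the posterior is only about 9%: ignoring the 1% base rate is exactly the base rate neglect defined above.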
      • C. CONCEPT APPLICATION
        • Recognize and resist instances of the availability heuristic in everyday and scientific contexts.   
        • Recognize and resist instances of the representativeness heuristic in everyday and scientific contexts.  
        • Recognize and resist instances of the anchoring heuristic in everyday and scientific contexts.   
        • Recognize and resist instances of base rate neglect in everyday and scientific contexts.  
        • Given an example in which a person updates her belief, identify the two factors that should influence her final credence level (initial credence level and strength of the evidence), and recognize that Bayes rule provides a formal specification of how to do so. 
    • CLASS ELEMENTS

      • Suggested Readings & Reading Questions
      • Clicker Questions
        • Suppose you flip a fair coin. Which of the following sequences of heads and tails is more likely: HHHHH or HTHHT?
          • A. HHHHH 
          • B. HTHHT 
          • C. They're equally likely 
        • Dr. Six flips 6 coins at a time and counts how many heads and tails she gets. Every time she gets at least twice as many heads as tails (i.e., 4+ heads), she eats an M&M. Dr. Twelve flips 12 coins at a time and counts how many heads and tails she gets. Every time she gets at least twice as many heads as tails (i.e., 8+ heads), she eats an M&M. After 100 sets of flips, who will have eaten more M&Ms?
          • A. Dr. Six.  
          • B. Dr. Twelve. 
          • C. They’ll have eaten about the same number. 
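        • The Dr. Six / Dr. Twelve question can be checked with a quick Monte Carlo simulation in Python (a sketch; the trial count is arbitrary). Extreme proportions are more likely in small samples, so Dr. Six should come out ahead:

```python
import random

random.seed(0)

def mms_eaten(n_coins, threshold, n_sets=100_000):
    """Count how many sets of flips have at least `threshold` heads."""
    return sum(
        sum(random.random() < 0.5 for _ in range(n_coins)) >= threshold
        for _ in range(n_sets)
    )

six = mms_eaten(6, 4)      # Dr. Six: 4+ heads out of 6 (true prob ~0.34)
twelve = mms_eaten(12, 8)  # Dr. Twelve: 8+ heads out of 12 (true prob ~0.19)
print(six > twelve)        # Dr. Six eats more M&Ms
```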
        • Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations. Which of the following is most likely?
          • A. Linda is a bank teller. 
          • B. Linda likes to cook and plays the trumpet. 
          • C. Linda is a writer. 
          • D. Linda is a bank teller and active in the feminist movement. 
        • A bat and a ball together cost $1.10. The bat costs a dollar more than the ball. How much does the ball cost?
          • A. Ten cents.  
          • B. Five cents.  
          • C. Other.  
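        • Worked in integer cents, the algebra behind the bat-and-ball problem is: ball + (ball + 100) = 110, so the ball costs 5 cents, not the intuitive 10. A minimal Python check:

```python
# Let ball be the price of the ball in cents; then bat = ball + 100 and
# ball + bat = 110, so ball = (110 - 100) / 2 = 5 cents (not 10!).
total_cents = 110
difference_cents = 100
ball = (total_cents - difference_cents) // 2
bat = ball + difference_cents
assert ball + bat == total_cents and bat - ball == difference_cents
print(f"The ball costs {ball} cents.")  # 5 cents
```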
      • Discussion Questions
        • Base rate neglect often turns up when people use statistics out-of-context to make their evidence seem stronger than it is. How might someone use base rate neglect to argue that some groups are more violent than others? How might you refute them (by pointing out that they are neglecting base rates)?
        • How is base rate neglect connected to the need for a control condition in RCTs?
      • Class Exercises
        • Small group exercises and clicker questions to demonstrate these effects with the students. 
      • Homework
        • Kahneman and Tversky (1974) end their paper on heuristics and biases with the words: “A better understanding of these heuristics and of the biases to which they lead could improve judgments and decisions in situations of uncertainty.” Give an example of a real-world situation in which one of the heuristics they discuss could bias judgments, and suggest a strategy for improving judgments. In other words, how might you get people to avoid using the heuristic as a basis for their judgment, and instead rely on a better alternative?
    • OVERVIEW

      • The capacity for science to be misused to reinforce existing power structures.
      • Science has a particularly bad track record when it comes to studies of human sub-populations for the purpose of setting policy—particularly when groups in power study groups out of power. We should be aware of this, and wary of misusing science in such a way as to perpetuate injustice.
      • Addressing the Question: How can we avoid going wrong?
        • The abuse of science to perpetuate social injustice
        • Temptation to use science to justify/defend one’s own group
        • The Just World Fallacy
    • TOPIC RESOURCES

    • EXAMPLES

      • Exemplary Quotes
        • "The data describe in The Bell Curve shows that Black students perform more poorly on IQ tests than White students. But historically, tests like that have been used to justify existing power structures and racial oppression, so maybe we should think about that more carefully before we interpret it to mean that White students are smarter. The IQ tests were written by Whites, for students who had grown up in similar environments. Maybe there are cultural biases. And hang on, there's a lot of vocabulary on those tests; that requires education, and we know that there are systemic racial inequalities in the education system. That by itself could explain the difference."
      • Cautionary Quotes: Mistakes, Misconceptions, & Misunderstandings
    • LEARNING GOALS

    • CLASS ELEMENTS

      • Suggested Readings & Reading Questions
      • Clicker Questions
        • Scores on intelligence tests appear to be reliable (in that people reliably get similar measurements on repeated testing). Is IQ further a valid measurement, in that it measures what we want it to?  
          • A. Yes 
          • B. No 
          • C. Don’t know / other reaction 
        • The fact that you’ve found a way of reliably putting a number on individuals or groups doesn’t of itself imply that:
          • A. The number has any causal significance at all, or 
          • B. It has the kind of causal significance you take it to have. 
      • Discussion Questions
        • Why are our judgments about other people and groups of people so often mistaken? Hint: Allude to sources of both systematic and statistical error.
        • Which segments of society are most likely to be tempted by the Just World Fallacy? Why?
      • Class Exercises
      • Homework Questions
    • OVERVIEW

      • How to catch bad science.
      • Distinguishing pathological science, pseudo-science, fraudulent science, poorly-done science, and good science that happens to get the wrong answer (as should happen by chance for roughly 1 in 20 results reported at the p = 0.05 significance level when there is no real effect). What do practicing scientists do when they try to judge a paper in a field or sub-field outside their immediate area of expertise? 
      • Addressing the Question: How can we avoid going wrong?
        • Bad science and pseudoscience
    • TOPIC RESOURCES

    • EXAMPLES

      • Exemplary Quotes
      • Cautionary Quotes: Mistakes, Misconceptions, & Misunderstandings
        • "I took a shot of apple cider vinegar for a Month and just overall felt like it started me on the right track in the morning." https://spoonuniversity.com/lifestyle/i-took-a-shot-of-apple-cider-vinegar-for-a-month-and-i-won-t-stop
    • LEARNING GOALS

      • A. ATTITUDES
      • B. CONCEPT ACQUISITION
        • Demarcating science from non-science, and distinguishing among pathological science, pseudo-science, fraudulent science, poorly-done science, and good science, is often difficult: the boundaries between these categories are fuzzy and overlapping. 
        • Pathological Science Indicators:
          • a. The effect is produced by a barely detectable cause, and the magnitude of the effect is substantially independent of the intensity of the cause.  
          • b. The effect is barely detectable, or has very low statistical significance. Claims of great accuracy. 
          • c. Involving fantastic theories contrary to experience. 
          • d. Criticisms are met with ad hoc excuses. 
          • e. Ratio of supporters to critics rises to near 50%, then drops back to near zero.  
          • f. Conclusion-motivated design & analysis. 
        • Pseudo-science is characterized by using scientific vocabulary without aligning with the corresponding concepts or engaging in real scientific practices (i.e., science being “skin deep,” not scientific below the surface). 
        • Fraudulent science involves intentional deception, such as deliberately fabricating data or deliberately deceiving the reader about the strength of evidence.  
        • Poorly-done science, e.g. failure to consider confounds, failure to use best practices in terms of data collection and analysis (e.g., small sample size, look elsewhere effect).  
        • Unintentional self-deception can be involved in justifying poor practices and/or interpretations in pathological, pseudo-, & poorly-done science. 
        • Motivation to support a particular conclusion (i.e., science undertaken to support a given conclusion, rather than to discover the truth) can be a feature of poorly done or pathological science.  
        • Good science:
          • a. Will get the wrong answer some of the time, e.g., via statistical flukes.  
          • b. Entails good faith engagement with the alternative hypotheses through a search for evidence that you are wrong.  
      • C. CONCEPT APPLICATION
    • CLASS ELEMENTS

      • Suggested Readings & Reading Questions
        • Case Studies (Possible Examples of Pathological Science):
          • Wolfe-Simon et al., 2011, "A bacterium that can grow by using arsenic instead of phosphorus."
          • Chaplin, 2007, "The Memory of Water."
          • Adam et al., "Measurement of the neutrino velocity with the OPERA detector in the CNGS beam."
      • Clicker Questions
        • Identifying how many of Langmuir’s 6 pathological science indicators are in play in specific papers reporting surprising scientific results.  
      • Discussion Questions
      • Class Exercises
        • Summaries of three relatively recent surprising science results (e.g. super-luminal neutrinos, bacteria with arsenic in their DNA, water with memory, cold fusion) and their follow-up in the scientific community are distributed among the groups. Each group explains the summary they read to other groups, so all have thought about each example. The groups discuss and vote with clickers on whether each article falls into the category of pathological science, poorly-done science, etc.   
      • Homework
        • Please read the article assigned to you based on your seating chart group (the seating chart for Week 9 is posted on the syllabus). These articles (from a variety of sciences) may be challenging to understand, but we encourage you to try to understand the main ideas from the articles and be prepared to discuss them in class on Wednesday. Please answer the following question for your homework: Based on Langmuir's criteria, do you think that the study conducted in the article you read would qualify as pathological science? In addition to a paragraph summarizing your thoughts, make sure to fill out and turn in the “Langmuir Scoresheet” as part of your assignment. [link]
    • OVERVIEW

      • Our tendency to preserve our existing or preferred beliefs, even against the evidence.
      • This class explores confirmation bias in the search for and assessment of evidence. In particular, we consider the ways that people tend to seek out and think about evidence in such a way as to reinforce their existing opinions, rather than testing them against new information or alternative views.
      • Addressing the Question: How can we avoid going wrong?
        • Confirmation Bias
          • Selective Exposure
          • Biased Assimilation
    • TOPIC RESOURCES

    • EXAMPLES

      • Concrete Examples
        • When someone starts with a strong preference for dogs vs. cats, they tend to seek out, remember, and believe information suggesting dogs are better than cats.
      • Exemplary Quotes
      • Cautionary Quotes: Mistakes, Misconceptions, & Misunderstandings
    • LEARNING GOALS

      • A. ATTITUDES
      • B. CONCEPT ACQUISITION
        • Confirmation bias: Seeking or otherwise favoring evidence consistent with what is already believed or what is being tested. 
          • a. Selective exposure: Selectively seeking or exposing oneself to evidence that is likely to conform to prior beliefs or working hypotheses.
          • b. Biased assimilation: Systematically favoring or discounting evidence to render that evidence more compatible with pre-existing beliefs or working hypotheses.
      • C. CONCEPT APPLICATION
    • CLASS ELEMENTS

      • Suggested Readings & Reading Questions
          • Reading Question: Read the first two pages of the Fischer and Greitemeyer article. The reading discusses the selective-exposure effect, which is one instance of a broader phenomenon known as "confirmation bias." Provide an example of a selective exposure effect in real life (personal anecdote, news story, scientific study, etc.). Explain how the scenario you chose is an example of this type of bias, and how it might lead to suboptimal decision-making.
      • Clicker Questions
        • Cards have a letter on one side and a number on the other. Rule: If a card has a vowel on one side, it has an even number on the other side. Which card(s) must you turn over to test the rule? E, 7, 4, M 
      • Discussion Questions
        • On social media like Twitter and Facebook, people tend to follow their friends, especially friends that share their opinions. People also tend to share articles that support their viewpoint. This leads to selective exposure, wherein people are exposed much more to arguments that support what they already believe and much less to arguments against what they believe. In other words, there is a bias in what they see that confirms what they already think. How would this increase polarization of opinions?
        • Assuming you wanted to, how could you go about reducing your selective exposure?
        • What are the human motives that would sustain and encourage confirmation bias?
      • Class Exercises
      • Practice Problems
        • In Sweden, there is a syndrome called uppgivenhetssyndrom, in which children become completely comatose and unresponsive despite apparently having nothing physically wrong with them. Their reflexes and blood pressure remain normal.  Yet they are unresponsive to pain, and must be fed through feeding tubes stuck down their throats.  This syndrome exclusively affects refugee children in Sweden whose families are threatened with deportation; it has never been diagnosed outside of Sweden. It has affected hundreds of refugee children in Sweden, primarily children from former Soviet bloc states. Some children have remained comatose for years.  Initially, the families were deported anyway. However, photographs of unconscious children being deported on stretchers raised a public outcry. More recently, most families with an affected child have received reconsideration by the Board of Immigration. At present, the only known cure is for the family to be approved for permanent residency. Even after families are approved, it takes weeks or even months for the children to recover. Since the condition is thought to arise from external circumstances, doctors have primarily focused on keeping the children alive, not waking them up by medical means.
    • OVERVIEW

      • Blind analysis, the practice of deciding how we will analyze data before finding out if the analysis we have chosen supports our hypothesis, counteracts confirmation bias.
      • Science is not a single “scientific method” (as often taught in school), but better characterized as an ever-evolving collection of tricks and techniques to compensate for our mental (and, occasionally, physical) failings and build on our strengths—and, in particular, to help us avoid fooling ourselves. These techniques must constantly be re-invented, as we develop new ways to study and explain the world. In the last few decades we have entered a period in which most scientific analyses are complicated enough to require significant debugging before a result is clear. This has exposed another way we sometimes fool ourselves: the tendency to look for bugs and problems with a measurement only when the result surprises us. Where previously we recognized the need for “double blind” experimentation for medical studies, now some fields of science have started introducing blind analysis, where the results are not seen during the development and debugging of the analysis—and there is a commitment to publish the results, however they turn out, when the analysis is “un-blinded” and the results interpreted. 
      • Addressing the Question: How can we avoid going wrong?
        • Blind Analysis
    • TOPIC RESOURCES

    • EXAMPLES

      • Exemplary Quotes
      • Cautionary Quotes: Mistakes, Misconceptions, & Misunderstandings
        • Confusion between blind analysis and double blind experiments is common. These concepts are related but distinct.
    • LEARNING GOALS

      • A. ATTITUDES
        • One should always be looking for ways that we get things wrong (by fooling ourselves or due to bugs in our reasoning processes) so that we can invent better procedures. 
      • B. CONCEPT ACQUISITION
        • Blind analysis: Making all decisions regarding data analysis before the results of interest are unveiled, such that expectations about the results do not bias the analysis. Usually co-occurs with a commitment to publicize the results however they turn out. 
        • Examples of analysis decisions for which blind analysis could be useful: which data points to exclude as outliers, which statistical test to apply, and when to stop collecting data.  
        • Confirmation bias drives the need for blind analysis.  
        • Confirmation bias is pervasive and doesn’t necessarily indicate any fraudulent activity.  
        • Approaches to reducing confirmation bias other than blind analysis:
          • a. Preregistration: A research group publicly commits to a specific set of methods and analyses before they conduct their research. 
          • b. Registered replication: One or more research groups commit to a specific set of methods and procedures to verify the result of an earlier work (typically with the input of the original research team). Results are publicized regardless of outcome. 
          • c. Adversarial collaboration: Scientists with opposing views agree to all the details of how data should be gathered and analyzed before any of the results are known. 
          • d. Peer review: New results are evaluated by other experts in the same field to determine whether they are valid. This only reduces confirmation bias if reviewers don’t share biases. 
        • Scientists are constantly looking for bugs in scientific practices in order to fix them. Blind analysis is just the latest example of scientists recognizing a bug in their practice (e.g., a way of being fooled) and adjusting practice to account for/remove the bug.    
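        • One common implementation of blind analysis, used in some physics measurements, adds a hidden random offset to the data while the analysis is developed, so the result cannot be steered toward the expected value. The Python sketch below is illustrative (all numbers are assumptions):

```python
import random

random.seed(42)

true_value = 3.7                               # the quantity being measured
data = [true_value + random.gauss(0, 0.5) for _ in range(1000)]

secret_offset = random.uniform(-10, 10)        # generated once, kept hidden
blinded = [x + secret_offset for x in data]    # the analyst only sees this

# ... the analysis is developed and debugged on `blinded`; since the analyst
# doesn't know the offset, surprise at the result cannot bias the debugging ...
blinded_result = sum(blinded) / len(blinded)

# Un-blinding: subtract the offset and commit to publishing whatever results.
final_result = blinded_result - secret_offset
print(f"Measured value: {final_result:.2f}")   # close to the true value
```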
      • C. CONCEPT APPLICATION
    • CLASS ELEMENTS

      • Suggested Readings & Reading Questions
      • Clicker Questions
        • Students are presented historical graphs of improved published measurements of a physical parameter over the decades and must identify the ones that retrospectively show evidence of biases that could have been avoided by blind analysis.   
      • Discussion Questions
        • Why is it important for scientists to publicize results even if they don't get what they predicted?
      • Class Exercises
        • Students make a measurement which is somewhat tricky to perform with two-digit precision, and experimental conditions are set up to show that the part of the class that was “blinded” gets a more accurate result.  
        • There could be a variant of the above incorporating blind analysis into a more typical lab class, where there is an expectation about the results, half the class does the analysis blinded, half the class does the analysis unblinded.
    • OVERVIEW

      • Addressing the Question: How should we use science to make better decisions?
        • Wisdom of Crowds
        • Herd Thinking
      • Explore ways that groups fall short of their optimal reasoning ability. There are better and worse ways to aggregate a group’s knowledge. 
      • Sometimes groups of people reach better conclusions than people working independently, and sometimes they reach worse conclusions. There are features of group reasoning that can help, and features of group reasoning that can hurt. Here we explore how to avoid the pitfalls of group reasoning and to maximize the benefits.
    • TOPIC RESOURCES

    • EXAMPLES

      • Exemplary Quotes
        • "We can get a pretty good estimate of the weight of this turkey by asking everyone in the family to write their guess privately on a piece of paper, and then averaging the answers."
      • Cautionary Quotes: Mistakes, Misconceptions, & Misunderstandings
        • "We can be pretty confident we are right, because all five members of our family agree that vaccines cause autism. Since all five of us think so, we must be right."
    • LEARNING GOALS

      • A. ATTITUDES
        • Do not take for granted that consensus offers the best conclusions. 
        • Take seriously (but not as absolute!) the consensus of a group which has reasoned about a question in a careful, appropriate way. 
        • Take seriously (but not as absolute!) the average of a large group's independent estimates of a number, under appropriate conditions. 
      • B. CONCEPT ACQUISITION
        • Wisdom of Crowds: Sometimes groups make better judgments than individuals. This happens when: 
          • a. Judgments are genuinely independent, preventing herd thinking. 
          • b. Members of the group do not share the same biases. 
          • c. There are enough people in the group to balance out random biases or fluctuations (analogous to the need for an adequate sample size). 
          • d. This works especially well when estimating a quantity, where errors may be large but are not systematic. 
        • Herd Thinking: Sometimes groups make worse judgments than individuals. This happens when: 
          • a. Judgments of individuals are influenced by the judgments of others, leading to groupthink and sometimes polarization. 
          • b. Members of the group share biases, which can be exaggerated by discussion and cannot be decreased by averaging judgments. 
        • The enterprise of science is essentially social, and advances in part because scientists look actively for what other scientists might have gotten wrong. This process, including peer review, enables science to iteratively improve.
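        • A small simulation can make the contrast between the wisdom of crowds and herd thinking concrete. In this Python sketch (all numbers are illustrative), averaging cancels independent random errors but cannot remove a bias the whole group shares:

```python
import random
from statistics import mean

random.seed(1)
true_weight = 20.0   # e.g., a turkey's weight in pounds (illustrative)

# Independent guessers: individually noisy, but errors are not systematic.
independent = [true_weight + random.gauss(0, 5) for _ in range(500)]

# Herd: everyone shares the same bias, e.g., anchored on a loud early guess.
shared_bias = 8.0
herd = [true_weight + shared_bias + random.gauss(0, 5) for _ in range(500)]

print(f"Independent crowd's mean error: {abs(mean(independent) - true_weight):.2f}")
print(f"Herd's mean error:              {abs(mean(herd) - true_weight):.2f}")
# Averaging cancels random error, but cannot remove a shared bias.
```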
      • C. CONCEPT APPLICATION
    • CLASS ELEMENTS

      • Suggested Readings & Reading Questions
      • Clicker Questions
      • Discussion Questions
        • Can you think of a group you've worked with that engaged in groupthink/herd thinking? What happened?
        • Each person think of the best-functioning group they've worked with. What made that group stand out?
      • Class Exercises
        • Students answer typical wisdom-of-crowds estimation questions using their clickers, but they can update their estimates as they see the histogram of the other students’ guesses. Afterwards, it is shown that the accuracy of the class’s mean estimate actually got worse as students continued to update, demonstrating (if it works) that the wisdom of crowds works best when the inputs are independent.    
      • Homework
        • One way to gather data about how people might respond to a product or idea is to conduct a “focus group,” in which several participants share their impressions in a group conversation. Based on the reading, why is or isn’t this a good way to obtain reliable information?
    • OVERVIEW

      • Many phenomena in science are emergent, i.e., visible only at higher levels of organization. This tends to occur when large numbers of elements interact, as with individuals on social media. The rise of conspiracy theories and polarization via confirmation bias on social media is an important recent case of an emergent phenomenon.
      • Often people think that science is necessarily reductionist, but in fact we can observe many patterns that are emergent, i.e., visible only at higher levels of organization. That is, some phenomena are only describable in terms of higher-level, nonreductionist patterns. In emergent phenomena, complex patterns (like organisms with emotions) can emerge from surprisingly simple sets of rules (like natural selection). Humans often mistake emergent phenomena as either magically inexplicable or intentionally planned by some conductor/choreographer/director. This is especially likely if one is not aware that causal explanations can depend on emergence. The internet in general, and social media in particular, are relatively untested domains in which new sets of rules (algorithms that choose what to show, likes, etc.) are being tried out. These new sets of rules give rise to unintended emergent phenomena, such as the propagation of misinformation. In the case of social media, it seems to grow conspiracy theories by connecting people with similar views and exacerbating confirmation bias. At the same time, emergent phenomena of this new social world online may seem so choreographed that they give rise to new conspiracy theories. These two patterns may exacerbate the historically documented tendency of people to believe in false conspiracy theories, through interpreting surprising emergent patterns as deliberate and communicating with others who agree. On the other hand, it is also possible that the digital revolution makes actual conspiracies easier, as the internet facilitates communication and therefore coordination across distances.
    • EXAMPLES

      • Exemplars
        • Perhaps we are most familiar with this from the example of some objects feeling hot and some cold, which is a collective effect of the average motion of all the atoms or molecules making up the object, not of each individual atom or molecule.
        • A great example of this sort of emergent phenomenon can be seen in Conway’s Game of Life (https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life). For the SSS course we would like to act out a Game of Life activity with students playing the roles of the cells (perhaps on a football field, perhaps online).
        • Genes are "selected" by natural selection pressures that make genes which improve survival and reproduction more common through, literally, survival and reproduction, while genes which undermine survival or reproduction are less common because they are part of an organism that dies sooner and repoduces less. Although a gene may be described in reductionist terms of its molecular makeup and structure, its function in the organism must be described at the level of the whole organism in order to show why selection pressures push for or against it. Thus, a full explanation for why a gene is one way and not another (molecularly) must refer to its effect on the organism as a whole.
        • The spiral shape of a hurricane or storm is an emergent phenomenon from the movements and temperatures of the gases and liquid droplets that make it up.
        • "Consider, for example, a tornado. At any moment, a tornado depends for its existence on dust and debris, and ultimately on whatever micro-entities compose it; and its properties and behaviors likewise depend, one way or another, on the properties and interacting behaviors of its fundamental components. Yet the tornado’s identity does not depend on any specific composing micro-entity or configuration, and its features and behaviors appear to differ in kind from those of its most basic constituents, as is reflected in the fact that one can have a rather good understanding of how tornadoes work while being entirely ignorant of particle physics." - Stanford Encyclopedia of Philosophy,"Emergent properties," Timothy O'Connor.
      • Cautionary Quotes: Mistakes, Misconceptions, and Misunderstandings
        • "My Facebook wall is covered with articles about the protests, so Mark Zuckerburg must want people riled up."
        • "Everyone I know voted for Graham, but he lost the election. It must have been rigged."
        • "Neuroscientists can't learn anything from psychologists, because they're actually looking at the brain, which is the cause of all human behavior."
        • "Someday scientists will know exactly what happiness is, because we'll be able to see it in the brain. Then philosophers and psychologists will be out of a job."
        • "If you can't explain it in terms of the movement of particles, you haven't explained it at all."
        • "If Twitter is exacerbating polarization, it must serve Jack Dorsey's ends in some way."
        • "If the world were really round, there wouldn't be so many people at these Flat Earther conventions."
        • "So many people attended the first Women's March in January 2017, it could not possibly have been organic. George Soros must have been paying them all."
    • LEARNING GOALS

      • ATTITUDES
        • Be wary of the assumption that all empirical patterns can be adequately explained in reductionist terms.
        • Be wary of assuming a deliberate agent is behind events.
          • This includes what looks like coordinated action on the internet.
        • Be wary of conspiracy theories, given the long history of false ones, especially in American politics.
      • CONCEPT ACQUISITION
        • There are different levels of description for complex phenomena, some (lower level) having to do with the individual motions of the parts, some (higher level) having to do with collective properties of the grouping. For example, cells can be explained at a molecular (lower) level, but some properties of cells only make sense at a functional (higher) level.
        • Reductionist Explanations: Reductionist explanations attempt to explain phenomena in terms of lower level, mechanistically simple and direct causal relations of their parts, as in one billiard ball hitting another billiard ball and thereby causing it to move.
        • Emergent Phenomena: The collective effect of many parts, each doing simple things, can produce interesting patterns and behaviors visible only in the whole system. Indeed, watching the whole system, we can observe scientifically testable causal relations that can only be explained by these higher-level patterns, even though the patterns emerge from all the individual elements doing their own things.
        • Much of science studies emergent causal explanations for phenomena, like the thermodynamic patterns of a gas made of molecules, or patterns of weather, mind, and society.
        • Humans have a tendency to overperceive agency in external phenomena in general (e.g. anthropomorphizing), making them prone to mistaking emergent phenomena as intentional.
        • A particularly important example of emergent phenomena in our modern world arises when people are highly connected to each other in social networks, with simple rules (likes, friending, re-tweeting, etc.) determining the interactions.
        • Social media are a fertile environment for unintended emergent phenomena, such as the propagation of misinformation (particularly misinformation that triggers strong enough emotions, like fear, that people are inclined to pass the misinformation on to their social-network ties).
        • Conspiracy Theory: A belief that a phenomenon is best explained by reference to a conspiracy, a secret group of people with a secret plot for their own purposes.
        • Some criteria to help recognize conspiracy theories that are unlikely to be true include:
          • Arguments involve "connecting the dots" between apparently disconnected events.
          • Accomplishing the conspiracy would require superhuman powers or secrecy from large numbers of people.
          • The conspiracy is highly ambitious or aims at world domination.
          • The conspiracy is highly complex.
          • The theory tends to intermingle facts and speculation without distinguishing between the two.
          • The conspiracy theorists refuse to consider alternative explanations or countervailing evidence.
          • The conspiracy theorists are indiscriminately suspicious of all members of a certain class, e.g. government officials, scientists, etc.
      • CONCEPT APPLICATION
        • Recognize the limitations of reductionism in particular cases.
          • Be wary of claims that genes fully explain emotional, cognitive, or behavioral phenomena, as most often these have both genetic and environmental factors.
          • Be wary of claims that neuroscience observations have fully "explained" emotional or behavioral phenomena.
        • Recognize some phenomena as emergent, and therefore not fully explicable by either reductionism or deliberate agents.
        • Recognize conspiracy theories that are likely to be mistaken.
        • Recognize emergent patterns in social media.
        • Recognize that social media can exacerbate false conspiracy theories in two ways:
          • Apparently deliberate social phenomena sometimes explained by reference to conspiracy can often be better explained by reference to emergence, especially in social media.
          • Social media creates emergent echo chambers that can increase polarization and false beliefs.
        • Recognize how the rapid changes of the internet could make real conspiracies either easier or more difficult:
          • The internet might make real conspiracies easier, since it facilitates communication and coordination.
          • The internet might make real conspiracies harder, since it is easier to leak and harder to keep secrets.
    • CLASS ELEMENTS
      • Possible Readings:
        • Richard Hofstadter (1964), “The Paranoid Style in American Politics.”
        • Philip Kitcher (1982), "Genes," The British Journal for the Philosophy of Science, 33 (4): 337-359.
        • Michael Shermer, "The Conspiracy Theory Detector," Scientific American
      • Clicker Questions
      • Discussion Questions
      • Class Exercises
        • Conway's Game of Life
    • OVERVIEW

      • How should (and shouldn't) values/emotions/goals/desires and conflicts of interest be woven together with science's hyper-rational elements in decision-making processes?
      • Reason by itself, without the arational elements of values, goals, priorities, principles, preferences, fears, desires, and ambitions, does not yield decisions: decision-making requires weaving the rational together with all of these arational elements that get humans to approach problems in the first place. Consequently, we must look for, study, and develop principled approaches to coordinating all these elements appropriately in our decision-making processes. Without such scaffolds, rationality is frequently what gets neglected. In the following classes, we explore some of the techniques that have been used to scaffold this kind of principled decision-making. None of these existing approaches accomplishes everything we would like. Nonetheless, they offer examples of techniques that we can recombine creatively with further new ideas and approaches to allow us to make better decisions in groups, appropriately applying rationality to achieve the complex goals of the relevant communities. We begin by exploring the desiderata that optimal decision-making processes should fulfill.
      • Addressing the Question: How should we use science to make better decisions?
        • Wisdom of Crowds
        • Herd Thinking
    • TOPIC RESOURCES

    • EXAMPLES

    • LEARNING GOALS

    • CLASS ELEMENTS

      • Suggested Readings & Reading Questions
      • Clicker Questions
      • Discussion Questions
        • Think about the last time you were tempted to go do something fun with your friends when you had work to do. How did you cope with this value conflict between having fun with friends/developing friendships and getting your work done/learning/doing well in school? How do you usually cope with that conflict? Is there any way to fulfill both goals more completely? Will there always be a conflict between them (at times)?
        • Suppose you are the principal of a new charter school. You have a finite amount of money. Students and teachers want smaller classes. Teachers want better pay and benefits, and classroom equipment. Students want gym equipment, lockers, art and drama classes, and a track field. How do you go about prioritizing goals when your resources are limited and different stakeholders want different things?
      • Class Exercises
      • Homework Questions
        • List your top five values (for society).
        • What is one issue for which two or more of your top values might conflict? How do you cope with this value conflict? Do you have any beliefs about facts which allow you to avoid value conflict? If so, how sure are you that those beliefs are true?
        • Consider [X complicated decision relevant to class, in our case, the decisions for which students were about to design decision processes in small groups]. In 1-2 sentences, state why this decision is important. Then list 5 desiderata (desirable features) that you would like to be captured in a process for making this decision. Use the course topics for inspiration. For example, what desideratum could help one address systematic/statistical uncertainty in the decision making process?
    • OVERVIEW
      • The Denver Bullet Study offers one approach to integrating facts and values in a controversial real-world problem, drawing facts from a set of experts, gauging the values of different stakeholders, and bringing these together for a final decision.
      • Addressing the Question: How should we use science to make better decisions?
    • TOPIC RESOURCES

    • EXAMPLES
    • LEARNING GOALS
      • A. ATTITUDES
        • Be optimistic that a community can come together to make a decision, even when people begin with heterogeneous values and beliefs.
      • B. CONCEPT ACQUISITION
        • Stakeholders: The set of people who have a stake in the outcome of a decision. This can include people who will implement the decision and all the people affected by it.
        • Experts: The set of people who have the most knowledge/information/expertise about the facts relevant to the decision.
        • Denver Bullet Study: An experiment in group deliberation in which a community came together to share values and knowledge to decide what kind of bullet the Denver Police should use: one with enough stopping power to keep officers safe, but not so harmful as to cause unnecessary injury to citizens (as hollow-point bullets did).
      • C. CONCEPT APPLICATION
    • CLASS ELEMENTS

      • Students work out a problem involving values and factual/scientific issues, using the method from the Denver Bullet Study.
    • ASSESSMENTS
    • OVERVIEW

      • This topic presents a third approach to integrating facts and values, this time under conditions of uncertainty about what the future will be like.
      • Here, we explore scenario planning, a technique for systematically considering possible futures. This is valuable for planning because we often do not know exactly what the future will look like, and need to plan for multiple contingencies.
      • Addressing the Question: How should we use science to make better decisions?
    • TOPIC RESOURCES

    • EXAMPLES

      • Exemplary Quotes
      • Cautionary Quotes: Mistakes, Misconceptions, & Misunderstandings
    • LEARNING GOALS

      • A. ATTITUDES
        • Avoid assuming the future will continue in one direction, and be ready to consider a variety of possible futures when planning.
      • B. CONCEPT ACQUISITION
        • Scenario Planning: A mode of problem solving which involves considering two important and uncertain dimensions along which the future might vary, imagining what each possible quadrant might look like, and considering how decisions made now will affect the likelihood and desirability of each quadrant. 
      • C. CONCEPT APPLICATION
    • CLASS ELEMENTS

    • OVERVIEW
      • Addressing the Question: How should we use science to make better decisions?
      • Students design their own decision-making processes, utilizing their favorite aspects of the processes we have discussed.
      • Adversarial vs. Inquisitorial Modes of Truth-Seeking 
    • TOPIC RESOURCES

    • EXAMPLES
      • Exemplary Quotes
      • Cautionary Quotes: Mistakes, Misconceptions, & Misunderstandings
        • Students often confuse "desiderata," the goals of decision-making processes, with the processes designed to achieve those desiderata. E.g., asked to come up with desiderata, they suggest "use a representative sample," which is a process for achieving desiderata such as "allow representatives of all relevant groups to have a voice," "ensure that our solution will work for everybody," or "avoid systematic bias."
    • LEARNING GOALS
    • CLASS ELEMENTS

      • Suggested Readings & Reading Questions
      • Clicker Questions
      • Discussion Questions
      • Class Exercises