Exam Information

Note: The information on this page applies strictly to the course as I taught it, through the Summer of 2016. Other instructors will have other policies, and will presumably post their exams elsewhere.

This course has two midterm exams, both administered online, and a comprehensive final exam administered on campus (or proctored off campus). See the Syllabus and Calendar for dates, times, and places.

All exams are closed-book, closed-notes. Students for whom English is a foreign language may use dictionaries and translators at the discretion of the instructor.

Link to Changes to Final Exam Format Beginning Summer 2015


Studying for Exams

The best way to prepare for exams is to keep up with assigned readings, attend lectures, and deploy effective, efficient study skills. 

For two essays by your instructor concerning effective learning (and teaching) strategies, see:



Getting the Most Out of the Text

Chief among these is the study-strategy known as PQ4R (or, alternatively, SQ4R), initially proposed as the SQ3R method by Francis P. Robinson (1946), and expanded and promoted by John Anderson of Carnegie-Mellon University, a cognitive psychologist who is concerned with the applications of cognitive psychology to education. I quote from Anderson's text, Cognitive Psychology and Its Implications (Worth, 2000, pp. 5-6 and 192-193):

  • Preview (or Survey): Survey the chapter to determine the general topics being discussed. Identify the sections to be read as units. Apply the next four steps to each section.
    • Read the section headings and summary statements to get a general sense of where the chapter is going and how much material will be devoted to each topic. Try to understand each summary statement and ask yourself whether this is something you knew or believed before you read the text.
  • Query (or Questions): Make up questions about the section. Often, simply transforming section headings results in adequate questions.
    • Make up a study question. From the section heading make up a related question that you will try to answer while reading the text.... This will give you an active goal to pursue while you read the section.
  • Read: Read the section carefully, trying to answer the questions you have made up about it.
    • Read the section to understand it and answer your question.
  • Reflect: Reflect on the text as you are reading it, trying to understand it, to think of examples, and to relate the material to prior knowledge.
    • Try to relate what you are reading to situations in your own life.
  • Recite: After finishing a section, try to recall the information contained in it. Try answering the questions you made up for the section. If you cannot recall enough, reread the portions you had trouble remembering.
    • At the end of each section, read the summary and ask yourself if that is the main point you got out of the section and why it is the main point. Sometimes you may be required to go back and reread some parts of the section.
  • Review: After you have finished the chapter, go through it mentally, recalling its main points.
    • Again try answering the questions you made up.
    • Go through the text, mentally reviewing the main points. Try answering the questions you made up [Query] plus any other questions that occur to you. Often, when preparing for an exam, it is a good idea to ask yourself what kind of exam questions you would make up for the chapter.

These pointers are based on two principles that we will discuss in the lectures on memory:

  • Elaboration: memory is better when we relate the to-be-remembered material to things we already know. Relating what you read to the questions you develop creates a more elaborate memory.
  • Schematic Processing: memory is better for material that is related to our expectations. Surveying the text before reading it seriously helps you develop the appropriate expectations.

As a rule, if you can understand the points made in the chapter summary, you have gotten what you are supposed to get from the text.



Getting the Most Out of the Lectures

Of course, PQ4R works for lectures, too. That's one reason that I distribute the lecture illustrations in advance -- despite the fact that doing so spoils some of the "surprise value" that certain slides might otherwise have. And that's also why I distribute the lecture illustrations in a "3/page" format that facilitates taking notes right on the printout.

  • Preview: Before the lecture starts, look over the slides.
  • Query: After looking them over, try to make up a question about each slide.
  • "Read": Attend the lecture with the printout in hand, taking notes in the space provided, trying to answer whatever question you had formulated earlier.
  • Reflect: As soon after class as possible, within 24 hours, go over the slides and your notes. Rewrite your notes, filling in any gaps and making connections between individual slides, and between material from the lectures and what you're reading in the textbook. (See "Six Learning Strategies", immediately below, for more on this point.)
  • Recite: Go back to the slides, and try to reconstruct, from memory, the points that were made in lecture.
  • Review: Finally, review the main points, especially the basic concepts and principles discussed in the lecture.



"Six Learning Strategies"

Roald Hoffmann and Saundra McGuire, two highly regarded professors of chemistry, have offered six strategies that facilitate effective learning of any academic subject -- whether in high school, college, or graduate school ("Learning and Teaching Strategies", American Scientist, 2010). Most of what follows is in their own words -- I've eliminated quotation marks for ease of readability.

  1. Take your own notes, by hand, even if notes (or slides) are provided by the lecturer. That same evening, rewrite your notes, condensing, extending, and paraphrasing them "so that you make the meaning your own". When you take your own notes you are actively engaged with the material, and the act of rewriting will enhance your encoding of the material into long-term memory. Rewriting is essential to effective learning, even if you think you got it the first time.
  2. If you must miss a class, get notes from a fellow student. The ASUC lecture notes can help in this regard, as do the lecture supplements that I provide on the course website. But getting a fellow student's notes helps you develop relationships with other people who are learning the same material you are, at the same time -- people whom you can ask for help when there's something you don't understand.
  3. If there are sample problems in the text, work out the answer for yourself before looking at the answer provided in the text. Then, in addition to comparing your answers, compare your approach to the problem with the one provided in the text.
  4. Form a study group, and alternate individual and group study sessions. The Student Learning Center sponsors such a study group, and you might be able to form your own from the members of your discussion section. Keep it relatively small and manageable, no fewer than three, no more than six people. Make up practice quizzes and tests for each other.
  5. Teach the material to someone else -- like a fellow study group member. Or do some informal tutoring. There is no better way to learn material than to try to teach it.
  6. Set attainable goals and move slowly toward them, beginning with simple material and progressing to more complicated things.

There is a lot of misinformation about learning strategies and study skills.  John Dunlosky and his colleagues (2013) have pulled this literature together in a few succinct points:

What Works:

  1. Self-Testing: Quiz yourself with practice tests, flashcards, and the like.
  2. Distributed Practice: Spread your study over time; don't "cram" before an exam.
  3. Elaborative Interrogation: Ask yourself how what you're learning relates to what you already know.
  4. Self-Explanation: "Channel your inner four-year-old" by asking why the material you're learning is true.
  5. Interleaved Practice: Alternate your study time between one topic and another.

What Doesn't Work:

  1. Highlighting: Underlining is pretty much useless if you don't also do the other things outlined above.
  2. Rereading: Rote repetition doesn't promote long-term retention.

For details, see:

  • "Improving Students' Learning with Effective Learning Techniques: Promising Directions from Cognitive and Educational Psychology" by John Dunlosky, Katherine A. Rawson, Elizabeth J. Marsh, Mitchell J. Nathan, and Daniel T. Willingham (Psychological Science in the Public Interest, 2013);
  • this review is summarized in "What Works, What Doesn't" by the same authors in Scientific American Mind, September/October 2013.


Distributed Study

Another important principle of learning and memory is the distinction between massed and distributed practice. In general, memory is better if practice is spread out over time (and even across locations) than if it is all lumped together at once. So take your time going through each chapter. Don't read it all in one sitting, and don't read it multiple times in a single sitting! Spread the reading out, pace yourself, and things will go better.


Frequent Testing

Another principle is that memory is improved by repeated testing. It's not so much a matter of repeated reading, as it is of repeated testing. In the introductory course at the University of Texas, Austin, every class begins with a quiz.  We don't do that, but the principle still holds. 


Exam Scope

Exam questions always focus on basic concepts and principles, as opposed to trivia such as names and dates. If I should mention a name or date, it's usually because that is relevant to the concept or principle that I'm really interested in.

Then again, if you've taken an introductory psychology course, you should know who Pavlov is, and that he, like other pioneering psychologists, worked in the 19th century. But that's rarely the point of the question.

In general, there are two ways to get the right answer to one of my test questions:

  • You know the correct answer.
  • You reason to the correct answer from some concept or principle that you know.

I never ask intentionally tricky questions. Difficult questions, yes, sometimes. But never tricky questions, such as those that concern exceptions to the rule. That is because I'm interested in basic concepts and principles.


Pre-Exam Reviews

Prior to exams, the instructor will post a "Narrative Review" online.  There will also be a special discussion board devoted to questions about each exam. 

The GSIs are encouraged not to conduct additional review sessions during discussion section. This is because discussion sections are intended to supplement lecture and text material; they aren't intended for review purposes.


Design of Exams

The midterm exams are noncumulative in nature.

  • Midterm 1 includes all the material from the first day of the course up through the last class before the exam.
  • Midterm 2 includes material covered after Midterm 1, up through the last day of class before the exam.
  • The Final Exam includes material covered after Midterm 2, as a noncumulative portion, plus a cumulative portion that covers the entire semester.

As a rule, I try to achieve an even distribution of material presented in lecture and material presented in the text. Of course, there is considerable overlap between the two -- although I emphasize things that the text does not, and the text, of course, covers more material than can possibly be covered in lecture, and goes into considerably more detail.

  • In devising the test, I try to have at least one question explicitly drawn from each of the lectures. So, for example:
    • you can expect at least one question on the structure of the nervous system (covered in the first lecture on the Biological Bases of Mind and Behavior);
    • at least one question on brain anatomy or the function of subcortical structures (covered in the second lecture); and
    • at least one question on functional specialization of the cerebral cortex (covered in the third lecture). There may be more drawn explicitly from the lecture, but there will be at least three.
  • I also try to have one question drawn from each major section of each chapter in the assigned reading. So, for example, with respect to Kalat's Chapter 3 (in the 10th Ed.), you can expect at least one question on each of the following topics:
    • Neurons and Behavior;
    • Drugs and Their Effects;
    • Brain and Behavior;
    • Genetics and Evolutionary Psychology.

Of course, that's a lot of questions, so I may not test on some sections, but precisely because there are a lot of questions, I rarely have more than one question per major section.

And, of course, there's considerable overlap. For example, in my lectures on the Biological Bases of Mind and Behavior, I included material from the following sections of  the text:

  • Neurons and Behavior;
  • Drugs and Their Effects;
  • Brain and Behavior.

Some material on genetics and evolutionary psychology will follow later (in lectures on Development and Learning, respectively).

So, one way or another, the balance of questions is about 50-50 between lectures and text.


Exam Format

Due to enrollment, but even more to a desire for reliable, objective evaluation of student learning, the midterm and final exams in this course are administered in multiple-choice format, and are machine-scored.


Note on Computer-Administered Exams

Midterm examinations are administered online, via the Canvas learning management system. For this reason, students do not need to purchase a red Scantron form for the midterms.

However, UCB policy requires final examinations to be proctored, and the red Scantron form will be used for the final examination administered on campus.

Students who live far from Berkeley may arrange to have their final examination proctored off-campus.  These students need not purchase any Scantron forms.  They will use a special answer sheet which the proctors will send to us when the exam is concluded.  

The midterm exams can be accessed from the "Quizzes" tab on the bCourses website.

  • Log in to bCourses.
  • Click on "Quizzes", and you'll see a link for Midterm Examination 1, as well as another link for Midterm 2.
  • The exam will be available beginning at 9 AM (Pacific Time) on the day of the exam, and will remain active until 9 AM on the following day. You may take the midterm exam any time during that 24-hour period.
  • After you begin the exam, you will have 50 minutes to complete the exam (DSP students will have additional time, depending on their accommodation).
    • To answer a question, click on the radio button corresponding to your choice.
    • Then continue in this manner, scrolling down as necessary, until you have answered all of the questions.
  • Once you have submitted the exam, no further revisions are possible to your answers.
    • At the end of 50 minutes (longer if you have an appropriate DSP accommodation), the exam window will close automatically and your exam will be submitted for you.
  • Note that the exam window will close at precisely 9 AM PT on the day after the exam window opened. So do not expect to be able to begin your midterm exams 23 hours and 59 minutes after the window opens, and then take 50 minutes to complete your exam. The exam window will close after you have been working for exactly 1 minute, and your exam will be automatically submitted before you have had a chance to answer many of the questions.

Remember that you will have exactly 50 minutes to complete the exam (unless you have a DSP accommodation).

Remember to submit the exam when you have finished.
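The exam-window arithmetic above comes down to one rule: your exam is submitted at the earlier of (your start time + 50 minutes) and the 9 AM close of the window. Here is a minimal sketch of that rule; the helper function and the dates are hypothetical illustrations, not part of bCourses:

```python
from datetime import datetime, timedelta

def submission_deadline(start, window_close, minutes=50):
    """Return when the exam will be submitted: 50 minutes after you
    begin, or at the close of the 24-hour window, whichever comes first."""
    return min(start + timedelta(minutes=minutes), window_close)

# Hypothetical window: opens 9 AM one day, closes 9 AM the next.
close = datetime(2016, 7, 12, 9, 0)

# Start at 7:00 AM on the second day: you get the full 50 minutes.
print(submission_deadline(datetime(2016, 7, 12, 7, 0), close))   # 2016-07-12 07:50:00

# Start at 8:59 AM: the window closes after only 1 minute of work.
print(submission_deadline(datetime(2016, 7, 12, 8, 59), close))  # 2016-07-12 09:00:00
```

The moral, as stated above: begin no later than 50 minutes before the window closes if you want the full working time.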

Students will receive preliminary examination results on the Saturday following the exam (perhaps before, depending on how things go). Note that these are preliminary results. We always edit exams retroactively to identify, and correct, any items that may be miskeyed or otherwise "bad" (for details, see elsewhere on this "Exam Information" page in the Lecture Supplements). This will take a day or two, after which we will post a notice of any corrections.

When final exam grades (corrected if necessary) have been posted, you will receive notification via bCourses. 

We will also post feedback on the exam, including a discussion of the correct answer for each question, in the Lecture Supplements. When the feedback is available, we will distribute an announcement to this effect.

If there are any difficulties with the exam, you should contact the UC Online help desk immediately. If there is any system-wide difficulty, such as a meteor shower that wipes out the bCourses server, we will deal with the situation when and if it arises.

Students should purchase only one (1) Scantron form, to be used for the final exam. 

Students who will have their final exam proctored off campus need not purchase any Scantron forms at all.
You will be supplied with a special answer sheet by your proctor.
If possible, bring a laptop computer to your proctored exam, so you can enter your responses online.

The correct Scantron form for the examinations in this course is red, and is labeled on one side as the "ParSCORE Student Enrollment Sheet" (Form No. F-288-PAR-L), and on the other side as the "ParSCORE Test Form".

The correct form looks like this:



Do not purchase the green Scantron form (or any color, other than red). It won't work for us.

These machine-readable answer sheets must be filled in with a #2 pencil. So buy a couple of those, too -- and sharpen them before you come into the exam room, because there's probably not going to be a pencil-sharpener in the exam room.

Be sure to bring at least one "red" Scantron sheet,

plus a supply of sharpened #2 pencils, to the exam.

We will not be able to supply them for you.

Be sure to buy the right form!

Be sure to bring #2 pencils!

Scantron forms cannot be completed in ink.

Here's how to fill out the Scantron form:


On the front side of the Scantron form (labeled "Student Enrollment Sheet"):

  • Fill in your eight-digit UCB Student ID number, beginning in Column 1 and ending in Column 8 (no leading zeroes; leave the rightmost columns blank), and fill in the bubbles accordingly.
  • Then fill in your name, last name first, and fill in the bubbles accordingly.

Do not fill in any other spaces on this side of the Scantron form.

  • Do not fill in the Instructor, Class, or the Hour/Day.
  • Do not fill in your phone number.
  • Do not fill in anything for Code.

On the reverse side of the Scantron form (labeled "Test Form"):

  • Fill in your eight-digit UCB Student ID number again (use just the first 8 columns -- no leading zeroes), and bubble in as before.
  • Fill in Test Form A.

Then fill in your answers beginning in the leftmost columns (#s 1-100).

Do not fill in any other sections on this side of the Scantron form.


  • Do not fill in Exam #.


Previous Exams

I don't intentionally repeat questions from past exams. Nevertheless, all previous exams in my offerings of Psych 1 (at Berkeley) are available in the Lecture Supplements, as a guide to studying. Click on "Exams".

To retrieve a particular exam, simply click on the links in the table below. The answers keyed "correct" are marked with asterisks (*). From Fall Semester 1998 through Summer Session 2014, and in Summer Sessions 2015-2018, all exams were prepared by John Kihlstrom. For Fall and Spring semesters 2014-2018, and all terms after Summer Session 2018, all exams were prepared by Christopher Gade. After Summer 2018, copies of subsequent exams will be available only via the course's learning management system, known as bCourses.

On each of the exams, some of the items were deemed "bad" items, in that they did not perform well psychometrically (according to the standards discussed below), or (more rarely) because they were poorly written or poorly typed. Bad items are identified during the scoring of the exams, and errors are corrected by scoring the items in question correct for all responses. Bad items are indicated on most of the past exams, along with the appropriate corrections. Most exams are accompanied by explanations of the correct answers.

Even if a few bad items are not identified, the past exams are a good guide to the content and style of the exams given in this class.


Revision and Reorganization of the Course and Exams

Since I arrived at Berkeley, both my lectures and the textbook have undergone several revisions. Accordingly, there are some questions on past exams that are not pertinent to the current version of the course. But because concepts and principles change more slowly than picky details, even the very oldest exams are still a good resource.

Note, however, that when I offered the course in the Fall of 1998, there were three midterms and a final. Since the Fall of 2000, there have been only two midterms. Some material covered on Midterm 2 in Fall 1998 is now covered on Midterm 1, and some material covered on Midterm 3 is now covered on Midterm 2.

Moreover, due to the vagaries of scheduling, sometimes the coverage of exams differs from year to year. For example, the material on Sensation and Perception sometimes appears on Midterm 1, and sometimes on Midterm 2; or, the material on Statistics sometimes appears on Midterm 1, and sometimes only on the Final. One way or another, the exams cover the entire course of readings and lectures.

Changes to Final Exam Format Beginning Summer 2015

Beginning in the Summer Session of 2015, the format of the Final Exam will be altered somewhat to include a different kind of multiple-choice question.  These questions will begin with a descriptive passage, perhaps accompanied by a graphic, and will ask you to combine your knowledge of psychology with your skills in scientific reasoning.  Not all Final Exam questions will be presented in this manner, or even most of them, but some of them will be.

This “passage-based” format is inspired by the new Medical College Admissions Test (MCAT), and it emphasizes reasoning from and about scientific evidence, rather than mere fact-retrieval.  Sample questions, derived from the Official Guide to the MCAT, 4th edition (Association of American Medical Colleges, 2015), are presented below as examples.  Correct answers are marked with an asterisk (*) and explained in the analyses.

Sample Passage A.  When rats are placed in a cage with a running wheel and their food intake is restricted to one hour per day, they begin to exercise excessively and reduce their eating significantly.  Under these circumstances, they die of starvation within 10 days.  This phenomenon is called activity-based anorexia (ABA).  Control rats that are on the same restricted food schedule without access to a running wheel adjust their eating and typically gain weight.

ABA is often studied as an animal model of exercise addiction.  Exercise stimulates the release of endorphins.  Kanarek and colleagues (2009) hypothesized that the increase in endorphins plays a role in exercise addiction.  In Study 1, a group of rats was trained to press a lever for the self-administration of heroin.  Afterwards, half of the rats were ABA-induced.  The researchers found that ABA-induced rats reduced their heroin intake.  In Study 2, rats were placed either in cages with running wheels or cages without running wheels.  Half of each group had 1-hour access to food per day, and the other half had 24-hour access to food.  After seven days, these rats were injected with naloxone, a chemical that binds to the endorphin receptors and blocks their functioning.  The rats were then monitored for responses typically associated with opiate withdrawal in laboratory animals.  The figure presents the results of Study 2.

Sample Question A1.  A hot-water tail-flick test measures the time it takes rats to remove their tail when it is dipped in hot water.  Rats housed with a running wheel exhibit a delayed response in the test.  Based on this response, which type of sensory receptors are most likely negatively regulated by exercise?

A. Baroreceptors

B. Nociceptors*

C. Mechanoreceptors

D. Chemoreceptors

Analysis: Nothing in the passage referred to blood pressure, touch, or chemical stimulation.  Endorphins are endogenous opiates, and all opiates alter pain sensations, which arise from stimulation of pain receptors, or nociceptors.

 

Sample Question A2.  According to the hypothesis presented in the passage, which drug is most likely to cause a decline in the wheel-running behavior of ABA-induced rats?

A. Alcohol

B. Cocaine

C. Marijuana

D. Morphine*

 

Analysis: Endorphins are endogenous opiates, in the same class as morphine.  If the rats are getting exogenous opiates in the form of morphine, then they will have no incentive to “work” for endogenous opiates by running in the wheel.

 

Sample Question A3.  Based on the passage, ABA-induced rats are most likely to demonstrate:

A. increased sensitivity to the effects after running in their wheels.

B. increased sensitivity to pain over time.

C. withdrawal symptoms if they are prevented from running in their wheels. *

D. withdrawal symptoms if they are injected with opiates.

 

Analysis: If the rats are addicted to exercise, then they are producing endogenous opiates by running in their wheels.  If they’re prevented from exercising, then they won’t be generating those endogenous opiates anymore, and they’ll experience withdrawal symptoms.

 

Sample Question A4.  If heroin-dependent rats were injected with naloxone, the naloxone would:

A. reduce the reinforcing effects of heroin. *

B. increase the reinforcing effects of heroin.

C. reduce the reinforcing effects of heroin only if the rats had restricted access to food.

D. increase the reinforcing effects of heroin only if the rats had unrestricted access to

food.

 

Analysis: Naloxone is an opiate antagonist, blocking the effects of both endorphins and morphine.

 

 

Sample Question A5.  What type of learning is taking place in Study 1?

A. Instrumental conditioning*

B. Classical conditioning

C. Social learning

D. Observational learning.

 

Analysis: The rats are engaging in voluntary behavior in order to obtain a reward; therefore, we are talking about instrumental or operant conditioning.  And the rats are obtaining the reward directly, not vicariously, so this is not an example of observational learning or any other form of social learning.

 

Sample Passage B.  Research has found that Black men are less likely than White men and Black women to attend healthcare appointments.  In a number of studies, this has been linked to mistrust toward healthcare professionals.  A study examined several factors that might account for the medical mistrust that Black men experience.  Hammond and colleagues (2010) recruited Black male subjects through advertisements placed in Michigan and Georgia.  The researchers collected information on several variables that might predict subjects’ level of medical mistrust.  Because help-seeking behavior might be perceived as incompatible with the traditional male gender identity, researchers surveyed the subjects on their endorsement of male gender roles.  Subjects also completed questionnaires assessing neuroticism, experiences with racism, the nature of recent healthcare experiences, and the degree of medical mistrust they experienced.  The researchers found that recent experiences with racism in any setting, as well as a strong male identity, increased the likelihood of medical mistrust.  Furthermore, recent unpleasant healthcare experiences reduced the frequency of subjects’ seeking healthcare in the future.

These results suggest that if medical mistrust is to be reduced, it is necessary for healthcare professionals to pay close attention to their interactions with Black men.  Related studies showed that when interacting with Black patients, doctors are less likely to assume a patient-centered communication style, which involves focusing on the patients’ needs, concerns, and satisfaction.  Based on these findings, a follow-up experiment was designed to investigate whether the doctor’s communication style caused a difference in the patients’ levels of mistrust.

 

Sample Question B1.  According to the passage, one of the reasons Black men have medical mistrust is because seeking help violates their:

A. gender schema. *

B. gender script.

C. gender conditioning.

D. gender adaptation.

 

Analysis: A schema is an organized knowledge structure representing a person’s beliefs and expectations.  If help-seeking is not included in a person’s idea of masculinity, then help-seeking violates that gender schema.

 

Sample Question B2.  Which operationalization is most appropriate for the independent variable of the proposed follow-up experiment?

A. Level of mistrust, established by an inventory that measures subjects’ medical mistrust.

B. Level of mistrust, established by independent judges who rate subjects’ medical mistrust.

C. Type of communication, established by training a doctor who is also a confederate to use patient-centered communication or a communication style that is not patient-centered. *

D. Type of communication, established by giving doctors in the study an inventory that assesses whether their communication style is patient-centered or not.

 

Analysis: In this experiment, type of communication is the independent variable, and level of mistrust is the dependent variable.  Independent variables are experimentally manipulated – in this case, by training a doctor to use both communication styles, and vary his style from one patient to the next.  If we assessed doctors’ “natural” communication styles as patient-centered or not, and correlated that with their patients’ level of mistrust, we’d be using a correlational design, not an experimental design, and we’d be talking about predictor and criterion variables.

 

Sample Question B3.  The tendency of doctors to use a physician-centered communication style more often with Black patients is an example of:

A. prejudice.

B. stereotyping.

C. discrimination. *

D. ethnocentrism.

 

Analysis: Prejudice is an attitude; a stereotype is a set of beliefs about an outgroup.  Discrimination involves behavior, such as a doctor altering his communication style.  Prejudice, stereotyping, and discrimination may be reflections of ethnocentrism, but for all we know, from the passage, both Black and White doctors engage in this discriminatory behavior.

 

Sample Question B4.  Based on the passage, unpleasant healthcare experiences act as:

A. positive reinforcement.

B. negative reinforcement.

C. positive punishment. *

D. negative punishment.

 

Analysis: Remember how reinforcement is defined, as any stimulus that increases the likelihood of behavior.  In positive reinforcement, the reinforcing stimulus is presented; in negative reinforcement, the reinforcing stimulus is withheld.  The same distinction is made with respect to punishment, which decreases the likelihood of behavior.  In positive punishment, it’s presentation of the stimulus, like electrical shock; in negative punishment, it’s the absence of the stimulus (like going to bed without your supper).

 

Sample Question B5.  Another researcher reviews the study described in the passage and suggests that the medical mistrust experienced by Black men can be explained, in part, by the concept of institutional discrimination.  Which statement best describes that concept?

A. Discrimination is not systematic, except when observed within institutions.

B. If they have a history of unfair treatment, institutions are labeled as discriminatory.

C. When several individuals exhibit prejudiced attitudes within an institution, then that institution will also be discriminatory.

D. As opposed to discriminatory acts committed by individuals, there are institutional policies that disadvantage certain groups and favor others. *

 

Analysis: Institutional discrimination is an institutional problem, a problem of the institution itself, not merely a problem of certain individuals in that institution. 


 

Grading of Exams

Exams are scored twice:

  • once, to identify "bad" items;
  • the second time, to correct for those bad items and generate a final score for each student.

In order to forestall quibbles about which items are "bad", bad items are identified according to two objective, statistical (or psychometric) criteria:

  • First, we look at the proportion of students who got the item correct. In order to be eligible for rescoring, a minority (< 50%) of the class must have gotten the item correct according to the provisional answer key. In psychometrics, this statistic is known as the pass percent.
  • But pass percent is not the only applicable criterion. Some questions are naturally harder than others, and even a very difficult item may be "good", so long as it discriminates between students who do relatively well, and those who do relatively poorly, on the exam as a whole. In psychometrics, this statistic is known as the item-to-total correlation, a measure of what is known as construct validity; it is computed as the correlation coefficient (strictly speaking, the point-biserial correlation coefficient, abbreviated rpb) between the item and the rest of the items on the test.

With upwards of 500 subjects in the sample (i.e., the number of students enrolled in the course), even very small correlations are statistically significant -- that is, greater than we would expect by chance. Accordingly, I have established a cutoff of rpb = .20.

To summarize, any item with both


  • a pass percent < 50% and
  • an rpb < .20

is identified as "bad" and rescored as correct for all responses (so that the exam is still worth 50 or 100 points). In other words, a student who got one or more of the "bad", rescored items wrong according to the provisional scoring key would receive one (1) additional point for each such item.
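For readers who want to see the mechanics, the two-criterion screen and the rescoring rule can be sketched in a few lines of Python. This is an illustration of the logic only, not the actual scoring program; the 0/1 response matrix and the function names are invented for the example.

```python
def point_biserial(x, y):
    """Pearson correlation between a 0/1 item vector x and scores y
    (for a dichotomous x this is the point-biserial correlation)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    sx = (sum((a - mx) ** 2 for a in x) / n) ** 0.5
    sy = (sum((b - my) ** 2 for b in y) / n) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def find_bad_items(responses, max_pass_pct=50.0, min_rpb=0.20):
    """Flag items that fail BOTH criteria: pass percent below 50%, and
    item-to-rest-of-test correlation below .20.
    responses[s][i] = 1 if student s got item i right (provisional key)."""
    n_items = len(responses[0])
    bad = []
    for i in range(n_items):
        item = [row[i] for row in responses]
        rest = [sum(row) - row[i] for row in responses]  # total minus this item
        pass_pct = 100.0 * sum(item) / len(item)
        if pass_pct < max_pass_pct and point_biserial(item, rest) < min_rpb:
            bad.append(i)
    return bad

def rescore(responses, bad_items):
    """Rescore each bad item as correct for everyone: students who missed
    it gain a point; students who had it right keep the point they have."""
    return [sum(row) + sum(1 for i in bad_items if row[i] == 0)
            for row in responses]
```

In a toy four-student, three-item example where only the weakest students get item 3 right, that item fails both criteria and everyone who missed it picks up a point, exactly as described above.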

I am sometimes asked: Why do you do it this way? Why not just add a point to every student's score for each bad item identified by the item analysis? A little reflection will reveal the answer. Imagine a 50-item test with one bad item, and a student who got a score of 50 on that test, because s/he just happened to get that bad item right. That student would now get a score of 51 on a 50-item test. Students who happen to get the bad item "right" already have credit for that item. We give the other students credit because it's not fair to penalize students for getting a bad item wrong. The procedure I employ makes sure that everyone gets the credit they deserve.

The result of this process is that the mean score on one of my exams typically ranges from 65-70% correct. If this strikes you as low, you should first consider that, in psychometric terms, the best tests have average scores of 50% -- this allows plenty of room above and below the mean to allow individual differences in performance to express themselves. You should also consider the mean scores in introductory courses in math and natural science. But most of all, you should consider that, despite the typical mean score, the majority of students in Psych 1 end up with grades in the A and B range. This is because discussion section performance is typically excellent.

You have to work hard to get an A in this course -- and why shouldn't that be the case? But because there is no curving of grades according to a predetermined distribution, everybody gets the grade that their individual performance deserves.

The exams in this course, scored as described above, have excellent psychometric properties. Their reliability coefficients typically lie in the high .70s and low .80s.

  • For Fall 2010 (N = 490 students), Midterm 1 correlated r = .71 with Midterm 2, and r = .73 with the Final; Midterm 2 correlated r = .76 with the Final.
  • For Summer Session 2010 and 2011 combined (N = 188 students), Midterm 1 correlated r =.77 with Midterm 2, and r = .77 with the Final; Midterm 2 correlated r = .82 with the Final.


Exam Feedback

An initial scoring key is posted to the course website as soon as possible after each exam has concluded, so that students can check their answers.

More detailed feedback is posted as soon as possible after each exam has been rescored. This "final" feedback includes the correct answer, item pass percent, item-to-total rpb, and a paragraph or so indicating why the right answer is right and the wrong answers wrong. This is intended to enhance the value of past exams as a study guide.

GSIs are encouraged not to address questions about particular exam items. All the feedback you need is provided in the instructor's feedback, on the course website.


Exam Grades

Exam grades are posted to the course website as soon as possible after the two-phase scoring of each exam has concluded. This can take a couple of days, but is often completed much more quickly than that.

Students who believe that their Scantrons have been improperly scored should contact the instructor first, to determine whether the discrepancy is the result of a simple data-entry error on his part. If the discrepancy cannot be resolved in this way, students will be asked to consult with their GSIs, who will arrange for the Scantron to be rescored by hand.


Students with Disabilities

Students registered with the Disabled Students Program are entitled to certain accommodations with respect to testing. Such students should consult with the instructor in advance of the exam to make appropriate arrangements.


Assignment of Letter Grades

Assignment of grades is, in some ways, the most problematic aspect of any course.

At some institutions, and in some individual departments and courses, there is a forced curve such that, for example, the average grade is set at C (no kidding!). Thus, for example, scores one standard deviation above the mean might get some kind of B, while scores two standard deviations above the mean might be required for an A. But this means that, no matter how good overall class performance is, someone has to get a C, and someone has to fail. And that doesn't seem fair.

As an alternative, the traditional academic criteria for letter grades are as follows:

  • A+, 100% of available points (based on a total of 340 points, this would be 340 points);
  • A ,    93%   (316 points)
  • A- ,   90%   (306 points)
  • B+,   87%   (295 points)
  • B ,    83%   (282 points)
  • B- ,   80%   (272 points)
  • C+,   77%   (261 points)
  • C ,    73%   (248 points)
  • C- ,   70%   (238 points)
  • D+,   67%   (227 points)
  • D ,    63%   (214 points)
  • D- ,   60%   (204 points).

And, continuing down the scale:

  • F+,  57%   (193 points)
  • F ,   53%   (180 points)
  • F-,   50%   (170 points).

Standards like this mean that, in principle, everyone in the class could get an A. That rarely happens, but it could happen, and standards like this seem infinitely more fair than a forced normal curve. So that's the standard I follow, with some provisos.

  • I don't give grades of A+, even to students with perfect scores (it just ratchets up grade competition, and contributes to grade inflation).
  • My standard for a C-, the minimum passing grade, is lower than traditional: 50% + 1 (171 points).
  • And so is my standard for a D-, 25% + 1 (86 points).
  • So in order to get an F, you've got to accumulate less than 86 points.
  • I try not to contribute to grade inflation.

In any event, exam scores contribute only about two-thirds of a student's grade in Psych 1. The rest is made up of discussion section scores, credits from the ZAPS Active Discovery Learning and Research Participation Experience, and Participation. A student who did about average on my exams, scoring 70% (nominally, a C-), can still get something B-ish as a final grade by accumulating all discussion section and ZAPS credits.

In addition, I try to conform my grade distributions to that of the campus as a whole. Psychology is both a social science and a biological science; so I average the figures together for those two divisions of the College of Letters and Sciences, counting only lower-division courses.  According to the most recent data available to me, for the 2015-2016 calendar year (Fall and Spring semesters combined):

  • As, about 31-46%, depending on how you count;
  • Bs, about 20-35%;
  • Cs, about 6-15%;
  • the rest, Ds (1-2%) F (0-1%).

What do I mean "depending on how you count"? It turns out that UC Berkeley records letter grade distributions in two ways, as percentages and as counts. When it calculates percentages, it does so based on the number of letter grades. But not all students take a course for a letter grade. So, for example, of students taking lower-division courses for letter grades in 2015-2016, 46% got "some kind of A". But only about half of all courses were taken for a letter grade; many were taken on a Pass/No Pass or Satisfactory/Unsatisfactory basis (and a very few courses received grades of Incomplete, In Progress, or Unknown). P/NP and S/U are excellent options, as they make it easier for students to take intellectual risks. At the same time, experience indicates that most students who opt for P/NP are going to get grades in the B or C range -- perhaps because they've gotten in over their heads, or because the P/NP option gives them license to blow the course off. The result is that, while 46% of all letter grades were "some kind of A", As comprised only 31% of all grades. Put another way, the P/NP option probably inflates the percentage of As and Bs. If you assume that students who received a grade of P or S would have received a C+ if the course had been taken for a letter grade, the percentage of As and Bs combined would drop to less than 40%.

For students taking the course on a "Pass/Fail" basis, the minimum for a passing grade is C-. 

What happens when the "raw" distribution of grades is harsher than this -- when, for example, the "industry standard" applied above yields only 25% As? At that point, the cutpoints have to be loosened somewhat. But by how much?

  • One possibility would be to drop the cutpoints by just enough to achieve the target distribution.
  • Another possibility, which has always seemed less arbitrary to me, is simply to move the cutpoints down by one grade level. So, for example:
    • Students who would get an A- according to the "industry standard" described above would now get an A.
    • Students who would get a B+ according to the "industry standard" will now get an A-.
    • And so on all the way down the scale -- so that, for example, students who would ordinarily get a C+ will get a B-.
  • So that's what I do: if I have to adjust the distribution of grades, I do so by dropping down to the next category.
    • This can produce a big drop, resulting in even more As than my target.
  • In the event that the drop-down results in fewer As than the target, one option is to drop every letter grade down another notch, so that students who would normally get a B would get an A-. But I don't do that if it would lead me to overshoot my target -- because, as I said before, I don't want to contribute to grade inflation. 
    • And I won't drop down more than two notches -- otherwise, an "A" grade would lose its meaning.
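The notch-down rule can be sketched as follows. The grade scale, the 35% target, and the function names are invented for the illustration; this is one reading of the procedure described above, not the actual grading program.

```python
# Grade scale from lowest to highest; the F+/F- refinements are omitted
# for simplicity.
SCALE = ["F", "D-", "D", "D+", "C-", "C", "C+", "B-", "B", "B+", "A-", "A"]

def fraction_of_as(grades):
    """Fraction of the class receiving 'some kind of A'."""
    return sum(g in ("A", "A-") for g in grades) / len(grades)

def notch_up(grades):
    """Move every grade one step up the scale (i.e., move the cutpoints
    down one grade level), capped at A."""
    return [SCALE[min(SCALE.index(g) + 1, len(SCALE) - 1)] for g in grades]

def adjust(grades, target=0.35):
    """Apply the notch-down rule: take one notch if As fall short of the
    target (even if that overshoots); take a second notch only if As
    still fall short AND the second notch would not overshoot; never
    take a third."""
    if fraction_of_as(grades) >= target:
        return grades
    grades = notch_up(grades)
    if fraction_of_as(grades) < target:
        bumped = notch_up(grades)
        if fraction_of_as(bumped) <= target:
            grades = bumped
    return grades
```

For example, a 100-student class with 30 B+s, 40 Bs, and 30 C+s under the industry standard has 0% As; one notch yields 30% As (B+ becomes A-), and the second notch is declined because it would jump to 70% As, well past the target.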

With respect to grade inflation, the issue is not easy to resolve: what is the proper proportion of As in academically elite institutions like Berkeley? At one point, 50% of Harvard students were getting As in their courses, and 80% were graduating with honors: Harvard is Harvard, but even at Harvard there was a general feeling that those figures were too high. In the final analysis, I try to have my As closer to 35% than to 45% -- just my little contribution to stemming grade inflation.

  • Moreover, given the model of the normal curve, there should probably be more Bs than As, and more Bs than Cs.
  • But, if everybody got 90% on my exams, they'd all get As -- and, they'd deserve them.

 An Exchange on Grading Policy

At the end of Summer Session 2014, a student in Psych W1 wrote me the following message:

As I was looking at ScheduleBuilder, it appears that for classes you have taught in the past, of the 2,011 grades submitted, 881 students, or 44%, received an A or an A-. 

I looked at where I stood on the two midterms and final. On the first midterm, I was 0.54 SD above average, putting me in the 70th percentile. On the second, my z-score was 0.3, therefore in the 67th percentile. On the final, my z-score was 0.49, in the 69th percentile. Putting this together, for exams, although the distribution is not perfectly normal, I was in the 68.75th percentile, which means around 31.25% of students performed better.

According to this, I am having trouble deducing how my grade culminated in a B+, which I understand is the average grade. And in undergraduate classes I have taken, usually at least 25-30% of students receive an A or an A-.

Please advise on this matter as I hope it can still be discussed.

My Reply:

Thanks for your message.  I've asked the technical staff to leave bCourses open for a while, so I got it through both my bCourses inbox and my regular bMail account. 

I have to tell you, first, how pleased I was to get your message.  About this time of the term teachers can expect students to write asking for "just one more point" so they can cross the threshold to a higher letter grade; or asking if there is some extra-credit work they can perform, even though the course is over; or, maybe if they can complete a Discussion posting or ZAPS exercise that they had neglected during the regular term.  You didn't do that.  You actually grappled with the statistics, showing that you got something out of Module 3!  In 40 years of college teaching, this is a first for me.

I've never looked at my entry on ScheduleBuilder, so I don't know what that 44% figure is based on.  Historically, half of my undergraduate teaching has been in upper-division courses taken almost exclusively by Psychology and Cognitive Science majors, where you would expect that most students would do well.  But, in any event, my grading policy is clear from the Syllabus, and detailed further in the Exam Information page on the course website.  I guarantee "some kind of A" to every student who accumulates at least 90% of the available points, and "some kind of B" to those who accumulate at least 80% of the available points.  And, in order to encourage students to take intellectual risks, I guarantee "some kind of C" to every student who accumulates more than 50% of the available points. 

So, in the first place, I don't assign letter grades based on percentile scores.  I assign letter grades based on numerical scores -- i.e., points accumulated.  If I did assign letter grades based on percentile scores, and followed a common interpretation of "grading on a curve", your estimated percentile score of 68.75 would amount to a letter grade of C+.

(Actually, though, this was not your true percentile score.  In terms of total points, taking into account credits for Discussion postings, ZAPS, and Participation, your score of 266 points (out of 325) ranked 45th out of 178 students in the class, for a percentile ranking of 74.  Again, following a common interpretation of "grading on a curve", your actual percentile score of 74 would amount to a solid C.)

If, that is, we curved based on percentiles.  A better curve (if I were going to curve a class) would be based on the standard deviation.  I'd give some sort of C to students whose scores were 0.5 standard deviations on either side of the mean; some sort of B to students whose scores fell between 0.5 and 1.5 standard deviations above the mean; and some sort of A to students whose scores were more than 1.5 standard deviations above the mean.  In terms of total points, the average for the entire class was 240, with a standard deviation of 35.  That would put your total score of 266 about 0.75 SD above the mean, probably in the B- range.
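The arithmetic in that hypothetical curve is easy to check; the following sketch uses the band boundaries just described (the function name is invented for the illustration):

```python
def sd_curve_band(score, mean=240.0, sd=35.0):
    """Band under a hypothetical SD-based curve: C within 0.5 SD of the
    mean, B between 0.5 and 1.5 SD above, A beyond 1.5 SD above."""
    z = (score - mean) / sd
    if z > 1.5:
        return "A range"
    if z > 0.5:
        return "B range"
    if z >= -0.5:
        return "C range"
    return "below C range"

# A score of 266 is (266 - 240) / 35, or about 0.74 SD above the mean.
```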

But these points are moot, because I don't grade on a curve to begin with.  My policy permits every student in class to get an A.

Here's what I do instead (there are more details in the Exam Information page).

First, I apply what I call the "industry standard": 93% = A, 90% = A-, 87% = B+, etc.  Your 266 points, or 81.85%, would be right in the middle of the range for a B-. 

Then, I compare my distribution of letter grades to local norms -- in this case, lower-division UCB courses in the biological and social sciences.  The last time I looked, based on the 2013 calendar year, about 52% of UCB students in this category received some kind of A, and another 32% received some kind of B.  My distribution was lower than that.  Applying the industry standard, fewer than 1% of students got an A, and another 2.4% got an A-.   So, following the procedure explained in the Exam Information page, I dropped the cutpoints down a bit -- two notches in fact, picking up some students who, according to the industry standard, got a B+.  This is how your B- turned into a B+.

Even after this adjustment, only 23.6% of the class got "some kind of A".  This is, of course, still somewhat lower than local norms.  Accordingly, I suppose that I could have notched the cutpoints down even further.  But, in order to achieve 44% As, I would have had to ratchet the cutpoint for an A- down to 73% (the threshold for a C, according to industry standards).  I chose not to do this for a couple of reasons.

First, an A has to mean something, and it has to mean more than 73%.  Accordingly, I resist dropping the cutpoints down any more than two (2) notches. 

Second, my Department has become increasingly concerned about the problem of grade inflation. Berkeley is Berkeley, and Berkeley students are special, but when more than 50% of students receive A grades, it's reasonable to think that, perhaps, faculty are not grading as rigorously as they might.  Harvard is Harvard, too, but the faculty there have expressed unease that more than 50% of students have been getting As (by which I think they mean solid As) in their courses, and more than 80% have been graduating with honors.  And Princeton is Princeton, too, but the faculty there has been so concerned with grade inflation that they experimented with a cap (since lifted) that allowed only 35% of students to get As. 

The question remains: Why were there relatively few As in Psych W1?

The difference is probably accounted for by the exigencies of Summer School.  The last time I taught Psych 1 on campus, in Fall 2010, 35% of the students received As (A or A-).  But for the last five summers, teaching the course online, the average has been about 25% As.  Same course material, same requirements, same kind of exams, same grading policy.

One difference is that the SS course lacks formal discussion sections, where the GSIs can answer questions about difficult material.  On the other hand, I set up the Queries and Comments discussion board precisely for that purpose.  Moreover, the SS course has almost 50% more lectures than the on-campus version, so that probably evens things out.

More important: Remember that in Summer Session we are compressing a 14-week course into 8 weeks, which doesn't really leave much time for students to digest the material.  This is especially true if students are taking more than one Summer Session course at a time, or working at the same time as they're taking even a single course.  There's just not a lot of time to study. I warn students about this in the syllabus.

Probably equally important is a more subtle factor, which is that a very large proportion of SS students take Psych 1 on a pass-fail basis, in order to meet distribution requirements for other majors.  In the regular semester, by contrast, the vast majority of students taking the course are intending to major in psychology, and so are motivated to do especially well.  That factor depresses the overall proportion of As -- although the guarantees specified in my syllabus mean that, no matter the performance of the class overall, any student who achieves 90% of the available points will get some kind of A.

This page last revised 08/11/2018.