

Training for Science, Training for Practice

 

John F. Kihlstrom

University of California, Berkeley

and

Institute for the Study of Healthcare Organizations & Transactions

 

Note: This paper was presented at a symposium, "Scientific Foundations of Clinical Psychology at the Beginning of the 21st Century -- Victories, Setbacks, and Challenges for the Future" sponsored by Division 12 (Clinical Psychology), at the annual meeting of the American Psychological Association, Honolulu, July 31, 2004.

Links to other papers that discuss clinical training issues will be found in the reference section.  Appended to this paper are comments on certain of its proposals, and my replies to them.

See also: Kihlstrom, J.F. (2005). What qualifies as evidence of effective practice? Scientific research. In J.C. Norcross, L.E. Beutler, & R.F. Levant (Eds.), Evidence-based practices in mental health: Debate and dialogue on the fundamental questions (pp. 23-31, 43-45). Washington, D.C.: American Psychological Association. Link to prepublication draft.

 

I greatly appreciate the invitation to join this symposium on the future of scientific clinical psychology. Although I have never been on the core faculty of a clinical training program, I do have clinical training, including an internship, and I have tried through my research and teaching to help build bridges between clinical psychology and the other subfields of the discipline, as well as between clinical psychology as a science and clinical psychology as a profession -- bridges that should run both ways (Kihlstrom & Canter Kihlstrom, 1998).

Let me begin with a little personal history. My own clinical training was in the classic scientist-practitioner model, with a decided emphasis toward science and away from practice. I applied to Penn because I was interested in doing hypnosis research with Martin Orne, and because I was also interested in personality -- I wrote on my personal statement that I wanted to quantify the concepts of existentialist theories of personality, leading Burt Rosner, the chair of Penn's Psychology Department at the time, to tell me that they had accepted me just to see what I looked like -- so I sent my application to the Program of Research Training in Personality and Experimental Psychopathology. I was very happy, the next spring, to get a thick envelope from Philadelphia -- but also dismayed to find that Penn's offer of admission was signed by Julius Wishner as head of the Clinical Training Program. I immediately called Julie, who ran both programs, and told him I didn't want to be in the Clinical Training Program; in that charmingly gruff way that Julie had, he replied "Don't worry: you're not", and hung up the phone. It turned out that Julie was prevaricating a little -- I was in a clinical training program, it just wasn't called that -- but at least it wasn't oriented toward preparing students for careers in clinical practice. Nevertheless, the students in that program did everything that clinical students did -- some proseminars, a weekly research seminar, a course on assessment, a course on treatment, a little practicum. It was enough to prepare us for our internships. Having completed our dissertation research before leaving on our internships, we wrote our dissertations at night and on weekends, and then we were out.

It's been said that everybody favors the training model in which they themselves were trained, and I suppose that I'm no exception. For all these years, I have been a staunch advocate of the scientist-practitioner model, and dismissive of such alternatives as the practitioner-scholar model. To be honest with you, I'm not even all that thrilled with the clinical scientist model, which is probably the closest to the version of the scientist-practitioner model implemented at Penn at the time I was there -- not just because the very term "clinical science" strikes me as scientistic (Noam Chomsky once said that you know a science is in trouble when it has to call itself a science; he had political science in mind, but the criticism might apply equally well to cognitive science and neuroscience), but because it's uninformative about what the person actually does: the person isn't a clinical scientist -- he or she is a clinical psychologist. Also, the clinical-scientist model, like the practitioner-scholar model, seems to dig a ditch, rather than build a bridge, between science and practice. However, it now strikes me that the scientist-practitioner model has outlived its usefulness, and creates more problems than it solves. In my view, it is now time to train some students for science, and other students for practice -- even if this training must take place in quite different programs, and even if it must take place in quite different institutions.

The reason for my change of heart is that I have become increasingly convinced that the traditional scientist-practitioner model works to the disadvantage of both types of students. For example, the student training for an academic career in teaching and research must spend precious time preparing for a career of clinical practice that he or she will never pursue, and in which he or she has no particular interest. It's one thing for budding clinical researchers to take a couple of courses in assessment and treatment methods, see a couple of patients during practicum, and then get full-time exposure to "the living material of the field" during their internships before taking up posts in universities and medical centers. It's another thing entirely for students who are really interested in clinical research to spend as much as one or even two thousand hours during their graduate studies acquiring and polishing practical skills to make themselves more attractive in the competition for internship placements.


At Berkeley, for example, students in our "clinical science" program must complete, in addition to a two-semester clinical proseminar (no harm in that) and one or two courses in methods and statistics (everybody does that), two or three courses in assessment, two courses in intervention, and the equivalent of more than six courses of practicum -- not to mention three to four "breadth" courses covering the biological, cognitive-affective, social, and individual bases of behavior. That's a total of 13 to 15 courses -- almost two full years' worth.

This doesn't count an unspecified number of courses on diversity issues, and the requirement that each clinical student take a minor outside clinical. Nor does it take into account the fact that clinical "students... are encouraged to take as many as possible of the major courses and seminars offered by the core Clinical Science Faculty": at last count, there were approximately 15 such courses in the General Catalog. All this while their research-oriented peers in cognitive, social, and developmental psychology, having completed their proseminar and methods requirements in their first year or so of graduate study, are happily working in their advisors' laboratories developing their individual research portfolios, and taking the occasional seminar offered by their advisor or someone in a closely related area of research. Research, including research-related courses and seminars, should have pride of place in the training of these research-oriented students. Instead, both research training and research itself are subordinated to the requirements of training for practice.




Note added 2005: As of 2004-2005, the UCB area in Developmental Psychology has reduced its proseminars from 4 to 2, further increasing the disparity between Clinical Science students and their Developmental counterparts. 

Note added 2011: The Clinical Science area subsequently revised its curriculum, easing the course requirements somewhat.  As of the 2008-2009 academic year, students are required to take one proseminar course, one or two courses in statistics, two courses in clinical assessment, one course in intervention, two specialty clinics, and four breadth courses -- in addition to attending staff meetings when working in the Psychology Clinic.  Diversity, ethnic minority, ethics, and professional issues can now be addressed  in the context of these courses.  That is the equivalent of 11-12 courses (depending on whether the student takes a second statistics course).  This is certainly a reduction from the 13-15 courses specified in the previous requirement, but still well above the norms for the other graduate areas.  And while students are no longer "encouraged to take as many as possible of the major courses and seminars offered by the core Clinical Science Faculty", they are still "encouraged to take as many elective courses as their schedules will allow", as well as "additional courses in diversity and ethnic minority issues".   

Students heading for careers in clinical practice face a different set of issues. Because they take the same courses as the other graduate students do, they have to sit through "breadth" and methods courses that are expressly intended for students preparing for scholarly careers in various subdisciplines of the field. I don't doubt that budding practitioners would benefit from a concise overview of cognitive psychology (at the very least, such a course might have spared us the excesses of the recovered-memory movement). But that's not what they usually get, because for the most part clinical students take breadth courses that are designed not for them, but rather for graduate students in other subfields of psychology. Instead of a concise overview of the cognitive psychology that every clinician ought to know, they get 15 weeks' worth of lectures comparing prototype and exemplar theories of conceptual structure, or connectionist and rule-based theories of the acquisition of the past tense of verbs. These courses are not listed because they are designed and taught for the purpose of providing budding clinicians -- practitioners or scientists -- with a breadth of exposure to psychology. They are listed simply because they are available.

Something similar happens with the methods and statistics requirement. Again, clinical practitioners-in-training are subjected to exercises in counterbalancing and Latin squares, discussions of the comparative advantages and disadvantages of the Bonferroni and Tukey multiple-range tests, principal components versus principal factor analysis, and how to calculate the proper degrees of freedom in discriminant function analysis. Clinical researchers need this kind of instruction, but it is simply lost on those headed for clinical practice -- not because they aren't smart enough, or even because they're not particularly interested (who in their right mind is?), but because once out of graduate school they will never have occasion to apply this knowledge in their entire careers. Just as budding schizophrenia researchers don't have to know how to do psychotherapy, budding psychotherapists do not need to be taught how to do statistics. Instead, they need to be taught how to consume statistics -- and, in particular, to respect statistical evidence as the core of the scientific base of clinical practice.

Part of the problem here, frankly, is the APA accreditation scheme, which continues to be based, at least implicitly, on the scientist-practitioner model. Because it doesn't distinguish between research-oriented and practice-oriented students, it imposes the same requirements on both. The result is that research-oriented students are forced to take courses that they don't need, and practice-oriented students are forced to take courses that aren't appropriate for them. Part of the solution, I think, is for research-oriented, "clinical science" programs to simply drop their accreditation. That will permit them to develop their own curricula, just like their colleagues in cognitive, developmental, and social psychology, free of outside infringements on academic freedom. It will also make them less attractive to applicants who are not really interested in research, but who say they are in order to gain admission to high-prestige programs. Without the pressures of accreditation, clinical research training programs would look more like their counterparts in cognitive, developmental, and social psychology.

Note added 2020: In 2007, an alternative accreditation scheme, known as the Psychological Clinical Science Accreditation System, was established under the auspices of the Association for Psychological Science, the Academy of Psychological Clinical Science, and the Council of Graduate Departments of Psychology, intended to better meet the needs of "Clinical Science" programs that were closely oriented to the scientific basis of clinical practice.

After years of maintaining dual accreditation with both APA and PCSAS (thus doubling the expenses associated with accreditation), UC Berkeley took the radical step of terminating its APA accreditation (technically, it will not renew it when its current accreditation expires).

Part of the problem, too, is the changing nature of the clinical internship. When internships were originally proposed, by David Shakow (Shakow, 1938), they were intended to be opportunities for graduate students interested in clinical problems to get out of the classroom, and out of the laboratory, and become acquainted with "the living material of the field". Most of the student's clinical experience was to take place on the internship itself. Actually, I've always thought that every graduate student ought to do some sort of clinical internship, for just this reason: students of visual perception could spend some time in an optometry clinic, and students of memory could spend some time with Alzheimer's patients -- but I digress. But that's not the case anymore, because internships are increasingly construed as sources of revenue rather than vehicles of training, and so pre-doctoral students must devote increasing amounts of time preparing to provide reimbursable services with a minimum of costly supervision. The solution to this problem is for research-oriented programs to develop their own, in-house or "captive" internships, to give their students the kind of broad and deep encounter with "the living material of the field" that is consistent with the research-training goals of their PhD programs.

Actually, I have long believed that every clinical psychology program, whether science-oriented or practice-oriented, should have its own in-house or "captive" internship program (Kihlstrom & Canter Kihlstrom, 1998). Such an arrangement would avoid the situation that exists now, and has existed at least since the 1990s, where there are more students graduating from clinical training programs than there are internship slots. For example, data from the Association of Psychology Postdoctoral and Internship Centers (APPIC) indicate that, from 1986 to 1997, the number of internship applicants unplaced three weeks after Uniform Notification Day increased more than sevenfold, from 65 to 469, while the number of internship vacancies decreased by more than 50%, from 79 to 34.

Things improved somewhat after 1998, possibly because of the addition of more unaccredited internship slots, but still, from 1999 through 2004, unmatched applicants outnumbered unfilled vacancies by a ratio of 2:1. In 2004, there were 611 unmatched applicants, and 304 positions remained unfilled.

 

 

Periodic Updates  

The trend described in 2004 has continued, and worsened, as documented clearly in the Match Statistics posted annually to the website of the Association of Psychology Postdoctoral and Internship Centers (APPIC).

The match process is improving, but mostly because applicants from accredited doctoral training programs must settle for unaccredited internships.

If every accredited program had its own internship, every graduate of every accredited program would be guaranteed an accredited internship slot -- more like the match system for medical internships, where the number of slots actually exceeds the number of American graduates. But equally important, from a pedagogical point of view, if every program had its own internship, the pressure on interns to provide reimbursable services would be relieved, and the training goals of internships could be more closely articulated with the training goals -- towards science or towards practice -- of the student's predoctoral program.

As a footnote, let me say that I also believe it is a mistake for clinical training programs to "farm out" practicum experiences to community practitioners, who do not necessarily share the scientific values of the doctoral training program. It's a prescription for disaster when students learn in the classroom that the Rorschach isn't worth the paper it's printed on (Wood, Nezworski, Lilienfeld, & Garb, 2003), and then they go out into the world to score Rorschachs for someone who thinks that they are the epitome of clinical assessment. Medical students do their clerkship rotations in academic health-science centers; similarly, clinical training programs should keep tight control of their students' externship and practicum experiences.


So what would training look like if we separated training for practice from training for science? Frankly, it would look a lot like the training provided in medical schools, which offer quite different curricula for research-oriented students headed for the PhD, and practice-oriented students headed for the MD. Berkeley doesn't have a medical school, but there is a very nice one just across the bay at the University of California, San Francisco, which also offers PhDs in some 17 health-science fields from Biochemistry and Molecular Biology to Sociology, and there is no overlap between the two sets of programs. For example, medical students take an "Essential Core" of 9 interdisciplinary courses, beginning with an 8-week review of basic biological and behavioral science followed by 8-week blocks devoted to the various organ systems, cancer, infection and immunity, and life-span human development. These block courses run parallel to a foundational course in patient care that runs for two years, covering clinical skills, professional issues, and clinical reasoning; this course then leads to a longitudinal clinical experience in the third year and advanced rotations in the fourth. Medical students get some biochemistry, but it is not the same basic biochemistry course taken by biochemistry PhD students; and they get some neuroscience, but not the same basic neuroscience course taken by neuroscience PhD students.

That's essentially the vision I have for the future of training in clinical psychology. Training for clinical research should look more like research training in the rest of psychology, and training for clinical practice should look more like medical school. If the proposal on the practice side looks like a PsyD program, that's intentional. I always thought that the PsyD was a good idea, even if I also thought it was usually a poorly implemented one (Yu et al., 1997). The PsyD recognized that there is an inherent difference between training for science and training for practice. If some PsyD programs were established to enable students (and faculty) to escape from science, and I am sure that they were, the solution is not to abandon the PsyD format, but to reform it so that future generations of practitioners are trained to respect science as the base of practice, and of the status and autonomy of their profession as well -- just as current and future generations of physicians are.

There is no reason that training for science and training for practice cannot proceed on parallel tracks within the same department -- though with different curricular requirements. Schools of public health and social work have no problems with these divided functions -- nor, for that matter, do schools of law and business. At Berkeley we have a School of Optometry that trains both researchers in vision science and professional optometrists, as well as a College of Chemistry that houses separate departments of chemistry and chemical engineering. In both places everybody seems to get along just fine, but the two curricula are very, very different. Housing the two programs under the same institutional roof would allow current and future scientists and practitioners to benefit from contact with each other, but it would require some adjustments: clinical faculty would probably have to expand, to cover all the various aspects of clinical practice; and nonclinical faculty would have to agree to mount basic-science courses that are geared to the needs of future clinical practitioners.

But there are lots of alternative arrangements possible. Universities might develop "schools" of psychology, like Berkeley's School of Optometry, which would train both scientists and practitioners with an expanded faculty. Or, training for practice could be removed to academic health-science campuses like UCSF, which already house schools of medicine, dentistry, and nursing; this would bring practice-oriented students into closer contact with patients, but it would also require expansion of the basic behavioral science faculty in these institutions -- not incidentally, creating jobs for the products of the research training programs.

Training for practice might also be removed to free-standing schools of professional psychology. As with the PsyD, there is nothing wrong with such schools in principle. UCSF is for all intents and purposes a free-standing medical school, and it is not inferior to Harvard because it lacks departments of classics and political science. In fact, UCSF is the equal to Harvard (or its superior) precisely because, in addition to training medical students, it hosts a world-class cadre of basic scientists who do basic and applied research relevant to health care. This is a feature that most free-standing professional schools lack -- and which may contribute to their relatively poor performance (Cherry, Messenger, & Jacoby, 2000; Maher, 1999; Yu et al., 1997). If professional schools are going to train practitioners properly, they are going to have to invest a lot more in their basic-science and research infrastructure than most of them seem inclined to do at present.

Now, I'm told that this room holds 113 people, so I bet that there are at least 113 objections to this proposal. I have answers for all of them, but time permits me to respond to only four or five.

First, I seem to be advocating the segregation of science and practice within clinical psychology, when clinical practice has already veered far from its scientific base, and managed care is putting a greater emphasis on evidence-based practices (Kihlstrom & Canter Kihlstrom, 1998). But I'm not: science and practice are not like science and religion, Stephen Jay Gould's "Non-Overlapping Magisteria" that have nothing to say to each other (Gould, 2003). Science needs input from the real world of practice, and practice must be placed on a firm scientific base. But science and practice are different, and they deserve curricula that respect these differences. The fact that MDs don't get the same coursework and research experience as biochemistry PhDs doesn't make medicine any less science-based. Medicine is "scientific" not because physicians are scientists, but because medical practice is based on scientific evidence.

Second, my view of clinical practice may be outmoded. I have heard it said that the future of clinical psychology is not in the direct delivery of clinical services, but rather in the design and evaluation of assessment and treatment programs that will be delivered by other professionals. If so, we might have to invent an entirely different practice-training model, but it still wouldn't be the model of PhD research training. Moreover, we're probably going to need a lot fewer clinical psychologists than we're training at present, and if so, someone should inform the thousands of undergraduates who will be applying for positions in clinical training programs next year, and the thousands more each year after that, with the expectation that their careers will be devoted to testing and therapy of individuals and groups.

Third (and maybe fourth, depending on how you count), I seem to be going against history, abandoning the scientist-practitioner model at precisely the time when more and more physicians are training to do research, and basic researchers in fields like cognitive neuroscience are becoming more interested in studying patients. As a matter of fact, that's why I enrolled in an experimental psychopathology program to begin with -- because long ago I realized that psychopathology offered a unique perspective on normal mental life. But you don't have to be trained to practice to take pathology seriously, or to do research with patients. There's nothing that cognitive neuroscience does that physiological psychology didn't do before it, and I have yet to see a piece of medical research that couldn't have been done just as well by a "mere" PhD. My graduate advisor was an MD/PhD, and I have two MDs in my own department. Martin was a wonderful researcher, and so are my Berkeley colleagues; but such exceptions merely test the rule, and I'm generally an advocate of the division of labor. To take people who are trained to treat patients, and then divert them into research, simply diminishes the human resources available for healthcare (Kihlstrom, 2000). Scientists who want to study patients should collaborate with the practitioners who treat them, and practitioners who want to do research should collaborate with the scientists who know how.

Fourth (or fifth, depending on how you counted), I've abandoned the thing that I, as a devout generalist within psychology, ought to prize most: the breadth of exposure to the entire field of psychology that clinical psychologists must get, whether they are training for science or training for practice, by virtue of the APA accreditation standards.  Now, I freely admit that the one thing I really like about the standards is that they contain a breadth requirement.  And I think that a breadth requirement should be preserved in training for practice, just as there is a broad basic-science requirement in medical school.  But as much as I don't like it, the fact is that psychology as a science is going the way of specialization, if not super-specialization.  We have students in visual perception who don't know anything about memory, students studying working memory who don't know anything about autobiographical memory, students in psycholinguistics who are so focused on the processing of individual words that they don't know what a sentence is (thanks to my former Arizona colleague Ken Forster for that last example).  It's a shame, indeed it is, and I think it works to the detriment of psychology as a science that we produce graduate students who are so narrowly specialized that cognitive students know nothing about social psychology and social students know nothing about development.  It won't kill budding clinical researchers to take four breadth courses -- what kills budding clinical researchers is training for practice (and, for that matter, vice-versa!).  But in the final analysis, responsibility for breadth of training should fall on departments as a whole, not on clinical psychology alone.  

Frankly, I'm sorry to see the scientist-practitioner model go, because it was a lovely idea. But it was also a rhetorical device constructed, in large part, to aid an upstart clinical psychology's professional competition with the psychiatric establishment. Psychiatrists had at least the cosmetic advantage of medical training (although as far as I can tell not many of them ever practiced that much medicine), and the scientist-practitioner model seemed to say, "We're doctors too, but we're not just doctors, we're also scientists".  (The practitioner-scholar model adopts the same conceit -- "We're not just practitioners, we're also scholars": Funny, but physicians don't have to defend their professional status by referring to themselves as scholars.)  Neither the scientists nor the practitioners need to do that anymore.  Clinical scientists have enough to learn without learning practice as well, and clinical practitioners have enough to learn without acquiring research skills as well -- especially in the new era of managed care and evidence-based healthcare. I'm simply proposing that our training programs reflect that fact of life. People who want to treat patients should be trained to treat patients, with the best methods that science provides, and people who want to do research should be trained to do research, so that our clinical knowledge and practices constantly improve.

 

Note Added on the PCSAS Accreditation Proposal

In 2008, Timothy Baker, Richard McFall, and Varda Shoham published a critique of the current "APA" system for accrediting clinical training in Psychological Science in the Public Interest, a journal of the rival Association for Psychological Science. The article proposed a new accreditation system, known as the Psychological Clinical Science Accreditation System, geared toward the interests of "clinical science" training programs.

Baker, T. B., McFall, R. M., & Shoham, V. (2008). Current status and future prospects of clinical psychology: Toward a scientifically principled approach to mental and behavioral health care. Psychological Science in the Public Interest, 9(2), 67-103.

The critique generated a furor within the practitioner community, and no little controversy among academics engaged in clinical training, resulting in quite a bit of coverage in the professional and popular press.  Because I am a researcher with clinical training and strong clinical interests, and had expressed skepticism about the proposal (though not the critique) over the listserv maintained by the Society for a Science of Clinical Psychology (of which I am a longtime loyal and disciplined member), I was interviewed via e-mail about the proposal for an article in the APA Monitor on Psychology ("Disputing a Slam Against Psychology" by Michael Price, December 2009).

Due to the exigencies of editing, my remarks didn't quite get across the way they might have, and I was slightly misquoted:

[T]he solution might not be another level of accreditation, says John Kihlstrom, PhD, a cognitive science professor at the University of California, Berkeley.  He says that although he believes there are problems with the current PsyD model, PCSAS probably wouldn't solve the perceived problems, and could even hurt psychology's reputation.  It's analogous to training in medicine, he says, where physicians and medical researchers receive quite different training but the training institutions aren't accredited differently.

"Medicine doesn't have two competing accreditation systems", Kihlstrom says.  "The establishment of a second accreditation system can only give the appearance of a field in disarray."

In fact, of course, PhD programs and medical schools are accredited differently.  Medical schools are accredited by the American Medical Association, just as clinical psychology programs are accredited by the American Psychological Association.  But PhD programs in biology, like PhD programs that are geared to training research psychologists, aren't accredited at all -- except insofar as their larger institutions, such as the University of California, Berkeley, are accredited by their regional board.

In what follows I reprint my interview in full for the record.

 

Do you think there's a discrepancy between the content of the APS journal article and how it's been portrayed by media reports of it?

No, from what I have read, the media reports have it pretty much right.  The article is very critical of the current state of both clinical practice and clinical training, and by extension the "APA" accreditation process (I put "APA" in quotes because I know that, technically, APA doesn't do the accreditation -- but, as a matter of historical fact, APA devised the current accreditation system, put it in place, and maintains it; similarly, I understand that APS isn't proposing to do accreditation either, but we'll use "APS" accreditation as a shorthand for what is being proposed in the article).

For far too long, research psychologists, who work in academic settings, have treated clinical practice, and clinical training, with benign neglect. What we got were trends like the recovered-memory movement, which has been an utter disaster for psychotherapy as a profession. That's all over now. Clinical psychology owes its professional status, its autonomy from psychiatry, and its eligibility for third-party payments, to the assumption that what clinical psychologists do -- in terms of assessment and treatment -- is supported by a firm scientific base. Now, an increasing number of research psychologists, including those who are engaged in clinical research, have become actively involved in making sure that this assumption is valid. The article is not the opening salvo in this struggle, but it is an important milestone -- rather than avoiding the problems that beset clinical psychology as a profession, the scientific community now is confronting them head-on. This is good: it will make clinical practice more effective, and the science of psychology more relevant.

 

Do you think there's a need for a new accreditation system? I spoke to one of the co-authors, Richard McFall, and he's advocating a kind of dual accreditation system for institutions who want to show that they're teaching students competent research techniques. Do you think a dual system is a good idea?

I'm on record (on the SSCP listserv) as opposing the new accreditation system, but on mostly practical grounds. Medicine doesn't have two competing accreditation systems, and the establishment of a second accreditation system can only give the appearance of a field in disarray, which can only give aid and comfort to those who would like to diminish the status of clinical psychology as an independent mental-health profession. 

Accreditation isn't necessary for those institutions who want to teach competent research techniques. We don't accredit research training programs in biological psychology, or cognitive or clinical psychology, or developmental or social or personality psychology. The only reason for accreditation is to ensure that professional practitioners are competent -- meaning that they adhere to practices (in assessment, treatment, and prevention) that are based on science -- either those that have actual empirical evidence for their efficacy, or those which are reasonably grounded in established psychological theory. What should happen is that the APA accreditation system should be strengthened, so as to ensure that prospective clinical practitioners are taught scientific psychology, and a respect for evidence-based practices, the same way that medical students are taught the basics of anatomy, physiology, biochemistry, pharmacology, etc., as well as respect for clinical research that identifies which treatments work and which do not. In my view, APA accreditation has failed to do this -- certainly at the level of accrediting doctoral training programs, and possibly at the level of accrediting internships as well (I have particular views about the clinical internship, which I've also expressed in a paper).

If the new accreditation system prompts APA to strengthen its own accreditation standards, then the APS initiative will have done a good thing. But there's a paradox here, which is that the APS accreditation system is directed toward doctoral programs that are oriented toward training clinical researchers, not clinical practitioners. And, as I've indicated before, research training needs no additional accreditation. Therefore, I have to agree with Scott Lilienfeld that the new accreditation system, because it's not directed toward those programs that focus on the training of clinical practitioners, won't hit its intended target.

 

Some of the people in the practice community I've spoken with feel like the authors painted practitioners with too broad of a brush. Is the science-disconnect between practitioners and evidence really that bad, and if so, how much at fault is APA's accreditation system?

As far as I can tell, from my vantage point as a researcher with an active interest in clinical practice, and who has observed the training of clinical practitioners for almost 40 years, the authors have pretty much got it right. It might be too extreme to say that there is a science-practice war, but there is definitely a science-practice disconnect. All too often, research psychologists treat clinical practitioners with benign neglect, and clinical students treat scientific psychology as something that has to be tolerated as a price for getting a professional credential. That's one reason, I think, why free-standing professional schools of clinical psychology, offering a PsyD degree instead of a PhD, have become so popular. They don't typically have non-clinical scientists on their faculties, and their students get only minimal exposure to scientific psychology. So, as a group, they actually fail in this respect. And APA has been complicit in this, by accrediting so many free-standing programs whose scientific grounding is so weak. The proposed APS system reflects the pent-up frustration of the scientific community with the lax standards of the APA system.

 

You mention that you think that clinical practice training and clinical science training should be split. Would an accreditation system like the one APS has proposed help solve some of the problems you mention?

Just as background, I'm a product of the scientist-practitioner training model. My PhD is from the University of Pennsylvania, which established the first university-based psychological clinic in the United States, if not the world, and which -- at least when I was there, along with such noted clinical researchers as Susan Mineka, Lynn Abramson, and Lauren Alloy -- was a real model for the kind of clinical science programs that we see today. But I've now come to the conclusion that the scientist-practitioner model was a failure. Medicine doesn't train scientist-practitioners. It trains practicing physicians who understand and respect science. And, these days, physicians' understanding of and respect for science is reinforced by managed care (broadly construed), which simply says to physicians: If there isn't evidence that it works, we're not going to pay for it. As a result, physicians have gone back and put even their tried and true practices -- like using a stethoscope to detect cardiovascular problems -- to empirical test. For a long time, clinical psychologists were invulnerable to these pressures, because they relied on out-of-pocket payments from their patients, and because the scientific community generally left them alone (that's the benign neglect). But now they are increasingly coming under the same kinds of pressure. Parity for mental-health services is a two-edged sword: psychotherapists become eligible for third-party payments, but at the same time they have to abide by a new set of rules. And there's no getting away from this: if physicians and surgeons can't resist the demand for evidence-based practices, what in the world makes psychotherapists think that they'll succeed in doing so? 

The solution is not to give everyone PhD-level research training. Medical students don't get that. The solution is to make sure that clinical students are thoroughly grounded in scientific psychology, and are oriented around evidence-based practices. That's the kind of training that medical students get, in anatomy and physiology and biochemistry and pharmacology, and outcomes research, and those who want to be psychotherapists should get the same kind of training in psychology -- the science of mind and behavior. I now believe that it is a mistake to try to do training for clinical practice in the context of the usual sort of PhD program -- whether it's Berkeley's, or Yale's, or Arizona's, or Wisconsin's, or Penn's. These programs are just not big enough. It's like medical training before the Flexner Report. Training of clinical psychologists needs to look a lot more like medical school, and it needs to be done in an environment that looks more like medical school. That's my story, and I'm sticking with it.

 

The APS journal article takes a pretty hard swipe against the PsyD program, and you mention in your piece that you favor the idea of a PsyD program in theory. Is there a way to reconcile the need for specific training in practice with the scientific community's apprehension that the science gets distorted or goes unused in practice?

I always thought that the PsyD was a good idea, in principle. It's directly analogous to the MD. We don't make physicians take PhDs in biology or biochemistry. Why should we make psychotherapists take PhDs in psychology? We don't make physicians complete a doctoral dissertation. Why should we impose this requirement on would-be psychotherapists? And, frankly, we don't expect physicians to be researchers. My graduate advisor, Martin Orne, was an MD-PhD; I loved him dearly, and he taught me almost everything; and I have colleagues here in my Psychology Department at Berkeley who are MDs, whose scientific accomplishments I respect very much. But there's nothing that any of them ever did, as researchers, that they couldn't have done as PhDs. I now believe that there should be a strict separation between training for research, which should be done in the context of more-or-less traditional PhD programs, and training for practice, which should be done in programs that offer something like a PsyD, analogous to the MD, in training programs that look a lot like medical school (but with a focus on psychology, not biology).

But that doesn't mean that PsyD programs don't need reform. All the evidence suggests that, although of course there are exceptions, both at the programmatic and the individual level, the typical PsyD program is very weak scientifically. They don't have research psychologists on their full-time, tenured staff (as medical schools do). And, it appears, they don't do enough to inculcate in their students a respect for scientific evidence, and adherence to evidence-based practices. When I was editor of Psychological Science, I published a couple of articles that showed, on empirical grounds, that PsyD programs (as a whole) were doing a pretty bad job of training their students to respect scientific psychology. For example, Yu and his colleagues (1997) showed that graduates of PsyD programs did more poorly on the Examination for Professional Practice in Psychology than did graduates of more traditional, scientist-practitioner, PhD programs. And the late Brendan Maher (1999), who was my colleague at Harvard, showed that there were big differences between those professional training programs that were situated in a traditional university context and those that were not. This is not an argument for doing clinical training in the context of a traditional PhD program. And it's not an argument against free-standing professional schools. UC San Francisco is, essentially, a free-standing medical school, but its lack of a traditional Psychology department -- or, for that matter, departments of Classics or Physics or Political Science -- doesn't prevent it from being the world's greatest medical school. UCSF is the world's greatest medical school because it puts science first. Free-standing PsyD programs can achieve the same kinds of goals, but they've got to put science first. The APS accreditation system, as proposed, won't force them to do this, because it's explicitly not directed at them. The APA accreditation system, precisely because it's oriented toward clinical practice, and not clinical research, has the right and responsibility to hold these programs' feet to the fire. That APA doesn't do this is a shame. It's maybe even a sin.

On Requiring an In-House Internship as a Condition for Accreditation

 

My proposal that all accredited clinical training programs be required, as a condition of accreditation, to offer an in-house internship with enough space to accommodate all of their own students who are in good standing, has engendered considerable discussion on the listserv of the Society for a Science of Clinical Psychology.  Here are the comments, accumulated over a period of years, and my responses to them.

 

Correspondent #1:

Thank you for your comments, John. They bring up the question of "what's an internship". They also raise the often forgotten principle of clinical training as a continuum, from practicum to internship and beyond. In an ideal world, there should be such continuity, but that doesn't mean that all programs should offer all students an in-house internship training. We certainly don't want to empower accreditation bodies to impose such a one-size-fits-all game-changer on our clinical science training. That said, it would be great if we could hear of success stories by programs who offer some of their students an in-house, high-quality internship training. I fully agree that having a degree requirement that programs cannot guarantee is untenable.

Thanks for getting this discussion going.

Response: I think that is exactly the function of accreditation -- to identify minimum standards that all training programs in a particular domain must meet. That's what high-school and college accreditation does. That's what accreditation does for law, medical, dental, nursing, and engineering schools. I don't see why professional training in clinical psychology should be any different. If you want to train clinical practitioners, then you need to be accredited. I think we're now at the point where we need to impose this particular standard. If a program can't meet it, then it simply shouldn't be accredited.

Why should we impose this new requirement? The reasons are both pedagogical and practical.

On the pedagogical side, we already have several different models for clinical training -- the old, classic, scientist-practitioner model is only one of them. An in-house internship -- or, perhaps, a consortium of internship settings that agree to accept each others' students -- would make sure that the internship is tailored to the philosophy of the student's training program. There is no reason why a student from a science-oriented PhD program should have to administer Rorschachs in order to get into a more traditional internship.

But the chief argument, as before, is practical, and motivated by the problems with The Match. Altogether too many students from accredited programs are failing to match to an accredited internship, for the simple reason that there are not enough accredited internship slots to accommodate all applicants. This is simply unfair, and perhaps unethical -- we (collectively) train students, who spend their time and in many cases their money (including student loans) training to enter a profession which, through no fault of their own, denies them the final step toward entry into their career. If we're going to train clinical practitioners, and the internship is deemed necessary to complete that training, then we can't take their time and money for 4, 5, or 6 years and then say "Geez, that's tough -- Good luck!".

 

Correspondent 2:

I am sure that all on this listserv share the sentiment that the internship system is broken and that the students are the victims. certainly in the short term requiring that programs provide internships for all students makes sense. but as my new mayor has pointed out previously, a crisis is too important to waste (minus the expletives). this is a time for us to truly fix what is broken by reconsidering the premise for a required full year internship. for those who have attended academy meetings you know that we have this discussion most every year and I am sure we will again this year. Dick McFall is leading the new accreditation system and described heroic efforts at the state level to change the requirements for licensure (in this case to be accredited by a body other than CoA). I am not the one to describe this but simply I want to say that Tim Fowles and Ryan Beveridge of the University of Delaware are leading an academy task force that is considering new models for clinical intervention training. there are some exciting ideas out there and now is the time for us to engage in real system reform. change is hard but the status quo is unacceptable.

Response: I don't see any practical alternative to an internship. In fact, the only alternative I can envision is one in which students accomplish the equivalent of an internship during those predoctoral years when they would ordinarily be engaged in classroom learning, graded practicum experiences with assessment and intervention, and research. That would, effectively, extend the period of doctoral training by at least a year.

Like it or not, the internship requirement is based on the medical model of psychopathology -- which, as I've pointed out elsewhere, is not based on an assumption of biological etiology, but rather on medical training and practice. We've argued for an internship year of intensive supervised clinical experience precisely because that's what is done, and done successfully, in medicine. It will take a lot of rethinking, and a lot of argument, to now turn around and say to state licensing boards "Oh, never mind". Especially when it's patently obvious that any proposal to abandon the internship is based less on pedagogical considerations than on practical ones.

 

Correspondent 3:

We tried an optional in-house internship some time ago and it failed, mostly because it took too much faculty teaching time and because we couldn't make enough money from the clients our interns were seeing to pay the interns. We don't take insurance in our training clinic, because most insurance won't reimburse if the student is the therapist. Also, to meet APA internship accreditation standards, we had to take a "diversity of interns", meaning folks from other programs, who weren't always prepared. Of course we didn't need to try to meet APA standards, but that would have made us less attractive to applicants. I think one answer is closer to your recommendation that programs not be accredited if they can't guarantee internships - that answer is to not accredit programs that take too many students. We have too many students in the pipeline.

Response: Somehow medicine managed to solve this problem. But I'd only point out that your in-house internship didn't fail because it was "in-house". It failed because it couldn't support itself financially -- just as many other internship programs have failed to thrive for financial reasons that have more to do with how healthcare -- and particularly psychological services -- are paid for than anything else. The funding problem is critical. But my basic point is completely congruent with yours: programs should not be accredited if they can't guarantee internships. In my view, the most reliable way to offer such a guarantee is to have an in-house internship -- though, as indicated earlier, a consortium of internships might do the trick. But somehow the accreditation system -- and this goes for APA and APCS accreditation alike -- has to guarantee that there is an internship slot available for each and every one of the students who are in good standing in each and every one of their accredited programs.

 

Correspondent 4:

I've long advocated for in-house internships - but as a former DCT and former internship director, I am keenly aware that the reinforcement contingencies in academia are usually not consistent with intensive, high-quality internship training. It can be done with creative planning of the integration of research and clinical functions, as well as didactics - but it takes "selling" within the department, the college and the university - and most of all and the hardest to come by, a supportive and selfless faculty willing to take time from active course planning and research to do a solid job of clinical supervision. The other points raised by Susan are also quite valid, but I think can be worked around with enough time and thought. Unless things change quickly in ways I consider doubtful, foregoing accreditation in order to "force" an in-house internship would hurt students and the academic program. It is rare that a good, clinical scientist or scientist practitioner doctoral program can pull all of those factors together. If it happens, it's beautiful.

Response: We can't just let it happen, and be happy when it does. We have to make it happen.

 

I concur with the earlier point, that forcing students to take an in house internship may be counterproductive.

Response: I am less concerned with forcing students to take an in-house internship than I am in requiring that training programs provide one. If every training program guaranteed an internship slot to each of its students, then there would be at least one internship slot for every applicant in the market, and that would create the possibility that students could move around. But at the very least, every student in good standing would be assured that, by virtue of being enrolled in an accredited program, s/he could complete the minimal requirements to pursue his or her chosen professional career.

 

Final response here - requiring 100% internship placement from a doctoral program is totally unrealistic. Perhaps none of you has dealt with a student who decided to do the intern app process his or her way rather than yours, or who have simply not taken the time and effort to do the applications carefully - but I've seen many from good schools who did not get placed/matched due to sloppy or simply naive approaches to the process. Slamming a doctoral program for one such unwise student is unacceptable. Perhaps there is a minimum proportion of successful matches that might be required, but 100% ain't it.

Response: Not only is it totally realistic, it is also totally necessary, as our recent experiment demonstrates. We impose lots of other requirements on clinical training programs. Why not require that they actually complete clinical training? A student who wishes to go outside the house for an internship is free to apply anywhere s/he wants. If the application fails, there will still be a spot available in his or her home program's in-house internship. And with an in-house internship, a student wouldn't have to apply at all -- the slot would be there waiting for them at the appropriate time.

 

I'd love to see university departments develop cooperative internship arrangements with local clinical facilities adhering to a practice model consistent with the university's training model - a provision of internship opportunities for students in exchange for faculty time donated to supervision and clinical research support in the clinical facility. Still difficult, but probably a bit more generally feasible than internship programs totally within a traditional university department.

Response: But this is just one way of organizing an in-house internship. Every college campus has a health center with a mental health unit. There are inpatient and outpatient psychiatric facilities in most general hospitals. There are lots of places, everywhere, where an in-house internship could be put together.

 

Another interesting possibility is the "professional school" as articulated 40 years ago by George Albee - NOT what we currently call "professional schools", but schools within universities that are similar to schools of other professions, like medicine and dentistry, where clinical training, basic education, and research are all valued and reinforced. There are big problems with this model as well, but it is probably also more feasible than in-house psych-dept-run internships.

Response: In the 2004 talk referenced above, I specifically propose that clinical training be separated from research training, along the lines of medical school, and along the lines of a PsyD program. But that is a topic for another discussion.

 

Just some rambling thoughts on a very important topic. I would love to see a bunch of DCTs put together stellar in-house internship programs to prove my misgivings unfounded.

Response: Yes, it would be nice if the APA would actually do something proactively, instead of merely wringing its hands over this problem.

 

Correspondent 5:

Why oh why can we not just go the medicine route and move the internship post-doc? Then the students who wanted to practice or sharpen their clinical skills or learn a clinical specialty could go do that. Is that just too simple?

Response: This is not the solution to the problem, because there are still fewer internship slots available than there are qualified applicants. There are virtues to making the internship post-doctoral, not the least of which is that MDs would be required to address psychologists as "Doctor". But unless every graduate of an accredited clinical psychology program is guaranteed an internship slot, the current problem will persist, and a significant number of students will have invested their time and money and effort training for a profession that they are prevented from practicing because they cannot complete the minimum requirements for licensure.

 

Correspondent 6:

That may well happen, and it will be interesting if it does - it would certainly take the pressure off the academic programs by eliminating the situation in which they rely on others to provide a requirement that the university mandates.

Response: Yes, but the pressure at present is almost entirely on the students. It would actually increase the pressure on the academic programs, because they would have to insure that their qualified graduates could complete their education, and be eligible for licensure.

 

Potential problems?

1. The academic department loses all control over where a student chooses to put the capstone on her/his training.

Response: But departments don't have this control now. Students apply to a bunch of places, and they're accepted to some, or none, and the department has no control over either the application or its result. My proposal actually maximizes the control of the academic department, because it requires that the department have its own internship, right there, available for its students at the appropriate time.

 

2. I suspect that the imbalance between those who want internships and available slots would continue to be a problem.

Response: No. Because every clinical training program would be required to host an in-house internship of sufficient size to accommodate all of its own graduates, there would be, by definition, no imbalance. If a CTP could offer a bigger internship, and attract applicants from outside, all the better. But that is a different issue entirely.

 

3. Any semblance of quality control over internships by academia or professional associations would likely evaporate. I would expect the abuse of "interns" used for free or cheap labor to skyrocket.

Response: Absolutely not. In my scheme, the clinical training programs would have complete quality control, precisely because it's their own program. Interns, whether medical or psychological, are already used as cheap labor, and that is the cost to them of completing their professional training. It's the way the professional world works, and they had better get used to it.

 

Correspondent 7:

I'd like to echo Bob's comment below from the perspective of someone who didn't match in the 2006 match. My university (USC) had had a 100% placement rate for about 10 years, and then, my year, two of us didn't match. It was a shock to us and our program. We both had applied to selective, mostly child programs (or integrated adult/child programs) even though our training was probably about 2/3 adult and 1/3 child, because we both wanted to head toward child psych careers (albeit research careers). We both had applied to schools outside of the LA metro area, wanting a change of pace. Previously, people had done more selective adult/other-focus internships (e.g., health, neuropsych) outside of the area, or had done child programs within the metro area, so we were the first to try a different approach. What it did was open the program's eyes to the need for more child and adolescent placements/externships and opportunities in our own training center. They did a really nice job with this in the years to come, and several of our students have since matched to very strong child programs outside of LA (UC Davis, UWashington). I'd hate for any program to be penalized when it can clearly use a non-match as feedback with which to improve its training.

Response: Your program would have continued to have a 100% placement rate if it had its own in-house internship. There may be a problem with an "adult-oriented" training program that hosts an "adult-oriented" in-house internship that isn't great for a student whose interests are in children, and maybe that is an issue that can be addressed by something like a post-internship residency in a specialty. That is another question, about the scope of training programs. All medical students, and I believe all medical interns, go through a rotation system in which they are systematically exposed to the practice of medicine, from anesthesia to urology.

 

Correspondent 8:

CUDCP took a careful look at the match imbalance at our annual meeting a couple years ago with the intention of coming up with solutions. One of the presentations we heard put to rest the myth that medical training has some tight controls that match residencies with student admissions. That apparently is not the case at all. In fact, their match imbalance is worse than ours. See the following figure:

Response: Somehow I lost the figure, which I'd be very interested in examining. But are we talking about residencies or internships? It's possible that I have overstated the situation with respect to medicine, and it is also possible that medicine has created some imbalance by holding out slots for foreign medical graduates. But the fact is that the AMA regulates the number and size of medical schools very carefully, with an eye to the market. As far as I can tell, every graduate of an onshore medical school is guaranteed an internship somewhere. If they aren't, they should be, and for the same reason: medical students have invested a great deal of time, money, and effort in acquiring a professional education; they have a right to be able to complete the minimal requirements for licensure. Otherwise, that investment has been wasted.

 

Correspondent 9:

I think this is an important discussion and there are two additional issues I'd like to raise.

1. The actual imbalance is much, much worse than APPIC statistics show, simply because the vast majority of students in most of the very large California PsyD programs never enter the APPIC match. As just one example (and I apologize for not having the time to do this systematically), I just looked up the internship statistics for the PsyD program at Alliant in LA. According to information on their website, in 2010 they had 136 students apply for internship, but ONLY 8 applied to APPIC internships (and two of them matched). That is, only 2 out of 136 obtained an APPIC internship, but I believe APPIC statistics would show a 25% match rate (2 of 8), implying a much better, if still dismal, state of affairs than the <2% of the cohort who actually go on to APPIC internships. In case you're wondering, the vast majority of the rest went to CAPIC internships (which is a whole other story). That is, the troubling statistics reported by APPIC are not the whole story, and the story gets worse the more you look into it. I don't mean to single out any one campus; I encourage anyone with the time to look these data up on each program's web page.

Response: I suspect that the free-standing PsyD programs are causing a lot of this problem, including the "invisible" portion of it that you mention -- though not all of it, when even students from long-established, highly prestigious programs fail to match. But notice what CAPIC has done. This is, essentially, an alliance of free-standing clinical psychology programs (Alliant, Argosy, John F. Kennedy, etc.) that have banded together, for the good of their students (and their own bottom line), to enable their students to get some semblance of an internship. At least, to some extent, they're trying to serve their students, instead of simply taking their money and leaving them to twist slowly, slowly in the wind.
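To make the arithmetic above concrete, here is a purely illustrative sketch in Python, using the figures the correspondent quotes (which I have not verified independently); it simply shows how far the match rate APPIC would report can diverge from the effective rate for an entire cohort:

    # Illustrative only: figures are those quoted above for one program's 2010 cohort.
    cohort_size = 136         # students who applied for internship that year
    appic_applicants = 8      # of those, how many entered the APPIC match
    appic_matched = 2         # how many matched through APPIC

    reported_rate = appic_matched / appic_applicants   # what APPIC statistics would reflect
    effective_rate = appic_matched / cohort_size       # share of the whole cohort

    print(f"Reported APPIC match rate: {reported_rate:.0%}")       # 25%
    print(f"Effective rate for the cohort: {effective_rate:.1%}")  # 1.5%

The point is simply that the denominator matters: published match statistics count only the students who enter the match, not the students who never do.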

 

2. In my opinion, the problem with eliminating the predoctoral internship is that it will have the de facto effect of reducing schools' responsibility to place their students in internships. Those leaders of professional schools that I have spoken to would love to get rid of the internship requirement, because it is a huge pain for them, makes them look bad, and is bad business. Pushing the internship requirement out to be postdoctoral just takes pressure off schools that do a poor job of placing their students. Thus, there is, in my opinion, an "unholy alliance" between some of our most prominent research universities and some of the largest professional schools toward eliminating predoctoral internships. The motives of the two stakeholders, however, are very different.

Response: I'm actually in favor of a post-doctoral internship, because (1) the student's academic work, including the dissertation, will be complete; and (2) MDs will have to address psychologists as "Doctor". But I'm not concerned with whether the internship is pre- or post-doctoral. The internship is a necessary milestone on the way to licensure and professional practice, and for good reasons. Therefore, in my view clinical training programs should make sure that one is available for each of their students. And the only way to insure this is to make it a requirement of accreditation.

It is not good enough just to throw students on the mercies of the market. There is time enough for that after they have completed their training, and go in search of employment.

 

Correspondent #10:

Has anyone brought up the huge number of clinical students from programs that have extremely large classes relative to the number of faculty who can provide high-level education and training (kindly note my use of the word "education" in addition to "training")? The argument was made in the mid-1960s that there was a serious shortage of doctoral-level clinical/counseling/school psychologists graduating from (true) Boulder-model programs. Was this true back then? Is it true now?

Response:  I'm sure that it was true then, and despite the proliferation of free-standing clinical psychology programs it may still be true. The wars in Afghanistan and Iraq, for example, are creating mental-health problems that will require large amounts of professional resources for years to come. And it was to address this need, as well as to create "power in numbers" both within and outside the APA, that "the Dirty Dozen" and others pushed for the expansion of clinical training programs beyond "the Establishment". Unfortunately, nobody seems to have thought about the output end -- that is, whether all those new clinical students would actually be able to complete the internship required to enter on a career in professional practice. That, combined with various economic factors (shrinking budgets), and, I suspect, hostility on the part of many psychiatrists who were ultimately responsible for many hospital-based internships, led to the current crisis. It was probably even more complicated than this, and the historians of clinical psychology will have to straighten it all out. But the vast and rapid proliferation of clinical psychology training programs, including free-standing programs, couldn't have helped. We can't do anything about the economy, or psychiatry, but we can do something about internships, which is to think creatively about how to solve the problem ourselves. But there is no incentive to engage in this creative thinking unless programs are required to insure an internship slot for every one of their students. And the only way to do this is to mount an in-house internship.

 

Correspondent #11:

[Correspondent #10] has a very good point. APA has, as an institution, held the position for several decades that the path to power for psychology as a field is through a very large N. This position can be found articulated by the "Dirty Dozen" in many of their writings, including their self-indulgent group autobiography. It has become ingrained in the attitudes of, and is a presumption in many of the pronouncements from, APA central office even when it is not explicitly articulated.

This attitude is at the core of almost all the poor decisions that APA has made and continues, from an entrenched "we can't be wrong" position, to defend - despite overwhelming evidence to the contrary.

There are too many APA-approved clinical programs; that is the fundamental problem. Too many want to work on the internship side of the equation and pretend that the other end is just fine. Too many programs, too many students, too few genuine faculty at all too many of the programs, no genuine experimental exposure at far too many of the "new model" programs, etc., etc., etc.

Response: I'm sure that the proliferation of clinical training programs is a major cause of this problem. But I also suspect that economic and guild factors are major contributors as well, and they will make solving the internship problem even more difficult. Requiring each clinical training program to mount an in-house internship of sufficient size to accommodate all of its own students would, as I have indicated earlier, solve the problem -- if not overnight, then by the end of the usual cycle for renewal of accreditation. Programs that could not provide an opportunity for their own students to actually complete their professional training would lose their accreditation -- and presumably their students. But even if they were able to continue to attract new students, at least APA would have done its part to address the problem.

And when I say "APA", I mean whatever "independent" organization actually accredits clinical training programs. And, just to be clear, I would impose exactly the same requirement on the new PCSAS organization (though longtime listmembers will know that I am opposed to the establishment of PCSAS on other grounds).

 

APA will NEVER address this problem, because it would mean a) admitting an error, b) cutting out some of its core constituents and ever-recycled insiders, and c) going against the core misbelief that power comes from sheer numbers alone. It's an example, though a loose one, of the need for a paradigm shift - the "old timers" will never get it, or admit that the evidence counters their cherished beliefs.

Response: But my proposal is so easy in this respect. All APA has to do is create a new requirement, and give currently accredited programs time -- until their next renewal -- to meet it.

 

At the very least there should be honesty in what is said to prospective students. The problem with that is that our younger would-be colleagues often mistakenly believe that they can't be in the 45% not placed and that it will happen to the other guy.

Response: Yes, all accredited programs should be required, as of tomorrow, to post their success rates in the APPIC match. They can post whatever else they wish, in terms of placement into unaccredited internships, but they must be required to post information with respect to APPIC. Or, might I suggest, APPIC could do this itself.

 

Correspondent #12:

[This] solution has real merit IMO. The simple requirement that you cannot be accredited unless you can guarantee the availability of program requirements. Also, for people who would like to cap the number of students entering the big enrollment places, this would crush that number, or it would force them to spend some of those big tuition dollars on internship slots.

I will just go hold my breath waiting for the APA to do the right thing.

I seem to recall years ago a prominent member of this list saying that the APA always does the right thing.....after it has tried everything else. (though I think this view may be too generous)

I would personally be totally OK with either [this] solution or with eliminating the predoc internship.

Response: In my view, the internship is a critical element in clinical training -- one intensive year in which the student can devote him- or herself to actual clinical work. I'm agnostic about whether it should be predoctoral or postdoctoral. A postdoctoral internship has the advantage that MDs would have to call psychologists "doctor". In addition, although students embarking on a "predoctoral" internship are supposed to have completed all the other requirements for their degree, we know that large numbers of them will actually write their dissertations while on their internship (if I am the only person who did this, please don't tell anyone). The predoctoral internship would have the advantage of making it even clearer that it is the student's home institution that is responsible for insuring that every student gets an internship, because completing an internship would be a requirement of the degree, and institutions are legally and ethically obligated to insure that their students can complete degree requirements in a reasonable period of time. But shifting from a predoctoral to a postdoctoral internship shouldn't just be a tactic for absolving the student's department of its responsibilities. Predoctoral or postdoctoral, every accredited program should be required to insure that its students can complete an accredited internship, and requiring an "in-house" internship is the most efficient way of doing this.

 

Correspondent #13: 

Re match issues, I looked at items 3, 7, and 10 re match stats, and the major point that hits you in item 10 is that "size matters," as noted by one of the women from Bridesmaids last night at the Academy Awards, and, as also noted, not always for the better. The larger the program, the worse the match, and very large programs had terrible match rates. When Sayette et al. presented their data in the 2010 article in Clinical Psychology: Science and Practice and several of us provided commentary, it was very clear that the major differences came not from the comparison of Academy and non-Academy programs but from the specialized, non-university-based programs. The internship match rates in that comparison were as follows: APCS: 93.2%, non-APCS: 90.6%, and Specialized: 61.5%!!!!

Response: It's clear that the "specialized", non-university programs do most poorly in terms of internship placement, and I don't doubt that the proliferation of such programs is a major contributor to the crisis. But even a 90-93% placement rate leaves some students out in the cold. There's a principle here, which is that programs that undertake professional training for clinical practice have an obligation to insure that their graduates can complete the requirements for professional practice, including the internship. And the only way to insure this is to require all programs to mount an in-house internship that covers all of their students.

 

Correspondent #14:

It used to be the case that APA-approved doctoral programs could "accredit" non-APA-approved internships. Inter alia, this allowed for innovative placements. The responsibility lay with the program. Then some time in the 80s APA extended its hegemony over internships and then postdoc programs as well, if I am not mistaken. And licensing boards and the VA followed suit. Even some academic programs specify "APA-approved internship" as a requirement. This "mission creep" continues. I like to think that the Academy will provide alternatives that make sense to serious science-minded clinical psychologists.

Response: I'm not opposed to APA extending its hegemony to internships as well as training programs, because I see the internship as an essential component of any training program -- no less than a basic course in assessment or treatment. But any clinical training program that requires its students to complete an "APA-approved internship" as a degree requirement has an obligation to insure that all of its students in good standing can meet that requirement. And the surest way to do this is to have an in-house internship. APA hegemony shouldn't extend to post-doctoral programs, however, unless these are modeled after medical residencies and geared exclusively toward specialty clinical training. Research-oriented postdocs must remain solely under the control of the PI who provides the funding.

 

Well, I hope you will get concerned about APA's hegemony.

Response: Well, of course I'm concerned. I'm concerned about the APA Publication Manual, too, with its insistence that we refer to subjects as "participants", and all the other silly things it mandates, but I digress.

With respect to accreditation, and the internship crisis, I'm concerned about the values behind APA's actions, and the practical effects of their policies, but not the fact of hegemony itself. I fully recognize the need for nationwide accreditation of professional training programs, whether these are in psychology, or medicine, dentistry, nursing, or engineering. And I think there should be just one such accreditation organization. Sure, I'd rather it be something like PCSAS than the APA CoA, because the former is closer to my values. But APA got there first, long ago, and I think it is better, strategically, to try to reform CoA than to establish a parallel accreditation system that will never gain widespread acceptance.

 

Correspondent #15:

The issue of in-house internships is a non-starter, in my opinion, until we figure out a way for licensing boards to accept non-accredited internships (or, more precisely, captive internships from accredited programs) in a systematic way.

Response: It's not a non-starter; on the contrary, it would solve the problem once and for all. There is no need to convince licensing boards to accept non-accredited internships. That would defeat the purpose of accreditation, which is necessary to insure the quality of professional training. What we need to do is figure out how to convince accreditation boards to withhold accreditation from clinical training programs that don't provide accredited internships for each of their qualified students.

 

Personally, I am all for building in-house internships and making this part of our accreditation systems, precisely as John proposes. The problem, however, is that even if we do a great job of building such a program at Arizona, for example, our students don't have a chance of getting licensed in, say, California or Massachusetts. I think we can get our students licensed in Arizona through an in-house internship, but most of our graduates don't practice here-- they go forward to other places where our reputation for good training doesn't matter or extend.

Response: This simply isn't true. If you complete an accredited internship in Arizona, you can be licensed in California or Massachusetts. That's the whole point of a nationwide system of accreditation. True, your students will be in trouble out-of-state if all they have to offer is an unaccredited internship. But if an in-house internship is made a condition of accreditation, then the in-house internship will perforce be accredited. Perhaps I should have made that clearer.

 

Suppose, just for example, that APA worked diligently to convince all licensing boards to accept graduates from accredited programs with in-house internships -- what would stop professional schools from simply upping their enrollment and really flooding the market?

Response: Nothing, and that's the point. If they can supply the required in-house internships (accredited, of course), why would we want to stop them? But if we required them to supply sufficient in-house internships to meet the demands of their own students, then they wouldn't flood the market with internship applications. They might flood the market with new licensed or license-eligible practitioners, and that might be a bad thing, but that is another matter. We can't and don't guarantee our graduates jobs. But we must guarantee that they can complete the minimum requirements needed to have a chance to compete in the market. And they can't compete in the market at all, no matter how many courses on the Rorschach or brief intensive existential psychotherapy they have completed, unless they can complete an internship. That's why all accredited clinical training programs should be required to offer (accredited) internship slots sufficient to meet the needs of their own students.

 

Having said this, what is stopping any of us from creating an in-house internship and pursuing accreditation as an internship? It would seem more efficient if we could build in-house internships into our already accredited programs, but I wonder how feasible it would be to apply for accreditation for an internship that exists "side by side" with a given doctoral program...?

Response: Nothing. This is precisely what I proposed -- which you characterized at the outset as "a non-starter"! If a new doctoral program can't provide an in-house internship, then it (and its internship) shouldn't be accredited. If an existing doctoral program can't create one, then its accreditation shouldn't be renewed at the end of its current cycle. It's just that simple.

 

I believe PCSAS can be of direct relevance in dealing with the match imbalance. In a matter of a few short years, licensing boards will accept either APA-CoA OR PCSAS accreditation. If PCSAS were to explore options around in-house internships or the like (or to support the exploration thereof via the Academy, for example), such an initiative would have the potential to move this discussion forward in a substantial and meaningful way.

Response: You're more sanguine than I am about getting licensing boards to accept PCSAS as an alternative to the APA-CoA. I think they won't, and I think that APA will fight this tooth and nail. But this isn't about PCSAS (which, remember, I am opposed to). Still, what's sauce for the goose is sauce for the gander: if APA is going to require all clinical training programs to offer an in-house internship as a condition of accreditation, then PCSAS should do the same.

 

Finally, I know many will get upset when I say this but here goes: Our exchange this weekend was entirely predictable. We did this last year. Yes, it's getting worse, but we're only venting here. We're going to vent for a few more days, perhaps weeks, then get back to life.

Response: Of course it was predictable, because no organization, neither APA nor PCSAS, has done anything about a problem that has been worsening (as I showed in my 2004 paper) at least since 1991, when it first appeared. In 2004, when I gave my paper, the ratio of unplaced applicants to unfilled vacancies was almost precisely 2:1 (2.0099:1, to be exact). In 2012, the comparable ratio is almost 5:1 (4.6892:1, to be exact). That's 20 years -- 20 years that have seen a proliferation of clinical training programs, mostly free-standing, without any corresponding expansion of internship opportunities.

 

I've asked myself several (hundred) times what I think it will take to get us past the venting and toward some real action. My opinion is that we need a leader. This cause needs a champion who is willing to set aside her or his work and to focus more or less exclusively on this issue. This person needs to bring the relevant parties to a meeting, and to understand the relevant research on these issues; this person needs to be a synthetic thinker who can negotiate a win-win-win..win solution for all parties involved -- our programs, professional schools, APPIC, CoA, PCSAS, CUDCP, etc. I've asked myself if I could be this person, and I sincerely don't think I have the guts to do it... largely because I know it's a job that cannot be done while also maintaining a good research program. At least I think this is the case. Here at Arizona, we had two people go unmatched on Friday, and I've spent most of the weekend thinking about them and their strategies for Phase II. Still, as sick as I am about this whole thing, I am not willing to be the leader on this. I only raise my personal thoughts because I suspect that most all of us are dealing with the same internal struggle. Are we sick enough of this situation that someone wants to lead the change?

Response: We have organizations for this: APA, which runs the CoA, is the most responsible party. They set up this system, and have -- aside from some wringing of hands and gnashing of teeth in recent issues of the APA Monitor -- assiduously ignored the problems they created. So, for that matter, has PCSAS, which so far as I can tell has not considered making an in-house internship a condition of accreditation. APPIC is just the messenger, though it now posts match rates by doctoral program, which -- as another listmember implied -- should be very helpful to future graduate-school applicants. CUDCP and COGDOP can apply pressure. You'd think that Division 12 as a whole would be interested. Certainly SSCP is interested. Maybe now there is a critical mass of interest in addressing this issue effectively.

And by "effective", I do not mean making the internship requirement postdoctoral (which merely relieves the burden on the graduate programs), or eliminating the internship requirement altogether (which would be detrimintal to the quality of clinical care, not to mention the sapiential authority of clinical practitioners). I mean taking steps to insure that every qualified student in an accredited program has access to an accredited internship slot. The easiest and most reliable way to guarantee this is to require all clinical training programs, as a condition of accreditation, to host an in-house internship sufficient to accommodate their own students.

Clinical psychology is in a crisis. A crisis largely of its own making, to be sure, but also one that it can resolve. I'm old enough to remember the early days of the movement toward diversity of faculty, particularly black faculty, on college campuses -- and in particular Bill Robinson, Class of '69, in a discussion with a philosophy professor who argued that there just weren't enough black faculty in the pipeline. Robinson's response was: "Get the black faculty!". He was right. Let's get the internships.

 

I think you and I are talking about apples and oranges. I was mostly referring to in-house internships that are unaccredited. You're speaking of requiring our accreditation systems to include the internship as part of what it is that's being accredited; all of our programs would now need to create internships that would conform to requirements for an internship.

Response:  But there's no point in talking about internships that are unaccredited. In my proposal, the in-house internships would be accredited by virtue of being attached to an accredited graduate program.

 

To do this, there needs to be real change in terms of what actually constitutes an internship. I suspect most clinical programs can hardly cover the needs of their current students (up until internship). I've looked briefly at CoA's internship accreditation materials, and they're daunting if you don't have faculty dedicated to providing the "internship" portion of the training. Do we have any doctoral programs that also have an accredited internship? I don't think so.

Response: I don't agree that my proposal entails changing the definition of an internship. Do we really want to "define down" the internship so that departments can host them easily? Or do we want to maintain the highest standards? As my mother would say, "It's your choice".

 

Although it's true that nothing is stopping an accredited doctoral program from also designing and offering an accredited internship, this is just not going to happen unless APA loosens the criteria for what it takes to be accredited as an internship.

Response: Twice now you've hinted that you can't run a successful clinical program, cum internship, and still have a research career. I pretty much agree. That's why, if you read my 2004 paper, you'll see that I think that clinical training should be taken out of PhD programs. I always thought the PsyD, strictly analogous to an MD, was a good idea. Training for science is different from training for practice.

 

Finally, I disagree with your assessment about what it will take to put these changes in motion. Without someone-- an actual person-- leading the initiative for change, our professional organizations aren't going to do anything.

Response: I agree with you that someone has to take the lead on this. That's what we elect presidents of professional organizations for, and why we have executive directors for things like education and clinical practice. Maybe, just maybe, now that the crisis has reached this point, someone at the APA will get religion.

 

I suggest going even further -- toward something like a cap-and-trade proposal. This was an idea I raised at the 2008 CUDCP meeting, but I was told the "restraint of trade" push-back from the professional schools would be too great. I've always wondered whether it could be implemented; it seems like the most equitable solution for all parties.

The cap and trade would be simple: the number of credits given to each APPIC-registered doctoral program depends entirely on the number of registered positions in the match. If we have 250 doctoral programs (I don't know this N) and 3190 positions, then each program is given ~12 credits. Programs would then be free to trade their credits with each other as they see fit. The system would need to be formalized in some way, but it would automatically limit the number of applicants in the match to roughly the number of positions offered. Would people still fail to match? Sure. Would the number of unmatched applicants drop way down from 2012? Definitely. This would be to the students' benefit, and it would require programs with cohorts larger than 12 to make the so-called tough decisions about who is ready, and less ready, to apply in any given year. How many CUDCP/Academy programs ever put more than 12 people into the match? This might happen a bit, but as far as I can tell, not so much. If certain programs flood the internship pool with applicants, all corners of the internship world are negatively impacted, even the internships that never receive applications from the students of these programs. It's obvious to everyone that the problem stems from the market being flooded. Why not simply limit the flow for ALL programs and formalize a trading program? In environmental legislation, cap and trade is often referred to as a "polluter pays" model. Seems like an apt description for our problem as well.

Response: Cap and trade is an interesting idea, but it might constitute an illegal restraint of trade.  Some programs don't need 12 slots (we admit about a half-dozen students to our clinical program every year).  Suppose someone gave us a zillion dollars to expand our clinical training program?  Some need more than 12 (like the professional schools). As you point out, some students would still fail to match. With an in-house internship, no student would ever fail to match: in fact, they'd be matched from the moment of their matriculation, provided that they remained in good standing.
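For concreteness, here is a minimal sketch, in Python, of the allocation arithmetic the correspondent proposes, using the hypothetical figures quoted above (roughly 250 programs and 3,190 registered positions); the program names and cohort sizes below are invented purely for illustration:

    # Illustrative sketch of the proposed "cap and trade" allocation.
    # The totals are the correspondent's hypothetical figures; the programs are invented.
    total_positions = 3190
    n_programs = 250
    base_credits = total_positions // n_programs   # ~12 match credits per program

    cohorts = {
        "Program A": 6,    # small, research-oriented cohort
        "Program B": 12,
        "Program C": 40,   # large professional-school cohort
    }

    for name, cohort in cohorts.items():
        surplus = base_credits - cohort
        if surplus >= 0:
            print(f"{name}: {surplus} credits available to trade away")
        else:
            print(f"{name}: must acquire {-surplus} credits, or hold students back")

Whether such a scheme would survive a restraint-of-trade challenge is, as the response above suggests, another question entirely.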

 

Correspondent #16:

Why does a clinical program have to guarantee an internship for every last student? Surely in some instances (maybe one in every 14 or so students?), a reasonable explanation may underlie the lack of interest in the student on the part of internship agencies.

Response: An excellent question. The principal reason why clinical programs must guarantee an internship to every last student (in good standing) is, simply, that an internship is part of the requirements for a doctoral degree in clinical psychology. APA accredits clinical psychology programs, and one of the requirements of accreditation, as I understand it, is that a student must complete an internship before he or she can take his or her doctoral degree -- that is, after a student has, in many cases, ponied up tuition and living expenses for 4-5 years, done a whole bunch of coursework, clinical practica, and research sufficient for a dissertation. Under these circumstances, denying an internship to a student is no different from denying them access to a required course, or to the wherewithal to conduct a dissertation. You couldn't do any of these things to a student in good standing. You can't deny a required internship to one, either. And it doesn't change things to make the internship requirement postdoctoral. The reason is that eligibility for licensure, and thus for the professional career for which the student has been trained (and, one way or another, paid good money, time, and effort for) depends on the internship. So long as entrance to the profession depends on completion of educational requirements, whether predoctoral or postdoctoral, a student in good standing cannot be denied the opportunity to complete these requirements.

 

By the way, do we know what the denominator is for the unmatched APCS and non-APCS students, i.e., how many internships they applied to in order to end up unmatched? Is it possible that some of the unmatched were simply too selective in their preferences, e.g., an internship in the Bay area in a facility devoted to neurological problems of preschool children? What each program can and should (pretty much) guarantee to each student who has been allowed to reach the internship stage is a Ph.D. in psychology that will enable the student to compete in the job market along with all other PhDs of that persuasion.

Response: That's true, but a student in a clinical training program cannot compete in the job market unless and until he or she has completed the required internship. For a student training for clinical practice, the PhD is necessary but not sufficient for entry into the job market. You may be right, that some of the match problem is caused by students being unduly selective in their applications. But that's not the whole problem. If it were, there would not be such a high ratio of unplaced applicants to unfilled slots. But the selectivity problem would also be solved by the requirement to host an in-house internship. A student from Arizona might want to do his or her internship in the Bay Area, and a student from Wisconsin might want to do his or her internship in Arizona, and a student in the Bay Area might want to do his or her internship in Wisconsin. That will be possible, and things might work out. But if they do not, that student will still be able to complete his or her internship requirement, and begin to compete in the job market, by staying at his or her home institution.

If a course in assessment were a requirement for graduation from a clinical training program, and a student in good standing were denied admission to such a course, that student would have a cause of legal action. If an internship is a requirement for completion of a doctoral degree in clinical psychology, and a student in good standing is effectively denied such a position, that student would have a cause of legal action. If an internship is a requirement for licensure, and there are not enough internships to accommodate all students in good standing, those students would also have a cause of legal action. Hint, hint....

 

Medical schools do not guarantee their graduates admission to residencies and certainly not to residencies specified by their graduates as acceptable. A lot of MDs do not get residencies; they get jobs.

Response: Let me be clear: when I talk about "internship", I am referring to the traditional medical internship, which was the model for the clinical psychology internship, back in the days of David Shakow, who pretty much invented the institution. I am aware that American medical schools no longer refer to interns as interns, but rather as first-year residents, but the first year of residency is the only one required prior to licensure and independent medical practice, so it's an "internship" for purposes of this discussion. I'm willing to stand corrected on this, but I believe that there is such an "internship" slot available for every graduate of an onshore medical school. They may not get the internship that they want, but by the time the match process is over, every applicant has a position. If not, then the AMA is also behaving unethically -- all the more so because medical students, even more than clinical psychology students, do not have the benefit of TAs, RAs, and fellowships that help defray their educational expenses.

If it's the case that MDs can get jobs without completing an internship (1st-year of residency, whatever), only by virtue of passing the US Medical Licensing Exam, then the internship is not required for licensure, and medical schools are under no ethical obligation to provide one for each of their graduates. But the clinical psychology internship is required for licensure, and that's a big difference. And it makes no difference whether the internship requirement is predoctoral or postdoctoral.

 

Law schools turn out a very large number of JDs who will never be employed as lawyers. More than half, now, I think.

Response: There's no equivalent to the medical or clinical psychology internship in legal education. Law has its clerkships, but these aren't required for licensure, and they're generally offered to students who are headed for scholarly careers. You don't even have to pass the bar exam, technically, unless you want to be licensed for independent practice (you could, in principle, work as a researcher for a law firm). However, note that some law schools are now responding to their own jobs crisis by setting up in-house, non-profit law firms for their students to gain experience.

 

Lots of programs in many fields, including scientific fields, including psychology, turn out PhDs who are not guaranteed jobs and who cannot all even get post-docs. Why should clinical psychology be so special?

Response: I'm not talking about guaranteeing jobs or postdocs. I'm talking about a student's ability to complete the requirements of his or her educational program -- at which he or she has matriculated in good faith. In my view, we are admitting way more students to PhD programs (clinical and otherwise) than the market can bear, and all too often postdocs are used as a holding pattern while graduates circle around a vanishing number of tenure-track jobs, with the result that all too many PhDs will spend their working lives as adjuncts doing scut-work for low pay and no benefits, but that is another matter. I am talking about professional training -- training of the many students, arguably the vast majority, who enroll in PhD and PsyD clinical programs with the intention of pursuing professional careers entailing the delivery of psychological services such as assessment and treatment. If we are going to make the internship a requirement for entry into such careers, we cannot deny this opportunity to students who, in all other respects, are moving through our programs in good standing. Otherwise, we are mistreating the students who come to us in good faith.

 

Good response. I pretty much agree with you, but I think it is worthwhile to have all the arguments on the table.  But if students are well trained in psychology--which is to say in the science of psychology--their lives should not be greatly blighted simply because they cannot get a job in some clinical facility.

Response: Thanks, I didn't take your probes personally. These kinds of exchanges are always very valuable in clarifying thinking.

I sometimes think that we might do for graduate education what we do with liberal-arts education, which is to dissociate higher education from employment. When students ask me what they can do with an undergraduate degree in psychology or cognitive science, I give them the list of obvious possibilities, and then remind them that they can also go home and run the family hardware store. And in this, I'm not being cynical. I think that an undergraduate liberal arts education is good for its own sake, regardless of how the student uses it. And I'm prepared to think the same thing about graduate education, too. But that's not the expectation of the students who enroll in our programs, and we do very little to tell them the truth about the realities of the job market. If students want to do graduate-level research because they're interested, and don't have anything else that they'd rather do, or have to do, I see no problem with enrolling them in PhD programs. Not because we need warm bodies as TAs or RAs, or adjuncts or postdocs, to make our own lives as faculty easier, but because we value education for its own sake, and think it's good for bright young people to spend some serious time as members of a community of scholars. But if graduate education is going to go in this direction, we really need to be clearer about it with students, else we risk being even more exploitative, and even more intentionally exploitative, than we are now.

But it still seems to me that professional education is different. So long as we train students who are intent on going into the professions, and accredit training programs with this goal in mind, we owe it to them to make it possible to complete their training, so that they are eligible to compete in the job market if they wish to do so. That means that, if an internship is required for licensure, we have to make it possible for all students in good standing to get an internship; and the most effective and efficient way to accomplish that is for every program to mount its own internship, just as every program teaches its own courses on assessment and treatment.

 

Correspondent #17:

Maybe I missed it, but what proportion of grad students in APA accredited clinical programs never get an internship? The data seem to focus on 1-year match rates. My own experience has been that many students who don't match one year end up matching the next (sometimes at better places than they applied their first year, perhaps because their dissertations were farther along). It seems to me the critical variable is how many students have never been able to get placed in an accredited internship during their tenure in grad school despite being in good standing.

Response: But why should they have to wait? And why should they make the situation even worse the next year, for new internship applicants? Why should they risk disappointment a second time? And what do they do in the meantime, when they have likely exhausted whatever financial support they may have received from their home department?

 

Correspondent #18:

This discussion has been interesting: but, I may be a little slow in understanding some of the meanings - 'in house' means having an onsite Clinic or Hospital?; where students do their clinical internship, the latter being full time work contact with patients? As distinct from 'matching', which I am guessing is having them interned on sites outside the University training/educating centre? Which I assume is different to briefer 'placements' at various sites to gain supervised experience in various settings? Just want to make sure I am not wildly off the mark here as the terms are unfamiliar.

Response: By "in-house" I mean an internship that is organized and controlled by the clinical training program itself.  The clinical training program would not have to provide internship training directly, but could arrange for that training to occur at other institutions.  What makes this internship "in house" is, first, that it is organized and controlled, and the faculty hired and paid, by the clinical training program itself; second, that there would be enough internship slots to accommodate all of the program's own students; and third, that students in the program would have automatic access to those slots.  Any additional slots could be put out on the open market.

Here at Berkeley, for example, we might make arrangements with UCSF, which could provide an opportunity for students to rotate through inpatient and outpatient psychiatry, general medical services, neurology, gerontology, etc. 

I don't mean just brief placements, or a mere expansion of the part-time practica that are already part of every clinical training program.  By "internship" I mean what David Shakow meant, and what's modeled on medicine: 1 year of intensive exposure to the full spectrum of clinical work, without the distractions of classes and dissertations.  But, with the internship organized and controlled by the training department, students would have a clinical experience that would more accurately reflect their home department's training philosophy.

 

Correspondent #19:

I find it interesting that so many people want to blame professional schools for the internship problem. No one is forcing internships to select students from professional schools. Internship sites select students from professional schools over students from other programs because they prefer those students. And yet, some people want to limit internship sites' freedom to select the best students.

Response: For the record, I don't blame professional schools for the internship crisis. I blame the APA accreditation system, for accrediting new programs without considering whether the internship market could absorb the new students produced thereby. The fault lies entirely with APA, for lack of foresight.

 

Here in California, professional schools have tried to partially find a solution through the development of CAPIC. APA, however, won't accredit part-time or non-stipend internships, even though there's no data that I'm aware of that suggests that these internships are of lesser quality. 

Response: Actually, I think that CAPIC has at least tried to address the problem. Part-time internships depart from the model of intensive immersion that David Shakow had in mind (and that remains the medical model for internships), and unpaid internships can become exploitative (because unpaid interns provide services without compensation), so these are matters that require serious consideration as reform proceeds.

 

Correspondent #20:

For APA to not accredit programs that meet accreditation criteria for reasons of internship market considerations might well invite legal action for restraint of trade.

 

Response: There is an easy fix, one that has the advantage of being both true and ethical.

  1. The internship is the culmination of the student's clinical training (which is why we require a predoctoral internship, but it's still true even if it's postdoctoral). Therefore, it is incumbent on accredited clinical training programs to insure that their students in good standing can complete their programs. The way to insure this is to require all clinical training programs to mount an in-house internship of sufficient size to accommodate all of their own students.  
  2. The internship is the culmination of the student's clinical training, and so it is important that the philosophy of the internship be commensurate with that of the rest of the student's program. The way to insure this is to require all clinical training programs to mount an in-house internship.  

There's no mention of the market in the pedagogical rationale for this proposal, and there's no restraint of trade when programs are required to actually fulfill their educational mission. And there's no restraint of trade when programs have to meet the needs of their own students. There might well be restraint of trade if accreditation placed artificial limits on program size, but that's not what I'm proposing. A program can be as big as it wants to be, so long as it can accommodate its students' educational needs.

 

Correspondent #21:

the key issue seems to be licensure. if clinical psychology were licensed at the ph.d., as medicine is at the m.d. and social work at the msw, our students would not need internships (and postdoc years). then graduate programs would not feel compelled to require an internship, or they could offer one during graduate training as they see fit.

Response: My proposal is premised on the assumption that a full-time clinical internship is a good idea, and that all clinical students should have one before they're licensed to practice. I agree with this assumption. There's no data to contradict it, because there's no data from a control group that had no internship. Maybe the AMA has such data, comparing MDs with and without an internship, but I suspect that the number of such MDs is vanishingly small (remember that what we used to call a medical internship is now called a first-year residency). As far as social work is concerned, all MSW graduates have to complete intensive fieldwork, amounting to about 70% FTE. So there appears to be widespread agreement that something like an internship is an essential component of adequate training in the health professions.

 

Given the amount of clinical experience graduate students receive, even in clinical science programs, it is anachronistic to insist on internship training for all students.

Response:  If you read my 2004 paper, you'll see that, in my view, even clinical students who are heading toward careers in practice may be getting too much clinical experience prior to their internships -- to the detriment of classroom work in scientific psychology and the scientific bases of clinical practice. Students now build up practicum hour upon practicum hour, in an attempt to look attractive to accredited, prestigious, paid internships. They wouldn't have to do this if they knew that they would slide easily into their own in-house internship, once they completed their other requirements.

 

And given the changes in health care, our students are significantly at risk by having licensure delayed this long. insurance companies are increasingly not allowing non-licensed practitioners to provide services, and this leads to the nonsensical situation in which a licensed social worker with 2 years of clinical experience can be paid to see clients that a ph.d. student with 4-5 years of clinical experience cannot.

Response: They're not having their licensure delayed. If they could stop accumulating clinical hours, because their internship placement was assured, they could focus on their academic work, with a little practicum experience thrown in, and then -- just like medical students these days -- get their intensive exposure to "the living material of the field" on their internship. Medical students spend 3 years in medical school + 1 year in internship, for a total of 4 years -- more if they add on a second and third year of residency. Clinical psychology graduate students spend 4 years in graduate school + 1 year in internship, for a total of 5 years. That extra year will amortize itself pretty quickly.

 

Obviously changing licensing laws feels daunting but if we advocate within apa for this and if we continue advocating for clinical science students through apcs, then we can place pressure on licensing boards to make these changes. it will take time but it will never happen if we do not start. perhaps there is a legitimate reason for delaying licensure for two years post the ph.d. but I have not seen it. my concern is that there may be vested interests involved.  and I offer this from the perspective of an internship director for two decades.

Response:  I count 1 year, for the required internship. But persuading APA to change its accreditation requirements, so as to require all accredited programs to mount an in-house internship, will take a lot less time and effort than persuading the licensing authorities in 50 states, D.C., the military, and the VA.

 

How about having a cap on the number of clinical hours that students would be allowed to indicate on their appic applications? this would eliminate any advantage to seeing more patients than necessary and allow students to focus on their scholarship. the number should be the amount that graduate faculty agree would be a reasonable upper limit.

Response: This would be fine, except that the application will, inevitably, contain a free-response section in which the applicant is encouraged to volunteer any additional information that would be relevant. And, inevitably, the savvy applicant will fill this space with all the tests that he or she has administered, and all the therapy he or she has delivered, above and beyond the required minimum. If clinical programs had their own in-house internships, this wouldn't happen, because students who completed the minimum pre-internship practica would automatically move into an internship.

 

Second, we should push APPIC to limit the number of students that enter the match from any one program. This number could be either the median (maybe plus 1 SD, to be generous) or the total derived from the number of internship slots divided by the number of programs (novel concept: students now have the advantage). The high-volume programs (i.e., professional schools) would obviously be disadvantaged, but they created this problem, so that seems about right.

Response:  As discussed by others on this list, such a proposal would almost certainly constitute an illegal restraint of trade. You can't prevent the local bodega from becoming a Wal-Mart, if the market can bear it. How major-league baseball gets away with it I don't know, and maybe APA should hire their lawyer. But there is no restraint of trade in instituting program requirements that have a clear pedagogical and ethical rationale.

 

Correspondent #22:

Sell that to an Arts & Sciences dean these days, any dean.... The costs of this are considerable, as has been pointed out before. Ideal solutions are one thing; practical ones, another. Who's gonna pay for all this?

Response: I think a number of different funding models are possible. Obviously, since clinical psychology interns, like medical interns, provide services to their internship site, we would expect them to get paid. I understand that APA expects accredited internships to pay a reasonable stipend, and I agree with this. Of course, part of the internship crisis is created by the dearth, not to mention the loss, of paid internships. And I also understand that my proposal for in-house internships might well require additional faculty on the department budget. For example, if Berkeley were to make arrangements with UCSF, someone would have to buy the time of the UCSF faculty to release them from their other teaching, research, and clinical responsibilities.

Here's where another part of my 2004 paper kicks in: I believe that we should dissociate research training, leading to the PhD, from clinical training, leading to the PsyD. In my view, professional training for clinical practice would look a lot like medical school. The PhD and PsyD programs would have very different curricula. And, frankly, I think professional training should be paid for by the students themselves.

Graduate tuition and fees at UCB this year (2011-2012) come to $18,669.25 per semester for nonresidents. In psychology, we guarantee all of our graduate students, including all of our clinical students, a TA stipend amounting to roughly $16,000 per year plus a tuition and fee remission. That's pretty standard practice for PhD programs nationwide (though the dollar amounts will differ from institution to institution). But note that we treat our clinical psychology students the same way we treat our students in cognitive or social psychology, even though some of them may be headed toward careers in professional practice.

Now consider how our professional schools -- which, critically, don't offer financial aid to all of their students -- operate:

The UCB Business School has the same basic tuition and fees, but it also imposes a supplemental tuition on nonresidents of $6,122 and a "professional degree supplemental tuition" (PDST) of $13,082 per semester.
The UCB Law School has the same basic tuition and fees, and the same supplemental tuition on nonresidents, but a PDST of $13,555.
The UCB School of Public Health has the same basic and supplemental tuition, but a PDST of $3,379.
The UCB-UCSF joint medical program has the same basic and supplemental tuition, but a PDST of $9,318.
UCSF itself charges between $58,346 and $63,754 (depending on the year) for each of four years of basic medical education, plus nonresident supplemental tuition of $12,245.
The UCB School of Social Welfare has the same basic and supplemental tuition, but a PDST of $2,000.

I don't know how the various PDSTs are determined, but I assume that the market plays a role in this somewhere.

And now let's look at some of the free-standing competition:

The Wright Institute, a free-standing, nonprofit institution, charges tuition and fees of $28,100.
The San Francisco campus of the California School of Professional Psychology, run by Alliant International University, also a nonprofit, charges $15,150 per semester ($1,010 per credit for a 15-credit courseload), plus $3,030 per semester for internships.

You can see where this is going. I think we should start charging those students who are headed toward professional (as opposed to academic) careers in clinical psychology a PDST. Some portion of that PDST would be used to defray the costs of adjunct faculty who provide practicum and internship experiences. The remainder could be put toward internship stipends. I'm assuming that the usual sources of internship funding would remain in play: whoever pays for accredited internships now would still pay for them, though some of those costs could be offset by the PDST.

How much? Let's go with a nice round figure of $13,000 per semester. I figure that a UCB PsyD is worth at least as much as an MBA, where the market is inflated, or a JD, where the market is collapsing.

What's to prevent students from claiming to be interested in research, and collecting a TA stipend, and then walking away with a doctoral degree and internship to go into clinical practice? Of course, such a thing doesn't happen now. But that's why we would have two programs. Students who say they are interested in research would enroll in a "Clinical Science" PhD program which focuses on research training, just like all the other PhD students in the Department. They wouldn't pay a PDST, and they would receive a TA stipend like all the other PhD students, but -- crucially -- this program would be unaccredited, just like all the other PhD programs. Accreditation would only apply to the "Clinical Practice" PsyD program, for which students would pay the PDST premium.

But don't the PhD students need an internship? Yes, they do. And they would get access to the same in-house internship that the PsyD students get, to give them the intensive exposure to "the living material of the field" that David Shakow had in mind. The PDST would help pay their stipends, too.

But what about a PhD student who decides he wants to practice -- perhaps because he finds the academic job market for PhDs weak? Such a student could be offered a one-year "retooling" program, at a pro-rated rate, in which he would take the courses and practica that are required for an accredited clinical degree, and which weren't part of his "Clinical Science" curriculum. And, of course, he would already have completed an accredited internship (see above). This is no different from the "respecialization" programs that many departments now offer.

What about a PsyD student who decides he wants to focus on research? He is free to apply to the PhD program, which he might be able to complete in an accelerated fashion, by virtue of having done a lot of the coursework already.

That's a lot of money! But not when you consider that it's a professional degree. It's comparable to what's charged at the UCB professional schools, and still cheaper than the free-standing programs. Besides, the PsyD students might be able to earn some of that money back by being employed as TAs or RAs, with some tuition remission, in the event that suitable vacancies occur in the department. Serving as TAs would reinforce their knowledge of basic research in psychology; it would reduce the need for departments to go outside their own students to fill empty TA slots; and it might even allow departments to reduce the number of students enrolled in their PhD programs, where the academic market is weak. That's a win-win-win-win situation.

I'm sure that there are other details that would need to be addressed, but this will give the general outline of the approach.

 

Correspondent #23:

I have valued reading the various options regarding the internship imbalance that have been posted to this list. Many of them have been or are being examined; more are always welcomed. Although I do not see a quick fix on the horizon, I do see progress within the communities to take responsibility for this problem.

Response: I see a complete fix on the horizon, exactly 10 years from now, when the most recently accredited program gets its accreditation renewed because it has instituted an in-house internship that guarantees an internship to all of its students in good standing.

 

I can also assure you that APA has been very involved in many related efforts (see Grus, McCutcheon and Berry, 2011). Our advocacy work has provided federal funding for internships and doctoral programs, and we have worked for many years to support and build community among the various education and training councils, especially the Council of Chairs of Training Councils (CUDCP, VA Training Council, CCPTP, APPIC, NCSPP, etc.), to address issues for which we are all responsible. We hosted the historic 2008 meeting of "difficult dialogues," all of which is documented in the reference previously noted. Although APCS has expressly declined to participate in CCTC, many if not all of their programs are also in CUDCP which has been a very active participant. We rely on these councils for communication to their members. We are now also supporting a collaboration among CCTC, APA and COGDOP that is developing a plan for next steps in addressing a broader set of issues. We believe that the internship imbalance is a symptom of larger problems in professional education and training.

Response: If there really is a shortage of clinical psychologists, and I'm sure there is, then APA has been doing the right thing by promoting the expansion of clinical training, and the expansion of federal funding to support that training. The only problem is that APA has neglected the internship component of clinical training. It is one thing to increase the number of students in training -- either by increasing the number of seats in current programs, or by adding programs, or both. But what did not happen, and should have happened, is a commensurate increase in internship slots. With all due respect, I lay primary responsibility for this situation on the APA accreditation process -- though it is also apparently the case that some programs have also heedlessly expanded their size, without regard to the availability of internship slots, and they should be ashamed of themselves. But it's clear that we've reached the tipping point now. Someone had better do something soon.

 

There are many issues to address, but one is related to the fact that psychology has been unwilling to adopt the standard of accreditation for internship training; doctoral programs have historically wanted to maintain flexibility for their students. Thus there is no standard in the accreditation process that requires that doctoral programs place their students in an accredited internship.

Response: It may be the case that "there is no standard in the accreditation process that requires that doctoral programs place their students in an accredited internship", but that's something that could be fixed, on the quite reasonable assumption that an accredited internship is going to be more rigorous than an unaccredited internship.

I can understand why the clinical programs themselves would resist such a requirement: after all, they have no control over the number of seats available in accredited internships, and it would be unfair to impose a requirement on them when they can't actually affect the outcome of the placement process. However, a requirement that they host an in-house internship would relieve them of this concern, and allow them to expand their program size indefinitely -- literally to whatever the market can bear, so long as they can create or otherwise guarantee a corresponding number of internship slots. So that puts the onus on the programs, which is exactly where it belongs.

I can also understand the programs' desire for some flexibility in the definition of an internship -- for example, to accommodate a student who is interested in mental-health policy, or public mental health, as opposed to the practical matters of assessment and treatment. I think we should encourage more students to go into policy and public health. But I'd much rather they pursue these specialized interests after their internships, as postdoctoral fellows -- or, if you follow the medical model, as 2nd- or 3rd-year residents. Again, this would follow the general outlines of the model of medical training. An MD might go after an MPH, or a PhD in policy, but he or she would ordinarily do that after going through a standard clinical internship (or first-year residency).  An MD who is only interested in research, and not the delivery of services, might well skip the internship entirely.

 

Moreover, many students never even enter the Match, thus the Match statistics do not tell the whole story…. APA publishes the most comprehensive data regarding internship placement for accredited programs (see http://www.apa.org/ed/accreditation/about/research/internship-analysis.pdf) and includes placement information in Graduate Study in Psychology to promote truth in advertising. Also, for accredited doctoral programs, the CoA requires that they provide accurate information regarding internship placements to the public.

Response: It may be that many students never enter the match, and so don't show up in the APPIC match statistics, but I think we're all making the reasonable assumption that the APPIC data constitute a representative sample of the whole story. Which means that there's a serious internship imbalance.  If students don't enter the match because they don't apply for an accredited internship, then we don't really have to deal with them. The internship crisis is, precisely, the crisis of The Match: at the end of Phase I, there are far more unplaced applicants than there are unfilled slots -- so many more that whatever happens in Phase II doesn't make much of a difference. If all of the 222 positions left unfilled in 2012 at the end of Phase I were filled in Phase II, that would still leave 819 applicants unmatched.

 

It is also important to remember that accreditation policy is actually formulated by the APA Commission on Accreditation, not other units in APA. Many of you know that CoA is comprised of seats from the various training councils, COGDOP and stakeholder groups (see http://www.apa.org/ed/accreditation/about/policies/governing-policies.pdf). APA (through BSA, BEA, BAPPI and BPA/CAPP) actually nominates only a minority of those seats (8 of 32). All policies go out for public comment, and the CoA can also be contacted directly.

Response: I think we're all using "APA Accreditation" as shorthand for the accreditation process that is, rightfully, organized by APA. We all understand that there are other stakeholders, but APA is the leader, and should be (remember, I don't support the alternative PCSAS scheme).

 

With respect to APA policy, the Model Licensure Act does call for an APA accredited doctoral program, but this is a state level issue and, in my opinion, psychology has not made advocacy for this a high priority in the states. There are many exceptions made -- in contrast to the credentialing procedures for other health professions. However, for the first time the vast majority of the national education and training community as represented by the CCTC and the APA Board of Educational Affairs have asserted that health service providers in psychology should come from APA or CPA accredited doctoral and internship programs. This has also been supported by the APA, CCTC and COGDOP working group. In my view, should another accreditation system receive recognition by the U.S. Department of Education, that could also be included.

Response: As noted above, the Model Licensure Act does recommend requiring an accredited doctoral program.  I'm simply recommending that the standards for program accreditation be raised to include an accredited internship as part of that program. I am glad that the majority of the clinical training community now endorses this quite reasonable requirement. If anyone in the states is listening, there will now be an inexorable movement toward putting this very requirement into law -- which will only put more pressure on clinical students to get an accredited internship. So, while the pressure is building relatively slowly, and before it blows a gasket, what is needed is for "APA" to turn this consensus into policy by requiring all accredited doctoral programs to place 100% of their students in accredited internships. The easiest way to do this is to require all doctoral programs, as a condition of accreditation, to mount an in-house internship that meets APA accreditation standards. If APA starts now, this policy will be implemented in 10 years -- well before gasket-blowing time, I suspect. But it should start now, not later. This is entirely within the purview of APA. If individual doctoral programs don't want to meet the requirement, they should feel free to drop their accreditation.

 

There are indeed a number of models in which change could occur. Dr. Kihlstrom has described one possibility and yes, there are departments with both accredited doctoral and accredited internship programs.

Response: It would be very good to have an analysis of these programs, to see exactly how accredited departments have been able to mount accredited internships.

 

There are other models as well, and I do look forward to healthy debate as our profession matures. However, whatever model is developed, in my personal opinion, psychology’s unwillingness to "bite the bullet" regarding requiring a quality assurance mechanism for the preparation of professional psychologists has had significant negative consequences on the coherence of our educational system, our students and our credibility with other professions and stakeholders, including the public.

Response: I agree that this is an issue for the status of clinical psychology as a profession. The last thing we need is for psychiatry, legislators, and other policy makers to get the sense that training in clinical psychology is anything short of the most rigorous training there can be. And I am glad to learn that APA is actively concerned with the internship crisis, and actively considering reform. I am especially glad to learn that there is now a clear majority of stakeholders on CoA who favor requiring that students complete an accredited internship. Now, all we have to do is take the necessary, logical steps to make sure that all students in good standing have an accredited internship slot waiting for them at the conclusion of their training. That, in my view, is something that the doctoral programs themselves can and should control -- and the APA should make them do it, sooner rather than later. To quote Hillel the Elder: "If not now, when?"

 

Responses to the 2011 Internship Survey

In response to the internship crisis, the Board of the Society for a Science of Clinical Psychology sought to gather information about the internship process and about possible solutions to some of the problems associated with it.  In 2010, and again in 2011, they surveyed psychology graduate students, interns, postdocs, and faculty in both psychology departments and at internship training sites.

Here are my responses to the 2011 version of the survey.

 

Please indicate the extent to which you agree with the following statements regarding the current psychology predoctoral internship process.

 

The current internship application process imposes unreasonable monetary cost to students in the form of application fees, travel costs for interviewing, and costs associated with relocating.

Response: All of this is true.  An in-house internship would absolve students of all of these expenses -- and not just the money, but also the disruption caused to their academic progress and personal lives.  A student who, for whatever reason, wished to apply for an external internship would incur those expenses, but that would be his or her choice.

 

The current internship application process imposes unreasonable pressure on students to log increasingly more clinical and assessment hours during graduate school in order to be a competitive internship candidate, detracting from other training (e.g., research, teaching).

Response:  Yes.  This is not so much of a problem for students who are intent on pursuing careers in professional practice, as opposed to research.  These students are, presumably, more or less going through the motions of research, doing as little of it as possible; in like manner, students who are intent on pursuing careers in research, as opposed to professional practice, are going through the motions of practical training, doing as little of it as possible.  But even the professionally oriented students are likely spending too much time building up hours in an attempt to enhance their standing in what has become a cutthroat competitive process.  Everybody would be much better off if students could move through their academic training much as medical students do, saving all those clinical hours for the intensive experience that is the clinical psychology internship.

 

The current internship application and match process takes an unreasonable emotional toll on students (e.g., in the form of anxiety about not matching).

Response: I think this goes without saying.  Graduate study is stressful enough, what with the economy and the poor employment outlook, without students also having to worry about whether they're going to get an internship, and what they're going to have to do to secure one.

 

The worsening match rate – only 79% in 2011 – suggests that the current predoctoral internship program is facing a crisis that should be addressed by the psychology community.

Response: Well, duh!  This crisis has been building since 1991, and is growing exponentially now.  If somebody doesn't do something fast, the whole field is going to blow itself up.

 

As previously mentioned, the match rate for the predoctoral internship program continues to worsen. In 2010, the match rate was 77% with a considerable number of predoctoral students (a total of 846) not placed at an internship. During the most recent match (2011), 804 students did not match during phases I or II.

Potential solutions to this worsening match rate are presented on the following pages. Please indicate the extent to which you believe that each would be (a) an effective solution for addressing the worsening match rate and (b) a feasible solution that could be implemented by the psychology community.

(A) Clearer guidelines provided by internship sites explicitly stating what constitutes a competitive candidate (e.g., clarifying the criteria that are most strongly considered when reviewing applications) so that students can make more educated decisions about whether to apply to certain sites.

Response: This would obviously help, but it's not feasible, because it goes against human nature.  However a "competitive candidate" is defined, potential applicants will attempt to enhance their attractiveness by presenting more of that feature.

 

(B) More guidance and supervision at the departmental level for students who are preparing to apply for a predoctoral internship (e.g., a weekly or monthly seminar helping students navigate the internship application process, stricter guidelines with regard to determining whether a student is "ready" to apply).

Response: I'm never one to argue against more and better advising, but the problem is not one of students getting bad advice.  The problem is that there aren't enough internships to go around, and the process of applying for one is too onerous and stressful.

 

(C) Increased funding to create additional internship positions (e.g., from the Graduate Psychology Education program).

Response: More money would obviously help.  But two kinds of money are needed.  First, money to pay the interns who are currently being denied internships because there isn't enough funding for all the slots that are needed.  Second, money to pay the clinical faculty who will actually provide the internship training.

 

(D) Development of "in-house" internships created by psychology departments.

Response: This is the obvious, and totally effective, solution.  See the exchanges documented above concerning this very proposal.

 

(E) Elimination of the predoctoral internship requirement altogether.

Response: The primary advantage of shifting the internship year from predoctoral to postdoctoral, and it is a nontrivial advantage, is that it would force MDs to address psychology interns as "doctor".  But so long as an internship is required for licensure, eliminating the predoctoral internship requirement would merely move the bottleneck, not remove it.  Unless there are enough internship slots to meet the demand, eliminating the predoctoral internship requirement is likely to constitute an abdication of responsibility by the clinical training programs toward their students.

 

(F) Making the predoctoral internship requirement a post-doctoral internship, as in medicine.

Response: A postdoctoral internship has the single advantage mentioned in my response to (E).  But neither the clinical training programs nor the Committee on Accreditation should be allowed to abdicate their ethical responsibility to ensure that all predoctoral students or postdoctoral graduates in good standing have access to the internship that is the minimal requirement for them to pursue the profession for which they have trained in good faith.

 

(G) Adding fewer students to graduate programs.

Response: Arguably, the increase in the number of clinical training programs, and in the number of students enrolled in them, is the primary cause of the current internship crisis.  Graduate programs should be allowed to admit only as many students as they can guarantee internship slots for.

 

(H) Making the accreditation process less expensive and cumbersome for internship sites.

Response: Obviously this would enhance the likelihood that clinical training programs, and other potential sites, would actually mount internship programs.  But if "making the process less expensive and cumbersome" means defining the internship down, so that the training of new students is degraded, that will obviously work to the detriment of both the individual student (and his or her patients) and the profession as a whole.

 

(I) Replacing the internship requirement with a requisite number of clinical hours to be completed at some point during graduate training.

Response: I suspect that this is just a trick, analogous to the old bait-and-switch, to make the internship crisis seem to go away.  I agree with David Shakow, who institutionalized the internship so long ago: there is no substitute for the intensive clinical experience afforded by an internship.  That's why MDs go through it (even though they now call it a first-year residency).

 

(J) Having a Phase II to the match for students who do not match during the initial phase.

Response: APPIC does this now, and obviously it helps, because some applicants are matched in Phase II who did not match in Phase I.  But it's still an onerous, anxiety-evoking process -- and, for that matter, one likely to occur in the student's last semester of formal study, when he or she should be finalizing his or her dissertation.  And even with a Phase II, some students don't get matched, as a necessary consequence of the fact that Phase I ends with so many more unmatched applicants than there are unfilled slots.  We will probably always need a Phase II, but the whole process would be made a lot better by increasing the number of internship slots, and the most effective and efficient way of doing that is to require every accredited program to mount an in-house internship with enough slots to accommodate its own students.

 

References

Cherry, D. K., Messenger, L. C., & Jacoby, A. M. (2000). An examination of training model outcomes in clinical psychology programs. Professional Psychology: Research & Practice, 31, 562-568.

Gould, S. J. (2003). The hedgehog, the fox, and the magister's pox: Mending the gap between science and the humanities. New York: Harmony Books.

Kihlstrom, J. F. (2000). Personal statement concerning research training in the behavioral and social sciences. In National Research Council (Ed.), Addressing the nation's changing needs for biomedical and behavioral scientists. Report of the Committee on National Needs for Biomedical and Behavioral Scientists, Education and Career Studies Unit, Office of Scientific and Engineering Personnel (pp. 101-107). Washington, D.C.: National Academy Press.  Read text at: www.nap.edu/openbook/0309069815/html/101.html.  The text is also available, with a prefatory note, at www.institute-shot.com/national_research_council_report.htm.

Kihlstrom, J.F.  (2000, October).  "Several things went wrong": Commentary on the NRC report on research training in the behavioral and social sciences.  APS Observer, 13(10), 1, 17-18.  Link to text at: http://www.psychologicalscience.org/newsresearch/publications/observer/nihcomment.html.  Link to expanded commentary at: www.institute-shot.com/More_on_training_health_researchers.htm.

Kihlstrom, J.F.  (2001, February).  Response to Sutton: Further commentary on the NRC report on research training in the behavioral and social sciences.  APS Observer, 14(2), 5.

Kihlstrom, J. F., & Canter Kihlstrom, L. (1998, August). The living material of the field. Paper presented at the annual meeting of the American Psychological Association, San Francisco.  Link to text at www.institute-shot.com/clinical_psychology.htm.

Kihlstrom, J. F., & Canter Kihlstrom, L. (1998). Integrating science and practice in an environment of managed care. In The science of clinical psychology: Accomplishments and future directions (pp. 281-293). Washington, D.C.: American Psychological Association.  Link to text at: www.institute-shot.com/integrating_science_and_practice_in_a_changing_environment.htm.

Maher, B. A. (1999). Changing trends in doctoral training programs in psychology: A comparative analysis of research-oriented versus professional-applied programs. Psychological Science, 10(6), 475.

Shakow, D. (1938). An internship year for psychologists (with special reference to psychiatric hospitals). Journal of Consulting Psychology, 2, 73-76.

Wood, J. M., Nezworski, M. T., Lilienfeld, S. O., & Garb, H. N. (2003). What's wrong with the Rorschach? Science confronts the controversial inkblot test. New York: Jossey-Bass.

Yu, L. M., Rinaldi, S.A., Templer, D. I., Colbert, L. A., Siscoe, K., & Van Patten, K. (1997). Score on the Examination for Professional Practice in Psychology as a function of attributes of clinical psychology graduate training programs. Psychological Science, 8, 347-350.

 

This page last revised 02/01/2020.