Examination for Professional Practice in Psychology

The Examination for Professional Practice in Psychology (EPPP) is a licensing examination developed by the Association of State and Provincial Psychology Boards (ASPPB) that is used in most U.S. states and Canadian provinces.

As of 2020, the EPPP is a two-part examination that assesses foundational knowledge, the EPPP (Part 1-Knowledge), and skills, the EPPP (Part 2-Skills). The EPPP (Part 1-Knowledge) was previously known simply as the EPPP; it has been used by American and Canadian jurisdictions for many years and will continue to be used by these jurisdictions for licensing purposes. Beginning in 2020, jurisdictions have the option of also requiring the EPPP (Part 2-Skills), with the EPPP (Part 1-Knowledge) serving as the prerequisite and a passing score on both parts required for licensure. Jurisdictions that sign on to require the EPPP (Part 2-Skills) are known as early adopters.

History
The Association of State and Provincial Psychology Boards (ASPPB), originally the American Association of State Psychology Boards, was founded in 1961 by the American Psychological Association's Board of Professional Affairs Committee on State Licensure. A primary goal of ASPPB was to enhance the ability of psychologists to practice across state and national borders, specifically in the United States and Canada. To facilitate this mobility, the development of a standardized licensure examination was identified as crucial. This initiative culminated in the creation of the Examination for Professional Practice in Psychology (EPPP) in 1965. The EPPP's initial adoption was gradual, but its acceptance steadily increased. By the mid-1980s, it had become the dominant entry-level examination for independent practice licensure in most jurisdictions across both countries. Beginning in 2001, ASPPB transitioned the EPPP to a computer-administered format, which is now the prevalent mode of assessment in the vast majority of U.S. and Canadian jurisdictions.

The ASPPB announced the development of a second component to the EPPP, the EPPP Step 2, in March 2016. The stated aim of this initiative was to establish a standardized, multi-part assessment of the clinical skills required for entry-level practice, moving away from sole reliance on graduate program evaluations and non-standardized measures such as oral examinations. Beginning in 2018, the EPPP Step 2 was implemented by several jurisdictions. The ASPPB has announced that the exam is scheduled to become a mandatory component of licensing requirements for all jurisdictions currently utilizing the EPPP starting on January 1, 2026.

Content of the Exam
Both parts of the EPPP consist of multiple-choice questions; the EPPP (Part 2-Skills) also includes multiple-choice/multiple-response questions, scenario-based questions, point-and-click questions, and drag-and-drop questions. The EPPP (Part 1-Knowledge) covers eight domains, each representing a specific percentage of the examination: biological bases of behavior (10%), cognitive-affective bases of behavior (13%), social and multicultural bases of behavior (11%), growth and lifespan development (12%), assessment and diagnosis (16%), treatment, intervention, prevention, and supervision (15%), research methods and statistics (7%), and ethical, legal and professional issues (16%). The EPPP (Part 2-Skills) covers six domains: scientific orientation (6%), assessment and intervention (33%), relational competence (16%), professionalism (11%), ethical practice (17%), and collaboration, consultation, supervision (17%).
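
For reference, the blueprints above can be tabulated and sanity-checked; the weights for each part should account for the full examination. A quick sketch in Python, using the percentages listed above:

```python
# EPPP domain weights (percent of the examination), as listed above.
part1 = {
    "Biological bases of behavior": 10,
    "Cognitive-affective bases of behavior": 13,
    "Social and multicultural bases of behavior": 11,
    "Growth and lifespan development": 12,
    "Assessment and diagnosis": 16,
    "Treatment, intervention, prevention, and supervision": 15,
    "Research methods and statistics": 7,
    "Ethical, legal and professional issues": 16,
}
part2 = {
    "Scientific orientation": 6,
    "Assessment and intervention": 33,
    "Relational competence": 16,
    "Professionalism": 11,
    "Ethical practice": 17,
    "Collaboration, consultation, supervision": 17,
}

# Each blueprint should account for the full examination.
assert sum(part1.values()) == 100
assert sum(part2.values()) == 100
```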

Development of the Exam
Development of the two-part EPPP and the original EPPP (now known as the EPPP (Part 1-Knowledge)) has consistently adhered to the Standards for Educational and Psychological Testing for credentialing/licensing examinations. For these types of examinations, test development efforts must focus on content validation, with the goal being to develop an examination that demonstrates candidates have the knowledge (or skills) necessary for initial licensure. As stated in the Standards: “Validation of credentialing tests depends mainly on content-related evidence, often in the form of judgments that the test adequately represents the content domain associated with the occupation or specialty being considered. Such evidence may be supplemented with other forms of evidence external to the test. For example, information may be provided about the process by which specifications for the content domain were developed and the expertise of the individuals making judgments about the content domain. Criterion-related evidence is of limited applicability because credentialing examinations are not intended to predict individual performance in a specific job but rather to provide evidence that candidates have acquired the knowledge, skills, and judgment required for effective performance, often in a wide variety of jobs or settings (we use the term judgment to refer to the applications of knowledge and skill to particular situations). In addition, measures of performance in practice are generally not available for those who are not granted a credential (pp. 175-176).”

The process of test development and validation for the EPPP involves a number of steps, all consistent with the Standards and with the expectations of credentialing organizations such as the Council on Licensure, Enforcement & Regulation (CLEAR). The process begins with a job task analysis (JTA), a comprehensive study in which psychologists who are subject matter experts (SMEs) establish the knowledge and skills required for psychology practice. The resulting requirements are sent via survey to thousands of licensed psychologists throughout the United States and Canada, who indicate which areas are important for entry-level practice in psychology. The results of the survey establish the test specifications for the exam; in effect, the expertise of licensed psychologists determines what the EPPP should assess. SMEs then write exam items according to these test specifications. Each item is reviewed by an Item Development Committee (IDC) SME with established expertise in the item's domain; items are revised in an iterative process between the reviewer and the item writer until the item is acceptable to both or is discarded. Before being placed on an exam, each item is reviewed again by an Examination Committee of at least 10 SMEs, psychologists with particular expertise in each of the exam's domains who represent various areas of psychology practice and training. Items approved by the IDC are re-reviewed for accuracy, relevance to practice, clarity, and freedom from bias, among other factors. Once approved by the Examination Committee, each item is pretested (or "beta tested") before becoming an operational item that is actually scored on an exam.
Items that do not perform well during pretesting, according to psychometric standards, are not counted toward a candidate's overall score. A final step in the development process is establishing the exam's pass point through a rigorous review process called standard setting. A committee of SMEs who are licensed psychologists, most of whom are typically early career psychologists, reviews the exam form item by item and provides rating data on item difficulty. These data are analyzed to determine the pass point, which represents the minimal knowledge or skills required for entry-level practice. These multiple levels of review by psychologists and the ongoing analysis of psychometric data ensure that the EPPP is accurate, relevant, valid, and legally defensible.
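
The source does not name the specific standard-setting method, but the description of SMEs rating item difficulty is consistent with a modified Angoff procedure, a common approach for credentialing examinations. As an illustrative sketch (with entirely hypothetical ratings), each SME estimates the probability that a minimally competent candidate answers each item correctly, and the recommended raw cut score is the sum of the per-item average ratings:

```python
# Illustrative standard-setting calculation (hypothetical ratings; the
# source does not specify the exact method used for the EPPP).
# ratings[j][i] is SME j's estimate of the probability that a minimally
# competent candidate answers item i correctly.
ratings = [
    [0.70, 0.55, 0.80, 0.65],  # SME 1
    [0.60, 0.50, 0.85, 0.70],  # SME 2
    [0.75, 0.60, 0.75, 0.60],  # SME 3
]

n_items = len(ratings[0])
# Average each item's rating across the SMEs.
item_means = [sum(sme[i] for sme in ratings) / len(ratings) for i in range(n_items)]
# The recommended raw cut score is the sum of the per-item averages.
cut_score = sum(item_means)
print(f"Recommended raw cut score: {cut_score:.2f} out of {n_items} items")
```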

Over the years, the test development strategies used by ASPPB in developing the EPPP have been described many times in the psychological literature. Despite this, concerns have been expressed about the lack of evidence for the criterion or predictive validity of the EPPP. As stated above, the Standards for Educational and Psychological Testing clearly indicate that such validity evidence is of little relevance because credentialing/licensing examinations are not designed to predict the performance of individual practitioners, either at the point of licensure or at some future point in their professional work. Importantly, these points have often been made in discussions of the validity of the EPPP.

It has been demonstrated that graduates of regionally accredited programs tend to earn higher mean scores than graduates of regionally unaccredited programs. It has also been argued that clinical psychology programs whose graduates have higher EPPP scores tend to share the following features: higher admissions standards, a higher faculty-to-graduate-student ratio, and a greater research emphasis. In general, doctoral students score higher than master's students, PhDs outperform PsyDs and EdDs, and clinical psychology students outperform counseling and school psychology students.

In a study using program-level national data (i.e., data aggregated by program rather than by individual applicant), EPPP scores were positively associated with GRE scores, U.S. News & World Report score, the program's research emphasis, the percentage of faculty who subscribe to a cognitive behavioral orientation, GPA, and the percentage of students receiving an APA-approved internship. Negative predictors of EPPP score included the rate of admittance, U.S. News & World Report rank, and the percentage of minority students in the program.

Further, a recent study of 4,892 doctoral-level applicants found significant differences in failure rate according to ethnicity (Blacks = 38.50%; Hispanics = 35.60%; Asians = 24.00%; Whites = 14.07%). Black and Hispanic applicants therefore appear to fail the exam at more than 2.5 times the White rate. Using White applicants as the reference group, Black and Hispanic applicants passed the exam at less than 80% of the White pass rate, indicating the potential for adverse impact.
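
The 80% comparison above reflects the "four-fifths rule" commonly used in adverse-impact analysis. The ratios can be reproduced directly from the failure rates reported in the study:

```python
# Doctoral-level EPPP failure rates by group, as reported in the study above.
failure_rates = {"Black": 0.3850, "Hispanic": 0.3560, "Asian": 0.2400, "White": 0.1407}

# Convert failure rates to pass rates.
pass_rates = {group: 1.0 - f for group, f in failure_rates.items()}

# Adverse-impact ratio: each group's pass rate relative to the reference
# (White) group; ratios below 0.80 suggest potential adverse impact.
reference = pass_rates["White"]
for group, rate in pass_rates.items():
    ratio = rate / reference
    flag = "below 80% threshold" if ratio < 0.80 else "at or above threshold"
    print(f"{group}: pass rate {rate:.2%}, ratio {ratio:.3f} ({flag})")
```

Running this sketch shows ratios of roughly 0.72 for Black applicants and 0.75 for Hispanic applicants, both below the 0.80 threshold, while the Asian ratio falls above it.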