User:Eyoungstrom/sandbox

LOTR edits
Several areas in or near Glenorchy served as settings for scenes in The Lord of the Rings, including some in Lothlorien, Isengard (along the Routeburn Track), and Amon Hen.

Local companies offer themed treks, horseback rides, and boat rides to see the sites.

Template for writing articles about a psychological measure
This section is NOT included in the actual page. It is an overview of what is generally included in a page.


 * Versions, if more than one kind or variant of the test or procedure exists
 * Psychometrics, including validity and reliability of test results
 * History of the test
 * Use in other populations, such as other cultures and countries
 * Research
 * Limitations

Lead section
The lead section gives a quick summary of what the assessment is. Here are some pointers (please do not use bullet points when writing the article):


 * What are its acronyms?
 * What is its purpose?
 * What population is it intended for? What do the items measure?
 * How long does it take to administer?
 * Who created it (an individual or a group)?
 * How many questions does it contain? Is it multiple choice?
 * What has been its impact on the clinical world in general?
 * Who uses it? Clinicians? Researchers? What settings?

Reliability
The rubrics for evaluating reliability and validity are now on published pages in Wikiversity. You will evaluate the instrument based on these rubrics. Then, you will delete the code for the rubric and complete the table (located after the rubrics). Don't forget to adjust the headings once you copy/paste the table in!

An example using the table from the General Behavior Inventory is attached below.

Reliability
Reliability refers to whether the scores are reproducible. Unless otherwise specified, the reliability scores and values come from studies done with a United States population sample. Here is the rubric for evaluating the reliability of scores on a measure for the purpose of evidence-based assessment.
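One common index of internal consistency reliability is Cronbach's alpha, which compares the summed item variances to the variance of the total scores. A minimal sketch in Python (the function name and the illustrative data are my own, not taken from any rubric):

```python
from statistics import pvariance

def cronbach_alpha(rows):
    """Cronbach's alpha for a respondents-by-items matrix of scores.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
    """
    k = len(rows[0])                          # number of items
    items = list(zip(*rows))                  # transpose: one tuple per item
    sum_item_var = sum(pvariance(item) for item in items)
    total_var = pvariance([sum(row) for row in rows])
    return k / (k - 1) * (1 - sum_item_var / total_var)

# Four respondents answering three items; perfectly consistent responses
# yield alpha = 1.0, while unrelated items push alpha toward 0.
scores = [[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4]]
print(round(cronbach_alpha(scores), 2))  # 1.0
```

Alpha depends on both the average inter-item correlation and the number of items, which is one reason the rubric evaluates reliability per scale rather than per item.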


Validity
Validity describes the evidence that an assessment tool measures what it was supposed to measure. There are many different ways of checking validity. For screening measures, diagnostic accuracy and discriminative validity are probably the most useful ways of looking at validity. Unless otherwise specified, the validity scores and values come from studies done with a United States population sample. Here is a rubric for describing validity of test scores in the context of evidence-based assessment.
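Diagnostic accuracy reduces to simple ratios on a 2x2 table crossing the screener result with the reference diagnosis. A sketch of that arithmetic (the function and the counts are hypothetical, for illustration only):

```python
def diagnostic_accuracy(tp, fp, fn, tn):
    """Sensitivity, specificity, and diagnostic likelihood ratios
    from a 2x2 table (screener positive/negative vs. true diagnosis)."""
    sensitivity = tp / (tp + fn)                 # hit rate among true cases
    specificity = tn / (tn + fp)                 # correct rejections among non-cases
    dlr_plus = sensitivity / (1 - specificity)   # how much a positive result raises the odds
    dlr_minus = (1 - sensitivity) / specificity  # how much a negative result lowers the odds
    return sensitivity, specificity, dlr_plus, dlr_minus

# Hypothetical screening sample: 40 true positives, 10 false positives,
# 10 false negatives, 40 true negatives.
sens, spec, dlr_p, dlr_n = diagnostic_accuracy(40, 10, 10, 40)
print(round(sens, 2), round(spec, 2), round(dlr_p, 2), round(dlr_n, 2))  # 0.8 0.8 4.0 0.25
```

The likelihood ratios are often the most useful numbers for evidence-based assessment, because they can be applied directly to an individual patient's pretest odds.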

Development and history

 * Why was this instrument developed? What need did it meet?
 * What was the theoretical background behind this assessment? (e.g., it addresses the importance of 'negative cognitions', such as intrusive, inaccurate, or sustained thoughts)
 * How was the scale developed?
 * How are these questions reflected in applications of theories such as cognitive behavioral therapy (CBT)?
 * If there were previous versions, when were they published?
 * Discuss the theoretical ideas behind the changes

Impact

 * What was the impact of this assessment? How did it affect assessment in psychiatry, psychology, and other health care professions?
 * What can the assessment be used for in clinical settings? Can it be used to measure symptoms longitudinally? Developmentally?

Use in other populations

 * How widely has it been used? Has it been translated into different languages? Which languages?

Research

 * Any recent research done that is pertinent?

Limitations

 * If it is a self-report measure, what are the usual limitations of self-report?
 * State the status of this assessment (is it copyrighted? If free, link to it).

Example page

 * General Behavior Inventory

RCADS Section
This is a draft section for the RCADS.

https://www.childfirst.ucla.edu/wp-content/uploads/sites/163/2018/03/RCADSUsersGuide20150701.pdf

Reliability
The rubrics for evaluating reliability and validity are now on published pages in Wikiversity. You will evaluate the instrument based on these rubrics. Then, you will delete the code for the rubric and complete the table (located after the rubrics). Don't forget to adjust the headings once you copy/paste the table in!

An example using the table from the General Behavior Inventory is attached below.

Reliability
Reliability refers to whether the scores are reproducible. Unless otherwise specified, the reliability scores and values come from studies done with a United States population sample. Here is the rubric for evaluating the reliability of scores on a measure for the purpose of evidence-based assessment.


Validity
Validity describes the evidence that an assessment tool measures what it was supposed to measure. There are many different ways of checking validity. For screening measures, diagnostic accuracy and discriminative validity are probably the most useful ways of looking at validity. Unless otherwise specified, the validity scores and values come from studies done with a United States population sample. Here is a rubric for describing validity of test scores in the context of evidence-based assessment.

Development and history

 * Why was this instrument developed? What need did it meet?
 * What was the theoretical background behind this assessment? (e.g., it addresses the importance of 'negative cognitions', such as intrusive, inaccurate, or sustained thoughts)
 * How was the scale developed?
 * How are these questions reflected in applications of theories such as cognitive behavioral therapy (CBT)?
 * If there were previous versions, when were they published?
 * Discuss the theoretical ideas behind the changes

Impact

 * What was the impact of this assessment? How did it affect assessment in psychiatry, psychology, and other health care professions?
 * What can the assessment be used for in clinical settings? Can it be used to measure symptoms longitudinally? Developmentally?

Use in other populations

 * How widely has it been used? Has it been translated into different languages? Which languages?

Research

 * Any recent research done that is pertinent?

Limitations

 * If it is a self-report measure, what are the usual limitations of self-report?
 * State the status of this assessment (is it copyrighted? If free, link to it).

Example page

 * General Behavior Inventory

Elizabeth Suddreth
The Child PTSD Symptom Scale has been translated and validated for use in several other populations, including Spanish-speaking, Turkish, Israeli, and Nepali populations. While the Spanish CPSS showed good internal consistency, it did not show sufficient construct validity in comparison to the English version.

Researchers found that the Nepali version of the CPSS demonstrated moderate to good validity in comparison to the English version of the scale. Specifically, two items did not apply for Nepali children: avoidance of places/people and lack of interest. Elizabeth -- you could go ahead and add the citations for the specific studies for the Dutch, Turkish, etc., and I will play with packaging later. You and Julia could also work to merge the content each of you found. -EAY

Julia Whitfield
Cite the paper with Sophie.

Reliability
Reliability refers to whether the scores are reproducible. Not all of the different types of reliability apply to the way the CAGE is typically used. Internal consistency (whether all of the items measure the same construct) is not usually reported in studies of the CAGE; nor is inter-rater reliability (which would measure how similar people's responses were if the interview were repeated, or if different raters listened to the same interview).

Explanation with references

Validity
Validity describes the evidence that an assessment tool measures what it was supposed to measure. There are many different ways of checking validity. For screening measures such as the CAGE, diagnostic accuracy and discriminative validity are probably the most useful ways of looking at validity.

Rating: adequate, good, excellent, too good*. (Table from Youngstrom et al., extending Hunsley & Mash, 2008; * indicates a new construct or category.)

Rachel Peltzer
<<<Rachel -- this is a great start! First paragraph looks great for utility. Second paragraph may have parts that would fit in validity (above) with the details about accuracy. The underidentification part of paragraph 2 could be Utility, talking about why we need free tools to improve accuracy. Third paragraph could go into validity, too; I am thinking about ways of tweaking it for utility, too. Think of utility as a more "bottom line" -- do the risks (false positives, false negatives), benefits (it's much better than not using a rating scale), and costs (it's free!) make it clinically useful or not? >>>

Utility

The CPSS provides a symptom severity score by assessing PTSD symptoms in the three clusters defined by the DSM-IV: reexperiencing, avoidance, and arousal. As a self-report measure, it requires "minimal clinician and administration time", making it a practical tool for use in school, community, and research settings.

"Results suggest a large discrepancy between rates of probable PTSD identified through standardized assessment and during the emergency room psychiatric evaluation (28.6% vs. 2.2%). Upon discharge, those with probable PTSD were more likely than those without to be assigned a diagnosis of PTSD (45% vs. 7.1%), a comorbid diagnosis of major depressive disorder (30% vs. 14.3%), to be prescribed an antidepressant medication (52.5% vs. 33.7%), and to be prescribed more medications. The underidentification of trauma exposure and PTSD has important implications for the care of adolescents, given that accurate diagnosis is a prerequisite for providing effective care. Improved methods for identifying trauma-related problems in standard clinical practice are needed"

The CPSS assesses avoidance and changes in activities, which may not accurately reflect pathology; this could inflate PTSD prevalence estimates. In one study, the CPSS correctly classified 72.2% of children; nearly one quarter were misclassified as false positives, and 5.6% were misclassified as false negatives.
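Those percentages are simple shares of a single validation sample. A sketch of the arithmetic (the counts below are hypothetical, chosen only to reproduce the reported rates; they are not figures from the study):

```python
def classification_breakdown(correct, false_pos, false_neg):
    """Percent correctly classified, false-positive share, and
    false-negative share, from raw classification counts."""
    n = correct + false_pos + false_neg
    return (100 * correct / n, 100 * false_pos / n, 100 * false_neg / n)

# Hypothetical sample of 90 children: 65 classified correctly,
# 20 false positives, 5 false negatives.
acc, fp, fn = classification_breakdown(65, 20, 5)
print(round(acc, 1), round(fp, 1), round(fn, 1))  # 72.2 22.2 5.6
```

Separating false positives from false negatives matters clinically, because the costs of the two errors differ: a false positive triggers an unnecessary follow-up, while a false negative leaves a traumatized child unidentified.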


The Child PTSD Symptom Scale (CPSS) is a 26-item self-report measure that assesses PTSD diagnostic criteria and symptom severity in children ages 8 to 18. It includes 2 event items, 17 symptom items, and 7 functional impairment items. Symptom items are rated on a 4-point frequency scale (0 = "not at all" to 3 = "5 or more times a week").

Reliability
Reliability refers to whether the scores are reproducible. Not all of the different types of reliability apply to the way the CAGE is typically used. Internal consistency (whether all of the items measure the same construct) is not usually reported in studies of the CAGE; nor is inter-rater reliability (which would measure how similar people's responses were if the interview were repeated, or if different raters listened to the same interview).

Explanation with references

Validity
Validity describes the evidence that an assessment tool measures what it was supposed to measure. There are many different ways of checking validity. For screening measures such as the CAGE, diagnostic accuracy and discriminative validity are probably the most useful ways of looking at validity.

Rating: adequate, good, excellent, too good*. (Table from Youngstrom et al., extending Hunsley & Mash, 2008; * indicates a new construct or category.)

Goals and Publications
The Society of Clinical Child and Adolescent Psychology has three main goals. The AP-LS publishes the journal Law and Human Behavior and a newsletter entitled AP-LS News.

History
The American Psychology–Law Society (AP-LS) was founded at a meeting in San Francisco in September 1968 by Eric Dreikurs and Jay Ziskin. The society was created for forensic and clinical psychologists. The first newsletter was published in October 1968. The original constitution, published later that year, outlined the reasons for creating the society: to promote the study of law, to influence legislation and policy, and to promote the use of psychology in legal processes. A year after the San Francisco meeting, the AP-LS had 101 members, most of them clinical psychologists; nine of the original members were women. The group had a stronger focus on psychology than the Law and Society Association, which has similar goals but a broader focus.

A controversy arose in 1971, when founder Jay Ziskin wrote a book stating that psychological evidence often did not meet reasonable criteria and should not be used in a court of law. This statement sparked debate within the society and caused its popularity to decline for a while. After this, June Louin Tapp became president of the society.

In 1976, Bruce Sales became the society's president and helped refocus it on the field of psychology and law. Sales wanted the American Psychology–Law Society to be the driving force behind the field. Sales, along with Ronald Roesch, helped the group publish many works, including Psychology in the Legal Process, Perspectives in Law and Psychology, and the journal Law and Human Behavior.

In the 1980s, Florence Kaslow asked the group to help develop a certification for forensic psychologists, but the group was not interested. This led Kaslow to create the American Board of Forensic Psychology, which helped keep the American Psychology–Law Society and forensic psychology separate. Also in the 1980s, Division 41 of the APA began to discuss law and psychology, covering many of the same policy areas as the AP-LS. Therefore, in 1983, Division 41 and the AP-LS merged, under the agreement that Law and Human Behavior would be the journal for the group and that the biennial meetings would continue to be held. The "new AP-LS" allowed previous presidents to serve a second term, including Bruce Sales, who was the first president of the merged society.

Specialty Guidelines
In 1991, the Committee on Ethical Guidelines for Forensic Psychologists began working to establish rules for forensic psychologists to follow in the courtroom. In 1992, the committee released "Specialty Guidelines" for forensic psychologists, in addition to the Code of Conduct they were already required to follow. Also in the 1990s, the society established the Committee on Careers and Education to help students find training programs to become psychologists in the legal system. In 1995, it held a conference to discuss education at the undergraduate and postdoctoral levels, how to include legal psychology courses in the curriculum, and how to offer students practical experience. The AP-LS also provides grants and funding for students who are interested in studying law-related psychology.

The Specialty Guidelines for Forensic Psychologists were first published in 1991. They encourage professional, high-quality, and systematic work by forensic psychologists in the legal system and on behalf of those they serve. They are the only APA-approved guidelines for a specific area of practice. The guidelines cover 11 points: responsibilities; competence; diligence; relationships; fees; informed consent, notification, and assent; conflicts in practice; privacy, confidentiality, and privilege; methods and procedures; assessment; and professional and other public communications.

After an extensive revision process, the Specialty Guidelines for Forensic Psychology were updated in 2013.

The Specialty Guidelines may be found on the APA’s website.

Membership
The AP-LS invites APA members, graduate and undergraduate students, and people in related fields to join the society. Members primarily have an interest in issues at the intersection of psychology and law. Many are also members of the American Psychological Association, though this is not a requirement. Members gain access to Law and Human Behavior and the AP-LS newsletter.

Awards and Honors
The AP-LS offers many grants and other forms of aid for undergraduates, graduate students, early-career professionals, and researchers. In addition to grants, many awards are given out yearly.


 * The AP-LS Award for Best Undergraduate Paper: This award is given to an undergraduate student whose paper focuses on psychology and law.
 * Dissertation Awards: These awards are distributed for scientific research relevant to the study of psychology and law. Winners may present their research at the AP-LS annual conference.
 * The Saleem Shah Award: This award is also sponsored by the American Academy of Forensic Psychology. It is awarded for early career excellence and contributions to the field.
 * Outstanding Teaching and Mentoring in the Field of Psychology and Law: This is an award to recognize excellence in teaching of subjects related to psychology and law.
 * AP-LS Award for Distinguished Contribution to Psychology and Law: This award is not given on a regular basis. It is reserved for unusual excellence and contributions to the field so important that they merit special commendation.
 * The American Psychology-Law Society Book Award: This is awarded to a book each year to recognize outstanding scholarship in the field of psychology and law.
 * AP-LS/AACP Award for Contributions to Correctional Psychology: This is also given by the American Association for Correctional Psychology. This award is given to professionals who have made an impact on the field of correctional psychology.

Publications

 * Law and Human Behavior: This journal is published six times a year. It discusses issues that arise in law and psychology, including the legal process, the legal system, and the relationships between these and human behavior.
 * AP-LS News: This newsletter is published three times each year. It provides updates on society activities, important ongoing legal cases, new publications, and emerging topics in the field of psychology and the law.