standardized_testing   418

How to Fairly Sort Students for College – People's Policy Project
"Any method of academic evaluation you use is going to sort affluent kids higher than non-affluent kids on average. This is because affluent kids acquire a higher average level of academic competency than non-affluent kids. To believe that all groups have equal average academic competency at age 18 is to believe that poverty has no negative effects on learning, which is clearly wrong."

--- This sounds compelling.
education  inequality  standardized_testing 
april 2018 by cshalizi
PsyArXiv Preprints | How much does education improve intelligence? A meta-analysis
Intelligence test scores and educational duration are positively correlated. This correlation can be interpreted in two ways: students with greater propensity for intelligence go on to complete more education, or a longer education increases intelligence. We meta-analysed three categories of quasi-experimental studies of educational effects on intelligence: those estimating education-intelligence associations after controlling for earlier intelligence, those using compulsory schooling policy changes as instrumental variables, and those using regression-discontinuity designs on school-entry age cutoffs. Across 142 effect sizes from 42 datasets involving over 600,000 participants, we found consistent evidence for beneficial effects of education on cognitive abilities, of approximately 1 to 5 IQ points for an additional year of education. Moderator analyses indicated that the effects persisted across the lifespan, and were present on all broad categories of cognitive ability studied. Education appears to be the most consistent, robust, and durable method yet to be identified for raising intelligence.
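--- The pooling step behind a meta-analysis like this can be sketched in a few lines. A minimal fixed-effect (inverse-variance) pool; the study effects and variances below are invented for illustration, not taken from the paper:

```python
# Minimal fixed-effect meta-analysis: inverse-variance weighted pooling.
# Effects and variances below are hypothetical, NOT data from the paper.

def pool_fixed_effect(effects, variances):
    """Inverse-variance weighted mean of effect sizes, plus its standard error."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = (1.0 / sum(weights)) ** 0.5
    return pooled, se

# Three made-up estimates of IQ points gained per extra year of schooling.
effects = [1.2, 3.5, 2.0]     # point estimates
variances = [0.25, 1.0, 0.5]  # squared standard errors

pooled, se = pool_fixed_effect(effects, variances)
print(f"pooled effect = {pooled:.2f} IQ points (SE = {se:.2f})")
```

More precise studies (smaller variance) pull the pooled estimate toward themselves, which is the whole point of weighting by inverse variance.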
education  psychology  standardized_testing  meta-analysis  via:hardsci 
november 2017 by rvenkat
PsycNET Record Display - PsycNET
This meta-analysis examined the validity of the Graduate Record Examinations (GRE) and undergraduate grade point average (UGPA) as predictors of graduate school performance. The study included samples from multiple disciplines, considered different criterion measures, and corrected for statistical artifacts. Data from 1,753 independent samples were included in the meta-analysis, yielding 6,589 correlations for 8 different criteria and 82,659 graduate students. The results indicated that the GRE and UGPA are generalizably valid predictors of graduate grade point average, 1st-year graduate grade point average, comprehensive examination scores, publication citation counts, and faculty ratings. GRE correlations with degree attainment and research productivity were consistently positive; however, some lower 90% credibility intervals included 0. Subject Tests tended to be better predictors than the Verbal, Quantitative, and Analytical tests. (PsycINFO Database Record (c) 2016 APA, all rights reserved)
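--- "Corrected for statistical artifacts" here includes the classic correction for attenuation: an observed validity coefficient is shrunk by measurement error in both the predictor and the criterion. A minimal sketch with hypothetical numbers (the paper also corrects for range restriction and sampling error):

```python
# Correction for attenuation (Spearman, 1904). All numbers below are
# hypothetical illustrations, not estimates from the study.

def disattenuate(r_xy, rel_x, rel_y):
    """Estimate the true-score correlation from an observed one,
    given the reliabilities of the two measures."""
    return r_xy / (rel_x * rel_y) ** 0.5

# An observed GRE-GPA correlation of .30 with reliabilities of .90 and .80:
r_true = disattenuate(0.30, 0.90, 0.80)
print(f"corrected r = {r_true:.3f}")
```

Because reliabilities are at most 1, the corrected coefficient is always at least as large as the observed one.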
education  psychology  standardized_testing  meta-analysis  via:hardsci 
november 2017 by rvenkat
The Validity of the Graduate Record Examination for Master’s and Doctoral Programs: A Meta-Analytic Investigation | Educational and Psychological Measurement - Nathan R. Kuncel, Serena Wee, Lauren Serafin, Sarah A. Hezlett, 2010
Extensive research has examined the effectiveness of admissions tests for use in higher education. What has gone unexamined is the extent to which tests are similarly effective for predicting performance at both the master’s and doctoral levels. This study empirically synthesizes previous studies to investigate whether or not the Graduate Record Examination (GRE) predicts the performance of students in master’s programs as well as the performance of doctoral students. Across nearly 100 studies and 10,000 students, this study found that GRE scores predict first year grade point average (GPA), graduate GPA, and faculty ratings well for both master’s and doctoral students, with differences that ranged from small to zero.
education  psychology  standardized_testing  meta-analysis  via:hardsci 
november 2017 by rvenkat
Test Score Measurement and the Black-White Test Score Gap | The Review of Economics and Statistics | MIT Press Journals
"Research as to the size of the black-white test score gap often comes to contradictory conclusions. Recent literature has affirmed that the source of these contradictions and other controversies in education economics may be due to the fact that test scores contain only ordinal information. In this paper, I propose a normalization of test scores that is invariant to monotonic transformations. Under fairly weak assumptions, this metric has interval properties and thus solves the ordinality problem. The measure can serve as a valuable robustness check to ensure that any results are not simply statistical artifacts from the choice of scale."
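--- The ordinality problem is easy to demonstrate: any strictly increasing rescaling of scores preserves every student's rank, yet it can change, or even create, a mean gap between groups. A toy example with invented scores:

```python
# Why ordinal test scores are a problem: a monotone transformation keeps
# all rankings intact but can alter group mean comparisons.
# Scores below are made up for illustration.

def mean(xs):
    return sum(xs) / len(xs)

group_a = [1, 2, 9]
group_b = [3, 4, 5]

# On the raw scale the two groups look identical on average.
print(mean(group_a), mean(group_b))   # 4.0 vs 4.0

# Apply a strictly increasing transformation (square root).
sqrt_a = [x ** 0.5 for x in group_a]
sqrt_b = [x ** 0.5 for x in group_b]
print(mean(sqrt_a), mean(sqrt_b))     # ~1.80 vs ~1.99: a gap appears
```

Since the choice of scale is arbitrary for ordinal data, any gap estimate that is not invariant to such transformations may be a statistical artifact, which is the problem the paper's normalization is meant to solve.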
mental_testing  standardized_testing  re:g_paper  to:NB 
october 2017 by cshalizi
The Testing Charade: Pretending to Make Schools Better, Koretz
"For decades we’ve been studying, experimenting with, and wrangling over different approaches to improving public education, and there’s still little consensus on what works, and what to do. The one thing people seem to agree on, however, is that schools need to be held accountable—we need to know whether what they’re doing is actually working. But what does that mean in practice?
"High-stakes tests. Lots of them. And that has become a major problem. Daniel Koretz, one of the nation’s foremost experts on educational testing, argues in The Testing Charade that the whole idea of test-based accountability has failed—it has increasingly become an end in itself, harming students and corrupting the very ideals of teaching. In this powerful polemic, built on unimpeachable evidence and rooted in decades of experience with educational testing, Koretz calls out high-stakes testing as a sham, a false idol that is ripe for manipulation and shows little evidence of leading to educational improvement. Rather than setting up incentives to divert instructional time to pointless test prep, he argues, we need to measure what matters, and measure it in multiple ways—not just via standardized tests.
"Right now, we’re lying to ourselves about whether our children are learning. And the longer we accept that lie, the more damage we do. It’s time to end our blind reliance on high-stakes tests. With The Testing Charade, Daniel Koretz insists that we face the facts and change course, and he gives us a blueprint for doing better. "
to:NB  books:noted  education  social_measurement  standardized_testing  mental_testing  re:g_paper 
september 2017 by cshalizi
Harvard Law, Moving to Diversify Applicant Pool, Will Accept GRE Scores - The New York Times

Harvard Law School, moving to open its doors to a larger, more diverse pool of applicants, said on Wednesday that it would accept the graduate record examination, known as the GRE, for the admission of students entering its fall 2018 class.

The law school, whose alumni include senators, chief executives, Chief Justice John G. Roberts Jr. and President Barack Obama, is the second accredited law school in the United States to accept the GRE for admission. It follows the University of Arizona James E. Rogers College of Law, which made the change a year ago.

At the time, Arizona’s decision provoked a heated debate in the legal profession over whether the Law School Admission Test, or LSAT, which the profession has long supported, should be relied on as the single valid predictor of law school success.

Since Arizona’s move, around 150 law school deans, including Martha Minow of Harvard Law, have expressed support for the change. Now Harvard Law is taking the same step. The school said it would start a pilot program in the fall, when students begin submitting applications for the three-year juris doctor program that begins in 2018.

The change “will encourage more students in the United States and internationally, from a greater range of disciplines, to apply,” said Jessica Soban, assistant dean and chief admissions officer. Applicants who want to can still submit LSAT scores.
Harvard  law_schools  diversity  applicants  standardized_testing  HLS  pilot_programs 
march 2017 by jerryking
What grades and achievement tests measure
"Intelligence quotient (IQ), grades, and scores on achievement tests are widely used as measures of cognition, but the correlations among them are far from perfect. This paper uses a variety of datasets to show that personality and IQ predict grades and scores on achievement tests. Personality is relatively more important in predicting grades than scores on achievement tests. IQ is relatively more important in predicting scores on achievement tests. Personality is generally more predictive than IQ on a variety of important life outcomes. Both grades and achievement tests are substantially better predictors of important life outcomes than IQ. The reason is that both capture personality traits that have independent predictive power beyond that of IQ."
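--- The incremental-validity logic here can be illustrated with nested regressions: if personality carries predictive power beyond IQ, adding it to the model raises R². A sketch on synthetic data (all weights and variable names are invented for illustration):

```python
import numpy as np

# Incremental validity via nested least-squares fits on synthetic data.
# The data-generating weights (0.4, 0.5) are arbitrary assumptions.

rng = np.random.default_rng(0)
n = 500
iq = rng.normal(size=n)
conscientiousness = rng.normal(size=n)
# Grades depend on both traits plus noise (hypothetical relationship).
grades = 0.4 * iq + 0.5 * conscientiousness + rng.normal(size=n)

def r_squared(X, y):
    """R^2 of an OLS fit with intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

r2_iq = r_squared(iq.reshape(-1, 1), grades)
r2_both = r_squared(np.column_stack([iq, conscientiousness]), grades)
print(f"R^2 (IQ only) = {r2_iq:.3f}, R^2 (IQ + personality) = {r2_both:.3f}")
```

The gap between the two R² values is the incremental validity of personality over IQ, which is the quantity the abstract's claim turns on.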

--- Contributed, so who knows?
to:NB  mental_testing  psychometrics  iq  standardized_testing  re:g_paper 
november 2016 by cshalizi
Standardized Assessments of College Learning
"In a new report released with New America’s Higher Education Program, “Standardized Assessments of College Learning: Past and Future,” Fredrik deBoer, a scholar and lecturer at Purdue University, looks at current research regarding assessment policies and outcomes — while making recommendations for developing future assessment for American higher education.
"“Effective assessment of student learning in any context represents a significant challenge, and controversies persist at all levels of education about which methods of data collection and analysis are most effective and appropriate,” writes deBoer. “Some fear that the creation of a widespread testing system at the college level will lead to teaching to the test and invite test fraud.”"
to:NB  to_read  education  academia  standardized_testing 
april 2016 by cshalizi
Parents: You Have the Right to Opt Out! | Diane Ravitch's blog
Here is a short summary. The tests now in use are tied to a cut score (passing mark) that guarantees that about 70% of all students will fail. The tests will be especially harmful to students with disabilities and students who are English language learners. But the harm extends far beyond those students.


The tests do not provide any useful information. They will tell you whether your child is a 1, 2, 3, or 4. What good is that? They will tell you your child's percentile rank. What good is that? The test will not help your child or her teacher. It will label your school as a failure because of low scores (it is designed to produce low pass rates). If your school is labeled a failure, it will be set up to be taken over by a charter chain. You will lose your community public school because of the data produced by the test. The testing companies will make money. Your child will gain nothing. In fact, while your child takes the test online, the testing company will gather data about your child that will be mined for developing products.
education  standardized_testing 
april 2016 by sctonkin
Social-Emotional Testing
A placeholder while I find time to organize my thoughts on this.

Angela Duckworth (works supposedly on the psychology of grit and self-control)
education  measurement  policy  standardized_testing  from notes
march 2016 by rvenkat
High-Stakes Schooling: What We Can Learn from Japan's Experiences with Testing, Accountability, and Education Reform, Bjork
"If there is one thing that describes the trajectory of American education, it is this: more high-stakes testing. In the United States, the debates surrounding this trajectory can be so fierce that it feels like we are in uncharted waters. As Christopher Bjork reminds us in this study, however, we are not the first to make testing so central to education: Japan has been doing it for decades. Drawing on Japan’s experiences with testing, overtesting, and recent reforms to relax educational pressures, he sheds light on the best path forward for US schools.
"Bjork asks a variety of important questions related to testing and reform: Does testing overburden students? Does it impede innovation and encourage conformity? Can a system anchored by examination be reshaped to nurture creativity and curiosity? How should any reforms be implemented by teachers? Each chapter explores questions like these with careful attention to the actual effects policies have had on schools in Japan and other Asian settings, and each draws direct parallels to issues that US schools currently face. Offering a wake-up call for American education, Bjork ultimately cautions that the accountability-driven practice of standardized testing might very well exacerbate the precise problems it is trying to solve. "
in_NB  books:noted  education  standardized_testing  japan 
january 2016 by cshalizi
Opting Out | FairTest
“Opting out” of testing is a powerful way to resist No Child Left Behind and the way standardized testing distorts and corrupts K-12 classrooms. Growing numbers of parents, teachers and students are questioning the value of federal, state and district testing, saying they want to exercise the right to opt out, boycott or refuse.
standardized_testing  education  unitedstates 
may 2015 by sctonkin
[no title]
The first review seems the most compelling to me: the methodology sounds really dodgy (50% drop-out!), and, crucially, unreviewable.
education  academia  standardized_testing  via:?  have_read 
march 2015 by cshalizi
Report on the Pan-Canadian Assessment of Science, Reading, and Mathematics
The Pan-Canadian Assessment Program (PCAP) is the continuation of CMEC’s commitment to inform Canadians about how well their education systems are meeting the needs of students and society. The information gained from this pan-Canadian assessment provides ministers of education with a basis for examining the curriculum and other aspects of their school systems.

School programs and curricula vary from jurisdiction to jurisdiction across the country, so comparing results from these programs is a complex task. However, young Canadians in different jurisdictions learn many similar skills in reading, mathematics, and science. PCAP has been designed to determine whether students across Canada reach similar levels of performance in these core disciplines at about the same age, and to complement existing jurisdictional assessments with comparative Canada-wide data on the achievement levels attained by Grade 8/Secondary II students across the country.
report  PDF  Canada  standardized_testing  education_research 
november 2014 by pathways_to_education


