
Tuesday, April 10, 9:00 AM – 12:00 PM

 

Opening Plenary Session

 

Introductory Remarks

Andrew Maul, University of California, Santa Barbara

 

Measuring the Health Needs of Young Adults with Mental Illness & Substance Use Disorders Using an Applied Mixed-Methods Approach to Improve Clinical Practice

Skye Barbic, University of British Columbia

Steve Mathias, University of British Columbia

 

Applying the Mixture Rasch Model to the Middle Grades Self-Efficacy to Teach Statistics (SETS-MS) Instrument

Leigh M. Harrell-Williams, University of Memphis

Jennifer N. Lovett, Middle Tennessee State University

M. Alejandra Sorto, Texas State University

Rebecca L. Pierce, Ball State University

Lawrence M. Lesser, The University of Texas at El Paso

Teri J. Murphy, University of Cincinnati

 

Measuring Reading Strategy Use in a Multidimensional, Multilingual Context

Daniel Katz, University of California, Santa Barbara

Anthony Clairmont, University of California, Santa Barbara

Diana Arya, University of California, Santa Barbara

Andrew Maul, University of California, Santa Barbara

 

Constructs Behind the College Information Level of High School Students: The Chilean Case

Maria Veronica Santelices, Pontificia Universidad Católica de Chile

Ximena Catalán, Pontificia Universidad Católica de Chile

Magdalena Zarhi, Universidad Diego Portales

Paulina Perez, Reston, Virginia

 

Item Difficulty Modeling of Reading Comprehension Items Using Rasch Latent Regression Linear Logistic Test Models

Yukie Toyama, Graduate School of Education, University of California, Berkeley

 

Guttman Errors for Raters: Exploring Rater Fit Using Adjacent-Categories Mokken Scale Analysis

Stefanie A. Wind, The University of Alabama

 

 

Tuesday, April 10, 12:00 PM – 1:00 PM, Poster Session 

Poster Session

 

Using the MIMIC method with an iterative process to assess DIF with collected nuisance grouping variables

Chi-Chen Chen, Institute of Education, National Sun Yat-sen University

Hsiu-Yi Chao, Department of Psychology, National Chung Cheng University

Jyun-Hong Chen, Office of Institutional Research, National Sun Yat-sen University

 

Ratings, Latent Traits, and Evaluations of Pedagogical Intervention: An Application of the Many-Facet Rasch Measurement Model

T. Ryan Duckett, University of Toledo, Judith Herb College of Education

Gale A. Mentzer, Acumen Research and Evaluation, LLC

Svetlana Beltyukova, University of Toledo, Judith Herb College of Education

 

Graphical Methods for Validating Survey Instruments

P. Cristian Gugiu, Department of Educational Studies, College of Education and Human Ecology, The Ohio State University

 

Digital Competence: Investigations of the alignment between conceptual perspectives and empirical data

Fazilat Siddiq, Nordic Institute for Studies in Innovation, Research and Education

Perman Gochyyev, University of California, Berkeley

 

Using Scaffolds to Measure Optimal Performance in Preschool Literacy

Julia Volkman, Harvard University Extension School

 

Creating measurements of teachers' experiences

Emily Winchip, University of Nottingham

 

Examining Differential Item Functioning Related to SES in the Torrance Tests of Creative Thinking Figural Form A

Süreyya Yörük, Marmara University

 

Tuesday, April 10, 1:00 PM – 2:30 PM, Symposium A1

 

Rasch in International Development

 

Measuring Genuine Progress By Scaling Economic Indicators to Think Global / Act Local

William P. Fisher, Jr., BEAR Center, University of California, Berkeley

 

Using Rasch Analysis to Create an Index of Socio-economic Status (SES) for Use in Low and Middle Income Countries: An Example From Indonesia

Nancy E. Mayo, Research Institute of the McGill University Health Center

Susan C. Scott, Research Institute of the McGill University Health Center

Mónica Ruiz-Casares, Department of Psychiatry, McGill University; SHERPA University Institute, CIUSSS du Centre-Ouest-de-l'Île-de-Montréal

 

Changes in Child Nutrition in Liberian Counties, 2007–2013: A Rasch Analysis

R. Jerome Anderson, ORISE Fellow, Walter Reed Army Institute of Research

 

Examining differential item functioning in the Household Food Insecurity Scale: Does participation in SNAP affect measurement invariance?

Victoria Tanaka, The University of Georgia

George Engelhard, Jr., The University of Georgia

Matthew P. Rabbitt, Economic Research Service, U.S. Department of Agriculture

 

 

 

 

Tuesday, April 10, 1:00 PM – 2:30 PM, Paper Session A2

 

Facet Models for Raters

 

Creating and Visualizing Incomplete Rating Designs Intended for Many-Facet Rasch Model Analysis

Mary R. McEwen, Brigham Young University

Richard R. Sudweeks, Brigham Young University

 

The Effects of Incomplete Rating Designs on Results from Many-Facet Rasch Model Analyses

Mary R. McEwen, Brigham Young University

Richard R. Sudweeks, Brigham Young University

 

Using a Simple Two-Facet Rasch Model to Find Hawks and Doves in a One-Examiner OSCE Setting

Karen Coetzee, Touchstone Institute

Sandra Monteiro, McMaster University

Debra Sibbald, Touchstone Institute

 

Exploring the Psychometric Properties of the Mind Map Scoring Rubric Using the Many-Facet Rasch Model

Hua Cheng, The University of Alabama

 

 

 

Tuesday, April 10, 1:00 PM – 2:30 PM, Paper Session A3

 

Fresh Perspectives on the Rasch Model

 

Metrology for the Social Sciences: A case for Rasch Measurement Theory (not Rasch Analysis)

Stefan J. Cano, Modus Outcomes

Leslie R. Pendrill, RI.SE

Jeanette Melin, RI.SE

Theresa Köbe, Department of Neurology, Charité University Medicine Berlin

William P. Fisher, Jr., BEAR Center, University of California, Berkeley

A. Jackson Stenner, MetaMetrics, Inc. and The University of North Carolina, Chapel Hill

 

Rasch versus Lord: two paradigms to understand the Rasch model

Ernesto San Martín, Pontificia Universidad Católica de Chile

 

The Gaussian as a special case of the Rasch measurement distribution in which the role of the instrument is explicit

David Andrich, Graduate School of Education, The University of Western Australia

 

Two educational measurements with units analogous to those in the natural sciences

David Andrich, Graduate School of Education, The University of Western Australia

 

 

 

 

Tuesday, April 10, 2:30 PM – 4:00 PM, Paper Session B1

 

Models and Reality

 

Measurement across the sciences: Objectivity

Andrew Maul, University of California, Santa Barbara

 

Measurement across the sciences: Intersubjectivity

Mark Wilson, BEAR Center, University of California, Berkeley

 

The Rasch paradigm, model to reality thinking, and mechanical objectivity

Joshua McGrane, Oxford University

 

Realism, measurement, and the latent variable model in psychometric research

Trisha Nowland, Macquarie University

 

 

 

 

Tuesday, April 10, 2:30 PM – 4:00 PM, Paper Session B2

 

Longitudinal Measurement of Health and Well-Being Using Polytomous Rasch Models

 

A Rasch Analysis of the Pittsburgh Sleep Quality Index in Notre Dame Health & Well-Being Data Across Ten Years

Laura E. Baird, University of Virginia

Karen M. Schmidt, University of Virginia

Cindy S. Bergeman, University of Notre Dame

 

You Know What They Say About When You Assume: Assessing the Robustness of Invariant Comparisons in Longitudinal CES-D Responses

Austin T. Boyd, University of Virginia

Karen M. Schmidt, University of Virginia

Cindy S. Bergeman, University of Notre Dame

 

A Comparison of Longitudinal Rasch Anchoring Methods Utilizing the Geriatric Depression Scale

Tara L. Valladares, University of Virginia

Karen M. Schmidt, University of Virginia

Cindy S. Bergeman, University of Notre Dame

 

Discovering Subscales of the Perceived Stress Scale and Longitudinal Anchoring Using Rasch Models

Leah S. Brady, University of Virginia

Gustav R. Sjobeck, University of Virginia

Lily Zamanali, University of Virginia

Karen M. Schmidt, University of Virginia

Cindy S. Bergeman, University of Notre Dame

 

Analyzing the Stability of Perceived Control in Older Adults Over Five Years Using Polytomous Rasch Models

Lily Zamanali, University of Virginia

Chelsea N. Dickens, Pennsylvania State University

Karen M. Schmidt, University of Virginia

Cindy S. Bergeman, University of Notre Dame

 

 

 

 

Tuesday, April 10, 2:30 PM – 4:00 PM, Paper Session B3

 
Surveys in the Schools

 

Measuring Teachers’ Enactment of Practice for Equity: A Rasch Model and Facet Theory-based Approach

Wen-Chia C. Chang, Lynch School of Education, Boston College

Larry H. Ludlow, Lynch School of Education, Boston College

Marilyn Cochran-Smith, Lynch School of Education, Boston College

Lexie Grudnoff, Faculty of Education and Social Work, The University of Auckland

Fiona Ell, Faculty of Education and Social Work, The University of Auckland

Mavis Haigh, Faculty of Education and Social Work, The University of Auckland

Mary Hill, Faculty of Education and Social Work, The University of Auckland

 

Construct Refinement and Item Development: A Journey Guided by the Rasch Partial Credit Model

Amanda E. Ferster, University of Georgia

A. Adrienne Walker, Gwinnett County Public Schools

 

Validating a Rasch-based instrument to measure school climate at large scale: Can it meaningfully and reliably differentiate schools when policy and practical considerations impacted best practice?

Shelagh Peoples, Massachusetts Department of Elementary and Secondary Education

 

The Effect of a Science Enrichment Program in an Under-resourced School: Changes in Attitudes and Scores

R. Jerome Anderson, ORISE Fellow, Walter Reed Army Institute of Research

 

 

 

Wednesday, April 11, 9:00 AM – 10:00 AM

 

Invited Speaker

 

An Urgent Educational-Assessment Question and a Proposed Answer

Robert J. Mislevy, ETS

 

 

Suppose we hold a situative, sociocognitive perspective of the nature of human capabilities and how they develop. Suppose we also think that the practices, the concepts, and the modeling tools of educational measurement, which evolved under trait and behavioral perspectives, can nevertheless hold value for practical work in educational assessment. How might we conceive of educational measurement models such that both our interpretations and uses of them are consistent with the sociocognitive perspective? 

 

I propose an articulation through a sociocognitively-framed, argument-structured, constructivist-realist, subjectivist-Bayesian variant of latent-variable measurement modeling. The presentation will parse this admittedly unwieldy phrase. I call particular attention to implications for the interpretation of latent variables, probabilities, validity, and measurement.  

 

 

 

Wednesday, April 11, 10:30 AM – 11:45 AM, Paper Session C1

 

Psychological and Social Measurement: The Career and Contributions of Benjamin D. Wright

 

Cogitations on invariant measurement: A memo to Ben Wright on the perspectives of Rasch and Guttman

George Engelhard, Jr., University of Georgia

 

Ben Wright: Quotable and quote-provoking

Stefanie Wind, The University of Alabama

 

The value of subjectivity in sorting out competing claims about what matters in measurement

William P. Fisher, Jr., BEAR Center, University of California, Berkeley

 

Things I learned from Ben

Mark Wilson, BEAR Center, University of California, Berkeley

 

 

 

Wednesday, April 11, 10:30 AM – 11:45 AM, Paper Session C2

 

Health Outcomes

 

A Mixed Methods Psychometrics Framework to Reduce the Uncertainty in the Measurement of Patient-Centered Outcomes

Antoine Regnault, Modus Outcomes, Lyon, France

Sophie Cleanthous, Modus Outcomes, Letchworth Garden City, UK

Jessica T Markowitz, Modus Outcomes, Cambridge, USA

Patrick Marquis, Modus Outcomes, Cambridge, USA

Stefan J Cano, Modus Outcomes, Letchworth Garden City, UK

 

 

Using the Rasch Model to Evaluate the Family Support for Exercise Survey for Bariatric Surgery Patients

Jennifer Cotto, Human Sciences, The Ohio State University

Keely J. Pratt, Human Sciences, The Ohio State University

 

Oral health-related quality of life in children and adolescents assessed with the German version of the Child Perceptions Questionnaire (CPQ11-14): Can our self-reported instruments be more targeted?

Maisa Omara, Section for Outcomes Research, Center for Medical Statistics, Informatics, and Intelligent Systems, Medical University of Vienna

Tanja Stamm, Section for Outcomes Research, Center for Medical Statistics, Informatics, and Intelligent Systems, Medical University of Vienna

Maren Boecker, RWTH Aachen University, Institute for Medical Psychology and Medical Sociology

Valentin Ritschl, Section for Outcomes Research, Center for Medical Statistics, Informatics, and Intelligent Systems, Medical University of Vienna

Erika Mosor, Section for Outcomes Research, Center for Medical Statistics, Informatics, and Intelligent Systems, Medical University of Vienna

Thomas Salzberger, Institute for Statistics and Mathematics, Vienna University of Economics and Business

Christian Hirsch, Department of Paediatric Dentistry, Department of Head Medicine and Oral Health, University of Leipzig

Katrin Bekes, Department of Paediatric Dentistry, School of Dentistry, Medical University of Vienna

 

How is functional impairment linked to the measurement of mental health symptoms among adolescents? A gender perspective

Curt Hagquist, Karlstad University, Centre for Research on Child and Adolescent Mental Health

 

 

 

Wednesday, April 11, 10:30 AM – 11:45 AM, Paper Session C3

 

Polytomous Models

 

Does the threshold distribution in polytomous items matter in computer adaptive testing?

Thomas Salzberger, Vienna University of Economics and Business, Institute for Statistics and Mathematics

Mike Horton, University of Leeds, Institute of Rheumatic and Musculoskeletal Medicine

 

The Method of Successive Dichotomization: A new approach to estimating polytomous Rasch model parameters

Chris Bradley, Johns Hopkins School of Medicine

Robert W. Massof, Johns Hopkins School of Medicine

 

Evaluation of person and item fit of simulated rating scale data to polytomous Rasch models

Robert W. Massof, Johns Hopkins University

Chris Bradley, Johns Hopkins University

 

Polytomous Item Explanatory IRT Models with Random Item Effects

Jinho Kim, Graduate School of Education, University of California, Berkeley

Mark Wilson, Graduate School of Education, University of California, Berkeley

 

 

 

Wednesday, April 11, 1:00 PM – 2:15 PM, Paper Session D1

 

Validity and Fairness

 

To validate or not to validate the internal structure of connectedness, empowerment, and meaning scales: The meaning and results of a multi-dimensional item response model study

Brent Duckor, San Jose State University

Joshua Sussman, University of California, Berkeley

Jason Sellars, University of California, Berkeley

 

Increasing Test Validity by Examining for Cross-Cultural Differential Item Functioning (DIF)

Xian Wu, University of Kentucky

Rongxiu Wu, University of Kentucky

Michael Peabody, American Board of Family Medicine

Thomas O'Neill, American Board of Family Medicine

 

Differential Item Functioning (DIF) for special education populations in the Desired Results Developmental Profile Assessment: The impact of DIF and what we can (and should) do about it

Joshua Sussman, University of California, Berkeley

 

Examining the Consequential Validity of the edTPA: Perspectives of Preservice Teachers

Nadia Behizadeh, Georgia State University

Adrian Neely, Georgia State University

 

 

 

Wednesday, April 11, 1:00 PM – 2:15 PM, Symposium D2

 

Raters and Fit

 

Addressing Scoring Challenges in a New Era of Integrated Writing Assessments

Kevin Raczynski, The University of Georgia

Jue Wang, The University of Georgia

George Engelhard, Jr., The University of Georgia

Allan Cohen, The University of Georgia

 

Using Unfolding Models to Evaluate Ratings in Rater-Mediated Assessments

Jue Wang, The University of Georgia

George Engelhard, Jr., The University of Georgia

 

Exploring the correspondence between traditional score resolution methods and person fit indices in rater-mediated writing assessments

Stefanie A. Wind, The University of Alabama

A. Adrienne Walker, Gwinnett County Public Schools

 

Investigating Person Fit using Empirical Confidence Bands

Jeremy Kyle Jennings, National Board of Examiners in Optometry

George Engelhard, Jr., University of Georgia

 

 

 

 

Wednesday, April 11, 1:00 PM – 2:15 PM, Paper Session D3

 

Meta-science

 

Motivations for invalid responses: Consequences for validity

Anthony Clairmont, University of California, Santa Barbara

Melissa Gordon Wolf, University of California, Santa Barbara

Andrew Maul, University of California, Santa Barbara

 

Do Many Psychological Studies Fail to be Replicated Because of Too Few Items? A Monte Carlo Simulation Study

Jinho Kim, Graduate School of Education, University of California, Berkeley

Weeraphat Suksiri, Graduate School of Education, University of California, Berkeley

Shruti Bathia, Graduate School of Education, University of California, Berkeley

James M Mason, Graduate School of Education, University of California, Berkeley

Mark Wilson, Graduate School of Education, University of California, Berkeley

 

Instrument validation in the absence of traditional constructs

Melissa Gordon Wolf, University of California, Santa Barbara

 

How to construct concepts in psychology? A proposal inspired by Spearman (1904)

Trinidad González, Pontificia Universidad Católica de Chile

Ernesto San Martín, Pontificia Universidad Católica de Chile

 

 

 

Wednesday, April 11, 2:30 PM – 3:30 PM, Paper Session E1

 

Multidimensionality

 

Derived measures model for obtaining composite and dimension scores

Perman Gochyyev, University of California, Berkeley

Mark Wilson, University of California, Berkeley

 

Application of the Multidimensional Random Coefficients Multinomial Logit Model to Online Testing

Daeryong Seo, Pearson

Se-Kang Kim, Fordham University

 

Toward an Objectivity Statistic

Mark H. Moulton, Educational Data Systems

 

 

 

 

Wednesday, April 11, 2:30 PM – 3:30 PM, Paper Session E2

 

Growth

 

Demonstrating the equivalence of sequential and simultaneous equating of a large-scale vertically equated test

Ida Marais, The University of Western Australia

David Andrich, The University of Western Australia

 

Evaluating item fit in the presence of learning

Ben Stenhaug, Stanford University

Ben Domingue, Stanford University

 

Visualizing Location and Growth: Design Principles for Person-Item Maps

Derek Briggs, University of Colorado Boulder

 

 

 

Wednesday, April 11, 2:30 PM – 3:30 PM, Paper Session E3

 

Psychometric Issues

 

Disentangling guessing from discrimination in multiple choice items using the Rasch measurement model

Sonia Sappl, The University of Western Australia

David Andrich, The University of Western Australia

 

Equivalence of two methods for equating scores between two tests using the Rasch measurement model

Dragana Surla, Department of Education

David Andrich, The University of Western Australia

 

Discontinuous Levels of Complexity in Measurement Information Infrastructures: Separating the Distinct Roles of KidMaps, Wright Maps, and Construct Maps

William P. Fisher, Jr., BEAR Center, University of California, Berkeley

A. Jackson Stenner, MetaMetrics, Inc.

 

 

 

Wednesday, April 11, 3:30 PM – 4:00 PM

 

Conference Concluding Session

 

 

 

Thursday, April 12, 9:00 AM – 4:00 PM

 

Data and Software Workshops

 

9:00 AM – 12:00 PM      Overview and Review of Software

 

RUMM — David Andrich

Winsteps — Stefanie Wind

Facets — Richard Sudweeks

ConQuest — Rebecca Freund 

OpenBUGS — Hong Jiao

jMetrik — Patrick Meyer 

Damon on Python — Mark Moulton

 

12:00 PM – 1:00 PM     Panel on Measurement and Big Data

 

1:00 PM – 2:30 PM         Workshop Session 1

 

RUMM              Room TBA

Winsteps          Room TBA

jMetrik             Room TBA

 

2:30 PM – 4:00 PM       Workshop Session 2

 

Facets              Room TBA

ConQuest         Room TBA

OpenBUGS         Room TBA
