Rate Law Assessment

    Overview
    Listed below is general information about the instrument.
    Summary
    Original author(s)
    • Brandriet, A., Rupp, C.A., Lazenby, K., & Becker, N.M.

    Original publication
    • Brandriet, A., Rupp, C.A., Lazenby, K., & Becker, N.M. (2018). Evaluating students' abilities to construct mathematical models from data using latent class analysis. Chemistry Education Research and Practice, 19(1), 375-391.

    Year original instrument was published 2018
    Inventory
    Number of items 18
    Number of versions/translations 1
    Cited implementations 1
    Language
    • English
    Country United States
    Format
    • Multiple Choice
    • Open Ended
    Intended population(s)
    • Students
    • Undergraduate
    Domain
    • Cognitive
    Topic
    • Kinetics
    Evidence
    The CHIRAL team carefully combs through every reference that cites this instrument and pulls all evidence that relates to the instrument's validity and reliability. These data are presented in the following table, which simply notes the presence or absence of evidence related to each concept but does not indicate the quality of that evidence. Similarly, if evidence is lacking, that does not necessarily mean the instrument is "less valid," just that the evidence was not presented in the literature. Learn more about this process by viewing the CHIRAL Process, and consult the instrument's Review (next tab), if available, for better insights into the usability of this instrument.

    Information in the table is given in four different categories:
    1. General - information about how each article used the instrument:
      • Original development paper - indicates the paper(s) in which the instrument was initially developed
      • Uses the instrument in data collection - indicates whether an article administered the instrument and collected responses
      • Modified version of existing instrument - indicates whether an article has modified a prior version of this instrument
      • Evaluation of existing instrument - indicates whether an article explicitly provides evidence that attempts to evaluate the performance of the instrument; the lack of a checkmark here implies that an article administered the instrument but did not evaluate the instrument itself
    2. Reliability - information about the evidence presented to establish reliability of data generated by the instrument; please see the Glossary for term definitions
    3. Validity - information about the evidence presented to establish validity of data generated by the instrument; please see the Glossary for term definitions
    4. Other Information - information that may or may not directly relate to the evidence for validity and reliability, but is commonly reported when evaluating instruments; please see the Glossary for term definitions
    Publications: 1

    General

    Original development paper
    Uses the instrument in data collection
    Modified version of existing instrument
    Evaluation of existing instrument

    Reliability

    Test-retest reliability
    Internal consistency
    Coefficient (Cronbach's) alpha
    McDonald's Omega
    Inter-rater reliability
    Person separation
    Generalizability coefficients
    Other reliability evidence

    Validity

    Expert judgment
    Response process
    Factor analysis, IRT, Rasch analysis
    Differential item function
    Evidence based on relationships to other variables
    Evidence based on consequences of testing
    Other validity evidence

    Other information

    Difficulty
    Discrimination
    Evidence based on fairness
    Other general evidence
    Review
    DISCLAIMER: The evidence supporting the validity and reliability of the data summarized below is for use of this assessment instrument within the reported settings and populations. The continued collection and evaluation of validity and reliability evidence, in both similar and dissimilar contexts, is encouraged and will support the chemistry education community’s ongoing understanding of this instrument and its limitations.
    This review was generated by a CHIRAL review panel. Each CHIRAL review panel consists of multiple experts who first individually review the citations of the assessment instrument listed on this page for evidence in support of the validity and reliability of the data generated by the instrument. Panels then meet to discuss the evidence and summarize their opinions in the review posted in this tab. These reviews summarize only the evidence that was discussed during the panel, which may not represent all evidence available in the published literature or that which appears on the Evidence tab.
    If you feel that evidence is missing from this review, or that something was documented in error, please use the CHIRAL Feedback page.

    Panel Review: Rate Law Assessment

    (Post last updated 09 June 2023)

    Review panel summary   
    The Rate Law Assessment (RLA) was developed to assess students' abilities to construct rate laws from initial concentration and rate data [1]. The RLA is an 18-item assessment that contains a mix of item types. The first few items are open-ended questions designed to probe students' general understanding of rate laws. The remaining items are framed around three scenarios, each of which includes a chemical reaction equation and a set of rate data that students use to determine the rate law for the reaction. After each scenario, students are first presented with several possible rate laws and asked to select the one that makes the most sense based on the data provided. In two subsequent open-ended questions, students are asked to explain their procedure and reasoning for how the order of each reactant was determined. Two of the three scenarios also include a hypothetical student's rate law, followed by a yes/no question asking whether the student's rate law is correct and an open-ended question asking students to explain why it is correct or incorrect.
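
    To make the task the scenarios pose concrete, the sketch below works through the method of initial rates that students apply on the RLA. It is a minimal Python example; the reaction, concentrations, and rates are invented for illustration and are not items or data from the RLA.

        import math

        def reactant_order(conc1, rate1, conc2, rate2):
            """Order n of a reactant from two trials in which only its initial
            concentration changes: rate2/rate1 = (conc2/conc1)**n, so
            n = log(rate2/rate1) / log(conc2/conc1)."""
            return math.log(rate2 / rate1) / math.log(conc2 / conc1)

        # Invented initial-rates table for a reaction A + B -> products:
        # trial   [A] (M)   [B] (M)   initial rate (M/s)
        #   1      0.10      0.10         2.0e-3
        #   2      0.20      0.10         8.0e-3   (only [A] doubled, rate x4)
        #   3      0.10      0.20         4.0e-3   (only [B] doubled, rate x2)
        m = reactant_order(0.10, 2.0e-3, 0.20, 8.0e-3)    # 2.0 -> second order in A
        n = reactant_order(0.10, 2.0e-3, 0.20, 4.0e-3)    # 1.0 -> first order in B
        k = 2.0e-3 / (0.10**2 * 0.10**1)                  # rate constant from trial 1
        print(f"rate = {k:.1f} [A]^{m:.0f} [B]^{n:.0f}")  # rate = 2.0 [A]^2 [B]^1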

    The items on the RLA were developed based on a qualitative study of students' reasoning about the method of initial rates [2]. A draft interview protocol was piloted with 3 undergraduate chemistry students and 1 postdoctoral research associate to obtain initial validity evidence based on test content. Minor modifications were made to the wording of the items. Interviews were conducted with 2 chemistry faculty to gain further evidence based on test content. Lastly, semi-structured interviews were conducted with 15 undergraduate students from the second semester of a two-semester introductory chemistry course for STEM majors. The analysis of these interview data suggested five main types of arguments about rate laws. After establishing the coding scheme, 2 raters independently applied the coding scheme until an acceptable level of interrater reliability was established.

    The RLA is designed to evaluate students' abilities to construct rate laws from initial concentration and rate data. Therefore, student responses to the six open-ended items about their procedure and reasoning for how the reactant orders were determined are the primary focus of data generation from the RLA. Data from 768 students (enrolled in the second semester of an introductory chemistry class) were collected in the fall 2015 and spring 2016 semesters [1]. Student responses to the six open-ended items were coded with the same coding scheme used in the development of the RLA [2], with the purpose of investigating the extent to which the levels generalized to a larger sample of responses. The coding scheme describes five levels of sophistication in students' responses to items about rate laws, namely: (1) incorrect evidence, (2) low level use of data, (3) relating concentration and rate, (4) interpreting data, and (5) interpreting the exponent. To evaluate how consistently the coding scheme was applied to the data, interrater reliability was calculated as Cohen's Kappa, and values were reported to be above the acceptable value of 0.8 [1].

    To provide support for the levels of student reasoning originally identified [2], student responses to the procedure and reasoning items from each scenario were analyzed using Latent Class Analysis (LCA) [1]. The analysis produced 5 classes of response patterns, which were well aligned with the original reasoning levels.

    To provide support for the ordering of the levels of sophistication within students' responses to some RLA items, the Test of Logical Thinking (TOLT), a measure of scientific reasoning skills [3], was administered three to four weeks prior to the RLA. Evidence based on relation to other variables was reported as a correlation between students' class membership from the LCA and levels of reasoning as measured by the TOLT. A moderate correlation was reported, and a strong case was made for using the TOLT, since the reasoning skills it measures were deemed necessary to answer the items on the RLA.

    Recommendations for use   
    The RLA has been used to assess whether second-semester introductory chemistry students are meaningfully engaging with data when constructing rate laws. There is evidence in support of the validity of RLA item data based on test content. Additionally, there is strong interrater reliability evidence for the coding scheme used to analyze the level(s) of reasoning students use when interpreting rate law data. Finally, evidence based on relation to other variables was reported in the form of a moderate correlation with the TOLT. The specific skills of proportional reasoning, control of variables reasoning, and correlational reasoning had the highest correlation with RLA items.

    While semi-structured interviews were conducted with students during development of the RLA, future studies using the instrument may find it useful to collect validity evidence based on response process. This information could help when drawing inferences about how students are utilizing different reasoning skills when engaging with rate law items.

    Overall, there is some evidence to support the validity and reliability of data collected with the RLA, specifically the open-ended items assessing student engagement with data when constructing rate laws. To date, no evidence in support of the validity or reliability of the data has been reported for the other items included in the RLA. As such, data obtained using these items should be interpreted with caution, and researchers interested in using these items may consider conducting their own validity and reliability investigations.

    Details from panel review   
    The 18-item RLA was developed from interviews conducted with 15 second-semester introductory chemistry students. The initial interview protocol was piloted with 3 chemistry students and 1 postdoctoral research associate, which resulted in minor modifications to the items. Validity evidence based on test content was obtained by interviewing 2 chemistry faculty; their feedback indicated that the RLA items had the potential to elicit higher-level responses. Semi-structured interviews were then conducted with 15 second-semester introductory chemistry students to obtain information on how students were interpreting terms related to chemical kinetics and how they were engaging with data when constructing rate laws. The analysis yielded five main approaches that students used when constructing rate laws. Interrater reliability evidence for the coding scheme was established by having 2 raters independently apply the coding scheme to the 15 transcripts. Initial percentage agreement between the raters was 88.2% for one ordering of the approaches and 79.5% for another. The raters discussed discrepancies and came to a consensus for subsequent coding, resulting in the five main approaches used by students.
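
    As an illustration of the percentage-agreement calculation reported here, the minimal sketch below compares two raters' codes across a set of transcripts. The codes are hypothetical, not the published data.

        import numpy as np

        # Hypothetical codes (approaches 1-5) assigned by two raters to the same transcripts.
        rater1 = np.array([1, 3, 5, 2, 4, 5, 1, 3, 2, 5])
        rater2 = np.array([1, 3, 4, 2, 4, 5, 1, 2, 2, 5])

        percent_agreement = 100 * np.mean(rater1 == rater2)
        print(f"{percent_agreement:.1f}% exact agreement")  # 80.0% with these codes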

    The RLA was administered to 768 second-semester introductory chemistry students over the fall 2015 and spring 2016 semesters, after instruction and testing on rate law concepts. The previously developed coding scheme was used to analyze student responses to 6 open-ended items, with the aim of investigating the extent to which the five levels generalized to a larger sample of students. Interrater reliability in the form of Cohen's Kappa was reported to evaluate how consistently the coding scheme was applied to the qualitative data. Two versions of Cohen's Kappa were reported: one that assumed the five levels of coding were nominal and another that assumed the five levels were ordinal. This was done for both the fall 2015 and spring 2016 data. The Cohen's Kappa values were all above the threshold of 0.8, suggesting good interrater reliability.
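
    One common way to reproduce this nominal-versus-ordinal distinction is to compute unweighted Cohen's Kappa alongside a weighted Kappa that penalizes disagreements by their distance on the ordinal scale. The sketch below assumes linear weights (the specific weighting the developers used is not stated here) and again uses hypothetical rater codes.

        from sklearn.metrics import cohen_kappa_score

        # Hypothetical codes (levels 1-5) from two raters; not the published data.
        rater1 = [1, 3, 5, 2, 4, 5, 1, 3, 2, 5]
        rater2 = [1, 3, 4, 2, 4, 5, 1, 2, 2, 5]

        kappa_nominal = cohen_kappa_score(rater1, rater2)                    # levels treated as unordered
        kappa_ordinal = cohen_kappa_score(rater1, rater2, weights="linear")  # near-misses penalized less
        print(f"nominal kappa = {kappa_nominal:.2f}, linear-weighted kappa = {kappa_ordinal:.2f}")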

    Latent Class Analysis (LCA) was conducted to provide support for the levels of student reasoning originally identified. The full-information maximum likelihood (FIML) parameter estimation method was used for the LCA to allow inferences to be made for cases with missing responses. The developers ran six LCA models before choosing the five-class solution as the best fit for the data, which reflects five themes with varying levels of sophistication in how students analyze data when constructing rate laws. The five classes are: (1) incorrect evidence, (2) low-level data use, (3) transitional, (4) quantifying data patterns, and (5) mathematical reasoning. Class 1 was the lowest in sophistication and also the largest group, with a latent class prevalence of 0.379. Class 5 was the highest in sophistication and the second largest, with a latent class prevalence of 0.287.
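
    To make the model search concrete, the following sketch fits one- through six-class latent class models by expectation-maximization and compares them by BIC. It is a toy implementation run on placeholder random data, with a single random start and complete responses; the published analysis instead used FIML estimation, which accommodates missing responses, so the class counts and prevalences printed here will not reproduce the reported solution.

        import numpy as np

        def fit_lca(X, k, n_iter=300, seed=0):
            """EM for a latent class model: X is an (n, m) array of categorical
            responses coded 0..c-1; returns prevalences, item-response
            probabilities, and the observed-data log-likelihood."""
            rng = np.random.default_rng(seed)
            n, m = X.shape
            c = X.max() + 1
            pi = np.full(k, 1.0 / k)                        # latent class prevalences
            theta = rng.dirichlet(np.ones(c), size=(k, m))  # P(response | class, item)
            for _ in range(n_iter):
                # E-step: unnormalized log P(class | respondent), shape (k, n)
                logp = np.log(pi)[:, None] + sum(
                    np.log(theta[:, j, :])[:, X[:, j]] for j in range(m))
                resp = np.exp(logp - logp.max(axis=0))
                resp /= resp.sum(axis=0)                    # posterior class memberships
                # M-step: re-estimate prevalences and response probabilities
                pi = resp.mean(axis=1)
                for j in range(m):
                    counts = np.stack([resp[:, X[:, j] == v].sum(axis=1)
                                       for v in range(c)], axis=1) + 1e-10
                    theta[:, j, :] = counts / counts.sum(axis=1, keepdims=True)
            logp = np.log(pi)[:, None] + sum(
                np.log(theta[:, j, :])[:, X[:, j]] for j in range(m))
            return pi, theta, np.logaddexp.reduce(logp, axis=0).sum()

        # Compare one- through six-class solutions by BIC, mirroring the model search.
        rng = np.random.default_rng(1)
        X = rng.integers(0, 5, size=(768, 6))  # placeholder data: 6 coded items, levels 0-4
        n, m, c = 768, 6, 5
        for k in range(1, 7):
            pi, _, ll = fit_lca(X, k)
            n_params = (k - 1) + k * m * (c - 1)
            bic = -2 * ll + n_params * np.log(n)
            print(f"{k} classes: BIC = {bic:.1f}, prevalences = {np.round(pi, 3)}")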

    Evidence based on relation to other variables is reported in the form of correlations with the TOLT, which was administered three to four weeks prior to the RLA. The developers provided a strong rationale for using the TOLT, since some of the reasoning skills it measures are necessary for answering RLA items. The specific skills of proportional reasoning, correlational reasoning, and control of variables reasoning, as measured by the TOLT, showed the highest correlations with student class membership; the Spearman rho values were 0.229, 0.237, and 0.245, respectively. This is important information for users of the RLA when drawing conclusions regarding students' use of reasoning skills when answering rate law items.
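
    For users who want to run this kind of check themselves, a Spearman correlation between ordinal LCA class membership and a TOLT subscale score can be computed as below. The paired scores are simulated for illustration and will not reproduce the reported values.

        import numpy as np
        from scipy.stats import spearmanr

        # Simulated paired scores: TOLT subscale (0-2) and LCA class (1-5, ordinal).
        rng = np.random.default_rng(0)
        tolt_subscale = rng.integers(0, 3, size=200)
        lca_class = np.clip(2 * tolt_subscale + rng.integers(0, 3, size=200), 1, 5)

        rho, p_value = spearmanr(lca_class, tolt_subscale)
        print(f"Spearman rho = {rho:.3f} (p = {p_value:.3g})")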

    References

    [1] Brandriet, A., Rupp, C. A., Lazenby, K., & Becker, N. M. (2018). Evaluating students' abilities to construct mathematical models from data using latent class analysis. Chem. Educ. Res. Pract., 19(1), 375-391.

    [2] Becker, N. M., Rupp, C. A., & Brandriet, A. (2017). Engaging students in analyzing and interpreting data to construct mathematical models: an analysis of students' reasoning in a method of initial rates task. Chem. Educ. Res. Pract., 18(4), 798-810.

    [3] Tobin, K. G., & Capie, W. (1981). The development and validation of a group test of logical thinking. Educ. Psychol. Meas., 41(2), 413-423.


    Versions
    This instrument has not been modified nor was it created based on an existing instrument.
    Citations
    Listed below is all literature that develops, implements, modifies, or references the instrument.
    1. Brandriet, A., Rupp, C.A., Lazenby, K., & Becker, N.M. (2018). Evaluating students' abilities to construct mathematical models from data using latent class analysis. Chemistry Education Research and Practice, 19(1), 375-391.