
Quantization And Probability Representations Inventory

QuPRI

    Overview
    Listed below is general information about the instrument.
    Summary
    Original author(s)
    • Roche Allred, Z.D., & Bretz, S.L.

    Original publication
    • Roche Allred, Z.D., & Bretz, S.L. (2019). Development of the Quantization and Probability Representations Inventory as a Measure of Students' Understandings of Particulate and Symbolic Representations of Electron Structure. Journal of Chemical Education, 96(8), 1558-1570.

    Year original instrument was published 2019
    Inventory
    Number of items 26
    Number of versions/translations 1
    Cited implementations 1
    Language
    • English
    Country United States
    Format
    • Multiple Choice
    • Response Scale
    Intended population(s)
    • Students
    • Undergraduate
    Domain
    • Cognitive
    Topic
    • Quantum
    Evidence
    The CHIRAL team carefully combs through every reference that cites this instrument and pulls all evidence that relates to the instrument’s validity and reliability. These data are presented in the following table, which simply notes the presence or absence of evidence related to each concept but does not indicate the quality of that evidence. Similarly, if evidence is lacking, that does not necessarily mean the instrument is “less valid,” only that such evidence has not been presented in the literature. Learn more about this process by viewing the CHIRAL Process, and consult the instrument’s Review (next tab), if available, for better insight into the usability of this instrument.

    Information in the table is given in four different categories:
    1. General - information about how each article used the instrument:
      • Original development paper - indicates the paper(s) in which the instrument was initially developed
      • Uses the instrument in data collection - indicates whether an article administered the instrument and collected responses
      • Modified version of existing instrument - indicates whether an article has modified a prior version of this instrument
      • Evaluation of existing instrument - indicates whether an article explicitly provides evidence that attempts to evaluate the performance of the instrument; lack of a checkmark here implies that an article administered the instrument but did not evaluate the instrument itself
    2. Reliability - information about the evidence presented to establish reliability of data generated by the instrument; please see the Glossary for term definitions
    3. Validity - information about the evidence presented to establish validity of data generated by the instrument; please see the Glossary for term definitions
    4. Other Information - information that may or may not directly relate to the evidence for validity and reliability but is commonly reported when evaluating instruments; please see the Glossary for term definitions
    Publications: 1

    General

    Original development paper
    Uses the instrument in data collection
    Modified version of existing instrument
    Evaluation of existing instrument

    Reliability

    Test-retest reliability
    Internal consistency
    Coefficient (Cronbach's) alpha
    McDonald's Omega
    Inter-rater reliability
    Person separation
    Generalizability coefficients
    Other reliability evidence

    Validity

    Expert judgment
    Response process
    Factor analysis, IRT, Rasch analysis
    Differential item function
    Evidence based on relationships to other variables
    Evidence based on consequences of testing
    Other validity evidence

    Other information

    Difficulty
    Discrimination
    Evidence based on fairness
    Other general evidence
    Review
    DISCLAIMER: The evidence supporting the validity and reliability of the data summarized below is for use of this assessment instrument within the reported settings and populations. The continued collection and evaluation of validity and reliability evidence, in both similar and dissimilar contexts, is encouraged and will support the chemistry education community’s ongoing understanding of this instrument and its limitations.
    This review was generated by a CHIRAL review panel. Each CHIRAL review panel consists of multiple experts who first individually review the citations of the assessment instrument listed on this page for evidence in support of the validity and reliability of the data generated by the instrument. Panels then meet to discuss the evidence and summarize their opinions in the review posted in this tab. These reviews summarize only the evidence that was discussed during the panel, which may not represent all evidence available in the published literature or that which appears on the Evidence tab.
    If you feel that evidence is missing from this review, or that something was documented in error, please use the CHIRAL Feedback page.

    Panel Review: Quantization And Probability Representations Inventory (QuPRI)

    (Post last updated 09 June 2023)

    Review panel summary   
    The Quantization and Probability Representations Inventory (QuPRI) is a 27-item concept inventory designed to assess students’ understanding of the structure of the atom [1]. The first 24 items of the QuPRI are multiple choice (with one correct answer) and have been categorized by the instrument’s developers as: items measuring students’ interpretations of the Bohr model (3), items assessing students’ ideas about electron probability (10), items assessing students’ ideas about energy quantization (8), items measuring students’ ideas about both probability and energy quantization (2), and items measuring students’ ideas about probability and the Bohr model (1). The last 3 items investigate the concepts of probability, quantization, and the Bohr model with questions that ask students to choose their preferred model rather than selecting a single “correct answer” [1]. The development of items and distracters included in the QuPRI was informed by data from 34 semi-structured cognitive interviews with students in general chemistry (GC, n=26), physical chemistry (PC, n=3), and biophysical chemistry (BPC, n=5), as these courses teach the structure of the atom with increasing sophistication.

    Evidence based on test content was provided during the development of the QuPRI in two different ways. First, item distracters were constructed directly from alternative conceptions that students expressed in interviews. Second, the authors received expert feedback from 11 faculty members with experience teaching general chemistry I and/or physical chemistry at multiple universities. These faculty reviewed the 31-item pre-pilot version of the inventory for accuracy of chemistry content and clarity of items and distracters. Additionally, the faculty were asked to identify which items represented the concepts of probability and energy quantization [1]. After analyzing faculty feedback, various revisions were made to the pre-pilot items, resulting in a 26-item pilot version of the QuPRI (23 multiple choice items with one correct answer and 3 items about students’ preferred model that do not have a correct answer).

    The 26-item pilot version of the inventory was used to conduct a quantitative study with 862 GC students and 46 PC/BPC students. Additional evidence to support the validity and reliability of data gathered with the QuPRI was collected using this pilot version, although quantitative analyses (e.g., evidence based on relation to other variables, single administration reliability, item difficulty, and item discrimination) were only conducted for the 23 multiple choice items (each with one correct answer).

    Evidence based on response process was provided through 18 cognitive interviews with students who completed the 26-item pilot version of the QuPRI. Interviews took place one week after the administration of the pilot version. In total, 10 GC and 8 PC/BPC students were purposefully sampled based on their responses to the QuPRI, as researchers wanted to interview students displaying a large range of scores as well as demographic backgrounds. Two items were modified on the basis of students’ interview responses, and one new multiple choice item (with one correct answer) was added, bringing the total number of items included in the final version of the QuPRI to 27.

    The authors report evidence related to “concurrent validity,” which is an aspect of evidence based on relation to other variables. This evidence was provided by comparing the average QuPRI scores of GC students (n=655) and PC/BPC students (n=38). The theoretical background for this comparison is that students who have received more instruction on the content assessed by the QuPRI (i.e., PC/BPC students) should be able to perform better than those who have received less instruction (i.e., GC students). To test this theory, the authors conducted a Mann-Whitney U test, which showed a significant difference with a small effect size favoring the students in the upper division chemistry courses (i.e., students who had received more instruction).
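    To make this comparison concrete, below is a minimal sketch of a Mann-Whitney U test in Python using SciPy. The group sizes (655 GC, 38 PC/BPC) mirror the study, but the scores are simulated placeholders, and the eta-squared conversion shown is one common approximation, not necessarily the authors' exact calculation.

```python
# Minimal sketch (not the authors' code): Mann-Whitney U comparison of two
# groups of total scores. Group sizes mirror the study; scores are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
gc_scores = rng.binomial(n=23, p=0.45, size=655)  # hypothetical GC totals (0-23)
pc_scores = rng.binomial(n=23, p=0.55, size=38)   # hypothetical PC/BPC totals

u, p = stats.mannwhitneyu(pc_scores, gc_scores, alternative="two-sided")

# One common effect-size approximation: convert U to a z score, then
# eta^2 ~ z^2 / N (an assumption here, not necessarily the authors' method).
n1, n2 = len(pc_scores), len(gc_scores)
mu_u = n1 * n2 / 2
sigma_u = np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
z = (u - mu_u) / sigma_u
eta_sq = z**2 / (n1 + n2)

print(f"U = {u:.1f}, p = {p:.4f}, eta^2 = {eta_sq:.3f}")
```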

    The authors reported evidence related to single administration reliability in the form of coefficient alpha. Alpha values were reported for the full 23-item multiple choice portion of the QuPRI, in addition to various item groupings (by concept), for each course of interest. All alpha values were found to be below the commonly reported threshold of 0.7. While the coefficient alpha values were found to be outside of the acceptable range, the authors argued that this measure of reliability may not be appropriate for assessments that are not unidimensional (e.g., concept inventories), “because individual items are constructed to detect students’ fragmented knowledge” [1].
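    For readers unfamiliar with the statistic, coefficient alpha can be computed directly from a respondents-by-items score matrix. The sketch below is a minimal NumPy implementation on simulated dichotomous (0/1) data, with the item count chosen to match the 23-item multiple choice portion; the data themselves are hypothetical.

```python
# Minimal sketch of coefficient (Cronbach's) alpha for dichotomous items:
# alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores).
import numpy as np

def cronbach_alpha(scores):
    """scores: (n_respondents, n_items) array of 0/1 item scores."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # per-item sample variance
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 655 respondents x 23 items, correlated through a
# simulated latent proficiency so alpha comes out positive.
rng = np.random.default_rng(0)
ability = rng.normal(size=(655, 1))
responses = (rng.normal(size=(655, 23)) < ability).astype(int)
print(f"alpha = {cronbach_alpha(responses):.2f}")
```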

    In addition to evidence related to validity and reliability, the authors also gathered evidence related to item difficulty and item discrimination. Item difficulty was first explored by testing the normality of the score distributions. For the GC sample, the authors conducted a Kolmogorov-Smirnov test. For the PC/BPC sample, the authors selected a Shapiro-Wilk test, which is designed to test normality for samples smaller than 50. These tests revealed a variety of item distributions, with some items that were skewed, suggesting that QuPRI items had various levels of difficulty. The authors also calculated item difficulty as the proportion of students who selected the correct response to the item. Item difficulties of 0.3 or below are considered difficult. In total, 6 items displayed a difficulty below 0.3 for the GC students and 2 items displayed a difficulty below 0.3 for the PC/BPC students, indicating that most of the items were less difficult for the PC/BPC students. Additionally, the GC data showed 2 items with difficulty values above 0.8, representing easy items. For the PC/BPC data, 6 items were found to be easy. Regarding item discrimination, values of 0.3 or greater suggest that an item can differentiate between top-performing (top 27%) and low-performing (bottom 27%) students. For both the GC and PC/BPC data sets, several items displayed discrimination values below 0.3, and 2 items (3 and 23) exhibited poor item discrimination for both samples. To further explore item discrimination, the authors calculated Ferguson’s delta. Values of 0.9 or above indicate a broadly distributed sample. The authors reported values above this threshold for both GC and PC/BPC students [1].
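    These classical item statistics are straightforward to reproduce from a scored response matrix. The sketch below, again on simulated 0/1 data, computes difficulty as proportion correct, the upper/lower 27% discrimination index, and Ferguson’s delta from the distribution of total scores; only the conventions and cutoffs (27% groups, 0.3, 0.8, 0.9) come from the review above.

```python
# Minimal sketch of classical item statistics on a (n_students, n_items)
# matrix of 0/1 scores. Data are simulated; only the 27% grouping and the
# 0.3 / 0.8 / 0.9 cutoffs follow the conventions cited in the review.
import numpy as np

rng = np.random.default_rng(0)
ability = rng.normal(size=(862, 1))
scores = (rng.normal(size=(862, 23)) < ability).astype(int)

# Item difficulty: proportion of students answering each item correctly.
difficulty = scores.mean(axis=0)

# Discrimination index: proportion correct in the top 27% of total scores
# minus proportion correct in the bottom 27%.
totals = scores.sum(axis=1)
order = np.argsort(totals)
n_group = int(round(0.27 * len(totals)))
low, high = order[:n_group], order[-n_group:]
discrimination = scores[high].mean(axis=0) - scores[low].mean(axis=0)

# Ferguson's delta: delta = (K + 1) * (n^2 - sum(f_i^2)) / (K * n^2),
# where f_i is the frequency of each possible total score 0..K.
n, k = scores.shape
freqs = np.bincount(totals, minlength=k + 1)
delta = (k + 1) * (n**2 - np.sum(freqs**2)) / (k * n**2)

print("difficult items (< 0.3):", np.where(difficulty < 0.3)[0])
print("easy items (> 0.8):", np.where(difficulty > 0.8)[0])
print("low discrimination (< 0.3):", np.where(discrimination < 0.3)[0])
print(f"Ferguson's delta = {delta:.2f} (>= 0.9 suggests a broad score spread)")
```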

    Recommendations for use   
    The Quantization and Probability Representations Inventory (QuPRI) has been used in general chemistry, physical chemistry, and biophysical chemistry courses as a measure of student understanding after instruction on the structure of the atom. Additionally, the QuPRI has been used to demonstrate the idea that students with more instruction on this topic (e.g., PC/BPC students) will perform better than students who have had less instruction on this topic (e.g., GC students) by comparing the scores between these two groups [1].

    Evidence related to item difficulty and item discrimination indicates that many of the items included on the QuPRI fall within recommended cutoffs for these metrics in both GC courses and upper-division (i.e., PC/BPC) courses. That said, a variety of items have been shown to fall outside of the recommended cutoffs for item difficulty and/or item discrimination in either GC or PC/BPC courses, and items 3 and 23 have shown high difficulty and poor discrimination in both groups. Therefore, the results of these items should be interpreted with caution.

    QuPRI subscores have been calculated using developer-derived item groupings (by topic) [1], which may be of interest to future QuPRI users. As these are not statistically derived groupings, it is recommended that interested users review the items used to calculate each subscore to determine if they will be useful in understanding the data from their students.
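    For interested users, computing such subscores amounts to summing each student’s scores over a topic’s item positions. The sketch below illustrates this with placeholder item groupings; the actual developer-derived assignments should be taken from [1].

```python
# Minimal sketch of topic subscores from a (n_students, n_items) 0/1 matrix.
# The item groupings below are placeholders, NOT the developers' actual
# assignments; consult [1] for the real item-to-topic mapping.
import numpy as np

topic_items = {
    "probability": [0, 1, 2, 3],           # hypothetical item indices
    "energy_quantization": [4, 5, 6, 7],   # hypothetical item indices
}

def subscores(scores, groups):
    """Return each student's total score on each topic's items."""
    return {name: scores[:, idx].sum(axis=1) for name, idx in groups.items()}

rng = np.random.default_rng(0)
scores = rng.integers(0, 2, size=(100, 23))  # simulated 0/1 responses
print({name: s.mean() for name, s in subscores(scores, topic_items).items()})
```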

    Finally, the evidence in support of validity and reliability for the data collected with the QuPRI is currently limited to the 26-item pilot version of the concept inventory. Because an additional multiple choice item is included in the final version of the QuPRI, future users of the concept inventory may find it useful to collect additional evidence, especially evidence related to item difficulty and discrimination for the added item.

    Details from panel review   
    Evidence based on relation to other variables has been provided for the data collected with the QuPRI by comparing the average scores of GC students (n=655) and PC/BPC students (n=38). The statistical analysis used to compare QuPRI scores between GC and PC/BPC students was the Mann-Whitney U test. The authors report U = 20976.5, p < 0.001, η² = 0.074. These results suggest that there is a statistically significant difference between the two student groups, favoring the PC/BPC students, with a small effect size, supporting the theory that more content instruction results in higher QuPRI scores [1].

    The authors presented single administration reliability evidence by reporting coefficient alpha; however, the values reported were below the commonly reported threshold in every case, and the authors did not report further reliability measures. The authors argue that this measure of reliability is likely not appropriate for concept inventories like the QuPRI, because “individual items are constructed to detect students’ fragmented knowledge” [1]. Reported alpha values for the 23 multiple choice QuPRI items were 0.63 for GC students and 0.56 for PC/BPC students. When items were grouped into the author-derived categories, the alpha values for the ‘Probability’ items were 0.45 for GC and 0.28 for PC/BPC, and the alpha values for the ‘Energy Quantization’ items were 0.65 for GC and 0.56 for PC/BPC.

    Item discrimination was reported as Ferguson’s delta values of 0.96 for GC and 0.92 for PC/BPC students. These values are acceptable. However, upon evaluation of individual items, items 3 and 23 were found to not discriminate well between top-performing (top 27%) and bottom-performing (bottom 27%) students for both samples. The authors indicate that these items were challenging for students in both courses because they ask about the connection, if any, between the energy-level diagram and a 2s atomic orbital representation. This level of integration of information is advanced and not likely to appear as part of the normal curriculum for either course. However, the authors decided to retain these items despite the high level of difficulty and poor discrimination, because they test an important concept at a deeper level [1].

    References

    [1] Roche Allred, Z.D. & Bretz, S.L. (2019). Development of the Quantization and Probability Representations Inventory as a Measure of Students’ Understandings of Particulate and Symbolic Representations of Electron Structure. J. Chem. Educ. 96(8), 1558-1570.

    Versions
    This instrument has not been modified nor was it created based on an existing instrument.
    Citations
    Listed below is all literature that develops, implements, modifies, or references the instrument.
    1. Roche Allred, Z.D., & Bretz, S.L. (2019). Development of the Quantization and Probability Representations Inventory as a Measure of Students' Understandings of Particulate and Symbolic Representations of Electron Structure. Journal of Chemical Education, 96(8), 1558-1570.