
Quantum Chemistry Concept Inventory

QCCI

    Overview
    Listed below is general information about the instrument.
    Summary
    Original author(s)
    • Dick-Perez, M., Luxford, C.J., Windus, T.L., & Holme, T.

    Original publication
    • Dick-Perez, M., Luxford, C.J., Windus, T.L., & Holme, T. (2016). A Quantum Chemistry Concept Inventory for Physical Chemistry Classes. Journal of Chemical Education, 93(4), 605-612.

    Year original instrument was published: 2016
    Inventory
    Number of items: 14
    Number of versions/translations: 2
    Cited implementations: 2
    Language
    • English
    Country: Canada, United States
    Format
    • Multiple Choice
    Intended population(s)
    • Students
    • Undergraduate
    • Graduate
    Domain
    • Cognitive
    Topic
    • Quantum
    Evidence
    The CHIRAL team carefully combs through every reference that cites this instrument and pulls all evidence that relates to the instrument’s validity and reliability. These data are presented in the following table, which simply notes the presence or absence of evidence related to each concept but does not indicate the quality of that evidence. Similarly, if evidence is lacking, that does not necessarily mean the instrument is “less valid,” just that the evidence was not presented in the literature. Learn more about this process by viewing the CHIRAL Process, and consult the instrument’s Review (next tab), if available, for better insight into the usability of this instrument.

    Information in the table is given in four different categories:
    1. General - information about how each article used the instrument:
      • Original development paper - indicates the paper(s) in which the instrument was initially developed
      • Uses the instrument in data collection - indicates whether an article administered the instrument and collected responses
      • Modified version of existing instrument - indicates whether an article has modified a prior version of this instrument
      • Evaluation of existing instrument - indicates whether an article explicitly provides evidence that attempts to evaluate the performance of the instrument; lack of a checkmark here implies an article that administered the instrument but did not evaluate the instrument itself
    2. Reliability - information about the evidence presented to establish reliability of data generated by the instrument; please see the Glossary for term definitions
    3. Validity - information about the evidence presented to establish validity of data generated by the instrument; please see the Glossary for term definitions
    4. Other Information - information that may or may not directly relate to the evidence for validity and reliability but is commonly reported when evaluating instruments; please see the Glossary for term definitions
    Publications: 1 2

    General

    Original development paper
    Uses the instrument in data collection
    Modified version of existing instrument
    Evaluation of existing instrument

    Reliability

    Test-retest reliability
    Internal consistency
    Coefficient (Cronbach's) alpha
    McDonald's Omega
    Inter-rater reliability
    Person separation
    Generalizability coefficients
    Other reliability evidence

    Validity

    Expert judgment
    Response process
    Factor analysis, IRT, Rasch analysis
    Differential item function
    Evidence based on relationships to other variables
    Evidence based on consequences of testing
    Other validity evidence

    Other information

    Difficulty
    Discrimination
    Evidence based on fairness
    Other general evidence
    Review
    DISCLAIMER: The evidence supporting the validity and reliability of the data summarized below is for use of this assessment instrument within the reported settings and populations. The continued collection and evaluation of validity and reliability evidence, in both similar and dissimilar contexts, is encouraged and will support the chemistry education community’s ongoing understanding of this instrument and its limitations.
    This review was generated by a CHIRAL review panel. Each CHIRAL review panel consists of multiple experts who first individually review the citations of the assessment instrument listed on this page for evidence in support of the validity and reliability of the data generated by the instrument. Panels then meet to discuss the evidence and summarize their opinions in the review posted in this tab. These reviews summarize only the evidence that was discussed during the panel, which may not represent all evidence available in the published literature or that which appears on the Evidence tab.
    If you feel that evidence is missing from this review, or that something was documented in error, please use the CHIRAL Feedback page.

    Panel Review: The Quantum Chemistry Concept Inventory (QCCI)

    (Post last updated March 1, 2023)

    Review panel summary 
    The Quantum Chemistry Concept Inventory (QCCI) was designed to assess students’ conceptual understanding of topics commonly taught in the quantum section of undergraduate physical chemistry courses [1]. While the QCCI includes 14 multiple-choice items, one of the items was identified as having two possible correct answers. Therefore, reported results in the original publication [1] included only the 13 remaining items, and a subsequent usage paper [2] chose to administer the 13-item version. The QCCI has been administered as a pre-/post-course assessment at a variety of institutions across the United States and at a large public Canadian university [1]. Pre-/post-course data collected using the QCCI have been investigated using normalized gains analysis. Results showed that students performed better on most items included in the QCCI (except for Item 1) after completing the course of interest, with the largest gains occurring on items related to content that is typically addressed for the first time in physical chemistry courses. In addition, the QCCI has been administered as only a post-course assessment as part of a larger study [2].
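    For readers unfamiliar with normalized gains analysis, the sketch below shows how a normalized gain (Hake's g) is commonly computed from pre- and post-course percentage scores. The numbers and the helper name normalized_gain are illustrative assumptions only; they are not data or code from [1].

```python
# Minimal sketch of a normalized-gain (Hake's g) calculation for
# pre-/post-course percentage scores. The numbers are made up for
# illustration and are not data from [1].

def normalized_gain(pre_pct: float, post_pct: float) -> float:
    """Return g = (post - pre) / (100 - pre), with scores given as percentages."""
    if pre_pct >= 100:
        return 0.0  # a perfect pre-test leaves no room for gain
    return (post_pct - pre_pct) / (100.0 - pre_pct)

# Hypothetical class-average scores (in percent) on a single item.
pre_score, post_score = 35.0, 72.0
print(f"normalized gain g = {normalized_gain(pre_score, post_score):.2f}")  # 0.57
```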

    Evidence based on test content was provided during the development of the QCCI. The developers reported that literature on student misconceptions, related to topics that are commonly taught in the quantum section of physical chemistry, was used to inform the development of items [1], although details related to how the literature informed the items and distracters were not discussed. Additionally, the pilot version of the QCCI was developed and edited by professors and postdoctoral students [1], although the content areas and teaching experiences of these individuals were not described.

    Item difficulty and item discrimination of the data generated by the QCCI were also examined. Pre-course data collected with the QCCI showed that item difficulty fell within the acceptable cutoff range for all but five items, while post-course data showed that item difficulty fell within the acceptable cutoff range for all items [1]. Regarding item discrimination, the QCCI showed an acceptable value for Ferguson’s delta, which demonstrated that the measure could distinguish students’ understanding across the full range of scores [1]. Additionally, the point biserial correlation coefficient was used to identify correlations between students’ scores on an item and their total scores on the instrument. All items except one (Item 8) fell above the designated cutoff for the point biserial correlation coefficient [1].
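    As a rough illustration of how item difficulty and the point biserial correlation coefficient are typically computed under classical test theory, the sketch below uses a fabricated response matrix; it is an assumption for demonstration purposes, not the developers' analysis code or the data from [1].

```python
# Minimal sketch of classical item analysis: item difficulty (proportion of
# students answering correctly) and the point biserial correlation between an
# item score (0/1) and the total score. The response matrix is fabricated for
# illustration; this is not the analysis code or the data from [1].

import numpy as np

# Rows = students, columns = dichotomously scored items (1 = correct).
responses = np.array([
    [1, 0, 1, 1],
    [0, 0, 1, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 1],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
])

difficulty = responses.mean(axis=0)   # proportion correct per item
totals = responses.sum(axis=1)        # each student's total score

for j in range(responses.shape[1]):
    # Point biserial = Pearson correlation when one variable is dichotomous.
    r_pb = np.corrcoef(responses[:, j], totals)[0, 1]
    print(f"Item {j + 1}: difficulty = {difficulty[j]:.2f}, point biserial = {r_pb:.2f}")
```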

    Recommendations for use 
    The QCCI has been used to assess students’ conceptual understanding of topics commonly taught in the quantum section of undergraduate physical chemistry courses. Evidence based on test content, item difficulty, and item discrimination has been published, although the details supporting the evidence based on test content are limited [1]. While the developers of the QCCI suggested that the quantitative data collected in their study indicated that the assessment successfully elicited student misconceptions within the multiple-choice format of the instrument, without collecting evidence based on response process it is impossible to know whether the distracters are actually representative of students’ misconceptions. For this reason, future users are encouraged to provide evidence based on response process for data collected with this assessment. This type of validity evidence would greatly strengthen the inferences that can be made from QCCI data.

    Details from panel review 
    The developers of the QCCI designed the measure with a “focus on content coverage” [1], although it was not explicitly described how topics of interest were identified or how prevalent their coverage was in the courses used for data collection. Of the incorrect multiple-choice options (distracters) included in the QCCI, the developers found that most distracters (all except 5) attracted 10% or more of the students who completed the pre-test, while fewer distracters (all except 13) attracted 10% or more of the students who completed the post-test [1]. The developers suggested that this difference may stem from the fact that students are better able to avoid misconceptions after a semester of instruction. Conversely, on 6 items, post-test distracters were chosen by more than 25% of students [1]. In this case, the developers suggested that a semester of instruction does not completely dispel some students’ misconceptions or lack of original understanding. However, without collecting evidence based on response process, it is difficult to know whether these data were actually indicative of students’ misconceptions, or whether students decided to select or avoid distracters for other reasons.

    The developers of the QCCI gathered and reported evidence based on test content, item difficulty, and item discrimination [1]. Regarding evidence based on test content, details related to how literature on student misconceptions was used in item development were not described. Additionally, it is unclear what feedback the professors and postdoctoral students provided on the items, and no descriptions of their content areas and/or teaching experiences were included. Finally, it was noted by the developers of the QCCI that results from the pilot study informed minor changes within existing items and the addition of two items relating to spectroscopy [1]. As the 12 original pilot items were not provided, and details related to the data/feedback that informed changes to the pilot were not included, it is unknown how or why these items were edited. For item difficulty, pre-course data showed that values fell within the acceptable cutoff range of 0.3-0.8 for all but five items (Items 3, 4, 5, 8, and 12), while post-course data showed that item difficulty fell within the acceptable cutoff range for all items [1]. While five items of the QCCI were found to fall outside of the acceptable range of difficulty when considering the pre-course data, this may not be problematic, as students who have not yet received instruction on a particular topic may reasonably find items related to that topic more difficult. Turning to item discrimination, the QCCI showed acceptable values for Ferguson’s delta (> 0.9) and the point biserial correlation coefficient (≥ 0.2) for all items except Item 8 [1]. While this method of investigating item discrimination is appropriate for the sample sizes reported in the study, future users of the assessment with access to larger sample sizes may be interested in determining item-level discrimination using upper and lower quartiles to investigate how well items on the QCCI differentiate between high and low performers, as sketched below.
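    The sketch below illustrates the quartile-based discrimination index mentioned above, together with the standard formula for Ferguson's delta. The simulated response data are fabricated for demonstration; this is an illustrative assumption, not the procedure or the data reported in [1].

```python
# Minimal sketch of two discrimination measures discussed above: an
# upper/lower-quartile discrimination index per item and Ferguson's delta for
# the whole test. The response data are simulated for illustration; this is
# not the procedure or the data from [1].

import numpy as np

rng = np.random.default_rng(0)
n_students, n_items = 200, 13
responses = (rng.random((n_students, n_items)) < 0.6).astype(int)  # fake 0/1 scores

totals = responses.sum(axis=1)
order = np.argsort(totals)
q = n_students // 4
lower, upper = order[:q], order[-q:]   # bottom and top quartiles by total score

# Discrimination index: proportion correct in the top quartile minus the
# proportion correct in the bottom quartile, computed per item.
discrimination = responses[upper].mean(axis=0) - responses[lower].mean(axis=0)

# Ferguson's delta: how evenly total scores are spread over the possible
# range 0..n_items (values near 1 indicate good discrimination overall).
freq = np.bincount(totals, minlength=n_items + 1)
delta = (n_students**2 - np.sum(freq**2)) * (n_items + 1) / (n_students**2 * n_items)

print("discrimination index per item:", np.round(discrimination, 2))
print(f"Ferguson's delta = {delta:.3f}")
```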

    A later study investigated a possible correlation between students’ total scores on the QCCI and their performance on a quantum chemical model rating task; however, no significant correlation between the scores was found [2]. As the researchers who conducted the investigation did not provide any theoretical support for why a correlation between these scores might exist, the investigation does not provide evidence based on relations to other variables. Additionally, while a total score for the QCCI was reported in this case, many concept inventory developers suggest that outcomes for these types of assessments be analyzed at the item level only.

    References

    [1] Dick-Perez, M., Luxford, C.J., Windus, T.L., & Holme, T. (2016). A Quantum Chemistry Concept Inventory for Physical Chemistry Classes. J. Chem. Educ., 93(4), 605-612.

    [2] Muniz, M.N., Crickmore, C., Kirsch, J., & Beck, J.P. (2018). Upper-division chemistry students’ navigation and use of quantum chemical models. Chem. Educ. Res. Pract., 19(3), 767-782.

    Versions
    Listed below are all versions and modifications that were based on this instrument or on which this instrument was based.
    Instrument is derived from:
    Authors
    • Cataloglu, E., & Robinett, R.W.

    • Brown, B.

    Authors
    • Partanen, L.

    • Moon, A., Zotos, E., Finkenstaedt-Quinn, S., Gere, A.R., & Shultz, G.

    Citations
    Listed below are all publications that develop, implement, modify, or reference the instrument.
    1. Dick-Perez, M., Luxford, C.J., Windus, T.L., & Holme, T. (2016). A Quantum Chemistry Concept Inventory for Physical Chemistry Classes. Journal of Chemical Education, 93(4), 605-612.

    2. Muniz, M.N., Crickmore, C., Kirsch, J., & Beck, J.P. (2018). Upper-division chemistry students' navigation and use of quantum chemical models. Chemistry Education Research and Practice, 19(3), 767-782.