
Students' Understanding Of Models In Science

SUMS

    Overview
    Listed below is general information about the instrument.
    Summary
    Original author(s)
    • Treagust, D.F., Chittleborough, G., & Mamiala, T.L.

    Original publication
    • Treagust, D.F., Chittleborough, G., & Mamiala, T.L. (2002). Students' understanding of the role of scientific models in learning science. International Journal of Science Education, 24(4), 357-368.

    Year original instrument was published 2002
    Inventory
    Number of items 27
    Number of versions/translations 1
    Cited implementations 13
    Language
    • English
    • Unknown
    Country Australia, China, Taiwan, United States
    Format
    • Response Scale
    Intended population(s)
    • Students
    • Secondary School
    • High School
    • Undergraduate
    • Teachers
    • In-service Teachers
    • Pre-service
    Domain
    • Cognitive
    Topic
    • Models/Modeling
    Evidence
    The CHIRAL team carefully combs through every reference that cites this instrument and pulls all evidence that relates to the instrument's validity and reliability. These data are presented in the following table, which simply notes the presence or absence of evidence related to each concept but does not indicate the quality of that evidence. Similarly, if evidence is lacking, that does not necessarily mean the instrument is "less valid," only that such evidence was not presented in the literature. Learn more about this process by viewing the CHIRAL Process, and consult the instrument's Review (next tab), if available, for better insights into the usability of this instrument.

    Information in the table is given in four different categories:
    1. General - information about how each article used the instrument:
      • Original development paper - indicates the paper(s) in which the instrument was initially developed
      • Uses the instrument in data collection - indicates whether an article administered the instrument and collected responses
      • Modified version of existing instrument - indicates whether an article has modified a prior version of this instrument
      • Evaluation of existing instrument - indicates whether an article explicitly provides evidence that attempts to evaluate the performance of the instrument; the lack of a checkmark here indicates an article that administered the instrument but did not evaluate the instrument itself
    2. Reliability - information about the evidence presented to establish reliability of data generated by the instrument; please see the Glossary for term definitions
    3. Validity - information about the evidence presented to establish validity of data generated by the instrument; please see the Glossary for term definitions
    4. Other Information - information that may or may not directly relate to the evidence for validity and reliability but is commonly reported when evaluating instruments; please see the Glossary for term definitions
    Evidence table: for each of publications 1–13, the table notes whether the following types of evidence are present.

    General
    • Original development paper
    • Uses the instrument in data collection
    • Modified version of existing instrument
    • Evaluation of existing instrument

    Reliability
    • Test-retest reliability
    • Internal consistency
    • Coefficient (Cronbach's) alpha
    • McDonald's Omega
    • Inter-rater reliability
    • Person separation
    • Generalizability coefficients
    • Other reliability evidence

    Validity
    • Expert judgment
    • Response process
    • Factor analysis, IRT, Rasch analysis
    • Differential item function
    • Evidence based on relationships to other variables
    • Evidence based on consequences of testing
    • Other validity evidence

    Other information
    • Difficulty
    • Discrimination
    • Evidence based on fairness
    • Other general evidence
    Review
    DISCLAIMER: The evidence supporting the validity and reliability of the data summarized below is for use of this assessment instrument within the reported settings and populations. The continued collection and evaluation of validity and reliability evidence, in both similar and dissimilar contexts, is encouraged and will support the chemistry education community’s ongoing understanding of this instrument and its limitations.
    This review was generated by a CHIRAL review panel. Each CHIRAL review panel consists of multiple experts who first individually review the citations of the assessment instrument listed on this page for evidence in support of the validity and reliability of the data generated by the instrument. Panels then meet to discuss the evidence and summarize their opinions in the review posted in this tab. These reviews summarize only the evidence that was discussed during the panel, which may not represent all evidence available in the published literature or that which appears on the Evidence tab.
    If you feel that evidence is missing from this review, or that something was documented in error, please use the CHIRAL Feedback page.

    Panel Review: Students' Understanding of Models in Science (SUMS)

    (Post last updated June 16, 2022)

    Review panel summary

    The Students' Understanding of Models in Science (SUMS) instrument is a twenty-seven-item, 5-point Likert-scale assessment, with response options of strongly disagree (1), disagree (2), not sure (3), agree (4), and strongly agree (5). SUMS was designed to assess students' understanding of the role of scientific models in science. It has been evaluated with students enrolled in high school science (chemistry, physics, and biology) classes in both Australia [1] and the United States [2, 3, 5], as well as in undergraduate first-semester general chemistry courses in the United States [4].

    Several aspects of validity and reliability have been assessed for the data generated by SUMS. During the development of this instrument, it was noted that items containing the word "phenomenon" had a high rate of "not sure" responses on the Likert scale [1]; the SUMS developers inferred that students were likely unfamiliar with the word "phenomenon" and suggested that readers cautiously consider results from items containing this word, although interviews with students were not conducted to further probe the meaning of their responses to these specific items. In a subsequent study [4], student interviews were conducted to provide evidence for response process validity: students were asked to respond to each SUMS item, describing their thought processes and noting any unclear language within the items. Potential issues regarding clarity were found with SUMS items 8 and 15 (confusing, double-barreled items) as well as SUMS items 16 and 23 (difficult to interpret); these issues were also noted in factor analyses [4].

    In terms of validity, evidence for internal structure validity was provided through an exploratory factor analysis, resulting in a five-factor solution [1]. Subsequent studies in the literature have used this five-factor model without conducting any additional internal structure analyses [2, 3, 5]; however, one study within the context of first-semester general chemistry [4] reported an exploratory factor analysis that resulted in a four-factor solution, based on an included scree plot and eigenvalues greater than 1.00. This study additionally excluded four items due to insufficient factor loadings or cross-loading on more than one factor [4]; ultimately, a confirmatory factor analysis was conducted, which failed to indicate good model fit for the four-factor model. Evidence for relations to other variables (subject matter/age) [2] has also been reported in connection with SUMS; although gender was reported as having a significant correlation with one of the SUMS factors, no data were shown in the publication [1]. In terms of evidence for reliability, coefficient alpha has been used to estimate single-administration reliability separately for each of the five factors [1-3]; however, one study [4] intentionally does not report coefficient alpha due to concerns regarding inconsistencies in the evidence for internal structure validity of data when using SUMS in the context of undergraduate chemistry students' knowledge of scientific models.
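    For readers who want to see how the per-factor reliability estimates discussed above are typically computed, the following is a minimal sketch of coefficient (Cronbach's) alpha calculated separately for each hypothesized factor. The simulated responses, column names, and item-to-factor assignments are illustrative assumptions, not the published SUMS structure.

        # Minimal sketch: coefficient (Cronbach's) alpha per hypothesized factor.
        # Item-to-factor assignments and the simulated responses are placeholders.
        import numpy as np
        import pandas as pd

        def cronbach_alpha(items: pd.DataFrame) -> float:
            """Coefficient alpha for a block of Likert items (rows = respondents)."""
            k = items.shape[1]
            item_variances = items.var(axis=0, ddof=1)
            total_variance = items.sum(axis=1).var(ddof=1)
            return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

        # responses: one column per SUMS item ("q1" ... "q27"), coded 1-5
        rng = np.random.default_rng(0)
        responses = pd.DataFrame(
            rng.integers(1, 6, size=(100, 27)),
            columns=[f"q{i}" for i in range(1, 28)],
        )

        # Hypothetical factor-to-item mapping, used only for illustration
        factors = {
            "factor_A": ["q1", "q2", "q3", "q4", "q5"],
            "factor_B": ["q6", "q7", "q8", "q9"],
        }

        for name, item_cols in factors.items():
            print(name, round(cronbach_alpha(responses[item_cols]), 3))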

    Recommendations for use

    SUMS was intended to assess students' understanding of the role of scientific models in learning science and was originally developed on a 5-point Likert scale [1]. Differences exist in the literature regarding how this response scale is used: most studies collect and analyze data on the 5-point scale [2-4], but several studies truncate/collapse the scale for analysis [1, 5], including the original SUMS study [1]. This lack of consistency makes it unclear which scale is most appropriate; it is suggested that future researchers provide support for their selection of a response scale based on theoretical framing. Additionally, differences exist in the literature regarding how best to analyze the data for item/factor scores. Although the original development study does not clarify scoring [1], a subsequent study reverse codes some items within one of the factors (models as exact replicas (ER)) [2], meaning that a value of 5 was assigned to strongly disagree responses and a value of 1 to strongly agree responses. Other reports in the literature explicitly do not reverse score/code any items [4] in order to best compare to the original development of the instrument [1], meaning that a higher level of agreement with items does not necessarily align with a high level of understanding of scientific models. This lack of consistency in how the SUMS scale is scored makes it unclear which scoring strategy is most appropriate; it is suggested that future researchers provide support for their chosen scoring strategy based on theoretical framing. Finally, it has been reported that SUMS does not generate valid and reliable data in undergraduate chemistry student contexts [4].
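    As a concrete illustration of the two scoring decisions described above, the sketch below reverse codes a hypothetical "exact replica" item on the 1-5 scale and collapses another item to a 3-point scale for descriptive reporting. The item numbers and the choice of which items to reverse code are assumptions made for illustration only.

        # Minimal sketch of the scoring choices discussed above; item numbers and
        # the decision to reverse code are illustrative, not the published scheme.
        import pandas as pd

        def reverse_code(item: pd.Series, scale_max: int = 5) -> pd.Series:
            """Reverse a 1..scale_max Likert item (5 -> 1, 4 -> 2, ...)."""
            return (scale_max + 1) - item

        def collapse_to_three(item: pd.Series) -> pd.Series:
            """Collapse a 5-point item to disagree / not sure / agree."""
            return item.map({1: "disagree", 2: "disagree", 3: "not sure",
                             4: "agree", 5: "agree"})

        responses = pd.DataFrame({"q7": [1, 2, 5, 4], "q9": [3, 5, 2, 1]})

        responses["q7_reversed"] = reverse_code(responses["q7"])
        responses["q9_collapsed"] = collapse_to_three(responses["q9"])
        print(responses)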

    Details from panel review

    While some aspects of validity and reliability evidence for the SUMS instrument have been reported, the panel found it concerning that there is currently no evidence for test content validity, although one study provides minimal detail on the exclusion of SUMS item 23 based on expert interpretation of problematic wording within that item. Regarding internal structure validity, the original development of the SUMS allowed for factor cross-loadings and reported correlations between factors without theoretical backing for why those factors might be related [1]; given the four-factor model that failed to show good fit [4] and the potential issues with the five-factor model and its origins [1], more evidence related to the internal structure of the SUMS instrument is warranted in future studies.
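    To make the cross-loading concern above concrete, the following sketch flags items whose loadings exceed a cutoff on more than one factor, or on no factor at all. The loadings matrix, item labels, and the 0.30/0.40 cutoffs are invented for illustration and are not values from the SUMS literature.

        # Minimal sketch: flag cross-loading items and items without a sufficient
        # primary loading. All numbers and labels below are illustrative assumptions.
        import pandas as pd

        loadings = pd.DataFrame(
            [[0.62, 0.05, 0.10],   # loads cleanly on F1
             [0.45, 0.41, 0.02],   # cross-loads on F1 and F2
             [0.12, 0.08, 0.15]],  # no sufficient loading on any factor
            index=["item_a", "item_b", "item_c"],
            columns=["F1", "F2", "F3"],
        )

        sufficient = loadings.abs() >= 0.40   # cutoff for a usable primary loading
        notable = loadings.abs() >= 0.30      # cutoff for a troublesome cross-loading

        cross_loading = notable.sum(axis=1) > 1
        no_primary = ~sufficient.any(axis=1)

        print("Cross-loading items:", list(loadings.index[cross_loading]))
        print("Items lacking a sufficient loading:", list(loadings.index[no_primary]))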

    References

    [1] Treagust, D.F., Chittleborough, G., & Mamiala, T.L. (2002). Students’ understanding of the role of scientific models in learning science. International Journal of Science Education, 24(4), 357-368. https://doi.org/10.1080/09500690110066485

    [2] Gobert, J.D., O’Dwyer, L., Horwitz, P., Buckley, B.C., Levy, S.T., & Wilensky, U. (2011). Examining the relationship between students’ understanding of the nature of models and conceptual learning in biology, physics, and chemistry. International Journal of Science Education, 33(5), 653-684. https://doi.org/10.1080/09500691003720671

    [3] Park, M., Liu, X., Smith, E., & Waight, N. (2017). The effect of computer models as formative assessment on student understanding of the nature of models. Chemistry Education Research and Practice, 18(4), 572-581. https://doi.org/10.1039/C7RP00018A

    [4] Lazenby, K., & Becker, N.M. (2021). Evaluation of the students’ understanding of models in science (SUMS) for use in undergraduate chemistry. Chemistry Education Research and Practice, 22(1), 62-76. https://doi.org/10.1039/D0RP00084A

    [5] Levy, S.T., & Wilensky, U. (2009). Students’ learning with the Connected Chemistry (CC1) curriculum: Navigating the complexities of the particulate world. Journal of Science Education and Technology, 18(3), 243-254. https://doi.org/10.1007/s10956-009-9145-7

    Versions
    This instrument has not been modified nor was it created based on an existing instrument.
    Citations
    Listed below are all publications that develop, implement, modify, or reference the instrument.
    1. Treagust, D.F., Chittleborough, G., & Mamiala, T.L. (2002). Students' understanding of the role of scientific models in learning science. International Journal of Science Education, 24(4), 357-368.

    2. Wei, S., Liu, X., & Jia, Y. (2014). Using Rasch measurement to validate the instrument of students’ understanding of models in science (SUMS). International Journal of Science and Mathematics Education, 12(5), 1067-1082.

    3. Chittleborough, G., & Treagust, D.F. (2007). The modelling ability of non-major chemistry students and their understanding of the sub-microscopic level. Chemistry Education Research and Practice, 8(3), 274-292.

    4. Cheng, M.-F., & Lin, J.-L. (2015). Investigating the Relationship between Students’ Views of Scientific Models and Their Development of Models. International Journal of Science Education, 37(15), 2453-2475.

    5. Underwood, S.M., Reyes-Gastelum, D., & Cooper, M.M. (2016). When do students recognize relationships between molecular structure and properties? A longitudinal comparison of the impact of traditional and transformed curricula. Chemistry Education Research and Practice.

    6. Pierson, A.E., Clark, D.B., & Sherard, M.K. (2017). Learning progressions in context: Tensions and insights from a semester-long middle school modeling curriculum. Science Education, 101(6), 1061-1088.

    7. Levy, S.T., & Wilensky, U. (2009). Students' learning with the connected chemistry (CC1) curriculum: Navigating the complexities of the particulate world. Journal of Science Education and Technology, 18(3), 243-254.

    8. Liu, X. (2006). Effects of combined hands-on laboratory and computer modeling on student learning of gas laws: A quasi-experimental study. Journal of Science Education and Technology, 15(1), 89-100.

    9. Chang, H.-Y., & Chang, H.-C. (2013). Scaffolding Students' Online Critiquing of Expert- and Peer-generated Molecular Models of Chemical Reactions. International Journal of Science Education, 35(12), 2028-2056.

    10. Gobert, J.D., O'Dwyer, L., Horwitz, P., Buckley, B.C., Levy, S.T., & Wilensky, U. (2011). Examining the relationship between students' understanding of the nature of models and conceptual learning in biology, physics, and chemistry. International Journal of Science Education, 33(5), 653-684.

    11. Lazenby, K., & Becker, N.M. (2021). Evaluation of the students’ understanding of models in science (SUMS) for use in undergraduate chemistry. Chemistry Education Research and Practice, 22(1), 62-76.

    12. Otto, C.A., Luera, G.R., & Everett, S.A. (2009). An innovative course featuring action research integrated with unifying science themes. Journal of Science Teacher Education, 20(6), 537-552.

    13. Park, M., Liu, X., Smith, E., & Waight, N. (2017). The effect of computer models as formative assessment on student understanding of the nature of models. Chemistry Education Research and Practice, 18(4), 572-581.