(Post last updated July 21, 2021)
Review panel summary
The Meaningful Learning in the Laboratory Instrument (MLLI) is a thirty-item instrument on which students rate each item on a 0-100 scale. It was designed to measure students' expectations before and after laboratory experiences [1] and has been used successfully in both paper and online formats [1].
The instrument has been evaluated with general chemistry (first-year) laboratory students [1-5, 8] and organic chemistry (second-year) laboratory students [1-5, 8] across a range of US institutions (e.g., community colleges, primarily undergraduate institutions, comprehensive universities, and research universities) [1-6, 8], as well as at universities in the UK and Australia [7]. Additionally, one study administered the MLLI to third- and fourth-year students in upper-division physical measurement laboratory courses [6], and another to chemical engineering students in Singapore [9].
Several aspects of validity have been studied. Experts in chemistry education provided judgments on the categorization of items as measuring cognitive, affective, or cognitive/affective dimensions [1]. Student interviews conducted with general chemistry and organic chemistry students provided response process validity evidence [1], indicating that students at this stage interpret the MLLI items as intended.
Studies using the MLLI have also provided evidence based on relations to other variables, showing that the instrument can track changes in student expectations between the beginning and end of a term (i.e., pre-post) [1, 3, 6, 9] and across time (longitudinally) [4].
Evidence relating to internal structure supports interpreting MLLI data in terms of positive and negative student expectations [1, 3]. Finally, coefficient alpha was used to estimate the single-administration reliability of the item groupings by cognitive, affective, and cognitive/affective dimension [1-8]. However, these groupings were not recovered in the factor analysis studies used to support the internal structure of MLLI data.
Recommendations for use
The MLLI's evidence of reliability and validity supports its use with students in chemistry laboratory courses at a range of stages in the undergraduate curriculum. Most of the evidence has been collected on samples that skew white (60%-80%), so it is unclear how the instrument functions outside of these populations. In particular, when the instrument was used with a chemical engineering population in Singapore, the scales were less reliable [9], which raises questions about how the items might be interpreted by English language learners.
Many applications of the MLLI compile and compare scores across cognitive, affective, and cognitive/affective dimensions [1-5, 8]. However, these dimensions are not supported by evidence relating to the internal structure of MLLI data [1, 3]. Given the discrepancy between the expected item groupings (cognitive, affective, cognitive/affective) and the two-factor solution (negative and positive expectations) found across studies, the panel suggests that additional evidence supporting the structure of MLLI data be gathered on an ongoing basis.
Details from panel review
A variety of validity and reliability evidence has been collected for MLLI data. Specifically, the panel noted that MLLI studies have provided evidence based on relations to other variables that the instrument can be used for:
a) comparing general chemistry and organic chemistry students [1, 3].
b) measuring changes in expectations between the beginning and end of a term (pre-post) [1, 3, 6, 9]. Additionally, interview findings support the changes observed pre-post [6].
c) comparing responses across the affective and cognitive dimensions [1-9].
d) comparing cluster groupings based on whether incoming expectations were high or low, or changed over time [2-3].
e) tracking longitudinally how students' expectations change from the beginning of the first year to the end of the second year [4].
f) comparing hybrid and traditional laboratory formats [8].
However, when it comes to comparing across the affective and cognitive dimensions [1-9], additional evidence supporting the internal structure is needed for this comparison. Across many studies, coefficient alpha was used to estimate the single-administration reliability of the cognitive and affective item groupings. Coefficient alpha has been reported in the range 0.70-0.88 for the cognitive and affective groupings, and 0.51-0.58 for items labeled as both cognitive and affective [1-8]. When the instrument was used with a chemical engineering population in Singapore, the estimates were less consistent (0.28-0.83) [9].
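For context, these values reflect the standard single-administration (Cronbach's) coefficient alpha; the expression below is the conventional definition rather than anything specific to the MLLI studies, where k is the number of items in a grouping, sigma_i^2 is the variance of item i, and sigma_X^2 is the variance of the summed grouping score:

\[ \alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma_i^2}{\sigma_X^2}\right) \]

By the commonly cited rule of thumb that values of roughly 0.7 or above indicate acceptable internal consistency, the lower ends of the 0.51-0.58 and 0.28-0.83 ranges reported above warrant caution.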
In two studies, researchers changed the 0-100 scale to a 5-point Likert scale [3, 7]. This change was supported by single-administration reliability values; however, the panelists encourage future studies to investigate the potential impacts of this change. Finally, given the limited populations tested, in terms of racial demographics and English language background, the panelists encourage future studies to investigate MLLI item functioning and data structure prior to making comparisons across groups, in order to provide evidence that this type of analysis is supported.
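As a starting point for such checks, the sketch below illustrates one way a researcher might estimate single-administration reliability for an item grouping and probe the two-factor (positive/negative expectation) structure on their own MLLI responses. This is a minimal sketch under stated assumptions, not the procedure used in any of the cited studies: the file name mlli_post.csv, the placeholder item column names, and the use of the third-party pandas and factor_analyzer packages are illustrative choices only.

# Minimal sketch: reliability and internal-structure checks on local MLLI data.
# Assumes a CSV with one row per student and one column per MLLI item (0-100 scale).
import pandas as pd
from factor_analyzer import FactorAnalyzer  # third-party package: pip install factor_analyzer

def coefficient_alpha(items: pd.DataFrame) -> float:
    """Cronbach's coefficient alpha for a grouping of items."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

responses = pd.read_csv("mlli_post.csv")  # hypothetical file of post-semester responses

# Reliability of a hypothetical item grouping (replace with the grouping of interest).
cognitive_items = ["item01", "item04", "item07"]  # placeholder column names
print("alpha, cognitive grouping:", round(coefficient_alpha(responses[cognitive_items]), 2))

# Exploratory check of the two-factor (positive/negative expectations) structure.
fa = FactorAnalyzer(n_factors=2, rotation="oblimin")
fa.fit(responses)
loadings = pd.DataFrame(fa.loadings_, index=responses.columns, columns=["Factor 1", "Factor 2"])
print(loadings.round(2))

Researchers comparing groups (e.g., by course level or language background) would run checks of this kind within each group before interpreting score differences, in line with the panel's recommendation above.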
References
[1] Galloway, K. R., & Bretz, S. L. (2015). Development of an assessment tool to measure students’ meaningful learning in the undergraduate chemistry laboratory. Journal of Chemical Education, 92(7), 1149-1158. https://doi.org/10.1021/ed500881y
[2] Galloway, K. R., & Bretz, S. L. (2015). Using cluster analysis to characterize meaningful learning in a first-year university chemistry laboratory course. Chemistry Education Research and Practice, 16(4), 879-892. https://doi.org/10.1039/C5RP00077G
[3] Galloway, K. R., & Bretz, S. L. (2015). Measuring meaningful learning in the undergraduate chemistry laboratory: a national, cross-sectional study. Journal of Chemical Education, 92(12), 2006-2018. https://doi.org/10.1021/acs.jchemed.5b00538
[4] Galloway, K. R., & Bretz, S. L. (2015). Measuring meaningful learning in the undergraduate general chemistry and organic chemistry laboratories: a longitudinal study. Journal of Chemical Education, 92(12), 2019-2030. https://doi.org/10.1021/acs.jchemed.5b00754
[5] Galloway, K. R., & Bretz, S. L. (2016). Video episodes and action cameras in the undergraduate chemistry laboratory: eliciting student perceptions of meaningful learning. Chemistry Education Research and Practice, 17(1), 139-155. https://doi.org/10.1039/C5RP00196J
[6] Schmidt-McCormack, J. A., Muniz, M. N., Keuter, E. C., Shaw, S. K., & Cole, R. S. (2017). Design and implementation of instructional videos for upper-division undergraduate laboratory courses. Chemistry Education Research and Practice, 18(4), 749-762. https://doi.org/10.1039/C7RP00078B
[7] George-Williams, S. R., Karis, D., Ziebell, A. L., Kitson, R. R., Coppo, P., Schmid, S., ... & Overton, T. L. (2019). Investigating student and staff perceptions of students' experiences in teaching laboratories through the lens of meaningful learning. Chemistry Education Research and Practice, 20(1), 187-196. https://doi.org/10.1039/C8RP00188J
[8] Enneking, K. M., Breitenstein, G. R., Coleman, A. F., Reeves, J. H., Wang, Y., & Grove, N. P. (2019). The evaluation of a hybrid, general chemistry laboratory curriculum: Impact on students’ cognitive, affective, and psychomotor learning. Journal of Chemical Education, 96(6), 1058-1067. https://doi.org/10.1021/acs.jchemed.8b00637
[9] Nguk, L. P., & Vijayan, N. (2020). Block Teaching of Chemistry Tutorial and Laboratory and the Effect on Competencies and Lesson Experience. Asian Journal of the Scholarship of Teaching and Learning, 10(1), 5-26.