MLLI
OVERVIEW
Summary | |
---|---|
Original author(s) | Galloway, K. R., & Bretz, S. L. |
Original publication | Journal of Chemical Education |
Year original instrument was published | 2015 |
Inventory | |
Number of items | 31 |
Number of versions/translations | 1 |
Cited implementations | 10 |
Language | English |
Country | United States, Singapore |
Format | Paper, online |
Intended population(s) | Undergraduate chemistry laboratory students |
Domain | Chemistry |
Topic | Meaningful learning in the laboratory |
EVIDENCE
Information in the table is given in four different categories:
- General - information about how each article used the instrument:
- Original development paper - indicates the paper(s) in which the instrument was initially developed
- Uses the instrument in data collection - indicates whether an article administered the instrument and collected responses
- Modified version of existing instrument - indicates whether an article has modified a prior version of this instrument
- Evaluation of existing instrument - indicates whether an article explicitly provides evidence intended to evaluate the performance of the instrument; lack of a checkmark here implies an article that administered the instrument but did not evaluate the instrument itself
- Reliability - information about the evidence presented to establish reliability of data generated by the instrument; please see the Glossary for term definitions
- Validity - information about the evidence presented to establish validity of data generated by the instrument; please see the Glossary for term definitions
- Other Information - information that may or may not directly relate to the evidence for validity and reliability, but is commonly reported when evaluating instruments; please see the Glossary for term definitions
Publications: | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
---|---|---|---|---|---|---|---|---|---|---|
General ||||||||||
Original development paper | ✔ | |||||||||
Uses the instrument in data collection | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
Modified version of existing instrument | ||||||||||
Evaluation of existing instrument | ✔ | ✔ | ✔ | ✔ | ✔ | |||||
Reliability ||||||||||
Test-retest reliability | ✔ | ✔ | ||||||||
Internal consistency | ||||||||||
Coefficient (Cronbach's) alpha | ✔ | ✔ | ✔ | ✔ | ✔ | |||||
McDonald's Omega | ||||||||||
Inter-rater reliability | ||||||||||
Person separation | ||||||||||
Generalizability coefficients | ||||||||||
Other reliability evidence | ||||||||||
Validity ||||||||||
Expert judgment | ✔ | |||||||||
Response process | ✔ | |||||||||
Factor analysis, IRT, Rasch analysis | ✔ | ✔ | ✔ | |||||||
Differential item function | ||||||||||
Evidence based on relationships to other variables | ✔ | ✔ | ✔ | |||||||
Evidence based on consequences of testing | ||||||||||
Other validity evidence | ✔ | |||||||||
Other information ||||||||||
Difficulty | ||||||||||
Discrimination | ||||||||||
Evidence based on fairness | ||||||||||
Other general evidence | ✔ |||||||||
REVIEW
This review was generated by a CHIRAL review panel. Each CHIRAL review panel consists of multiple experts who first individually review the citations of the assessment instrument listed on this page for evidence in support of the validity and reliability of the data generated by the instrument. Panels then meet to discuss the evidence and summarize their opinions in the review posted in this tab. These reviews summarize only the evidence that was discussed during the panel, which may not represent all evidence available in the published literature or all evidence that appears on the Evidence tab.
If you feel that evidence is missing from this review, or that something was documented in error, please use the CHIRAL Feedback page.
Panel Review: Meaningful Learning in the Laboratory Instrument
(Post last updated July 21, 2021)
Review panel summary
The Meaningful Learning in the Laboratory Instrument (MLLI) is a thirty-item, Likert-scale instrument designed to measure students' expectations before and after laboratory experiences [1]. It has been used successfully in both paper and online formats [1].
The instrument has been evaluated with general chemistry (first-year chemistry) students [1-5, 8] and organic chemistry (second-year chemistry) laboratory students [1-5, 8] across a wide range of US institution types (e.g., community colleges, primarily undergraduate institutions, comprehensive universities, and research universities) [1-6, 8], and at universities in the UK and Australia [7]. Additionally, there is one study in which the MLLI was given to third- and fourth-year upper-division students in a physical measurement laboratory [6], and one study with chemical engineering students in Singapore [9].
Several aspects of validity have been studied. Experts in chemistry education have provided expert judgment on the categorization of items as measuring cognitive, affective, or cognitive/affective dimensions [1]. Student interviews have been conducted with general chemistry and organic chemistry students to provide response-process validity evidence [1], which shows that students at this stage interpret the MLLI items as intended.
Studies using the MLLI have provided evidence based on relations to other variables, showing that the instrument can track changes in student expectations between the beginning and end of a term (i.e., pre-post) [1, 3, 6, 9] and across time (longitudinally) [4].
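As a concrete illustration only (the specific statistical tests used in the cited studies are not detailed here), a pre-post comparison of matched student scores could be run as a paired test; the scores below are hypothetical:

```python
import numpy as np
from scipy import stats

# Hypothetical pre/post MLLI scores (0-100 scale) for the same five
# students; the arrays are paired by student.
pre = np.array([72.0, 65.0, 80.0, 58.0, 90.0])
post = np.array([61.0, 60.0, 75.0, 55.0, 88.0])

# Paired t-test: did expectations shift between the start and end of term?
t_stat, p_value = stats.ttest_rel(pre, post)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```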
Evidence relating to the internal structure provides support for interpreting MLLI data as positive and negative student expectations [1, 3]. Finally, coefficient alpha was used to estimate the single-administration reliability of the item groupings (cognitive, affective, and cognitive/affective) [1-8]. However, these groupings were not recovered in the factor analysis studies used to support the internal structure of MLLI data.
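For reference, coefficient alpha for a grouping of k items is computed from the individual item variances and the variance of the summed score:

$$\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma_i^2}{\sigma_X^2}\right)$$

where σᵢ² is the variance of item i across respondents and σ_X² is the variance of the total (summed) score.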
Recommendations for use
The MLLI's evidence of reliability and validity supports its use with students taking chemistry laboratory courses at different stages of college. Most of the evidence has been collected on samples that skew white (60%-80%), so it is unclear how the instrument would function outside of this demographic. Specifically, when the instrument was used with a chemical engineering population in Singapore, the scale proved less reliable [9], which raises questions about how the instrument might be interpreted by English language learners.
Many applications of the MLLI compile and compare scores across cognitive, affective, and cognitive/affective dimensions [1-5, 8]. However, these dimensions are not supported by evidence relating to the internal structure of MLLI data [1, 3]. Given the discrepancies between the expected item groupings (cognitive, affective, cognitive/affective) and the two-factor solution (negative and positive expectations) found across studies, the panel suggests gathering ongoing evidence to support the structure of MLLI data.
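As a sketch of the kind of structural check the panel recommends, the snippet below fits a two-factor model with scikit-learn's exploratory factor analysis; this is not necessarily the method used in the cited studies, and the response matrix here is simulated:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Simulated respondents x items matrix standing in for MLLI responses
# (0-100 scale); a real analysis would use collected data.
rng = np.random.default_rng(0)
responses = rng.uniform(0, 100, size=(200, 30))

# Fit a two-factor model, mirroring the positive/negative expectation
# structure reported in [1, 3], and inspect item loadings before
# aggregating subscale scores by the intended cognitive/affective groups.
fa = FactorAnalysis(n_components=2, rotation="varimax").fit(responses)
loadings = fa.components_.T  # items x factors
print(np.round(loadings[:5], 2))  # first few item loadings
```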
Details from panel review
A variety of validity and reliability evidence has been collected for MLLI data. Specifically, the panel noted that MLLI studies have provided evidence based on relations to other variables in:
a) general chemistry vs. organic chemistry [1, 3].
b) changes in expectations between the beginning and end of term (pre-post) [1, 3, 6, 9]. Additionally, interview findings support the changes observed in pre-post movement [6].
c) comparisons of responses between the affective and cognitive dimensions [1-9].
d) across cluster groupings based on whether incoming expectations were high vs. low or changed over time [2-3].
e) longitudinally, tracking how students' expectations change from the beginning of the first year to the end of the second year [4].
f) hybrid vs. traditional lab courses [8].
However, when it comes to comparing across the affective and cognitive dimensions [1-9], there is a need for additional evidence supporting the internal structure underlying this comparison. Across many studies, coefficient alpha was used to estimate the single-administration reliability of the cognitive and affective item groupings. Coefficient alpha has been reported in the range 0.70-0.88 for the cognitive and affective groupings, and 0.51-0.58 for the items labeled as both cognitive/affective [1-8]. When the instrument was used with a chemical engineering population in Singapore, the scale proved less reliable (0.28-0.83) [9].
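A minimal sketch of how a single-administration alpha can be computed for one item grouping, assuming a respondents × items matrix of 0-100 responses (the data and the 16-item grouping size are hypothetical):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Coefficient (Cronbach's) alpha for a respondents x items matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of summed score
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical responses for a 16-item grouping from 150 students.
rng = np.random.default_rng(1)
grouping = rng.uniform(0, 100, size=(150, 16))
print(round(cronbach_alpha(grouping), 2))
```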
In two studies, researchers changed the 0-100 scale to a 5-point Likert scale [3, 7]. This change was supported by single-administration reliability values; however, the panelists encourage future studies to investigate the potential impacts of this change. Finally, given the limited populations tested, in terms of racial demographics and English language speakers, the panelists encourage future studies to investigate MLLI item functioning and data structure prior to any comparisons across groups, in order to provide evidence that this type of analysis is supported.
References
[1] Galloway, K. R., & Bretz, S. L. (2015). Development of an assessment tool to measure students’ meaningful learning in the undergraduate chemistry laboratory. Journal of Chemical Education, 92(7), 1149-1158. https://doi.org/10.1021/ed500881y
[2] Galloway, K. R., & Bretz, S. L. (2015). Using cluster analysis to characterize meaningful learning in a first-year university chemistry laboratory course. Chemistry Education Research and Practice, 16(4), 879-892. https://doi.org/10.1039/C5RP00077G
[3] Galloway, K. R., & Bretz, S. L. (2015). Measuring meaningful learning in the undergraduate chemistry laboratory: a national, cross-sectional study. Journal of Chemical Education, 92(12), 2006-2018. https://doi.org/10.1021/acs.jchemed.5b00538
[4] Galloway, K. R., & Bretz, S. L. (2015). Measuring meaningful learning in the undergraduate general chemistry and organic chemistry laboratories: a longitudinal study. Journal of Chemical Education, 92(12), 2019-2030. https://doi.org/10.1021/acs.jchemed.5b00754
[5] Galloway, K. R., & Bretz, S. L. (2016). Video episodes and action cameras in the undergraduate chemistry laboratory: eliciting student perceptions of meaningful learning. Chemistry Education Research and Practice, 17(1), 139-155. https://doi.org/10.1039/C5RP00196J
[6] Schmidt-McCormack, J. A., Muniz, M. N., Keuter, E. C., Shaw, S. K., & Cole, R. S. (2017). Design and implementation of instructional videos for upper-division undergraduate laboratory courses. Chemistry Education Research and Practice, 18(4), 749-762. https://doi.org/10.1039/C7RP00078B
[7] George-Williams, S. R., Karis, D., Ziebell, A. L., Kitson, R. R., Coppo, P., Schmid, S., ... & Overton, T. L. (2019). Investigating student and staff perceptions of students' experiences in teaching laboratories through the lens of meaningful learning. Chemistry Education Research and Practice, 20(1), 187-196. https://doi.org/10.1039/C8RP00188J
[8] Enneking, K. M., Breitenstein, G. R., Coleman, A. F., Reeves, J. H., Wang, Y., & Grove, N. P. (2019). The evaluation of a hybrid, general chemistry laboratory curriculum: Impact on students’ cognitive, affective, and psychomotor learning. Journal of Chemical Education, 96(6), 1058-1067. https://doi.org/10.1021/acs.jchemed.8b00637
[9] Nguk, L. P., & Vijayan, N. (2020). Block Teaching of Chemistry Tutorial and Laboratory and the Effect on Competencies and Lesson Experience. Asian Journal of the Scholarship of Teaching and Learning, 10(1), 5-26.
VERSIONS
Name | Authors |
---|---|
Meaningful Learning in the Laboratory Instrument (George-Williams et al., 2019) | George-Williams, S. R., Karis, D., Ziebell, A. L., Kitson, R. R., Coppo, P., Schmid, S., … & Overton, T. L. |
CITATIONS
Galloway, K. R., & Bretz, S. L. (2015). Development of an Assessment Tool to Measure Students' Meaningful Learning in the Undergraduate Chemistry Laboratory. Journal of Chemical Education, 92(7), 1149-1158.
Galloway, K. R., & Bretz, S. L. (2015). Measuring Meaningful Learning in the Undergraduate Chemistry Laboratory: A National, Cross-Sectional Study. Journal of Chemical Education, 92(12), 2006-2018.
Galloway, K. R., & Bretz, S. L. (2015). Measuring meaningful learning in the undergraduate general chemistry and organic chemistry laboratories: a longitudinal study. Journal of Chemical Education, 92(12), 2019-2030.
Enneking, K. M., Breitenstein, G. R., Coleman, A. F., Reeves, J. H., Wang, Y., & Grove, N. P. (2019). The evaluation of a hybrid, general chemistry laboratory curriculum: Impact on students’ cognitive, affective, and psychomotor learning. Journal of Chemical Education, 96(6), 1058-1067.
Altowaiji, S., Haddadin, R., Campos, P., Sorn, S., Gonzalez, L., Villafañe, S. M., & Groves, M. N. (2021). Measuring the effectiveness of online preparation videos and questions in the second semester general chemistry laboratory. Chemistry Education Research and Practice.
Galloway, K. R., Malakpa, Z., & Bretz, S. L. (2016). Investigating Affective Experiences in the Undergraduate Chemistry Laboratory: Students' Perceptions of Control and Responsibility. Journal of Chemical Education, 93(2), 227-238.
Galloway, K. R., & Bretz, S. L. (2015). Using cluster analysis to characterize meaningful learning in a first-year university chemistry laboratory course. Chemistry Education Research and Practice, 16(4), 879-892.
Nguk, L. P., & Vijayan, N. (2020). Block Teaching of Chemistry Tutorial and Laboratory and the Effect on Competencies and Lesson Experience. Asian Journal of the Scholarship of Teaching and Learning, 10(1), 5-26.
Galloway, K. R., & Bretz, S. L. (2016). Video episodes and action cameras in the undergraduate chemistry laboratory: Eliciting student perceptions of meaningful learning. Chemistry Education Research and Practice, 17(1), 139-155.
Schmidt-McCormack, J. A., Muniz, M. N., Keuter, E. C., Shaw, S. K., & Cole, R. S. (2017). Design and implementation of instructional videos for upper-division undergraduate laboratory courses. Chemistry Education Research and Practice, 18(4), 749-762.