Motivated Strategies For Learning Questionnaire

MSLQ

    Overview
    Listed below is general information about the instrument.
    Summary
    Original author(s)
    • Pintrich, P.R., Smith, D.A.F., Garcia, T., & McKeachie, W.J.

    Original publication
    • Pintrich, P. R. (1991). A manual for the use of the Motivated Strategies for Learning Questionnaire (MSLQ).

    Year original instrument was published 1991
    Inventory
    Number of items 81
    Number of versions/translations 12
    Cited implementations 9
    Language
    • English
    Country United States, Canada
    Format
    • Response Scale
    Intended population(s)
    • Students
    • Undergraduate
    • Middle School
    • High School
    Domain
    • Affective
    • Behavioral
    Topic
    • Attitude
    Evidence
    The CHIRAL team carefully combs through every reference that cites this instrument and pulls all evidence that relates to the instrument’s validity and reliability. These data are presented in the following table, which simply notes the presence or absence of evidence related to each concept but does not indicate the quality of that evidence. Similarly, a lack of evidence does not necessarily mean the instrument is “less valid,” only that the evidence was not presented in the literature. Learn more about this process by viewing the CHIRAL Process, and consult the instrument’s Review (next tab), if available, for better insight into the usability of this instrument.

    Information in the table is given in four different categories:
    1. General - information about how each article used the instrument:
      • Original development paper - indicates the paper(s) in which the instrument was initially developed
      • Uses the instrument in data collection - indicates whether an article administered the instrument and collected responses
      • Modified version of existing instrument - indicates whether an article has modified a prior version of this instrument
      • Evaluation of existing instrument - indicates whether an article explicitly provides evidence that attempts to evaluate the performance of the instrument; lack of a checkmark here indicates an article that administered the instrument but did not evaluate the instrument itself
    2. Reliability - information about the evidence presented to establish reliability of data generated by the instrument; please see the Glossary for term definitions
    3. Validity - information about the evidence presented to establish validity of data generated by the instrument; please see the Glossary for term definitions
    4. Other Information - information that may or may not directly relate to the evidence for validity and reliability but is commonly reported when evaluating instruments; please see the Glossary for term definitions

    General

    Original development paper
    Uses the instrument in data collection
    Modified version of existing instrument
    Evaluation of existing instrument

    Reliability

    Test-retest reliability
    Internal consistency
    Coefficient (Cronbach's) alpha
    McDonald's Omega
    Inter-rater reliability
    Person separation
    Generalizability coefficients
    Other reliability evidence

    Validity

    Expert judgment
    Response process
    Factor analysis, IRT, Rasch analysis
    Differential item function
    Evidence based on relationships to other variables
    Evidence based on consequences of testing
    Other validity evidence

    Other information

    Difficulty
    Discrimination
    Evidence based on fairness
    Other general evidence
    Review
    DISCLAIMER: The evidence supporting the validity and reliability of the data summarized below is for use of this assessment instrument within the reported settings and populations. The continued collection and evaluation of validity and reliability evidence, in both similar and dissimilar contexts, is encouraged and will support the chemistry education community’s ongoing understanding of this instrument and its limitations.
    This review was generated by a CHIRAL review panel. Each CHIRAL review panel consists of multiple experts who first individually review the citations of the assessment instrument listed on this page for evidence in support of the validity and reliability of the data generated by the instrument. Panels then meet to discuss the evidence and summarize their opinions in the review posted in this tab. These reviews summarize only the evidence that was discussed during the panel, which may not represent all evidence available in the published literature or that which appears on the Evidence tab.
    If you feel that evidence is missing from this review, or that something was documented in error, please use the CHIRAL Feedback page.

    Panel Review: Motivated Strategies for Learning Questionnaire (MSLQ)

    (Post last updated 28 December 2024)

    Review panel summary   
    The Motivated Strategies for Learning Questionnaire (MSLQ) is an 81-item questionnaire designed to assess students’ motivation and learning strategies and has been administered to students from a variety of disciplines [1,2], including undergraduate biology [3], general chemistry [6-8], and organic chemistry [9]. The MSLQ includes 15 different subscales, which are designed to be used individually or in combination [1]. The survey is administered on a 7-point Likert-type scale from “not at all true of me” to “very true of me” [1,2,6,8,9], although it has also been administered using 5-point Likert-type scales from “not at all true” to “very true” [4] and from “strongly disagree” to “strongly agree” [5]. Scale items were developed based on established models of motivation and of learning and information processing [2], which provides some support for evidence based on test content. Data collected with the initial items were informally evaluated for evidence of validity and reliability, and the items were revised before publication. Evidence for internal structure validity has been provided for data collected with the published items through two confirmatory factor analyses (CFAs): one that included the motivation subscales (6-factor CFA) and one with the cognitive and metacognitive learning strategy subscales (9-factor CFA) [2,3]. Single-factor CFAs have also been used to evaluate data collected for individual subscales [9]. Subscale means have been created by calculating the mean of all the item responses related to a single subscale [1,2]. Significant correlations to final course grade have been found for many individual item scores and subscale mean scores [1]. Additionally, some subscale means have shown significant correlations to the Achievement Emotions Questionnaire for Organic Chemistry (AEQ-OCHEM) subscales [9], as well as to affective measures related to class, learning, and test taking (i.e., enjoyment, hope, pride, anger, anxiety, shame, hopelessness, and boredom) [5] and to students’ chemistry self-concept, math self-concept, emotional satisfaction, and intellectual accessibility [6]. Some subscale scores have also been integrated into cluster analyses that demonstrate evidence based on relations to other variables, which resulted in clusters representing low, medium, and high affective groups [6,7]. Some subscales that have been used in studies to evaluate potential differences between test and control groups showed no significant differences between the groups, which provides some evidence for validity related to consequences of testing [3,8]. Single-administration reliability of data collected with the subscales has been estimated using coefficient alpha [1,2,4,5,9].
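
    To make the subscale scoring described above concrete, below is a minimal Python sketch of computing subscale means from 7-point Likert-type responses. The item-to-subscale assignments shown are illustrative placeholders, not the actual MSLQ mapping; the manual [1] documents the real item assignments.

        from statistics import mean

        # Hypothetical mapping of subscale name -> item numbers; the real
        # MSLQ assignments are given in the manual [1].
        SUBSCALE_ITEMS = {
            "task_value": [4, 10, 17],      # placeholder item numbers
            "self_efficacy": [5, 6, 12],    # placeholder item numbers
        }

        def subscale_means(responses: dict[int, int]) -> dict[str, float]:
            """Mean of a student's 1-7 responses for each subscale."""
            return {
                name: mean(responses[i] for i in items)
                for name, items in SUBSCALE_ITEMS.items()
            }

        # One student's made-up responses, keyed by item number.
        student = {4: 6, 10: 7, 17: 5, 5: 4, 6: 3, 12: 5}
        print(subscale_means(student))  # {'task_value': 6.0, 'self_efficacy': 4.0}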

    Recommendations for use   
    The intent of the MSLQ is to allow instructors to pick and choose the subscales of motivation and learning strategies related to the aspects they are interested in measuring for their population and environment [1]. While some aspects of validity and reliability evidence have been presented, there is currently limited evidence to support test content and response process validity for data collected with the MSLQ. Additionally, the evidence provided for internal structure validity suggests that while data collected with individual subscales may show evidence of good data-model fit [9], the overall CFA models for motivation (6-factor correlated CFA) and learning strategies (9-factor correlated CFA) have not met the recommended cutoff values that suggest good data-model fit [1,2]. Because MSLQ users can choose among the subscales to suit their needs, this panel suggests that validity and reliability evidence continue to be provided for data collected with individual and/or multiple subscales of the MSLQ, both to further support the proposed factor structure and to confirm that the population of interest interprets the items and subscales as intended.

    Details from panel review   
    While the development of the MSLQ items spanned many years and included data from over 1000 students [1,2], the published evidence in support of validity and reliability for data collected with the MSLQ is more limited. Although the items were developed using models of motivation and learning [2], additional evidence to support test content validity is absent. Multiple sources include evidence related to the internal structure of the data collected with the instrument. CFAs with data collected with the motivation subscales (6-factor CFA) and learning strategies subscales (9-factor CFA) resulted in fit statistics that the original authors deemed adequate, although, as noted above, these values fall short of commonly recommended cutoffs [1,2]. Most single-factor CFAs with individual subscales showed evidence of good data-model fit based on CFI and SRMR values [9].

    Correlations of a majority of item scores and subscale mean scores with final course grade were found to be significant [1,2]. Correlations between some MSLQ subscales and other affective measures have also been reported. The task-value subscale has been used to evaluate individual interest and showed significant correlations with situational interest measures [4]. The self-efficacy and task-value subscales have also been found to be significantly correlated with positive and negative affective measures related to enjoyment, hope, pride, anger, anxiety, shame, hopelessness, and boredom in class, learning, and test environments [5]. The self-efficacy and test anxiety subscales have shown significant correlations with chemistry and math self-concept measures, emotional satisfaction, and intellectual accessibility [6]. MSLQ subscales have also shown significant correlations with the subscales of the AEQ-OCHEM measure [9]. Some subscales from the MSLQ (i.e., task value, self-efficacy, metacognition and self-regulation, study environment, effort regulation, and peer learning) have been used to verify that test and control groups are similar in those aspects [3,8]. No evidence to support response process validity was presented. Evidence of reasonable single-administration reliability has been provided through coefficient alpha values ranging from 0.51 to 0.95 for each subscale [1,2,3,4,9].
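
    For reference, coefficient (Cronbach’s) alpha, the single-administration reliability estimate cited throughout this review, is alpha = k/(k-1) * (1 - sum of item variances / variance of total scores), where k is the number of items in a subscale. Below is a minimal Python sketch, assuming a made-up response matrix with one row per respondent and one column per item of a single subscale.

        from statistics import pvariance

        def cronbach_alpha(scores: list[list[float]]) -> float:
            """Coefficient alpha for a respondents-by-items score matrix.

            Population variances are used consistently in the numerator
            and denominator.
            """
            k = len(scores[0])                                   # number of items
            item_vars = [pvariance(col) for col in zip(*scores)] # per-item variance
            total_var = pvariance([sum(row) for row in scores])  # variance of totals
            return k / (k - 1) * (1 - sum(item_vars) / total_var)

        # Made-up responses from four students on a three-item subscale.
        data = [[5, 6, 5], [3, 4, 4], [6, 7, 6], [2, 3, 2]]
        print(round(cronbach_alpha(data), 2))  # 0.99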

    References

    [1] Pintrich, P.R., Smith, D.A.F., Garcia, T., and McKeachie, W.J. (1991). A Manual for the Use of the Motivated Strategies for Learning Questionnaire (MSLQ). National Center for Research to Improve Postsecondary Teaching and Learning, Ann Arbor, MI.

    [2] Pintrich, P.R., Smith, D.A.F., Garcia, T., and McKeachie, W.J. (1993). Reliability and Predictive Validity of the Motivated Strategies for Learning Questionnaire (MSLQ). Educ. and Psych. Meas., 53(3), 801-813.

    [3] Feldon, D.F., Timmerman, B.C., Stowe, K.A., & Showman, R. (2010). Translating expertise into effective instruction: The impacts of cognitive task analysis (CTA) on lab report quality and student retention in the biological sciences. J. Res. Sci. Teach. 47(10), 1165-1185.

    [4] Linnenbrink-Garcia, L.L., Durik, A.M., Conley, A.M., Barron, K.E., Tauer, J.M., Karabenick, S.A., & Harackiewicz, J.M. (2010). Measuring Situational Interest in Academic Domains. Educ. and Psych. Meas., 70(4), 647-671.

    [5] Pekrun, R., Goetz, T., Frenzel, A.C., Barchfeld, P., & Perry, R.P. (2011). Measuring emotions in students’ learning and performance: The Achievement Emotions Questionnaire (AEQ). Contemp. Educ. Psych., 36, 36-48.

    [6] Chan, J.Y.K. and Bauer, C.F. (2014). Identifying At-Risk Students in General Chemistry via Cluster Analysis of Affective Characteristics. J. Chem. Educ., 91(9), 1417-1425.

    [7] Chan, J.Y.K. and Bauer, C.F. (2016). Learning and studying strategies used by general chemistry students with different affective characteristics. Chem. Educ. Res. & Pract., 17, 675-684.

    [8] Underwood, S.M., Reyes-Gastelum, D., & Cooper, M.M. (2016). When do students recognize relationships between molecular structure and properties? A longitudinal comparison of the impact of traditional and transformed curricula. Chem. Educ. Res. & Pract., 17, 365-380.

    [9] Raker, J.R., Gibbons, R.E., & Cruz-Ramirez de Arellano, D. (2019). Development and evaluation of the organic chemistry-specific achievement emotions questionnaire (AEQ-OCHEM). J. Res. Sci. Teach., 56(2), 163-183.

    Versions
    Listed below are all versions and modifications that were based on this instrument, or that this instrument was based on.
    Instrument has been modified in:
    Authors
    • Yalcinkaya, E., & Boz, Y.

    • Tuan, H.-L., Chin, C.-C., & Shieh, S.-H.

    • Velayutham, S., Aldridge, J., & Fraser, B.

    • Lati, W., Triampo, D., & Yodyingyong, S.

    • Tastan, Kirik O., & Boz, Y.

    • Santos, D. L., Barbera, J., & Mooring, S. R.

    • Lin, T.-J., & Tsai, C.-C.

    • Kadioglu-Akbulut, C., & Uzuntiryaki-Kondakci, E.

    • Naibert, N., Duck, K.D., Phillips, M.M., & Barbera, J.

    • Reimer, L. C., Leslie, J. M., Bidwell, S. L., Isborn, C. M., Lair, D., Menke, E., Stokes, B. J., & Hratchian, H. P.

    Citations
    Listed below is all literature that develops, implements, modifies, or references the instrument.
    1. Pintrich, P. R. (1991). A manual for the use of the Motivated Strategies for Learning Questionnaire (MSLQ).

    2. Chan, J.Y.K., & Bauer, C.F. (2016). Learning and studying strategies used by general chemistry students with different affective characteristics. Chemistry Education Research and Practice, 17(4), 675-684.

    3. Pekrun, R., Goetz, T., Frenzel, A.C., Barchfeld, P., & Perry, R.P. (2011). Measuring emotions in students' learning and performance: The Achievement Emotions Questionnaire (AEQ). Contemporary Educational Psychology, 36(1), 36-48.

    4. Feldon, D. F., Timmerman, B. C., Stowe, K. A., & Showman, R. (2010). Translating expertise into effective instruction: The impacts of cognitive task analysis (CTA) on lab report quality and student retention in the biological sciences. Journal of Research in Science Teaching, 47(10), 1165-1185.

    5. Chan, J.Y.K., & Bauer, C.F. (2014). Identifying at-risk students in general chemistry via cluster analysis of affective characteristics. Journal of Chemical Education, 91(9), 1417-1425.

    6. Raker, J.R., Gibbons, R.E., & Cruz-Ramirez de Arellano, D. (2019). Development and evaluation of the organic chemistry-specific achievement emotions questionnaire (AEQ-OCHEM). Journal of Research in Science Teaching, 56(2), 163-183.

    7. Pintrich, P. R., Smith, D. A., Garcia, T., & McKeachie, W. J. (1993). Reliability and predictive validity of the Motivated Strategies for Learning Questionnaire (MSLQ). Educational and Psychological Measurement, 53(3), 801-813.

    8. Linnenbrink-Garcia, L., Durik, A.M., Conley, A.M.M., Barron, K.E., Tauer, J.M., Karabenick, S.A., & Harackiewicz, J.M. (2010). Measuring situational interest in academic domains. Educational and Psychological Measurement, 70(4), 647-671.

    9. Underwood, S.M., Reyes-Gastelum, D., & Cooper, M.M. (2016). When do students recognize relationships between molecular structure and properties? A longitudinal comparison of the impact of traditional and transformed curricula. Chemistry Education Research and Practice, 17, 365-380.