
Metacognitive Awareness Inventory

MAI

    Overview
    Listed below is general information about the instrument.
    Summary
    Original author(s)
    • Schraw, G., & Dennison, R.S.

    Original publication
    • Schraw, G., & Dennison, R.S. (1994). Assessing metacognitive awareness. Contemporary Educational Psychology, 19(4), 460-475.

    Year original instrument was published: 1994
    Inventory
    Number of items: 52
    Number of versions/translations: 2
    Cited implementations: 4
    Language
    • English
    Country: United States
    Format
    • Response Scale
    Intended population(s)
    • Students
    • Undergraduate
    Evidence
    The CHIRAL team carefully combs through every reference that cites this instrument and pulls all evidence relating to the instrument’s validity and reliability. These data are presented in the following table, which notes only the presence or absence of evidence related to each concept; it does not indicate the quality of that evidence. Likewise, a lack of evidence does not necessarily mean the instrument is “less valid,” only that such evidence has not been presented in the literature. Learn more about this process by viewing the CHIRAL Process, and consult the instrument’s Review (next tab), if available, for better insight into the usability of this instrument.

    Information in the table is given in four different categories:
    1. General - information about how each article used the instrument:
      • Original development paper - indicates the paper(s) in which the instrument was initially developed
      • Uses the instrument in data collection - indicates whether an article administered the instrument and collected responses
      • Modified version of existing instrument - indicates whether an article has modified a prior version of this instrument
      • Evaluation of existing instrument - indicates whether an article explicitly provides evidence that attempts to evaluate the performance of the instrument; the lack of a checkmark here implies an article that administered the instrument but did not evaluate the instrument itself
    2. Reliability - information about the evidence presented to establish reliability of data generated by the instrument; please see the Glossary for term definitions
    3. Validity - information about the evidence presented to establish validity of data generated by the instrument; please see the Glossary for term definitions
    4. Other Information - information that may or may not directly relate to the evidence for validity and reliability but is commonly reported when evaluating instruments; please see the Glossary for term definitions
    Publications: 1 2 3 4

    General
    • Original development paper
    • Uses the instrument in data collection
    • Modified version of existing instrument
    • Evaluation of existing instrument

    Reliability
    • Test-retest reliability
    • Internal consistency
    • Coefficient (Cronbach's) alpha
    • McDonald's Omega
    • Inter-rater reliability
    • Person separation
    • Generalizability coefficients
    • Other reliability evidence

    Validity
    • Expert judgment
    • Response process
    • Factor analysis, IRT, Rasch analysis
    • Differential item functioning
    • Evidence based on relationships to other variables
    • Evidence based on consequences of testing
    • Other validity evidence

    Other information
    • Difficulty
    • Discrimination
    • Evidence based on fairness
    • Other general evidence
    Review
    DISCLAIMER: The evidence supporting the validity and reliability of the data summarized below is for use of this assessment instrument within the reported settings and populations. The continued collection and evaluation of validity and reliability evidence, in both similar and dissimilar contexts, is encouraged and will support the chemistry education community’s ongoing understanding of this instrument and its limitations.
    This review was generated by a CHIRAL review panel. Each CHIRAL review panel consists of multiple experts who first individually review the citations of the assessment instrument listed on this page for evidence in support of the validity and reliability of the data generated by the instrument. Panels then meet to discuss the evidence and summarize their opinions in the review posted in this tab. These reviews summarize only the evidence that was discussed during the panel, which may not represent all evidence available in the published literature or that which appears on the Evidence tab.
    If you feel that evidence is missing from this review, or that something was documented in error, please use the CHIRAL Feedback page.

    Panel Review: Metacognitive Awareness Inventory (MAI)

    (Post last updated 28 December 2024)

    Review panel summary   
    The Metacognitive Awareness Inventory (MAI) consists of 52 items on a continuous response scale and was developed to assess students' metacognitive awareness across eight dimensions of metacognition: declarative knowledge, procedural knowledge, conditional knowledge, planning, information management strategies, monitoring, debugging strategies, and evaluation.

    Evidence of validity based on internal structure was demonstrated through factor analysis supporting a two-factor structure (knowledge of cognition and regulation of cognition). Several studies have provided validity evidence for the MAI's data based on its relations with other variables, including task difficulty, self-confidence, and familiarity [2], exposure to simulations [4], GPA [4], and metacognitive skills [3]. Single-administration reliability of the MAI has been supported across studies, with reasonable coefficient alpha estimates for each subscale. The instrument is available with operational definitions for each construct [1], and a modified version has been explored [3].

    Recommendations for use   
    The use of a two-subscale model of the MAI instrument is well supported, while evidence for the finer structure (i.e., the eight dimensions) remains limited. Therefore, given the current evidence, it is suggested that only scores for ‘knowledge of cognition’ and ‘regulation of cognition’ be calculated and reported, as sketched below. Further research is needed to provide evidence to support response process validity, particularly given potential language shifts since the instrument's original development in 1994.
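
    To make that recommendation concrete, here is a minimal scoring sketch in Python. The 52 items are commonly split into 17 knowledge-of-cognition items and 35 regulation-of-cognition items, but the contiguous index ranges below are placeholders for illustration, not the actual item assignments from Schraw and Dennison (1994).

```python
import numpy as np

# Hypothetical item-to-subscale mapping. The true assignment of the 52 MAI
# items (commonly reported as 17 knowledge items and 35 regulation items)
# is given in Schraw & Dennison (1994); these contiguous ranges are
# placeholders for illustration only.
KNOWLEDGE_ITEMS = list(range(0, 17))
REGULATION_ITEMS = list(range(17, 52))

def score_mai(responses: np.ndarray) -> dict:
    """Mean subscale scores for an (n_respondents, 52) response matrix."""
    return {
        "knowledge_of_cognition": responses[:, KNOWLEDGE_ITEMS].mean(axis=1),
        "regulation_of_cognition": responses[:, REGULATION_ITEMS].mean(axis=1),
    }

# Example: 100 simulated respondents on a 0-100 continuous response scale.
rng = np.random.default_rng(0)
scores = score_mai(rng.uniform(0, 100, size=(100, 52)))
print(scores["knowledge_of_cognition"][:5])
```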

    Evidence supports the consequential validity of MAI data, as scores improved following a targeted metacognitive training intervention. However, the instrument is now 30 years old, and all prior studies have used paper-and-pencil data collection. With the increased prevalence of online data collection, examining the scale’s performance in an online format is recommended.

    The instrument’s applicability with diverse populations is supported, as previous studies indicate its suitability across various educational settings.

    Details from panel review   
    The initial version of the instrument was pilot tested with 120 items, and the final 52 items were selected based on the pilot test results. Items with extreme mean scores and items with large score variability were dropped, and the final item selection was done so that each of the proposed eight subscales contained four items. The final version of the instrument was administered, and the data were analyzed with unrestricted exploratory factor analysis using orthogonal and oblique solutions. The proposed eight-factor structure was not supported: some factors aligned with the knowledge of cognition category, while others aligned with the regulation of cognition category. A forced two-factor solution revealed support for the two major categories, and this two-factor structure was replicated in a second experiment with different subjects [1]. No evidence related to response process validity was reported.
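
    For readers who want to see what that analysis sequence looks like in practice, below is an illustrative sketch (not the original analysis) using the third-party Python package factor_analyzer: an eigenvalue check from an unrestricted solution, then a forced two-factor solution with an oblique rotation. The simulated data and all parameter choices are assumptions for demonstration only.

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer  # pip install factor_analyzer

# Illustrative stand-in data: an (n_respondents, 52) matrix of item responses.
# Real MAI response data would be loaded here instead.
rng = np.random.default_rng(1)
X = pd.DataFrame(rng.uniform(0, 100, size=(300, 52)),
                 columns=[f"item_{i + 1}" for i in range(52)])

# Unrestricted EFA first: inspect eigenvalues to judge how many factors the
# data support. The original analysis compared orthogonal and oblique
# solutions; 'oblimin' below is oblique, 'varimax' would be orthogonal.
efa_free = FactorAnalyzer(rotation="oblimin")
efa_free.fit(X)
print("Leading eigenvalues:", efa_free.get_eigenvalues()[0][:8].round(2))

# Forced two-factor solution, mirroring the knowledge/regulation split.
efa_two = FactorAnalyzer(n_factors=2, rotation="oblimin")
efa_two.fit(X)
loadings = pd.DataFrame(efa_two.loadings_, index=X.columns,
                        columns=["factor_1", "factor_2"])
print(loadings.round(2).head())
```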

    Single-administration reliability data in the form of coefficient alpha suggest that the two factor scales are reliable [1]. Validity evidence in the form of relations to other variables is provided by scores on the two scales being highly and positively correlated with other measures related to metacognition [1-3]. Additionally, evidence based on the consequences of testing comes from the change in pre/post scores in a study where the treatment was metacognitive training [3].
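
    As a point of reference for the reliability statistic cited above, here is a self-contained Python sketch of coefficient (Cronbach's) alpha. The formula is the standard one; the simulated data (one latent trait plus noise) are an assumption for illustration.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Coefficient alpha for an (n_respondents, k_items) score matrix:
    alpha = k / (k - 1) * (1 - sum(item variances) / variance(total score)).
    """
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Simulated 17-item subscale: items share one latent trait, so alpha is high.
rng = np.random.default_rng(2)
trait = rng.normal(size=(200, 1))
subscale = trait + rng.normal(scale=0.8, size=(200, 17))
print(round(cronbach_alpha(subscale), 2))
```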

    While the MAI was originally developed with introductory psychology students, it has since been used in a variety of educational settings, including general chemistry [2], upper-division psychology [3], and engineering [4]. However, no statistical evidence of measurement invariance across these populations has been reported.

    References

    [1] Schraw, G., & Dennison, R.S. (1994). Assessing Metacognitive Awareness. Contemporary Educational Psychology, 19(4), 460-475.

    [2] Gulacar, O., & Bowman, C.R. (2014). Determining what our students need most: Exploring student perceptions and comparing difficulty ratings of students and faculty. Chemistry Education Research and Practice, 15(4), 587-593.

    [3] Terlecki, M.S., & McMahon, A. (2018). A Call for Metacognitive Intervention: Improvements Due to Curricular Programming in Leadership. Journal of Leadership Education, 17(4), 130-145.

    [4] Elliott, L.J., Aqlan, F., Zhao, R., & Janney, M.S. (2020). Assessment of Metacognitive Skills in Design and Manufacturing. ASEE Annual Conference Proceedings, Paper 31121.

    Versions
    Listed below are all versions and modifications that are based on this instrument, or on which this instrument is based.
    Name / Authors
    • Yuriev, E., Naidu, S., Schembri, L.S., & Short, J.L.

    Citations
    Listed below are all publications that develop, implement, modify, or reference the instrument.
    1. Schraw, G., & Dennison, R.S. (1994). Assessing metacognitive awareness. Contemporary Educational Psychology, 19(4), 460-475.

    2. Terlecki, M. S., & McMahon, A. (2018). A Call for Metacognitive Intervention: Improvements Due to Curricular Programming in Leadership. Journal of Leadership Education, 17(4), 130-145.

    3. Gulacar, O., & Bowman, C.R. (2014). Determining what our students need most: Exploring student perceptions and comparing difficulty ratings of students and faculty. Chemistry Education Research and Practice, 15(4), 587-593.

    4. Elliott, L. J., Aqlan, F., Zhao, R., & Janney, M. (2020, June). Assessment of Metacognitive Skills in Design and Manufacturing. In ASEE Annual Conference Proceedings.