
Postsecondary Instructional Practices Survey

PIPS

    Overview
    Listed below is general information about the instrument.
    Summary
    Original author(s)
    • Walter, E.M., Henderson, C.R., Beach, A.L., & Williams, C.T.

    Original publication
    • Walter, E.M., Henderson, C.R., Beach, A.L., & Williams, C.T. (2016). Introducing the postsecondary instructional practices survey (PIPS): A concise, interdisciplinary, and easy-to-score survey. CBE Life Sciences Education, 15(4).

    Year original instrument was published: 2016
    Inventory
    Number of items: 24
    Number of versions/translations: 1
    Cited implementations: 6
    Language
    • English
    Country: United States, Qatar, China
    Format
    • Response Scale
    Intended population(s)
    • Faculty
    • Tertiary
    Domain
    • Behavioral
    Topic
    Evidence
    The CHIRAL team carefully combs through every reference that cites this instrument and pulls all evidence that relates to the instrument's validity and reliability. These data are presented in the following table, which notes only the presence or absence of evidence related to each concept and does not indicate the quality of that evidence. Likewise, if evidence is lacking, that does not necessarily mean the instrument is “less valid,” only that the evidence was not presented in the literature. Learn more about this process by viewing the CHIRAL Process, and consult the instrument's Review (next tab), if available, for better insight into the usability of this instrument.

    Information in the table is given in four different categories:
    1. General - information about how each article used the instrument:
      • Original development paper - indicates whether an article is one in which the instrument was initially developed
      • Uses the instrument in data collection - indicates whether an article administered the instrument and collected responses
      • Modified version of existing instrument - indicates whether an article has modified a prior version of this instrument
      • Evaluation of existing instrument - indicates whether an article explicitly provides evidence that attempts to evaluate the performance of the instrument; lack of a checkmark here implies an article that administered the instrument but did not evaluate the instrument itself
    2. Reliability - information about the evidence presented to establish reliability of data generated by the instrument; please see the Glossary for term definitions
    3. Validity - information about the evidence presented to establish validity of data generated by the instrument; please see the Glossary for term definitions
    4. Other Information - information that may or may not directly relate to the evidence for validity and reliability but is commonly reported when evaluating instruments; please see the Glossary for term definitions
    Publications: 1 2 3 4 5 6

    General

    Original development paper
    Uses the instrument in data collection
    Modified version of existing instrument
    Evaluation of existing instrument

    Reliability

    Test-retest reliability
    Internal consistency
    Coefficient (Cronbach's) alpha
    McDonald's Omega
    Inter-rater reliability
    Person separation
    Generalizability coefficients
    Other reliability evidence

    Validity

    Expert judgment
    Response process
    Factor analysis, IRT, Rasch analysis
    Differential item functioning
    Evidence based on relationships to other variables
    Evidence based on consequences of testing
    Other validity evidence

    Other information

    Difficulty
    Discrimination
    Evidence based on fairness
    Other general evidence
    Review
    DISCLAIMER: The evidence supporting the validity and reliability of the data summarized below is for use of this assessment instrument within the reported settings and populations. The continued collection and evaluation of validity and reliability evidence, in both similar and dissimilar contexts, is encouraged and will support the chemistry education community’s ongoing understanding of this instrument and its limitations.
    This review was generated by a CHIRAL review panel. Each CHIRAL review panel consists of multiple experts who first individually review the citations of the assessment instrument listed on this page for evidence in support of the validity and reliability of the data generated by the instrument. Panels then meet to discuss the evidence and summarize their opinions in the review posted in this tab. These reviews summarize only the evidence that was discussed during the panel, which may not represent all evidence available in the published literature or that which appears on the Evidence tab.
    If you feel that evidence is missing from this review, or that something was documented in error, please use the CHIRAL Feedback page.

    Panel Review: Postsecondary Instructional Practices Survey (PIPS)

    (Post last updated June 16, 2022)

    Review panel summary

    The Postsecondary Instructional Practices Survey (PIPS) was developed as a self-report instrument that allows researchers, faculty developers, and others to learn about the ways that faculty members teach. The PIPS lists 24 instructional practices and asks faculty members to report, on a 5-point scale, the extent to which each practice describes their teaching [1]. There is evidence that the test content of the PIPS adequately reflects its intended broad range of instructional practices, as a group of experts reviewed the items during the development phase. Additional evidence supporting the interpretation of PIPS scores based on test content was collected from a pilot sample of postsecondary instructors, which is valuable because this is the intended audience.

    There is some concern regarding the overall evidence for the internal structure of PIPS data. In the original development of the instrument, two different factor structures and associated scoring schemes were proposed: a two-factor solution containing instructor-centered and student-centered practices, and a five-factor solution containing student–student interactions, content delivery practices, formative assessment, student–content engagement, and summative assessment [1]. Subsequent studies used the five-factor model more often than the two-factor model [2-4]; the two-factor model was used in only one other study [4]. None of the other studies conducted analyses to support the internal structure of the data collected with the instrument. While all studies using the PIPS reported coefficient alpha values as evidence of single-administration reliability for the entire scale and for each subscale used, this information is not sufficient to support the consistency of scores because the number of factors is not well established.

    There is also evidence to support PIPS scores’ relations to other variables; scores on the PIPS have been found to be related to course and instructor characteristics, such as the number of students enrolled in a course, the number of years a faculty member has been teaching, and the number of years at their own institution [1].
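
    As context for the reliability evidence noted above, coefficient (Cronbach's) alpha can be computed directly from an item-response matrix. The Python sketch below uses simulated 5-point responses with placeholder item names; it is intended only to illustrate the calculation and does not use actual PIPS data or items.

    import numpy as np
    import pandas as pd

    def cronbach_alpha(items: pd.DataFrame) -> float:
        """Coefficient (Cronbach's) alpha for items scored on a common scale.

        alpha = k / (k - 1) * (1 - sum of item variances / variance of total score)
        """
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1)      # variance of each item
        total_variance = items.sum(axis=1).var(ddof=1)  # variance of summed scores
        return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

    # Placeholder data: simulated 5-point responses for a hypothetical 5-item subscale.
    # Uncorrelated simulated items give an alpha near zero; real subscale data with
    # related items would typically produce a substantially higher value.
    rng = np.random.default_rng(0)
    responses = pd.DataFrame(
        rng.integers(1, 6, size=(100, 5)),
        columns=[f"item_{i}" for i in range(1, 6)],
    )
    print(f"alpha = {cronbach_alpha(responses):.2f}")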

    Recommendations for use

    The instrument developers recommend the PIPS for use across disciplines. There is evidence of its use in chemistry-specific environments [2, 3]; however, neither of these studies had a large enough sample from which to draw psychometric evidence. Therefore, additional psychometric evidence supporting the data generated by the PIPS would be warranted before it is used widely to measure instructional practices. The PIPS is available only in English; while not formally translated, the instrument has been used with a Chinese population with a translator available for ad-hoc needs [4]. However, there is currently no evidence to support using the instrument in another language.

    Details from panel review

    The questions on the PIPS were designed to measure four general areas of instruction: instructor-student interactions, student-content interactions, student-student interactions, and assessment. However, after administering the instrument, the developers conducted both exploratory and confirmatory factor analyses, finding evidence for both a two-factor internal structure and a five-factor internal structure (in which formative and summative assessment are separate). It is uncommon to use multiple factor models with data generated from one instrument without robust justification.
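
    For readers who want to examine the internal structure question in their own data, the Python sketch below shows one way to compare two- and five-factor exploratory solutions using the factor_analyzer package. The response matrix and column names are simulated placeholders; this is not the developers' analysis, and actual PIPS responses would be needed to draw any conclusion about factor structure.

    import numpy as np
    import pandas as pd
    from factor_analyzer import FactorAnalyzer  # pip install factor_analyzer

    # Placeholder data: a simulated 24-item, 5-point response matrix.
    rng = np.random.default_rng(0)
    data = pd.DataFrame(
        rng.integers(1, 6, size=(300, 24)),
        columns=[f"pips_{i:02d}" for i in range(1, 25)],
    )

    # Fit competing exploratory solutions with an oblique rotation and compare
    # the cumulative proportion of variance each solution explains.
    for n_factors in (2, 5):
        fa = FactorAnalyzer(n_factors=n_factors, rotation="oblimin")
        fa.fit(data)
        variance, proportional, cumulative = fa.get_factor_variance()
        print(f"{n_factors}-factor solution: cumulative variance = {cumulative[-1]:.2f}")
        # fa.loadings_ holds the rotated item-by-factor loadings for inspection.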

    References

    [1] Walter, E.M., Henderson, C.R., Beach, A.L., & Williams, C.T. (2016). Introducing the Postsecondary Instructional Practices Survey (PIPS): A concise, interdisciplinary, and easy-to-score survey. CBE–Life Sciences Education, 15(4), ar53. https://doi.org/10.1187/cbe.15-09-0193

    [2] Barlow, A., & Brown, S. (2020). Correlations between modes of student cognitive engagement and instructional practices in undergraduate STEM courses. International Journal of STEM Education, 7, 1-15. https://doi.org/10.1186/s40594-020-00214-7

    [3] Houseknecht, J.B., Bachinski, G.J., Miller, M.H., White, S.A., & Andrews, D.M. (2020). Effectiveness of the active learning in organic chemistry faculty development workshops. Chemistry Education Research and Practice, 21, 387-398. https://doi.org/10.1039/C9RP00137A

    [4] Du, X., Kolmos, A., Hasan, M.A., Spliid, C.M., Lyngdorf, N.E.R., & Ruan, Y. (2020). Impact of a PBL-based professional learning program in Denmark on the development of the beliefs and practices of Chinese STEM university teachers. International Journal of Engineering Education, 36(3), 940-954.

    Versions
    Listed below are all versions and modifications that were based on this instrument, or on which this instrument was based.
    Instrument is derived from:
    Name Authors
    • Piburn, M., Sawada, D., Turley, J., Falconer, K., Benford, R., Bloom, I., & Judson, E.

    • Hora, M. T., Oleson, A., & Ferrare, J. J.

    Citations
    Listed below are all publications that develop, implement, modify, or reference the instrument.
    1. Walter, E.M., Henderson, C.R., Beach, A.L., & Williams, C.T. (2016). Introducing the postsecondary instructional practices survey (PIPS): A concise, interdisciplinary, and easy-to-score survey. CBE Life Sciences Education, 15(4).

    2. Hyson, A.R., Bonham, B., Hood, S., Deutschman, M.C., Seithers, L.C., Hull, K., & Jensen, M. (2021). Professional development, shifting perspectives, and instructional change among community college anatomy and physiology instructors. CBE Life Sciences Education.

    3. Houseknecht, J.B., Bachinski, G.J., Miller, M.H., White, S.A., & Andrews, D.M. (2020). Effectiveness of the active learning in organic chemistry faculty development workshops. Chemistry Education Research and Practice, 21(1), 387-398.

    4. Sabah, S., & Du, X. (2018). University faculty’s perceptions and practices of student centered learning in Qatar: Alignment or gap? Journal of Applied Research in Higher Education, 10(4), 514-533.

    5. Du, X., Kolmos, A., Hasan, M.A., Spliid, C.M., Lyngdorf, N.E.R., & Ruan, Y. (2020). Impact of a PBL-based professional learning program in Denmark on the development of the beliefs and practices of Chinese STEM university teachers. International Journal of Engineering Education, 36(3), 940-954.

    6. Barlow, A., & Brown, S. (2020). Correlations between modes of student cognitive engagement and instructional practices in undergraduate STEM courses. International Journal of STEM Education, 7(1).