
Panel Review: Chemistry & Biology Assessment for Biochemistry

(Post last updated June 14, 2022)

Review panel summary

The Chemistry & Biology Assessment for Biochemistry (C&BA) is intended to assess foundational understanding of chemistry and biology topics that are viewed as prerequisite to biochemistry learning. The instrument consists of 24 items, divided into groups of three questions associated with each of eight topics: bond energy, free energy, London dispersion forces, pH/pKa, hydrogen bonding, alpha helix, amino acids, and protein function. It has been tested with undergraduate biochemistry students at a variety of US institutions, some of which are classified as Hispanic-serving institutions by the US Department of Education.

The developers of the C&BA described the process of instrument development in great detail [1]. Of note is the attention given to content validity. The items were written by a team of PhD faculty members in chemistry, biology, and biochemistry. The multiple-choice items are highly structured so that, across the three items designed for each topic, the four answer options contain a single consistent correct idea and three common misconceptions. From an initial pool of many items, the final items were selected and then revised again by the authors for consistency and clarity. The data produced by the 24-item instrument were assessed for internal structure validity using confirmatory factor analysis. The fit statistics used to determine whether the eight-factor solution matched the data were satisfactory. However, the panel was unable to fully assess the internal structure validity evidence because the complete model was not described. While cognitive interviews were described in a subsequent paper [2, supplementary information], it is unclear how the information from them was used to improve the items or the degree to which they provided supporting evidence of response process validity.

Two publications [1, 2] reported single-administration reliability using coefficient alpha values for each of the eight subscales. These values ranged widely, and it is unclear whether they provide sufficient evidence of the reliability of the data.

While difficulty and discrimination values were not reported by item, the low overall scores in all the studies examined [1-5] suggest that the items are quite difficult and therefore may be unlikely to discriminate well between high- and low-achieving students.
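
For readers who collect their own administration data, these statistics are straightforward to compute. Below is a minimal sketch in Python, using simulated 0/1 responses (the C&BA items and data are not public) of classical item difficulty (proportion correct) and discrimination (corrected item-total correlation):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical data: 200 students x 24 items, scored 0/1.
responses = rng.binomial(1, 0.3, size=(200, 24))

# Classical difficulty: proportion of students answering each item correctly.
difficulty = responses.mean(axis=0)

# Discrimination: corrected item-total (point-biserial) correlation,
# i.e., each item against the total score of the remaining items.
totals = responses.sum(axis=1)
discrimination = np.array([
    np.corrcoef(responses[:, j], totals - responses[:, j])[0, 1]
    for j in range(responses.shape[1])
])

for j, (p, d) in enumerate(zip(difficulty, discrimination), start=1):
    print(f"item {j:2d}: difficulty = {p:.2f}, discrimination = {d:+.2f}")
```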

Recommendations for use

The C&BA has been used and tested with undergraduate biochemistry students in the US and is only available by contacting the authors [1] directly.

Traditionally, the C&BA is scored separately for each topic, with a student either getting all three items correct (scored as correct) or getting two or fewer items correct (scored as incorrect). When scored in this way, student scores tend to be quite low. Mean scores on each topic in study [1] were between 0.05 and 0.33, meaning that about 5-33% of students got all three questions right for that topic. When the instrument was scored as a single scale, as in [5], the average score was 7/21 (33%) on the pre-test and 10/21 (48%) on the post-test. Thus, we recommend that instructors consider uses of the C&BA that do not contribute to student grades, such as gauging how prepared students are for the biochemistry course or which topics might require additional review.
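
A minimal sketch of this all-or-nothing topic scoring, assuming a 0/1 response matrix whose 24 columns fall in consecutive groups of three per topic (the column ordering and the simulated data are assumptions for illustration):

```python
import numpy as np

topics = ["bond energy", "free energy", "London dispersion forces", "pH/pKa",
          "hydrogen bonding", "alpha helix", "amino acids", "protein function"]

rng = np.random.default_rng(1)
# Hypothetical data: 150 students x 24 items, scored 0/1.
responses = rng.binomial(1, 0.5, size=(150, 24))

# Reshape to (students, 8 topics, 3 items); a topic counts as correct
# only when all three of its items are answered correctly.
by_topic = responses.reshape(len(responses), 8, 3)
topic_correct = by_topic.all(axis=2)

for name, frac in zip(topics, topic_correct.mean(axis=0)):
    print(f"{name}: {frac:.2f} of students answered all three items correctly")
```
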
Many of the studies that have used the C&BA have opted to use only the items associated with topics related to specific interventions, and so smaller subsets of questions (i.e., 12, 18, or 21 of the 24 total items) have been used for that purpose [2-5]. Because the internal structure validity of C&BA data was studied only for the 24-item instrument [1] and the 21-item instrument [2], further evidence of internal structure validity for the abbreviated versions is recommended.

Details from panel review

No evidence of validity for relations to other variables has been published to date. Therefore, future studies could add to what is known about how student performance on C&BA items relates to other measures of similar content.

The panel expressed several concerns about the confirmatory factor analysis that was performed and used to justify evidence of internal structure validity. The complete CFA models [1, 2] were not published, only the fit statistics for the final model, leaving open the questions of what the correlations among the factors might be and whether any correlated errors among items were included in the model. Two items (17 and 23) had very low, non-significant factor loadings (<0.25); therefore, they may contribute little information about student understanding of the constructs they are associated with. Additionally, in Villafañe et al. [2, supplementary information] the CFA was performed for both the pre- and post-test, but with different degrees of freedom, indicating that the model was not the same. This raises questions about measurement invariance and whether the instrument performs equivalently as a pre- versus a post-test. Although the fit statistics reported were good, there was discussion that the WLSMV estimator can produce inflated fit statistics for CFA. Therefore, future studies might consider using more stringent cutoff values when estimating with WLSMV.
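
For readers unfamiliar with the model structure at issue, the sketch below shows the general shape of an eight-factor CFA specification in lavaan-style syntax, fit here with the Python package semopy. This is not the authors' analysis: the item-to-factor assignments, variable names, and data file are assumptions, and semopy's default estimator treats the 0/1 responses as continuous rather than using WLSMV-style categorical estimation (available in tools such as Mplus or R's lavaan).

```python
import pandas as pd
import semopy

# Eight-factor model in lavaan-style syntax; assigning items i1-i24 to
# factors in consecutive triples is an assumption for illustration.
desc = """
BondEnergy      =~ i1 + i2 + i3
FreeEnergy      =~ i4 + i5 + i6
Dispersion      =~ i7 + i8 + i9
pH_pKa          =~ i10 + i11 + i12
HydrogenBonding =~ i13 + i14 + i15
AlphaHelix      =~ i16 + i17 + i18
AminoAcids      =~ i19 + i20 + i21
ProteinFunction =~ i22 + i23 + i24
"""

data = pd.read_csv("responses.csv")  # hypothetical file with columns i1-i24

model = semopy.Model(desc)
model.fit(data)
print(model.inspect())           # factor loadings and covariances
print(semopy.calc_stats(model))  # chi-square, CFI, TLI, RMSEA, etc.
```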

To provide support that the instrument is sensitive to learning gains or changes in instruction, future researchers utilizing a pre-/post-test design are encouraged to evaluate the stability of the data in the absence of instructional interventions.

With respect to single-administration reliability, the coefficient alpha values for the eight subscales ranged from 0.306 to 0.878 [1]. A follow-on report [2] provided coefficient alpha values for the entire 21-item pre-test (0.62) and post-test (0.67). The supplementary information for Villafañe et al. [2] also included coefficient alpha values for the seven subscales used, which ranged from 0.10 to 0.89. The variability of these alpha values, some of which are quite low, is a concern, as they may not represent strong evidence of internal consistency reliability.
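
For reference, coefficient alpha for a k-item scale is alpha = [k/(k-1)] x (1 - sum of item variances / variance of the total score). A minimal sketch in Python, with simulated responses standing in for real data:

```python
import numpy as np

def cronbach_alpha(items):
    """Coefficient alpha for an (n_students, k_items) matrix of scored items."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(2)
# Hypothetical three-item subscale, 100 students, scored 0/1.
subscale = rng.binomial(1, 0.4, size=(100, 3))
print(f"alpha = {cronbach_alpha(subscale):.2f}")
```

As a design note, alpha tends to increase with the number of items, so short three-item subscales can yield modest values even for coherent item sets; this is worth bearing in mind when interpreting the subscale values above.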

References

[1] Villafañe, S.M., Bailey, C.P., Loertscher, J., Minderhout, V., & Lewis, J.E. (2011). Development and analysis of an instrument to assess student understanding of foundational concepts before biochemistry coursework. Biochemistry and Molecular Biology Education, 39(2), 102-109. https://doi.org/10.1002/bmb.20464

[2] Villafañe, S.M., Loertscher, J., Minderhout, V., & Lewis, J.E. (2011). Uncovering students’ incorrect ideas about foundational concepts for biochemistry. Chemistry Education Research and Practice, 12(2), 210-218. https://doi.org/10.1039/C1RP90026A

[3] Taylor, A.T.S., Olofson, E.L., & Novak, W.R.P. (2017). Enhancing student retention of prerequisite knowledge through pre-class activities and in-class reinforcement. Biochemistry and Molecular Biology Education, 45(2), 97-104. https://doi.org/10.1002/bmb.20992

[4] Kopecki-Fjetland, M.A., & Steffenson, M. (2021). Design and implementation of active learning strategies to enhance student understanding of foundational concepts in biochemistry. Biochemistry and Molecular Biology Education, 49(3), 446-456. https://doi.org/10.1002/bmb.21498

[5] Miller, H.B., & Srougi, M.C. (2021). Growth mindset interventions improve academic performance but not mindset in biochemistry. Biochemistry and Molecular Biology Education, 49(5), 748-757. https://doi.org/10.1002/bmb.21556