OCSE
OVERVIEW
| Summary | |
|---|---|
| Original author(s) | Villafañe, S.M.; Xu, X.; Raker, J.R. |
| Original publication | Chemistry Education Research and Practice |
| Year original instrument was published | 2016 |
| Inventory | |
| Number of items | 12 |
| Number of versions/translations | 1 |
| Cited implementations | 1 |
| Language | English |
| Country | United States |
| Format | |
| Intended population(s) | |
| Domain | |
| Topic | |
EVIDENCE
Information in the table is given in four categories:
- General - information about how each article used the instrument:
  - Original development paper - indicates the paper(s) in which the instrument was initially developed
  - Uses the instrument in data collection - indicates whether an article administered the instrument and collected responses
  - Modified version of existing instrument - indicates whether an article modified a prior version of this instrument
  - Evaluation of existing instrument - indicates whether an article explicitly provides evidence that attempts to evaluate the performance of the instrument; the lack of a checkmark here implies an article that administered the instrument but did not evaluate the instrument itself
- Reliability - information about the evidence presented to establish the reliability of data generated by the instrument; please see the Glossary for term definitions
- Validity - information about the evidence presented to establish the validity of data generated by the instrument; please see the Glossary for term definitions
- Other Information - information that may or may not directly relate to the evidence for validity and reliability but is commonly reported when evaluating instruments; please see the Glossary for term definitions
| Publications: | 1 |
|---|---|
| General | |
| Original development paper | ✔ |
| Uses the instrument in data collection | ✔ |
| Modified version of existing instrument | |
| Evaluation of existing instrument | ✔ |
| Reliability | |
| Test-retest reliability | |
| Internal consistency | |
| Coefficient (Cronbach's) alpha | ✔ |
| McDonald's Omega | |
| Inter-rater reliability | |
| Person separation | |
| Generalizability coefficients | |
| Other reliability evidence | |
| Validity | |
| Expert judgment | |
| Response process | |
| Factor analysis, IRT, Rasch analysis | ✔ |
| Differential item function | |
| Evidence based on relationships to other variables | ✔ |
| Evidence based on consequences of testing | |
| Other validity evidence | ✔ |
| Other information | |
| Difficulty | |
| Discrimination | |
| Evidence based on fairness | |
| Other general evidence | |
REVIEW
This review was generated by a CHIRAL review panel. Each CHIRAL review panel consists of multiple experts who first individually review the citations of the assessment instrument listed on this page for evidence in support of the validity and reliability of the data generated by the instrument. Panels then meet to discuss the evidence and summarize their opinions in the review posted in this tab. These reviews summarize only the evidence that was discussed during the panel, which may not represent all evidence available in the published literature or that which appears on the Evidence tab.
If you feel that evidence is missing from this review, or that something was documented in error, please use the CHIRAL Feedback page.
Panel Review: Organic Chemistry Self-Efficacy (OCSE)
(Post last updated June 16, 2022)
Review panel summary
The Organic Chemistry Self-Efficacy (OCSE) instrument was developed at a large research university in the southeastern United States as a series of five questionnaires (each with an increasing number of items) to be administered at distinct time points in the first semester of a two-semester organic chemistry sequence. The OCSE is designed to measure students' confidence in their ability to complete tasks commonly found on organic chemistry exams [1].

The authors designed the OCSE items based on Bandura's theoretical framework for self-efficacy, which characterizes self-efficacy as context-dependent and influenced by feedback. In this framework, self-efficacy refers to one's confidence in one's ability to successfully complete a specific task. The OCSE items therefore ask students "How well can you…?" followed by a task that would typically appear on an organic chemistry exam, such as "...identifying functional groups based on an IR spectrum". The items were administered to students at five time points, each 72 hours prior to an exam in the course. The OCSE has a different number of items at each administration (ranging from 3 at the first administration to 12 at the last), with only items reflecting content covered up to that point in the semester included.

While the theoretical framework of self-efficacy provides some evidence for the test content of these items, the items were not reviewed by other chemistry or chemical education researchers to provide additional evidence for the validity of interpreting scores generated using the items. Related to the content of the test, in the Bandura framework the construct of self-efficacy is defined as a single factor for a specific set of tasks. The theoretical framework thus supports a one-factor internal structure for each OCSE administration.
In the development of the instrument, the authors found that a one-factor solution was supported by confirmatory factor analysis at each OCSE administration, providing evidence in support of the internal structure of OCSE data. There is also some evidence that responses across the single factor are consistent in the second, third, fourth, and fifth administrations, in that Cronbach’s alpha measures are within a reasonable range, providing some evidence for single-administration reliability. The alpha value found at the first administration, however, was less than desirable. There is evidence of OCSE score relations to other variables in a reciprocal causation model with exam performance [1].
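Cronbach's alpha, the single-administration reliability statistic reported for each OCSE administration, is computed from item-level variances and the variance of total scores. The following is a minimal sketch in Python; the function mirrors the standard alpha formula, and the simulated response matrix is purely illustrative (it is not data from the study):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix."""
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative only: simulate 200 respondents on 12 correlated Likert-style items
rng = np.random.default_rng(0)
ability = rng.normal(size=(200, 1))            # shared trait drives correlation
scores = np.clip(np.round(3 + ability + rng.normal(scale=0.8, size=(200, 12))), 1, 5)
print(round(cronbach_alpha(scores), 2))
```

Values in roughly the 0.7-0.9 range are conventionally read as acceptable internal consistency, which is the sense in which the review describes the later administrations' alphas as "within a reasonable range."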
Recommendations for use
There is evidence to support interpreting the scores generated by students taking the OCSE as a measure of their confidence in their ability to complete the tasks included in the items. However, in its current form, the OCSE is only supported when administered with the specific set of items intended for each time point. For example, the version of the instrument administered around the time of the final exam should include all 12 OCSE items (assuming all content covered by these items has been covered in the course by that point), and versions administered earlier in the semester should include only those items related to content covered in the course up to that point.
Details from panel review
The evidence supporting internal structure validity and single-administration reliability, in this case confirmatory factor analysis and Cronbach's alpha, is summarized in the manuscript without the specific data listed [1]. Fit indices and factor loadings of the items onto the factor are not provided for each administration of the instrument; however, the manuscript indicates that all are acceptable. The OCSE has not been used in a single-administration context, only in relation to each exam through a reciprocal-causation model.
References
[1] Villafañe, S.M., Xu, X., & Raker, J.R. (2016). Self-efficacy and academic performance in first-semester organic chemistry: Testing a model of reciprocal causation. Chemistry Education Research and Practice, 17(4), 973-984. https://doi.org/10.1039/C6RP00119J
VERSIONS
CITATIONS
Villafañe, S.M., Xu, X., & Raker, J.R. (2016). Self-efficacy and academic performance in first-semester organic chemistry: Testing a model of reciprocal causation. Chemistry Education Research and Practice, 17(4), 973-984.