Southern Online Journal of Nursing Research
Issue 4, Vol. 1, 2000


Evaluating the Student Clinical Learning Environment:
Development and Validation of the SECEE Inventory


Kari Sand-Jecklin, EdD, MSN, RN




Kari Sand-Jecklin, EdD, MSN, RN, Assistant Professor, West Virginia University School of Nursing, Health Restoration Dept., PO Box 9620, Morgantown, WV  26506


This paper describes the refinement and testing of the Student Evaluation of Clinical Education Environment (SECEE) inventory, which measures nursing student perceptions of their clinical learning environment. The applied cognition and cognitive apprenticeship learning theories highlight the significance of the learning environment as a factor in student learning. Although a quality clinical learning experience is considered critical to nursing education, no comprehensive instruments measuring the clinical learning environment have yet been published.

A convenience sample of nursing students from two small liberal arts institutions and one large university completed the 31-item SECEE inventory during the spring 1998 semester. Data analysis indicated that students responded consistently to the instrument as a whole and to its four sub-scales. Test-retest correlations were positive for students evaluating the same clinical site twice over a three-week interval, whereas correlations were negligible for students evaluating different clinical sites during the same interval. Significant differences in scale scores were found between institutions and between clinical site groups at the smaller institutions. Student narrative comments also supported the validity of item content. Overall, the SECEE inventory appears to be a reasonably valid and reliable measure. Minor changes are suggested for future revisions.


Keywords: Learning Environment, Clinical; Student Attitudes; Teaching Methods, Clinical




1Dunn, S. V. & Burnett, P. (1995). The development of a clinical learning environment scale. Journal of Advanced Nursing, 22, 1166-1173.

2Shuell, T. J. (1996). In D. C. Berliner & R. C. Calfee (Eds.), Handbook of Educational Psychology (pp. 726-764). New York: Simon & Schuster Macmillan.

3Reilly, D. E. & Oermann, M. H. (1992). Clinical teaching in nursing education (p. 113). New York: National League for Nursing.



According to Dunn and Burnett1, the student learning environment consists of all the conditions and forces within an educational setting that impact learning. Shuell2 visualized the student learning environment as a rich psychological soup of cognitive, social, cultural, affective, emotional, motivational, and curricular factors, in which teachers and students work together toward learning. Without the correct environmental ingredients, a satisfactory learning product is very difficult to achieve.

Student learning at the primary and secondary educational levels generally takes place in the traditional classroom environment. In contrast, at the postsecondary level, students experience an increasing number of applied learning environments, of which the clinical nursing education experience is a prime example. Clinical (applied learning) components of nursing education are critical to the overall curriculum, as they allow learners to “apply knowledge to practice, to develop problem-solving and decision making skills, and to practice responsibility for their own actions”3. However, the mere passage of time in this environment does not itself ensure clinical competence or a positive clinical experience. Many variables interact in the “soup” of clinical learning to contribute to student learning outcomes. In order to ensure that the interaction of these environmental ingredients supports learning, the factors impacting learning in that context must be identified and evaluated.





One means to identify and evaluate the factors affecting the effectiveness of the teaching-learning experience is to look at the clinical educational environment through the students’ eyes. Student perceptions of the quality of the learning environment can provide educators with valuable information related to student learning in these environments. Thus, this investigation was undertaken to test the accuracy and efficiency of an instrument, the Student Evaluation of Clinical Education Environment (SECEE) Inventory, designed to measure student perceptions of the clinical learning environment.





4Fraser, B. J. (1991). Validity and use of classroom environment instruments. Journal of Classroom Interaction, 26, 5-11.

5McGraw, S. A. et al. (1996). Using process data to explain outcomes: an illustration from the child and adolescent trial for cardiovascular health. Evaluation Review, 20, 291-312.

6Brown, J. S., Collins, A. & Duguid, P. (1989).  Situated cognition and the culture of learning. Educational Researcher, 18, 32-42.

7Slavin, R. E. (1997). Educational Psychology: Theory and Practice. Needham Heights, MA: Allyn and Bacon.

8Brown, 1989.

9Goodenow, C. (1992). Strengthening the links between educational psychology and the study of social contexts. Educational Psychologist, 27, 177-196.



Relevant Literature

The perceived significance and contribution of the learning environment to student learning is addressed by several learning theories4,5. Cognitive apprenticeship theories focus particularly on applied learning settings. Cognitive apprenticeship is a process whereby the learner develops expertise through interaction with an expert who models appropriate behaviors and coaches the learner in developing skills6,7. The instructor/expert assigns realistic tasks or problems and provides support that allows the student to complete a task or problem that would not have been possible for the student alone. According to cognitive apprenticeship theory, learning and cognition are fundamentally situated: they are products of the learning activity as well as of the context and culture in which they are developed and used8,9.







10Marshall, H. H. (1990). Beyond the workplace metaphor: the classroom as a learning setting. Theory into Practice, 29, 94-101.

11Cust, J. (1996). A relational view of learning: Implications for nurse education. Nurse Education Today, 16, 256-266.

12Henderson, D., Fisher, D. & Fraser, B. J. (1995). Paper presented at the Annual Meeting of the American Educational Research Association, San Francisco.

13Fraser, B. J., Williamson, J. C. & Tobin, K. G. (1987). Use of classroom and school climate scales in evaluating alternative high schools. Teaching & Teacher Education, 2, 219-231.


The cognitive apprenticeship framework focuses on the process of learning and considers the student experience (and student perceptions of that experience) to be at the center of the learning context10. Student perceptions of the environment are considered by some to have even more influence on student learning than does the “actual” environment, because those perceptions influence the way a student approaches a task and thus ultimately affect the quality of learning outcomes11. Research consistently demonstrates that students' perceptions of their environment account for an appreciable amount of variance in learning outcomes12, even when prior knowledge and general ability are controlled13.


14 Reed, S. & Price, J. (1991). Audit of clinical learning areas. Nursing Times, 87, 57-58.




15Reilly, 1992.

16Peirce, A. G. (1991). Preceptorial students' view of their clinical experience. Journal of Nursing Education, 30, 244-250.


In contrast with the many investigations that have been conducted to measure the effect of the traditional classroom environment on student learning, relatively few research studies have investigated the impact of the applied learning environment or have suggested criteria for use in assessment of the quality of the clinical learning environment in nursing education. Reed and Price14 suggested evaluating the quality of the social climate, nursing climate, physical environment, relationship between classroom and practice settings, and learning outcomes. However, they did not identify specific criteria within these categories that should be addressed in such an assessment. Reilly and Oermann15 proposed clinical agency evaluation criteria for use by nursing faculty, including factors of flexibility of the learning environment, appropriateness of client population and adequate client numbers, range of learning opportunities, current care practices at the agency, availability of records, student orientation, and resource/space availability.

More recently, a few researchers have begun to investigate student perceptions of the clinical learning environment. Peirce16 developed an open-ended questionnaire to investigate 44 preceptored students’ perceptions of their clinical experience. Data were analyzed for themes, resulting in the categories of school/faculty factors, clinical site organizational and personnel factors, and student factors.








17Farrell, G. A. & Coombes, L. (1994). Student nurse appraisal of placement (SNAP): an attempt to provide objective measures of the learning environment based on qualitative and quantitative evaluations. Nurse Education Today, 14, 331-336.

18Farrell, 1994.





A review of the literature revealed only two recent studies aimed at developing and/or testing an instrument to evaluate student perceptions of the clinical education environment; both were conducted in Australia. The first focused on development of a tool to measure nursing students’ perception of their learning environment in a psychiatric rotation17. The Student Nurse Appraisal of Placement (SNAP) inventory consisted of nine semantic differential items addressing physical resources, learning opportunities, availability of staff, opportunities to practice interpersonal and technical skills, and overall student perceptions. Although no support for validity or reliability of the instrument was presented in the article, Farrell and Coombes18 stated that the instrument did reveal evaluation differences between clinical sites.



19Dunn, 1995.









The second study, conducted by Dunn and Burnett19, measured student perceptions of the applied component of nursing education using the Clinical Learning Environment Scale (CLES) inventory. The authors reported revising 55 items from Orton’s Ward Learning Climate Survey into a final 23-item instrument with five sub-scales measuring student-staff relationships, nurse manager commitment, patient relationships, interpersonal relationships, and student satisfaction. They administered the instrument to 423 nursing students and faculty at an Australian university. The authors claimed construct validity for the instrument based on confirmatory factor analysis; reported reliability (alpha) coefficients for the factors ranged from .63 to .85. Several items in Dunn and Burnett’s tool might be appropriate for an instrument designed to assess the clinical learning environment in the United States. However, other items seemed either too general or unrelated to the current clinical nursing environment in the United States. Examples of such items were (a) the amount of ritual on the ward, (b) whether this was a happy ward, and (c) whether this was a good ward for learning. It was apparent after review of the literature that, although the clinical education environment was acknowledged as a critical component of nursing education, there was no comprehensive tool that concisely measured student perceptions of the clinical learning environment in nursing.

20Peirce, 1991.

21 Perese, E. F. (1996). Undergraduates' perceptions of their psychiatric practicum: Positive and negative factors in inpatient and community experience. Journal of Nursing Education, 35, 281-284.




Preliminary Investigation


The researcher met with a group of nursing faculty as well as a group of senior nursing students at a large mid-Atlantic university in order to identify what these two groups felt were important factors impacting student learning in the clinical environment. The initial Student Evaluation of Clinical Education Environment (SECEE) instrument was based on faculty and student input from the described meetings, a review of the literature, a review of sample university–agency contracts and unpublished course evaluations, and suggestions of evaluation experts. It consisted of thirteen items with Likert-based agreement response options. Instrument items addressed the following issues: (a) student orientation; (b) nursing staff/preceptor availability, communication, role modeling, workload, and preparation to serve as a student resource; (c) learning resource availability; (d) student opportunity for hands-on care; and (e) the impact of other students at the clinical site20,21. Open-ended items asked students to describe both the strengths and limitations of the clinical experience at a particular agency, and to describe the impact of other health professional students at the clinical site on the student’s experience.




Data were collected at the end of the 1996 spring semester from students enrolled in the undergraduate nursing program at the main campus of a large mid-Atlantic university. Student response consistency (Cronbach’s alpha coefficient) for the forced-choice inventory items was .897. The investigation of whether there were differences in student responses based on the clinical site being evaluated was conducted using ANOVA. Analysis of item responses demonstrated significant differences between groups for all scaled items (p < .05). Student responses to the open-ended inventory questions appeared to corroborate the ANOVA results, as the most frequently identified strengths and limitations had already been addressed in the Likert-based portion of the SECEE instrument.









22Peirce, 1991
23Windsor, A. (1987). Nursing students’ perceptions of clinical experience. Journal of Nursing Education, 26, 150-154.


Revised Instrument Administration

Several changes were made to the original instrument, based on analysis of student data as well as on a more extensive review of both theoretical literature and evaluation instruments, with a focus on encompassing more fully the wide range of environmental influences on learning22,23. Items addressing student opportunities for learning, instructor and resource nurse/preceptor availability, support, and feedback, and student-to-student interaction were added during instrument revision. Questionnaire brevity continued to be a primary concern.



Inventory items were formatted to fit the same five-point agreement scale (strongly agree to strongly disagree). A sixth option of “can’t answer” was also added, along with a request for students to provide an explanation for questions to which they responded “can’t answer.” In addition to demographic questions, the revised SECEE inventory contains 29 Likert-based items, four of which reflect a negative environmental characteristic. The items encompass four predetermined factors or scales: communication and feedback, learning opportunities, learning support and assistance, and department atmosphere (see Table 1 for a list of items within each sub-scale). Open-ended items asking students to identify aspects of the clinical setting that promoted learning and the aspects that hindered learning, as well as an item for additional student comments, remain in the instrument.






The study population for this investigation consisted of nursing students enrolled in clinical nursing courses in baccalaureate nursing programs at three institutions: a large mid-Atlantic university (LMA), a small liberal arts college in the mid-Atlantic States (SMA), and a small liberal arts university in the Midwest (SMW). The SECEE instrument was administered to all nursing students who were enrolled in a clinical nursing experience at each institution during the spring 1998 semester. Students were asked to evaluate their current clinical site. To the investigator’s knowledge, all students had at least three clinical sessions at the site being evaluated prior to data collection.

A sub-sample of sophomore and junior students at institution LMA completed the instrument twice during the semester, during two different clinical rotations. These data allowed comparison of individual student evaluations of distinct clinical sites during the same semester. Junior students at SMW and senior students at SMA also completed the inventory twice during the semester, but evaluated the same clinical sites for the purposes of test-retest reliability analysis. The second administration of the inventory to the above groups of students occurred between three and four weeks after the first administration. To allow matching of students’ pretest and end-of-semester inventories, students were requested to identify their inventories with the first two letters of their mother’s first name and the last two digits of their social security number.









A total of 319 sophomore, junior, and senior nursing students at the three identified institutions (LMA, SMW, and SMA) completed the end-of-semester SECEE inventory. In addition, 126 students completed pretest inventories. After reverse coding for items 9, 11, 21, and 26, respondents’ sub-scale scores were calculated by adding the scores for the items within each sub-scale. Open data cells and cells coded as “can’t answer” were replaced with the participant’s calculated mean scale score. No cases contained more than two calculated item scores per sub-scale.
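The scoring procedure described above (reverse coding negatively worded items, summing items within each sub-scale, and substituting a respondent’s mean scale score for blank or “can’t answer” cells) can be sketched as follows. This is an illustrative sketch only: the example sub-scale, its item numbers, and the responses are invented, not actual study data.

```python
# Illustrative sketch of SECEE sub-scale scoring (invented data, not study data).

REVERSE_CODED = {9, 11, 21, 26}  # items reflecting a negative environmental characteristic

def recode(item_no, response):
    """Reverse a 1-5 Likert response for negatively worded items."""
    return 6 - response if item_no in REVERSE_CODED else response

def subscale_score(responses):
    """Sum one student's responses for a sub-scale.

    `responses` maps item number -> 1-5 rating, or None for a blank or
    "can't answer" cell; each missing cell is replaced with the mean of
    the student's answered items on that sub-scale.
    """
    answered = {i: recode(i, r) for i, r in responses.items() if r is not None}
    mean = sum(answered.values()) / len(answered)
    missing = len(responses) - len(answered)
    return sum(answered.values()) + mean * missing

# A hypothetical six-item sub-scale: item 21 is reverse coded, item 24 unanswered.
example = {20: 2, 21: 4, 22: 1, 23: 2, 24: None, 25: 3}
print(subscale_score(example))  # 12.0
```

Mean substitution keeps a respondent with one or two unanswered items on the same score range (here, 6 to 30 for a six-item sub-scale) as fully answered inventories.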

Missing Data


Two of the demographic items contained a frequency of missing data worthy of concern. Thirty-five students (8%) did not identify their inventories. However, this presented a problem only when attempting to match pretest and end-of-semester inventories for test-retest reliability determination. In addition, 66 respondents (21%) to the end-of-semester inventory and 22 respondents (17%) to the pretest inventory did not identify the clinical site they were evaluating. There was a higher frequency of missing data for both demographic items from SMA and LMA students.


There were very few instances of missing data for the inventory itself — only 17 “open” cells were identified. However, study participants responded “can’t answer” a total of 183 times, or 1.4%. Of the explanations provided for “can’t answer” responses, most students indicated that they were unable to answer particular questions because either there had been no preceptor or resource nurse assigned to the student, or there had been no other students at their particular clinical site.






Descriptive analysis revealed that students evaluated their clinical learning environments relatively positively, with overall item means ranging from 1.70 to 2.62 on the end-of-semester inventory administration (a lower item or scale score indicates a more positive evaluation of the clinical learning environment). Standard deviations ranged from 0.86 to 1.22.

Calculations of reliability for the scales and for the instrument as a whole were conducted separately for each institution as well as for all institutions together (Table 2). Reported internal reliability figures represent standardized coefficient alphas. Review of the analyses indicated that the overall alpha for the entire instrument was quite high: .94 for both the LMA and SMA institutions and .89 for SMW. The reliability of the instrument as a whole was highest with all items included.

To address test-retest reliability, pretest and end-of-semester scale scores were compared for the sub-sample of SMW and SMA students who evaluated the same clinical site on each inventory (n = 46). The lack of student identification on completed inventories resulted in the loss of 20 cases from the test-retest reliability analysis. Correlations between the pretest and end-of-semester inventory scale scores ranged from .50 to .61, indicating that students responded quite similarly when evaluating the same clinical site at intervals three weeks apart, despite having had additional learning experiences at the site within that interval. In addition, scale score correlations were determined for the sub-sample of LMA students who evaluated different clinical sites on the pretest and end-of-semester inventories (n = 60). These correlations were much lower, ranging from -.01 to .20, indicating that students evaluated distinct clinical sites differently.
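For readers who wish to verify such figures on their own data, coefficient alpha can be computed directly from item responses. The sketch below implements the raw (covariance-based) alpha rather than the standardized alpha reported in Table 2, and the response data are invented for illustration.

```python
# Coefficient (Cronbach's) alpha from raw item responses -- invented data.

def cronbach_alpha(items):
    """`items` is a list of per-item response lists, one entry per respondent."""
    k = len(items)

    def variance(xs):  # sample variance, n - 1 denominator
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(col) for col in zip(*items)]       # each respondent's scale score
    item_var = sum(variance(item) for item in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Three illustrative items answered by five respondents (1-5 agreement scale)
items = [[1, 2, 2, 4, 5],
         [2, 2, 3, 4, 4],
         [1, 3, 2, 5, 5]]
print(round(cronbach_alpha(items), 2))  # 0.94
```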


The investigation of whether there were differences between SECEE inventory scale scores based on the clinical site being evaluated was conducted via ANOVA testing. Prior to this analysis, it was necessary to group some clinical sites together, as several community learning environments were evaluated by only one or two students. Site grouping was based upon the type of nursing care provided at the site (e.g., home care, clinic), and reduced the number of clinical groups from 56 to 25. The extent of site grouping was not similar across institutions: only 2 identified sites were grouped at SMW and 6 at SMA, while 35 clinical sites at LMA were consolidated into 10 site groups. ANOVA statistics are presented in Table 3.
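The ANOVA comparison described here reduces to contrasting between-group and within-group variance in scale scores. A minimal sketch follows; the three site groups and their scores are invented for illustration, not taken from the study.

```python
# One-way ANOVA of sub-scale scores across clinical site groups -- invented data.

def one_way_anova(groups):
    """Return (df_between, df_within, F) for a list of groups of scores."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_b, df_w = k - 1, n - k
    return df_b, df_w, (ss_between / df_b) / (ss_within / df_w)

# Three illustrative site groups of sub-scale scores (lower = more positive)
site_groups = [[12, 14, 13, 15], [18, 20, 19, 21], [14, 15, 16, 15]]
df_b, df_w, f = one_way_anova(site_groups)
print(df_b, df_w, round(f, 2))  # 2 9 29.25
```

A large F ratio relative to its degrees of freedom indicates that site-group means differ by more than within-group variation alone would suggest.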


Results indicated that there were significant differences in sub-scale scores according to clinical site groups. For the SMW institution, ANOVA results were significant for 3 of the 4 sub-scale scores (Communication/Feedback, Learning Support, and Department Atmosphere). At SMA, 2 of the 4 sub-scales were found to have significant differences according to site groups (Learning Support and Department Atmosphere). LMA institution data differed from the other institutions in that no significant differences between clinical site groups were found for any of the four sub-scales.



Analysis of Narrative Inventory Data


As noted in Methods, students were asked to identify aspects of the clinical setting that both promoted and hindered learning, and to make any other comments that they wished. Analysis revealed that the majority of student narrative comments (both positive and negative) directly reflected issues addressed by the forced-choice portion of the SECEE Inventory, lending support to the validity of inventory content.


Of the student comments not addressed by inventory items, few issues were identified by more than 5 of the 319 respondents. Of the issues raised, too high a student/instructor ratio was identified more than twice as often as any other (frequency = 19). Other comments addressed specific learning opportunities related to the site (frequency = 9), student comfort in asking questions (frequency = 8), one-on-one interaction with clients (frequency = 7), and adequate time in the clinical rotation (frequency = 7).







 24Fraser, 1991.


Discussion and Conclusions


The purpose of this research was to continue testing and refinement of the SECEE inventory. The revised SECEE inventory is brief in comparison to many learning environment inventories24. It also appears to be practical for use, as study data were collected with ease by both the researcher and nursing faculty at three different institutions. It reflects the critical aspects of the learning environment identified by nursing students, faculty, and the literature, as well as the reviewed samples of Nursing School - Agency contracts.


Reliability findings indicate that students responded consistently to the entire instrument, with coefficient alphas of .94 for the LMA and SMA institutions and .89 for the SMW institution. Reliability for the scales appears acceptably high, except for the Department Atmosphere scale, which has the fewest items (six) and covers a relatively broad range of issues.

25Kubiszyn, T. & Borich, G. (1987). Educational Testing and Measurement: Classroom Application and Practice. Glenview, IL: Scott Foresman and Co.


Test-retest correlations were found to be relatively strong at .50 to .61 for the four sub-scales, particularly given that the student respondents had experienced several additional clinical sessions at the clinical site prior to retest, which may have impacted their perception of the learning environment. In addition, test-retest reliability results are generally somewhat lower than internal consistency measures25.  Test-retest correlations also demonstrated that individual students evaluated distinct clinical sites differently when using the SECEE inventory.
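The test-retest figures discussed above are correlations between paired pretest and end-of-semester scale scores. A minimal Pearson correlation sketch follows; the paired scores are invented for illustration, not actual study data.

```python
# Pearson correlation between paired pretest and end-of-semester scale
# scores -- invented data, not the study's.

def pearson_r(xs, ys):
    """Pearson product-moment correlation for two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Five hypothetical students' sub-scale scores at pretest and end of semester
pretest = [12, 15, 11, 18, 14]
end_of_semester = [13, 16, 12, 17, 15]
print(round(pearson_r(pretest, end_of_semester), 2))  # 0.97
```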




Sub-scale score differences were found between students evaluating different clinical site groups at the two small institutions (SMA and SMW). No sub-scale score differences were found between site groups at LMA institution. However, due to the grouping of 35 different clinical sites into 10 site groups at LMA, the nature of the learning environment may have varied greatly within the LMA groups. In the future, it would be beneficial to limit grouping of clinical sites as much as possible.

Several inventory refinements have been proposed as a consequence of this study. Items 23 and 29, “the environment provided an atmosphere conducive to learning” and “I was successful in meeting most of my learning goals,” will be removed because they are broad in comparison with the other inventory items. The items relating to the student feeling overwhelmed and to clear communication of the student’s patient care responsibilities will be replaced with one item assessing students’ feelings of preparedness for their clinical role.

Five items will be added to the inventory, covering issues of adequate student/faculty ratio, comfort in asking questions of both faculty and staff, the benefit of direct interaction with clients, and the adequacy of clinical rotation time. In addition, the item addressing support of students in attempts to apply new knowledge will be split into two items, specifically addressing faculty support and staff support. The above changes will result in a net effect of the addition of three items to the inventory.




26Fraser, 1987.


Research has demonstrated that student perceptions of the learning environment account for an appreciable variance in student learning outcomes26. The wide variety of clinical sites used for applied learning experiences may provide vastly different qualities of learning environments for nursing students. The SECEE inventory appears to provide an efficient means to begin to assess the quality of the clinical learning environment “through the students’ eyes.” Implementation of recommendations for additional revision should further improve both the reliability and validity of the instrument.

The goal for development of an instrument such as the SECEE is to provide accurate information to nursing faculty and administrators about the quality of the learning environment at all clinical sites used by a nursing program. Data from the SECEE inventory could be used together with information from faculty and learning outcome assessments to make decisions about taking action to improve the clinical learning environment and to monitor the success of actions taken to improve both the quality of the learning environment and student learning outcomes.


Table 1

SECEE Inventory Items Categorized by Scales


Abbreviated item content is listed under each scale; item numbers are not legible in this copy.

Communication/Feedback

Responsibilities clearly communicated
Preceptor/resource nurse communication re: pt. care
Instructor provided constructive feedback
Nursing staff served as positive role models
Instructor served as positive role model
Nursing staff positive about serving as student resource
Nursing staff provided constructive feedback

Learning Opportunities

Wide range of learning opportunities available at site
Encouraged to identify/pursue learning opportunities
Felt overwhelmed by demands of role (reverse coded)
Allowed more independence with increased skills
Nursing staff informed students of learning opportunities
Atmosphere conducive to learning
Allowed hands on to level of abilities
Was successful in meeting most learning goals

Learning Support/Assistance

Preceptor/resource nurse available
Instructor available
Instructor provided adequate guidance with new skills
Nursing staff provided adequate guidance with new skills
Felt supported in attempts at learning new skills
Nursing students helped each other
Difficult to find help when needed (reverse coded)
Instructor encouraged students to help each other

Department Atmosphere

Adequately oriented to department
RN maintained responsibility for student assigned pt.
High RN workload negatively impacted exp. (reverse coded)
Adequate number and variety of patients available at agency
Needed equipment, supplies and resources were available
Competing for skills and resources negatively impacted exp. (reverse coded)



Table 2

Reliability Coefficients for Inventory and Individual Scales by Institution


[Standardized coefficient alphas for the entire instrument and for the Comm./Feedback, Learning Support, Learning Opportunity, and Department Atmosphere scales, reported for LMA (n = 126), SMW (n = 69), SMA (n = 122), and all institutions (n = 318); the coefficient values are not legible in this copy.]


Table 3

ANOVA Results, Inventory Scale Scores by Clinical site group within Institution



[For each institution: between-site-group df, within-group df, and F ratios for the Comm/Feedback, Learning Opportunity, Learning Support, and Department Atmosphere scale scores; the values are not legible in this copy.]


Copyright, Southern Nursing Research Society, 2000
