Abstract
Assessment at North Dakota State University is considered to be a conversation about learning outcomes, enriched by data, with the goal of improving student learning. At the classroom level, assessment focuses on developing techniques to assess course-related knowledge and skills, but it may also include techniques to assess learner reactions to teaching as well as their course-related learning, study skills, and self-confidence. At the program level, assessment consists of an assessment plan and a corresponding assessment report. The assessment plan identifies how the entire curriculum will be assessed over time, whereas the report documents plan implementation. The report describes the activities designed to collect information on the success of each course. These activities may be direct, indirect, or non-measures of student learning. The direct measures, along with a few indirect measures, answer the university assessment committee's three student learning assessment questions: "what did you do?," "what did you learn?," and "what will you do differently as a result of what you learned?"
The North Central Association of Colleges and Schools mandated assessment plans by the end of 1995 (Higher Learning Commission, 2010). North Dakota State University (NDSU) formalized program assessment in the 1980s but did not initiate the university assessment committee until 1992 (R.L. Harrold, personal communication). This 19-member committee has representation from all colleges on campus. Members are asked to review two program assessment reports per year. Before each academic year begins, assessment guidelines are sent to department chairs, department heads, and program leaders. These guidelines include five documents that recommend the reporting format; provide reflection on assessment implementation; give examples of direct, indirect, and non-measures of student learning; and show how reviewers evaluate reports and provide feedback.
The self-reporting form for levels of implementation was adapted from a document published by the North Central Association of Colleges and Schools–Higher Learning Commission (2003). It contains six categories: collective/shared understanding of assessment, mission and goals statements, faculty involvement, awareness of students, the assessment process (development and use of results), and proficiency in assessment by the department or unit (Fig. 1). Each category describes several levels of implementation or achievement in evaluating student learning. Each faculty member is asked to complete the self-reporting form, which is included in the program assessment report. The categories of faculty involvement and proficiency in assessment by the department or unit also serve as an assessment tool, indicating whether additional training activities on student learning assessment are needed.
Fig. 1. Self-evaluation form used by faculty at North Dakota State University to report levels of implementation on assessment of student learning.
A rubric for evaluating assessment reports has been developed and is used to acknowledge those departments and programs that have strengthened their activities to evaluate student learning. Scoring ranges from 0 to 10, with a score of 0 indicating that no report was received. A score of 8 to 10 indicates an exceptional report. These reports contain excellent descriptions of strong assessment activities conducted throughout the curriculum. General education courses offered by the program are included in the report, and multiple measures of evaluating student learning are used, with an emphasis on direct measures. Faculty involvement is evident and consistent with the "levels of implementation" departmental self-evaluation. Lastly, these reports provide evidence that assessment results have been used to strengthen individual courses and (or) to improve the curriculum. A score of 6 to 7 indicates an above-average report that lacks two or more of the strengths described for the exceptional category or was submitted late. Reports with a score of 4 to 5 have several areas for development and typically lack the majority of the strengths identified for the top category. Reports with a score of 1 to 3 demonstrate a lack of understanding of assessment activities or a general lack of interest by the chair/head and (or) faculty and require strong corrective actions.
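For programs that track their scores electronically, the rubric can be expressed as a short lookup. The following Python sketch merely paraphrases the categories described above; it is illustrative only and is not part of the committee's procedure.

```python
# A minimal sketch of the report-scoring rubric described above; the
# category labels paraphrase the text, and the function is illustrative.
def rubric_category(score: int) -> str:
    if not 0 <= score <= 10:
        raise ValueError("scores range from 0 to 10")
    if score == 0:
        return "no report received"
    if score <= 3:
        return "lack of understanding or interest; corrective action needed"
    if score <= 5:
        return "several areas for development"
    if score <= 7:
        return "above average"
    return "exceptional"

print(rubric_category(9))  # -> "exceptional"
```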
A new feature of the review of assessment reports is a one-page summary and a graph that illustrates the program's achievements in assessing what students know or can do in comparison with the top 25% of reports and the campus average (Fig. 2). Copies of the review letter responding to the program assessment report and of the graph illustrating the program assessment report scores for the past 10 years are shared with the Provost and Vice President for Academic Affairs and with the Dean of the College of Agriculture, Food Systems, and Natural Resources.
Fig. 2. Assessment report evaluations for the horticulture and forestry program of the Department of Plant Sciences at North Dakota State University compared with the campus average and the average of the top 25% of reports for the academic years from 1998–1999 to 2007–2008.
The entire procedure appears straightforward and relatively simple: write a report and move on to more important issues. Unfortunately, the process is anything but simple. To begin with, one needs to establish an assessment plan. This plan defines the program mission and goals as well as the program student learning outcomes and how those outcomes are addressed in individual courses. In addition, the plan identifies the assessment techniques used and when the assessments are scheduled. For most programs at NDSU, a 3- to 5-year timeline is used (Table 1). All general education courses are included in the report every academic year. The assessment plan has been described as a "road map" that identifies what students should know, or be able to do, by the time they graduate and describes how that set of knowledge and skills will be assessed. The assessment plan does not include program review or student ratings of instruction. Instead, the plan identifies the learning outcomes of the program, the courses that address each learning outcome, when each course will be included in the assessment report, and which assessment techniques will be used; a minimal sketch of this structure appears below. In addition, the program assessment report must address three questions for each course it includes: 1) "what did you do?"; 2) "what did you learn?"; and 3) "what will you do differently as a result of what you learned?" Lastly, each course in the report must also have multiple measures of assessment, with at least one direct measure.
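To make the plan's structure concrete, the following Python sketch represents an assessment plan as a simple data structure: program learning outcomes, the courses that address them, the techniques used, and the year of the reporting cycle in which each course is assessed. The four program outcomes are those named in the text; the course number is an invented example, not part of the NDSU plan.

```python
# A hypothetical sketch (not NDSU's actual system): an assessment plan as a
# data structure mapping program learning outcomes to the courses that
# address them, the techniques used, and the reporting-cycle year.
from dataclasses import dataclass, field

@dataclass
class CoursePlan:
    course: str        # course name; "PLSC 210" below is an invented number
    outcomes: list     # program learning outcomes the course addresses
    techniques: list   # assessment techniques; at least one direct measure
    report_year: int   # year of the 3- to 5-year reporting cycle

@dataclass
class AssessmentPlan:
    program: str
    outcomes: dict                       # outcome ID -> description
    courses: list = field(default_factory=list)

    def courses_due(self, year):
        """Return the courses scheduled for the report in a given cycle year."""
        return [c for c in self.courses if c.report_year == year]

plan = AssessmentPlan(
    program="Horticulture and Forestry",
    outcomes={
        "LO1": "Technical knowledge and field skills",
        "LO2": "Critical thinking and problem analysis",
        "LO3": "Verbal and written communication skills",
        "LO4": "Professional values",
    },
)
plan.courses.append(CoursePlan(
    course="PLSC 210",                   # invented course number
    outcomes=["LO1", "LO2"],
    techniques=["pre-test/post-test", "minute paper"],
    report_year=1,
))
print([c.course for c in plan.courses_due(1)])  # -> ['PLSC 210']
```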
Three-year timeline of courses included in the assessment report for the horticulture and forestry program at North Dakota State University.
The program mission, goals, and student learning outcomes should be periodically reviewed and discussed, especially as new faculty members join the program. The NDSU horticulture and forestry program has four general goals or program learning outcomes: technical knowledge and field skills; critical thinking and problem analysis; verbal and written communication skills; and professional values. Because our program consists of five distinct options (horticulture biotechnology, horticulture science, landscape design, production-business, and urban forestry and parks), faculty members met during a curriculum review to identify student learning outcomes for each Department of Plant Sciences course in the program. In addition, each course learning outcome for the 16 courses was specifically linked to a horticulture and forestry program student learning outcome. Course learning outcomes are listed within each course syllabus and describe the specific knowledge, abilities, values, and attitudes that each student should be able to demonstrate after completing the course.
So how does this assessment report get written? The key is faculty involvement. The horticulture and forestry program in the Department of Plant Sciences has nine faculty members teaching courses, which, compared with larger departments, can be an advantage for collaboration and participation in the assessment of student learning. However, faculty involvement, regardless of department size, will be difficult unless incentives and training are provided and value is demonstrated. Faculty involvement can be mandated, but mandates often yield only short-term solutions unless consequences are associated with a lack of participation. Some departments have incorporated course assessment into their annual report. This approach illustrates how assessment can become a valued aspect of the department, especially if there is a system that rewards faculty for their efforts in assessing student learning. Other departments have provided incentives such as no-cost workshops on course assessment, free subscriptions to educational journals, or payment of association dues. The Provost and Vice President for Academic Affairs at NDSU has routinely sponsored pedagogical luncheons with a speaker providing information or training on some aspect of teaching or assessment. In addition, several members of the Department of Plant Sciences developed a teaching circle in which instructors and graduate students gather over the lunch hour to discuss a pertinent issue related to teaching and student learning. All are good examples of ways to get faculty involved. Furthermore, the program assessment report is only as strong as the weakest link in the chain. A department with only a few faculty members may expose someone unwilling to contribute to program assessment sooner than a larger department would, but both situations are difficult to deal with.
Once there is faculty involvement, program assessment begins with the development of a course assessment timeline (Table 1) and with providing faculty members examples of classroom assessment techniques (CATs). These CATs should be considered formative, "on-the-go" assessments that allow immediate changes in teaching to improve student learning. Numerous web sites contain examples of simple yet effective CATs derived from a handbook written by Angelo and Cross (1993), which explains 50 different CATs along with an estimate of the time and energy required to implement each one. In the Department of Plant Sciences, the chair asked faculty members to include a pre-test/post-test as a direct assessment measure for each course. The pre-test/post-test requires some planning, but the time and energy involved would be considered low; in addition, all instructors are familiar with testing and grading, so implementation was relatively easy. Each instructor developed a set of questions addressing the course student learning outcomes and asked students to answer the same set at the beginning of the course and again at the end.
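The arithmetic behind this measure is simple; the following Python sketch tallies class averages on the same question set before and after a course, per learning outcome. All scores and outcome labels are invented for illustration.

```python
# A minimal sketch of the pre-test/post-test arithmetic: class averages on
# the same question set before and after the course, tallied per course
# learning outcome. All scores and outcome labels are invented.
from statistics import mean

# scores[outcome] = (pre-test scores, post-test scores), as fractions correct
scores = {
    "LO1": ([0.25, 0.30, 0.20], [0.80, 0.75, 0.70]),
    "LO2": ([0.35, 0.30, 0.25], [0.65, 0.70, 0.75]),
}

for outcome, (pre, post) in scores.items():
    pre_avg, post_avg = mean(pre), mean(post)
    # Raw gain: the simple difference between post- and pre-test averages
    print(f"{outcome}: pre {pre_avg:.0%}, post {post_avg:.0%}, "
          f"gain {post_avg - pre_avg:+.0%}")
```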
Before implementation, several questions and issues arose. Some instructors wanted to know whether only one question should be used per course learning outcome; others asked how many questions were appropriate and what type of questions should be used. At the end of the semester, more issues arose. One instructor found that fewer than half of the enrolled students completed the post-test compared with the pre-test. Another encountered a total lack of student commitment on the post-test. All who administered the pre-test at the beginning of the course and the post-test at the end found that it was then too late to correct skill and knowledge deficiencies for the current class of students. As a result, they lost the "on-the-go" ability to adjust, although they could use what they learned the next time the course was taught.
The limited number of horticulture and forestry faculty enabled our group to meet several times to discuss these and other issues. One of the main issues discussed was the interpretation of pre-test/post-test results. Students are often told that the pre-test is being used to gauge what they already know about the subject matter and that it is not included in the course grade. This often results in very low student effort, just as it does when the post-test is given separately and not graded. Pre-test averages are often below 30%, whereas post-test averages exceed 70%. The question that arises is whether this tool is assessing student learning or student effort. What would it mean if there were not a large percentage increase? How can simple multiple-choice or true/false questions (which require low effort) assess learning? Nartgün and Uluman (2009) used a pre-test/post-test to show that success levels of teacher candidates experienced in using CATs in the classroom were higher than those of a control group with no CAT experience. Perhaps Nartgün and Uluman found a more appropriate use of the pre-test/post-test.
Our pre-test/post-test meetings resulted in three student learning assessment recommendations. First, students needed to become more aware of the evaluation of student learning. Second, the type of questions asked during the pre-test/post-test process did not matter, but the post-test questions had to be incorporated into tests or quizzes at appropriate times throughout the semester, enabling "on-the-go" assessment. Finally, we needed to provide students with feedback about their general level of learning in each course. Unfortunately, this last recommendation requires a much higher commitment of time and energy. Thus, some faculty members have incorporated all of the recommendations, whereas others have incorporated only one or two.
Another assessment issue faculty members faced was the documentation of assessment techniques. The Department of Plant Sciences chair instructed faculty members to incorporate the pre-test/post-test, but the program assessment report guidelines from the university assessment committee ask for multiple measures of student learning for each learning outcome, including at least one direct measure that identifies what students know or can do. Because only nine faculty members teach horticulture and forestry courses, arranging a meeting to explain the guideline of incorporating at least two CATs for each course included in the report was relatively easy. Most faculty members have incorporated CATs such as the "muddiest point," the "minute paper," and "recall, summarize, question, comment, and connect" (RSQC2); some have also developed surveys on student engagement and student confidence. To remind faculty members of their commitment to program assessment, a notification message is sent at the beginning of the semester to those teaching courses that will be included in the program assessment report. This message lists the CATs that have been used in the past, suggests other CATs to explore, and reminds everyone that this is an "on-the-go" process that allows a gradual improvement in the understanding of assessment. If a faculty member wants additional information, documents are provided and a follow-up meeting is arranged. When the program assessment report has been completed, all faculty members receive a copy. This process helps close the loop and reinforces that everyone contributes to the assessment of student learning and educational improvement.
Literature cited
Angelo, T.A. and K.P. Cross. 1993. Classroom assessment techniques: A handbook for college teachers. 2nd ed. Jossey-Bass, San Francisco, CA.
Higher Learning Commission. 2010. Assessment and academy. 19 Jan. 2010. <http://www.ncacihe.org/index.php?option=com_docman&task=cat_view&gid=79&Itemid=236>.
Nartgün, Z. and M. Uluman. 2009. Investigating the effects of teaching with classroom assessment techniques (CATs) on the success of teacher candidates. Intl. J. Human Sci. 6:626–650.
North Central Association–Higher Learning Commission. 2003. Institutional accreditation: An overview. 19 Jan. 2010. <http://www.ncahlc.org/download/2003Overview.pdf>.