Demystifying Learning Outcomes Assessment at the Program Level

  • Professor and Dean, University of Tennessee, College of Agricultural Sciences and Natural Resources, 2621 Morgan Circle, Room 126, Knoxville, TN 37996-4500

Abstract

In addition to being an essential part of the continuous cycle of improvement, program assessment helps provide documented accountability, improved learning content, and enhanced pedagogy. The process of using descriptions of the ideal graduate, program descriptive material, faculty and student input, and overlapping course outcomes to develop meaningful program learning outcomes is described. Both direct and indirect assessment methods, including classroom-embedded assessment, capstone experiences, collective portfolios, standardized tests, pre- and post-tests, exit interviews, and various surveys, can be used to determine whether the program is meeting its desired learning outcomes. A program matrix can be used to track where various program learning outcomes are being addressed within individual courses. This article describes a fundamental first approach to assessing and documenting program learning.

Programs are aggregations of courses with a common mission, and their assessment lies predominantly in the courses within the program curriculum. The product of the program is a graduate shaped both by the program curriculum and by the holistic educational experiences outside the classroom that may accompany it, such as internships, undergraduate research experiences, international travel, leadership training, and service learning. It is not enough to feel as if we are doing a good job educating the student. In today's climate of accountability, we must be able to provide evidence that learning has actually taken place.

Paradoxically, as university budgets experience greater and greater declines in the proportion of support coming from their states, the amount of oversight monitoring program quality appears to be increasing. More and more, programs are being asked to provide evidence of their effectiveness with respect to what students are actually learning as a result of the program rather than merely what is being taught. Program administrators must now document how they are assessing learning in their programs, and this documentation is being used to satisfy requirements of accreditation and, in some cases, state scorecards for accountability. Many times, program assessment receives concentrated attention as a function of the timing of regional accreditation cycles (Maki, 2002), when it should really be an ongoing and invested process. The Middle States Association of Colleges and Schools (2006) makes the point, in Standard 14, that “Assessment is not an event but a process and should be an integral part of the life of an institution…” In the Principles of Accreditation: Foundations for Quality Enhancement published by the Southern Association of Colleges and Schools, Commission on Colleges (2009), Section 3.3.1 states that the institution “identifies expected outcomes, assesses the extent to which it achieves these outcomes, and provides evidence of improvement based on analysis of the results in each of the following areas,” the first of which is 3.3.1.1 “educational programs, to include student learning outcomes.” There is also a danger in imposing a model of assessment that is not customized to the individual program and that assumes the business model of “education as a product” instead of “education as a process toward intellectual independence” (Buckman, 2007). A program is not merely a collection of courses taken in a prescriptive manner. It is the holistic experience encompassing not only the learning that occurs within the classroom in the context of a course, but also the learning that results from the entire experience of interacting with peers and faculty, of being steeped in the university environment and culture, and of engaging society from the standpoint of a changed and more educated perspective.

Implementation

So what does this really mean for the administrator of a program? The program should have a set of learning outcomes that are assessed and documented each year. These assessment findings should be evaluated, improvements suggested, and those improvements in turn assessed as well. This implies documentation of at least 3 years' worth of a cycle of assessment and improvement (1 year to collect data on the existing program, a second year to evaluate the results of that assessment and implement changes, and a third year for those changes in turn to be assessed). Assessment that is not documented does not exist! Departmental memory is fleeting and typically only as good as the recall of the faculty member with the most seniority. Systematic written records are far more reliable and, if they are not maintained, “faculty are likely to lose track of the assessment program, forget what has been learned, and fail to follow through on agreed-upon changes” (Allen, 2004). An important facet of this assessment is “closing the loop,” in which an outcome is assessed, change is made to the curriculum based on that assessment, and that change is in turn assessed, resulting in “continuous quality improvement” (Fig. 1).

Fig. 1.

Process of program assessment from its inception indicating the cyclic nature of assessment followed by improvement and illustrating the concept of “closing the loop.”

Program learning outcomes describe what graduates of the program should know, be able to do, and feel about what they have learned. Many times, program learning outcomes are developed quickly by the program administrator in response to an urgent need or impending deadline, rather than through a more inclusive and comprehensive process. The more widespread the involvement in developing program learning outcomes, the greater the buy-in from the faculty within the program. Who are the stakeholders of the program? Not only should faculty be involved in the process, but also current students and alumni, because they have expectations of what they will receive as a result of the instruction and an awareness of the value of the completed program. Alumni can contribute from the standpoint of career accomplishments, citizenship activities, and professional involvement. If the program has an advisory group or council, their input may be obtained as well. In “Transforming Agricultural Education for a Changing World” (National Research Council, 2009), there is a recommendation that academic institutions offering undergraduate education in agriculture engage in strategic planning that involves a broad representation of stakeholders. This same list of stakeholders should be involved in developing content and expectations for programs. These include faculty not only in the college, but across campus, as well as current and former students, employers, disciplinary societies, commodity groups, local organizations focused on food and agriculture, farmers, and representatives of the public.

If program learning outcomes have not yet been developed for the program, an opportunity exists to conduct a brainstorming workshop session to develop them, involving as many of the stakeholders as possible. Setting the stage for a successful workshop means finding an environment that allows ready discussion and a place where draft outcomes can be displayed for all to see. Additional material that helps to stimulate ideas can be found in brochures about the program, accreditation reports, and even recruitment material. Sometimes, useful material describing persons proficient in the discipline can be obtained from the relevant professional association.

A first step in the process of developing program learning outcomes should be a discussion of what characteristics the graduating student is expected to have relative to the incoming student. The worksheet depicted in Figure 2 divides these characteristics into the three domains of learning. The affective domain (Krathwohl et al., 1973) describes what the student should feel, appreciate, or value as a result of the program; for example, a heightened awareness of professional ethics associated with conducting research. The psychomotor domain (Simpson, 1972) describes what the student should be able to do in the realm of physical movement, motor skills, and eye–hand coordination as a result of the program; for example, a competently executed landscape design drawing. The last domain, and often the easiest to work with, is the cognitive domain (Bloom, 1956), which describes what the student should know as a result of the program; for example, the anatomical parts of a plant. Each of these domains is further subdivided into a hierarchy of increasingly complex learning levels around which learning objectives can be designed.

Fig. 2.

Brainstorming exercise designed for departmental faculty to identify what successful graduates of the program should feel, be able to do, and know as a result of completing the program successfully. The three domains of learning (affective, psychomotor, and cognitive) are depicted.

Overlapping learning outcomes from courses within the program are another potential source for the program learning outcomes (Fig. 3). Syllabi for the courses in the program curriculum should provide individual course learning outcomes or learner objectives (Albrecht, 2009). In the example illustrated in Figure 3, each of the course learning outcomes relates to a common theme of the ability to communicate either orally or in writing. This commonality can then be developed into a corresponding program learning outcome. In the brainstorming session, it is tempting to develop too many program learning outcomes. It is far better to develop six to 10 at the most and do a good job assessing these than to develop more than 10 and be so overwhelmed by the process that they are never adequately assessed. Another tendency is to develop learning outcomes that are too detailed or to become bogged down in the exact wording of a learning outcome. Assessment and improvement are continual processes and, consistent with this idea, learning outcomes can be refined and revised over time. Recognizing that the outcomes are not set in stone helps keep the discussion from becoming too restrictive.

Fig. 3.

Exercise that depicts how the commonality among overlapping course student learning outcomes becomes the overarching themes for program learning outcomes. In this case, each of the course learning outcomes deals with some aspect of communicating effectively whether the mode is writing informally, writing in scientific style in IMRAD (introduction, materials and methods, results, and discussion) format, responding to queries orally, presenting in public, or successfully carrying on a debate.

At the very least, certain information about the program should be documented in a format that allows easy comparison from year to year. The means of documentation may be as simple as Word (Microsoft, Redmond, WA) tables (Fig. 4), or the information can be placed online for ready sharing. For each program, the documentation should include the mission of the program, the learning outcomes, the assessment methods, the results of the assessment, and the use of those results to improve the program. The last component is extremely important because it provides evidence of “closing the assessment loop.”
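For programs that prefer an electronic record over a word-processor table, the same information can be captured in a simple structured format. The following is a minimal sketch in Python, assuming a hypothetical record whose field names simply mirror the columns of the form shown in Figure 4; it is offered only as an illustration, not as a prescribed tool.

from dataclasses import dataclass, field
from typing import List

@dataclass
class OutcomeRecord:
    """One row of the yearly assessment documentation (fields mirror Fig. 4)."""
    program: str                      # degree program name
    cip_code: str                     # classification of instructional programs code
    academic_year: str                # e.g., "2009-2010"
    mission: str                      # program mission statement
    learning_outcome: str             # the program learning outcome being assessed
    assessment_measures: List[str] = field(default_factory=list)
    results: str = ""                 # what the assessment showed
    use_of_results: str = ""          # changes made: evidence of "closing the loop"

# Hypothetical example entry for one outcome in one academic year
record = OutcomeRecord(
    program="BS in Plant Science",
    cip_code="01.1101",
    academic_year="2009-2010",
    mission="Prepare graduates for careers and advanced study in plant science.",
    learning_outcome="Communicate scientific results effectively in writing.",
    assessment_measures=["Capstone research paper scored with a common rubric"],
    results="Most seniors scored at or above the 'proficient' level.",
    use_of_results="A scientific-writing module was added to the junior seminar.",
)

Keeping one such record per outcome per year makes year-to-year comparison straightforward, whatever medium is ultimately used to store it.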

Fig. 4.

Simple Word (Microsoft, Redmond, WA) form used to document learning outcomes for degree programs that captures the name of the program, its classification of instructional programs (CIP) code, the academic year, the program's mission, each learning outcome, the accompanying assessment measures used, results from that assessment, and how they have been used to improve the program (closing the loop).

Many strategies exist for assessing the effectiveness of a program, including both direct and indirect measures. Direct measures are based on what the student has produced, such as the actual student work, examination results, laboratory reports, oral presentation performances, final designs, etc. Indirect measures are based on perceptions of what the student has learned, such as surveys of employers, surveys of the students themselves (satisfaction surveys, focus groups, exit interviews), or indirect data on quality such as placement rates. One of the strongest direct measures of student learning occurs during classroom evaluation, and this type of assessment can easily be documented using an assessment matrix that links individual program learning outcomes with specific courses in the curriculum. Another beneficial feature of the assessment matrix is that it makes it very easy to see where there may be “holes” in the curriculum, in which a particular program learning outcome is not addressed by any course. A simple form of assessment matrix merely lists the courses in the curriculum and then matches them with the specific program learning outcomes that are satisfied within each course. This could be indicated with a check mark or, elaborating on the theme somewhat, each intersection could be identified with an L, M, or H to indicate low, medium, or high emphasis on that particular outcome (Fig. 5). In “Assessing Academic Programs in Higher Education,” Allen (2004) illustrates a “curriculum alignment matrix” that identifies intersections between program objectives and courses with an “I,” “P,” or “D” to indicate whether the concept or objective is “introduced,” “practiced,” or “demonstrated,” respectively.
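As a concrete illustration of how such a matrix can be represented and checked for coverage, the short Python sketch below builds a hypothetical alignment matrix and flags curriculum “holes”; the course numbers, outcome labels, and emphasis codes are placeholders following the L/M/H convention described above (Allen's I/P/D codes would work the same way).

# Hypothetical alignment matrix: course -> {program learning outcome: emphasis}
matrix = {
    "PLSC 101": {"Outcome 1": "H", "Outcome 2": "L"},
    "PLSC 210": {"Outcome 2": "M", "Outcome 3": "H"},
    "PLSC 495": {"Outcome 1": "M", "Outcome 3": "H"},
}

program_outcomes = ["Outcome 1", "Outcome 2", "Outcome 3", "Outcome 4"]

# A "hole" is a program learning outcome not addressed by any course
covered = {outcome for emphases in matrix.values() for outcome in emphases}
holes = [o for o in program_outcomes if o not in covered]
print("Outcomes not covered by any course:", holes)  # prints ['Outcome 4']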

Fig. 5.

Examples of two different table formats that can be used for an assessment matrix that links individual program learning outcomes with specific courses in the curriculum. The one depicted on the top lists the learning outcomes for the Bachelor of Science (BS) degree in Environmental Sciences and shows courses in the Department of Environmental Science (ES) that place high (H), medium (M), or low (L) emphasis on that outcome within the course. The table format to the lower right depicts which courses in the Department of Plant Science (PLSC) satisfy the program learning outcomes (identified with a check mark) and what assessment tool or project is being used to evaluate achievement of the learning outcome.

Assessment matrices also allow documentation of competencies that are desired outcomes of the educational experience but are not part of mastery of content knowledge. Some of these desired competencies are teamwork; the ability to work in diverse communities; the ability to work across disciplines; the ability to think critically, analyze, and communicate in a variety of formats; the ability to make decisions in an ethical manner; and the ability to lead and manage effectively (National Research Council, 2009).

With respect to the various methods used to assess program effectiveness, capstone experiences, collective portfolios, pre- and post-test evaluation, student satisfaction surveys, exit interviews, and alumni and employer surveys are among those most often used. Other methods of assessment, both direct and indirect, include internships and practica; state and national examinations; locally developed proficiency examinations; direct observation of student behavior; essays and research papers; senior assignments; results from certification or licensure examinations; retention and completion rates; placement success; focus group results; case studies; and course evaluations.

Capstone courses build on the knowledge gained from the entire course curriculum and usually represent a culminating experience that gives the students an opportunity to use their knowledge and skills to solve a real problem or to produce a tangible product of that knowledge. Ideally, the experience should be broadening and should integrate the whole experience of the major, both inside and outside the classroom. Because successfully working with peers is a highly valued skill in the workplace, capstone experiences that provide an opportunity for students to collaborate offer additional value. The capstone experience should also be a natural transition to either the workplace or graduate school.

A capstone experience for an undergraduate horticulture major whose emphasis is pomology could be the development of an orchard management plan for a specific piece of property. Guidelines could be imposed requiring a base budget, a scale implementation map, and a year-round pesticide management plan, among other elements.

Collective portfolios represent an amalgamation of sample work and projects produced by the student. Discipline areas that have substantial hands-on and aesthetic components lend themselves well to such an assessment method. For example, landscape design students could use collective portfolios to document the progression of their skills from early graphics and lettering to implementing the entire design process from bubble diagram and concept plan to finished design, and even sections, elevations, three-dimensional drawings, and the use of color rendering to heighten the design impact. This type of assessment is not only an excellent tool for assessing learning, but can also be useful in helping the student showcase talent for later job placement.

Pre- and post-tests are a useful way of documenting the state of a student's knowledge on entry into the program and the progress made on its completion. Locally developed tests have the advantage that they can be customized, but if they are to be used to assess program progress longitudinally across several years, they should be standardized with respect to content and rigor.
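As a simple illustration of how paired pre- and post-test scores might be summarized, the sketch below computes the mean raw gain and the mean normalized gain, (post - pre) / (100 - pre), for a hypothetical cohort; the scores and the choice of normalized gain are assumptions made only for the example, not a prescribed method.

# Hypothetical paired pre-/post-test scores (percent correct) for one cohort
pre_scores = [45, 52, 60, 38, 71]
post_scores = [68, 75, 80, 66, 85]

# Raw gain: simple difference in percentage points
raw_gains = [post - pre for pre, post in zip(pre_scores, post_scores)]

# Normalized gain: improvement as a fraction of the improvement still possible
norm_gains = [(post - pre) / (100 - pre) for pre, post in zip(pre_scores, post_scores)]

print(f"Mean raw gain: {sum(raw_gains) / len(raw_gains):.1f} points")
print(f"Mean normalized gain: {sum(norm_gains) / len(norm_gains):.2f}")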

Although student satisfaction surveys are not a direct measurement of learning, they can be used to determine the students' perceptions of how effective the program was in providing the information that they needed to be proficient in their discipline. These are simple to administer and can be done in conjunction with events such as the exit interview or completion of a capstone course.

Exit interviews allow the students to explore issues that are not addressed in more formal methods of indirect assessment such as the student satisfaction survey. Setting an environment where the student feels it is safe to share observations is a very important component of the exit interview. These can be very time-consuming to conduct but surprisingly valuable information may come out of them that can be used to revise the curriculum.

Alumni and employer surveys are both indirect assessment methods, but they can yield useful information about the perceived effectiveness of the program, its curriculum, and its faculty. Once students enter the job market and gain a real-world perspective, their attitudes about aspects of the program and how well it prepared them for a career in the discipline may change. Alumni feedback is more immediate than that obtained from employer surveys, but alumni can be difficult to track and reach, let alone persuade to respond to survey requests. Unless a large enough sample responds, the respondents may represent a biased sample.

Employer surveys can include questions on preparedness with respect to the discipline area, characteristics valued in the workplace, and willingness to hire future graduates of the program. Employers also make good members of advisory groups and many times can relate anecdotal information about performance of the program's graduates that gives clues about where changes need to be made or what aspects of the program are working well and should be retained.

Programs are dynamic entities consisting of far more than just a list of courses. They are driven by a mission that is many times also responsive to the changing needs of the student, current state of knowledge in the field, and demand for the program's graduates. Content is always being driven by research findings and contributed to by new and emerging discipline areas. Change can be positive or negative. Consistent, thoughtful assessment is one way to optimize the likelihood that change will result in a program that is current, cutting edge, effective, and responsive to the progression of research and society.

Literature cited

  • Albrecht, M.L. 2009. Getting started: Writing the course syllabus. HortTechnology 19:240–246.

  • Allen, M.J. 2004. Assessing academic programs in higher education. Anker Publishing, Bolton, MA.

  • Bloom, B.S. 1956. Taxonomy of educational objectives, Handbook I: The cognitive domain. David McKay, New York, NY.

  • Buckman, K. 2007. What counts as assessment in the 21st century? Thought & Action 23:29–37.

  • Krathwohl, D.R., Bloom, B.S. & Masia, B.B. 1973. Taxonomy of educational objectives, the classification of educational goals, Handbook II: Affective domain. David McKay, New York, NY.

  • Maki, P. 2002. Moving from paperwork to pedagogy: Channeling intellectual curiosity into a commitment to assessment. Amer. Assn. Higher Educ. Bul. 54:3–5.

  • Middle States Association of Colleges and Schools. 2006. Characteristics of excellence in higher education: Eligibility requirements and standards for accreditation. Middle States Association of Colleges and Schools, Philadelphia, PA.

  • National Research Council. 2009. Transforming agricultural education for a changing world. Committee on a Leadership Summit to Effect Change in Teaching and Learning. National Academy Press, Washington, DC.

  • Simpson, E.J. 1972. The classification of educational objectives in the psychomotor domain. Gryphon House, Washington, DC.

  • Southern Association of Colleges and Schools, Commission on Colleges. 2009. Principles of accreditation: Foundations for quality enhancement. Southern Association of Colleges and Schools, Commission on Colleges, Decatur, GA.

Contributor Notes

Corresponding author. E-mail address: cbeyl@utk.edu.
