Improving the Evaluation of Public Garden Educational Programs

Authors:
Aaron Steil, Former Graduate Student, Longwood Graduate Program in Public Horticulture, University of Delaware, Department of Plant and Soil Sciences, 125 Townsend Hall, Newark, DE 19716; and Education and Plant Collections Coordinator, Reiman Gardens, Iowa State University, 1407 University Boulevard, Ames, IA 50011

and
Robert E. Lyons, Professor and Director, Longwood Graduate Program in Public Horticulture, University of Delaware, Department of Plant and Soil Sciences, 126 Townsend Hall, Newark, DE 19716


Abstract

Professional staff at public gardens often overlook educational program evaluation for a variety of reasons, yet evaluation remains important for program funding and development. This study developed an original, six-step evaluation approach specific to educational programs at public gardens. Interviews were subsequently conducted with 11 executive directors and/or directors of education at 10 public gardens in the United States with proven, high-quality educational programs. The interviews examined the feasibility, practicality, perceived effectiveness, and merits of the original evaluation approach developed in this study. The interview data clarified what is known about the current state of educational program evaluation at public gardens and were used to support and refine the original evaluation approach into an improved version.

Public gardens evaluate a wide variety of internal activities, including job performance, fundraising, and education. As observed elsewhere, these evaluations can conjure up a preponderance of negative emotions, which, when coupled with other factors such as time, money, and knowledge, may lead many professionals to avoid evaluation entirely. While the evaluation process is important for developing and conducting educational programs at public gardens, it may not be done as often as needed and, when it is done, may be done only informally.

Education is a key component in nearly all public gardens' mission statements; they are educational by nature and are often defined as places where a wide variety of plants are cultivated for scientific, educational, and ornamental purposes.

Quality educational programs are important and foremost on the minds of most public garden staff (Lewis, 2004; Olien and Hoff, 2004; White, 2004). Education at public gardens hinges on community attendance and support; demonstrating and improving the quality of that education is important to the success of the organization. Relf and Lohr (2003) state, “Education of the public or amateur gardener tends to be diverse and unstructured with opportunities taking many forms including the Internet, magazines, and courses at nurseries and botanic gardens. There is little documentation on this form of horticulture education.” More research is needed to improve education programs at public gardens, and focusing on evaluation is an important component of that effort.

Assessment, review, study, analysis, social audit, performance measurement: whatever the term, all have been used to refer to the process of evaluation. Many definitions of evaluation have been proposed, and they all contain similar components (Madaus et al., 1983; Patton, 2002; Rossi et al., 2004; Scriven, 1981; Stufflebeam, 2001; Weiss, 1998). For this study, program evaluation is defined as “the systematic assessment of the operation and/or the outcomes of a program or policy, compared to a set of explicit or implicit standards, as a means of contributing to the improvement of the program or policy” (Weiss, 1998).

Current state of evaluation at public gardens

While much literature exists about effective evaluation in public schools and universities, little published scientific research has addressed educational program evaluation at places of informal learning, such as public gardens (Coker and Van Dyke, 2005; Kirkwood, 1998; Saunders, 1992; Scantlebury et al., 2001; Schneider and Renner, 1980). Phibbs and Relf (2005) reported, “Many studies to date have been inconclusive, and some are essentially anecdotal, thus lacking the scientific rigor to substantiate the suggested benefits.”

Evaluation is recognized by public garden professionals as being important (Colón and Rothman, 2004; Lewis, 2004). Evaluation, especially when it is part of program development, can help further define broad program goals. This process often requires research that can help support program methods and link goals, outcomes, and impacts (Wiltz, 2005). To improve the effectiveness of educational programs, evaluation is key to understanding the popularity of a public garden's programs and the level of audience learning they produce. Evaluating program popularity is relatively straightforward; evaluating educational value is decidedly more difficult (Eberbach and Crowley, 2004; Hamilton and DeMarrais, 2001).

The value of program evaluation lies not only in self-improvement; it is often required by granting agencies. These funders want accountability and a measure of the funded program's worth, value, and effectiveness. Even programs not beholden to granting agencies often require evaluation because their participants are public school students in programs that must demonstrate achievement of federal- or state-mandated standards and outcomes (Dirks and Orvis, 2005; Wells and Butler, 2004). Unfortunately, little is known about the methods public gardens use to gather this information (Addelson, 2004; Eberbach and Crowley, 2004; Tanck, 2004).

What is known is that for most public gardens, program justification to schools and granting agencies is the primary use for evaluation. Summative evaluation produces “report cards” in hopes of proving program success and is, therefore, the most prevalent form of evaluation (Wiltz, 2005). With summative evaluation, public gardens often gather information to justify their programs to funding agencies, board members, and other stakeholders. Hence, many education program managers regularly record head-counts and classroom hours as simple numbers to demonstrate the impact of specific education programs (Balick, 1986). However, the number of students in a garden classroom is not the only important measure. Looking for evidence of learning, short and long term, is also valuable, not only for programs funded by grants and/or contracts, but for overall accountability of all informal education (Wells and Butler, 2004).

Many public garden educators see the value of evaluation but have limited time, money, and expertise to conduct evaluations adequately (Klemmer, 2004; Korn, 2004). “The reality for many non-formal education organizations is that they are implementing low-budget programs and lower-budget evaluations” (Norland, 2005). Furthermore, evaluation expertise, skills, and knowledge are often lacking in public garden educators (Olien and Hoff, 2004; White, 2004; Wiltz, 2005).

Where it has been published, the type and extent of evaluation used for educational programming at public gardens are highly variable (Eberbach and Crowley, 2004; Greenstein, 2004; Klemmer, 2004). Few studies conclusively state what is being used on a national or regional scale. In addition, of the published reports that outline evaluation procedures at specific institutions, none can be applied to a larger population because it remains unclear whether there is any “generalizability” from one public garden to another (Addelson, 2004; Colón and Rothman, 2004; Hamilton and DeMarrais, 2001; Haynes and Trexler, 2003; Kneebone, 2004; Phibbs and Relf, 2005; Shoemaker et al., 2000; Tanck, 2004).

There is also little published evidence for the reasoning that supports the choice of one type of evaluation method or practice over another (Hamilton and DeMarrais, 2001; Mundy et al., 1999). Many nonformal educational organizations, like public gardens, have trouble identifying appropriate measurement criteria for evaluation due to the variability that exists across educational programs at public gardens and the differences those programs have from similar organizations, such as zoos and museums (Wiltz, 2005).

Workable, applicable, and universal evaluation tools and approaches are needed by educational professionals at all public gardens. With such tools, staff can improve the quality of their educational programming and more fully achieve their stated missions (Relf and Lohr, 2003). Defining a practical and useful tool to help public garden educators more fully evaluate their work would be of great use to the profession, particularly if the tool considers factors such as attitude, time, money, and knowledge.

Methodology

Approach development.

An evaluation approach (EA) for educational programs at public gardens was developed following an in-depth survey of a wide range of resources. The work of several scholars within the discipline of program evaluation was drawn on to create an EA specific to educational programs at public gardens and their unique considerations and needs (Fig. 1). Special attention was given to fields closely related to public gardens, such as museums, science centers, and nature centers, as well as organizations such as 4-H, because of their strength in educational programming in nonschool environments.

Fig. 1. An evaluation approach for educational programs at public gardens (schematic).

The term “model” could easily be used instead of the term “approach” in this article. Early in this investigation, however, it was clear to us that the term “model” was frequently misunderstood and was often viewed in the public garden realm as a specific tool that can be applied directly to specific educational program evaluation. In reality, this expectation is too demanding. A model is designed to provide a framework for evaluation and can be defined as a conception or approach or even a method of doing evaluation (Scriven, 1981). To improve clarity, the term “approach” has been used to further accentuate the supportive and guiding nature of the evaluation model.

This approach was designed to be highly adaptable, reflecting the many factors that influence educational programming at public gardens, such as funding, audience size, instructor experience, and time available for evaluation. The EA was created to be universal in nature and therefore applicable to all programs, whether new or well established, simple or complex, intended for young or old, or conducted at small or large public gardens.

The EA was developed to give public garden professionals confidence to conduct useful evaluations. This guide was created to be consulted from program conception and development through to program conclusion and follow-up; it is also intended to help education professionals begin the process by asking the right questions, understanding all the factors that affect the evaluation process, and avoiding missed steps.

The EA was intentionally designed to pose questions rather than provide answers, because educational programs at public gardens vary widely in their target audience, topic, size, and scope. Each public garden has unique facilities, staff, landscapes, funding sources, and other perceived limitations; for evaluation to be effective, the EA must serve as a guide and framework to be tailored to each specific program, not as a recipe.

Interviews

Interview participant selection.

Ten public garden participants were purposefully selected from the membership list of the American Public Garden Association (APGA), using criteria consistent with the level of educational programming found at public gardens with highly respected educational programs (Exhibits 1 and 2).

Exhibit 1. Criteria used for the selection of the 10 public gardens that were asked to participate in the interviews about the evaluation approach for educational programs at public gardens.

Exhibit 2. Organizations in which the executive director and education director were asked to participate in an interview about the evaluation approach for educational programs at public gardens.

The executive director and education director from each organization were invited to be interviewed for this research; two executive directors and nine education directors agreed. In total, 20 individuals at the 10 organizations were originally asked to participate to maintain statistical validity for the size and scope of this study; nine individuals were not comfortable participating or were unavailable.

Each selected garden was classified as a “large garden” by APGA definition and was chosen for several reasons, despite the fact that such gardens represent only ≈12% of the public gardens in the United States (APGA, 2005). These large gardens offer a diverse selection of educational programs, have the financial resources to develop and evaluate those programs, and typically have a larger, more highly trained staff dedicated specifically to educational programming and evaluation. While this alone does not guarantee better evaluation, evaluation validity and reliability can suffer under constrained timelines and budgets (Bamberger et al., 2004), and these individuals and organizations could contribute the greatest amount of knowledge to improving evaluations in the context of public gardens.

Interview question development.

Interview questions focused on the participant's perceptions of the EA, its potential usefulness for their organization, the current state of educational program evaluation at their organization, and their attitudes and experience with evaluation.

An interview guide consisting of several specific, open-ended questions was used, with unstructured follow-up questions based on the participant's responses to gain more information. Two versions of the interview guide were developed, one for executive directors and one for education directors, so that the interview could speak more directly to the participant and his or her position (Exhibits 3 and 4). Interview participants were encouraged to share related stories or instances that further supported their viewpoint or answer to each question.

Exhibit 3. Open-ended interview questions about the evaluation approach for educational programs at public gardens asked of education directors.

Exhibit 4. Open-ended interview questions about the evaluation approach for educational programs at public gardens asked of executive directors.

The interview protocol and process.

A condensed interview protocol that did not include follow-up questions was sent to participants 1 week before the interview to brief participants about what they would be asked without allowing them to prepare “canned” answers (Johnson and Christensen, 2004). A copy of the EA and an informed consent form were also sent via e-mail, and participants were instructed to review the information before the subsequent 30- to 60-min telephone interview.

Each interview was audio recorded and written notes were taken. Audio files were subsequently reviewed to verify the accuracy of the handwritten notes, and a second, more detailed set of notes with direct transcription was created. The text selected for direct transcription was chosen based on the analytical contribution it would provide to the overall study. Because the purpose of these interviews was to understand opinions, thoughts, general themes, and patterns, a full transcription would not complement the level of analysis and was not necessary (McLellan et al., 2003). These two sets of notes from each interview served as the data for analysis. The second set of notes was created soon after each interview to maintain accuracy (Patton, 2002). All interviews and pilot interviews were conducted within a 4-week period to minimize threats to internal validity caused by attrition or attitude change related to the time of year when the interviews were conducted.

Pilot interviews were conducted in person with one education director and one executive director to gain immediate feedback on the interview questions. All other aspects of the pilot interviews were identical to the other interviews. Upon completion of these pilot interviews, minor changes were made to the interview questions to clarify meaning, and small formatting changes were made to the EA diagram to make it easier to read and comprehend. Because the questions used for the pilot interviews were essentially the same as those used in the subsequent interviews, the pilot interviews were analyzed and included in the reported results.

Analysis of interviews.

Interview analysis focused on identifying themes and important topics about the evaluation of educational programs at public gardens. The interview recordings were listened to twice; during the second review, the emphasis was on capturing quotes and identifying preliminary themes (Patton, 2002). The two sets of interview notes were further reviewed to identify categories that emerged repeatedly. Care was taken throughout to recognize and eliminate any bias toward the preliminary model created by the researcher. The resulting dominant themes were then used to modify the EA and create an improved version (Hamilton and DeMarrais, 2001; LeCompte, 2000).
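For illustration only (not part of the original study), the following minimal Python sketch shows how themes could be tallied across interview notes and ranked by the number of supporting interviews, the criterion used to order the themes reported below; all theme labels and interview identifiers are hypothetical.

```python
from collections import Counter

# Hypothetical theme codes assigned to each interview's notes during review;
# the actual themes identified in this study appear in Exhibits 5-7.
interview_themes = {
    "interview_01": {"values_evaluation", "limited_time", "goal_setting"},
    "interview_02": {"values_evaluation", "limited_expertise"},
    "interview_03": {"values_evaluation", "limited_time"},
    # ... one entry per interview (11 in this study)
}

# Count how many interviews support each theme (a theme counts once per
# interview, no matter how often it appears in that interview's notes).
theme_counts = Counter(
    theme for themes in interview_themes.values() for theme in themes
)

# Rank themes from most to least popular.
for theme, count in theme_counts.most_common():
    print(f"{theme}: supported by {count} interview(s)")
```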

Interview findings, discussion, and recommendations

Interview findings.

Several themes emerged from the 11 interviews. The interviews illuminated characteristics of the evaluation of educational programs at public gardens and yielded comments and recommendations on the EA. The findings are grouped into those related to the evaluation of educational programs at public gardens (Exhibit 5), those related to the characteristics of the evaluation process at public gardens (Exhibit 6), and comments about the EA itself (Exhibit 7). Within each group, the themes are categorized by similarity and listed in order of most to least popular, as determined by the number of interviews that supported the theme.

Exhibit 5. Major themes that emerged from the interviews conducted with public garden executive directors and education directors about the evaluation of educational programs at public gardens.

Exhibit 6. Major themes that emerged from the interviews conducted with public garden executive directors and education directors about the characteristics of the evaluation process at public gardens.

Exhibit 7. Major themes that emerged from the interviews conducted with public garden executive directors and education directors about the evaluation approach for educational programs at public gardens.

Discussion

The goal of this research was to develop a useful approach that public garden educators can use to evaluate their educational programs. This research provided a broader understanding of educational program evaluation at public gardens and highlighted how evaluation is perceived and used at each of the interviewees' organizations, all contributing to the improvement of the EA. While this research engaged 11 interviewees and only pertains to the 10 represented organizations, the findings may have implications for educational program evaluation at all public gardens.

Interviewees valued evaluation of educational programs.

Interviewees placed worth on educational program evaluation to improve program content, efficiency, and effectiveness. Conducting legitimate educational programs that the public wants and needs is vital to the success of educating the public—a key mission component for all interviewees. Interviewees said that evaluations are critical to meeting the needs of program participants and legitimizing the program to stakeholders and audiences. Interviewees valued evaluation because it helps them to keep programs within the organization's mission and in line with the program's goals.

The favorable opinion of evaluation is in large part due to its value to the interviewees' organizations. The education and executive directors also agreed that program staff must see the importance of evaluation, and they often work hard to convey this point to their staff. In most cases, those involved in the evaluation process react positively to it.

Interviewees knew what a good evaluation looks like.

After completing the interviews, it was apparent that the interviewees understood the importance of setting goals and useful benchmarks to measure those goals. They discussed why identifying and involving all relevant stakeholders was necessary for a successful evaluation. Many discussed, at length, the importance of utilization, and several commented on the value of determining why an evaluation is being conducted. All of these statements are in line with what many experts say evaluation should resemble (Patton, 2002; Rossi et al., 2004; Stake, 2004; Weiss, 1998).

Many factors limit evaluation of educational programs at public gardens.

Despite the value interviewees place on evaluation and the knowledge they have about it, they admit that evaluations are still underused, perhaps in part because of acknowledged limitations. One primary limitation is the experience level of many public garden educational program staff and managers. While all interviewees have successful programs, they also recognize that the proper use of evaluation could make those programs better. Determining proper benchmarks for measuring success is just one of several challenges public garden educators face when doing evaluations. Many do not have the knowledge or the background to use evaluations effectively in this capacity.

This lack of experience at the highest levels within the organization may also reduce the pressure to conduct evaluations; three interviewees stated that, in their organizations, there is no clear sense of who is responsible for initiating evaluation. Such weak management stances may play a part in limiting the evaluations done at public gardens.

Additionally, educational programs at public gardens cover a wide range of topics and serve a wide range of audiences. It is not possible to have one evaluation method work for all programs at a public garden. The extra time and knowledge needed to effectively conduct evaluations is also difficult to find. These factors all limit evaluation.

Interviewees found the evaluation approach useful.

It is because of these limiting factors that all 11 interviewees thought the EA would be useful for their organization and for other public gardens. Interviewees stated the EA was comprehensive, and the low number of changes suggested by the interviewees speaks to that point. Interviewees specifically liked the cyclic nature of the EA and the inclusion of goal setting. Some identified time as a limiting factor, and practicality issues were an especially salient point.

While the use of external evaluators is common among interviewees, many like to evaluate educational programs internally. The EA can effectively encourage more public gardens to conduct evaluations internally, particularly for mandatory evaluations required by donors and granting agencies.

Recommendations

Ten changes were made to the original EA, creating the improved evaluation approach (IEA) for educational programs at public gardens (Fig. 2). These changes are based on specific recommendations made by interviewees, additional published research, and the new knowledge gained during the interviews about what interviewees value in evaluation, what they find difficult about evaluation, and the organizational context in which evaluations at public gardens are conducted (Exhibit 8).

Fig. 2. An improved evaluation approach for educational programs at public gardens (schematic).

Exhibit 8. Recommended changes made to the evaluation approach for educational programs at public gardens, based on interviewee comments and further research, to create an improved version.

Conclusion

The EA combines several ideas, theories, and models suggested by experts in program evaluation. The intent of this thesis research was to make the EA a framework that can be used by any public garden for any educational program. The final IEA is a modification of the original EA based on the opinions of 11 education directors and executive directors from 10 gardens across the United States. While it is likely that the responses from the 11 interviewees are representative of responses that would be given by any public garden educator, it cannot be directly stated that this is true (Patton, 2002). Furthermore, it is important to remember that the interviewees are not necessarily experts in educational program evaluation. However, it can be assumed that the interviewees are experienced in the development and implementation of educational programs at public gardens and are pragmatically qualified to comment on evaluation with respect to program development and implementation.

All 11 interviewees stated that the IEA would be useful to them and to other gardens, and it is reasonable to consider that the IEA will be beneficial for the evaluation of all educational programs at public gardens.

Literature cited

  • Addelson, B. 2004. Teacher professional development at Missouri. Public Garden 19:30–31.

  • American Public Garden Association. 2005. American Public Garden Association 2006 membership directory. Amer. Public Garden Assn., Wilmington, DE.

  • Balick, M.J. 1986. Botanical gardens and arboreta: Future directions. New York Botanical Garden and Amer. Assn. Botanic Gardens Arboreta, Swarthmore, PA.

  • Bamberger, M., Rugh, J., Church, M. & Fort, L. 2004. Shoestring evaluation: Designing impact evaluations under budget, time and data constraints. Amer. J. Eval. 25:5–37.

  • Coker, J.S. & Van Dyke, C.G. 2005. Evaluation of teaching and research experiences undertaken by botany majors at N.C. State University. North Amer. Colleges Teachers Agr. J. 49:14–19.

  • Colón, C.P. & Rothman, J. 2004. NYBG preschool programs. Public Garden 19:28–29.

  • Dirks, A.E. & Orvis, K. 2005. An evaluation of the junior master gardener program in third grade classrooms. HortTechnology 15:433–447.

  • Eberbach, C. & Crowley, K. 2004. Learning research in public gardens. Public Garden 19:14–16.

  • Greenstein, S.T. 2004. Who goes there? The importance of doing and using audience research. Public Garden 19:37–39.

  • Hamilton, S.L. & DeMarrais, K. 2001. Visits to public gardens: Their meaning for avid gardeners. HortTechnology 11:209–215.

  • Haynes, C. & Trexler, C.J. 2003. The perceptions and needs of volunteers at a university-affiliated public garden. HortTechnology 13:552–556.

  • Johnson, B. & Christensen, L. 2004. Educational research: Quantitative, qualitative, and mixed approaches. 2nd ed. Pearson, Boston.

  • Kirkwood, H.P. 1998. Beyond evaluation: A model for cooperative evaluation of internet resources. Info. Technol. 22:66–72.

  • Klemmer, C.D. 2004. An evaluation primer. Public Garden 19:8–10.

  • Kneebone, S. 2004. Eden's environmental education outcomes. Public Garden 19:31–33.

  • Korn, R. 2004. Nonprofits, foundations, and evaluators or, where's the Advil? Public Garden 19:17–18, 39–40.

  • LeCompte, M.D. 2000. Getting good qualitative data to improve educational practice. Theory Into Practice 39:146–154.

  • Lewis, C. 2004. Youth programs: The Fairchild challenge. Public Garden 19:18–20.

  • Madaus, G.F., Scriven, M.S. & Stufflebeam, D.L. 1983. Evaluation models: Viewpoints on educational and human services evaluation. Kluwer-Nijhoff, Boston.

  • McLellan, E., MacQueen, K.M. & Neidig, J.L. 2003. Beyond the qualitative interview: Data preparation and transcription. Field Methods 15:63–84.

  • Mundy, V., Grabau, L.J. & Smith, M.S. 1999. Teaching assessment in plant and soil science and agricultural economics departments. J. Natural Resources Life Sci. Educ. 28:26–30.

  • Norland, E. 2005. The nuances of being “non”: Evaluating nonformal education programs and settings. N. Dir. Eval. 108:5–12.

  • Olien, M.E. & Hoff, J. 2004. Orland E. White school program. Public Garden 19:20–22.

  • Patton, M.Q. 2002. Qualitative research and evaluation methods. Sage Publications, Thousand Oaks, CA.

  • Phibbs, E.J. & Relf, D. 2005. Improving research on youth gardening. HortTechnology 15:425–428.

  • Relf, P.D. & Lohr, V.I. 2003. Human issues in horticulture. HortScience 38:984–993.

  • Rossi, P.H., Lipsey, M.W. & Freeman, H.E. 2004. Evaluation: A systematic approach. 7th ed. Sage Publications, Thousand Oaks, CA.

  • Saunders, W.L. 1992. The constructivist perspective: Implications and teaching strategies for science. Sch. Sci. Math. 92:136–141.

  • Scantlebury, K., Boone, W., Kahle, J.B. & Fraser, B.J. 2001. Design, validation, and use of an evaluation instrument for monitoring systemic reform. J. Res. Sci. Teach. 38:646–662.

  • Schneider, L.S. & Renner, J.W. 1980. Concrete and formal teaching. J. Res. Sci. Teach. 17:503–517.

  • Scriven, M. 1981. Evaluation thesaurus. Edge Press, Point Reyes, CA.

  • Shoemaker, C.A., Relf, P.D. & Lohr, V.I. 2000. Social science methodologies for studying individuals' responses in human issues in horticulture research. HortTechnology 10:87–93.

  • Stake, R.E. 2004. Standards-based and responsive evaluation. Sage Publications, Thousand Oaks, CA.

  • Stufflebeam, D.L. 2001. Evaluation models. N. Dir. Eval. 89:1–106.

  • Tanck, S. 2004. Minnesota school programs. Public Garden 19:22–23.

  • Weiss, C.H. 1998. Evaluation: Methods for studying programs and policies. 2nd ed. Prentice-Hall, Upper Saddle River, NJ.

  • Wells, M. & Butler, B.H. 2004. A visitor-centered evaluation hierarchy: Helpful hints for understanding the effects of botanical garden programs. Public Garden 19:11–13.

  • White, J.M. 2004. UCBG peer evaluation. Public Garden 19:26–28.

  • Wiltz, L.K. 2005. I need a bigger suitcase: The evaluator role in nonformal education. N. Dir. Eval. 108:5–12.

Contributor Notes

Corresponding author. E-mail: ajsteil@iastate.edu.
