In the 2015 Delaware (DE) Master Gardener training, instructors synchronously delivered content to two trainee cohorts (Cohorts A and B) who met at three locations (Sites 1, 2, and 3) via video web conferencing (VWC). This reduced instructor delivery and travel time but warranted close examination of trainee learning outcomes and experiences. To evaluate the pilot implementation of remote delivery, trainees [number of trainees (N) = 30] answered two open-ended application questions after 11 instructional sessions. One cohort received instruction face-to-face, while the other cohort synchronously received instruction via remote delivery [number of participants per cohort (n) = 17 and 13]; each cohort was remote for about half of the sessions. The overall average face-to-face score assessing session content mastery was higher than the overall average remote score by 0.1, a 5% difference given the possible score range of 0 to 2.0. When we grouped sessions by remote delivery site, delivery mode only significantly predicted average session scores for those sessions delivered remotely to Site 2 and not those delivered remotely to either Site 1 or Site 3. When we considered each session individually, delivery mode significantly predicted session scores for 2 of the 11 sessions, both broadcast remotely to trainees at Site 2, where the bandwidth was 10% of that at Sites 1 and 3. We suggest the VWC system performed particularly poorly for these sessions due to limited bandwidth. Posttraining survey results suggest the VWC system did not function well enough to approximate face-to-face instruction. The overall educational rating of the training was significantly higher than the media naturalness rating, suggesting poor technical functionality did not substantially undermine trainees’ perception of the education they received.
This study indicates remote delivery is a viable strategy for improving the efficiency of training programs if it is consistently implemented with the appropriate technical infrastructure.
In an operating mode of shrinking resources and expanding clientele, the Cooperative Extension System is continually asked to do more with fewer funds (Lyons et al., 2008), and the Master Gardener training program directly experiences these budget cuts (Young, 2007). In accommodating financial limitations, it is important to identify cost-saving measures that do not compromise the mission of the Master Gardener program to develop horticultural experts for the community. One strategy employed by several states to respond to budget reductions is the use of distance education to reduce the number of face-to-face trainings across the state (McGinnis, 2015; Stack, 1997; Warmund and Schrock, 1999). Upon the introduction of innovations such as VWC, it is crucial for training standards to be maintained given both their value to trainees (Schrock et al., 2000a, 2000b) and importance for preparing volunteers to provide agricultural education in their communities (Doerfert, 2011).
Although there is not a universal definition, we operationalize distance education as “institution-based, formal education where the learning group is separated, and where interactive telecommunications systems are used to connect learners, resources, and instructors” (Simonson et al., 2009). This definition highlights three key features of distance education, which must be explicitly addressed in the instructional design and delivery: 1) learners are physically separate from one another and/or the instructor, 2) some type of digital technology is mediating communication, and 3) the learner is interacting with other learners, the content (resources), and the instructor (Moore, 1989). Learner interactions are a critical component of distance education positively associated with student learning outcomes (Bernard et al., 2009). Decreased interaction can lead to learner or instructor misunderstandings (Moore, 1993) and negatively influence students’ perceptions of their own learning gains (Chen and Willits, 1998), one measure of perceived course quality. Thus, course designers should consider the impact of digital technology choices on interaction.
One way to consider how a specific digital technology might impact a learning experience is to evaluate it in terms of its media naturalness. Media naturalness theory suggests face-to-face interaction is biologically conditioned in humans, and digital communication media should provide for the same elements of communication that are available in face-to-face communication (Kock, 2005). These include colocation (sharing the same space), synchronicity (quick exchange of communication), access to speech, access to facial expressions, and access to body language. If these elements are limited, such as an inability to clearly hear speech, the technology-mediated communication between the learner and the instructor or other learners moves away from face-to-face interaction, potentially introducing ambiguity and the need for greater cognitive effort (Kock and Garza, 2011). Distance education includes synchronous instruction, occurring for all learners at the same time, or asynchronous instruction, occurring at different, learner-chosen times (Hrastinski, 2008; Offir et al., 2008; Skylar, 2009). Although both can be effective (Skylar, 2009), VWC is typically used for synchronous instruction, and, when functioning properly, provides relatively high media naturalness. Learners and instructors have access to speech, facial expressions, and body language in real time through the combined video and audio channels. However, design specifications of each platform and the system infrastructure (e.g., wireless Internet access) can impact these elements.
Previous investigations of distance education strategies for Master Gardener training identified equivalent learning outcomes when compared with an in-person comparison group (Jeannette and Meyer, 2002; Stack, 1997; VanDerZanden et al., 2002; Warmund and Schrock, 1999), in alignment with the well-established, no-significant-difference phenomenon in online learning (Russell, 2010). This work began in the 1990s with researchers studying the use of interactive television (ITV) to increase Master Gardener training reach (Stack, 1997), finding that distance learners receiving ITV instruction performed as well as local learners on weekly quizzes. More recent investigations align with the development of new technologies such as VWC (McGinnis, 2015) and online modules (Jeannette and Meyer, 2002; Meyer et al., 2012; VanDerZanden et al., 2002). Jeannette and Meyer (2002) found similarly significant increases in horticultural knowledge for both classroom instruction and self-guided modules. On self-reported measures of perceived learning, Meyer et al. (2012) found participants were more comfortable with the module topics, but this was not in comparison with a face-to-face group. Although there is typically greater learner interactivity in online modules as compared with VWC (Moore et al., 2011), Master Gardener trainees did not perceive the interactive features of modules as useful and instead identified flexibility as the primary advantage of modules (VanDerZanden et al., 2002).
Although there has been no significant difference between measured learning outcomes from remote and face-to-face Master Gardener training, researchers have identified concerns with volunteer retention (Stack, 1997) and technical limitations (McGinnis, 2015; Warmund and Schrock, 1999) associated with distance learning. If Master Gardeners do not complete volunteer commitments following training, their reduced retention may negate the cost savings of remote training due to the need for more frequent trainings. Rohs et al. (2002) suggested prioritizing the trainees’ perceived benefits of the program to increase volunteer retention as one strategy for cost savings. Master Gardener programs across the country see a variety of volunteer commitment levels. For example, the Oregon Master Gardener program reported 5-year retention rates across 20 counties ranging from 31% to 79%, with a mean of 50% (Langellotto, 2013). Researchers have hypothesized reduced retention may be a product of dissatisfaction with the education volunteers received (Strong and Harder, 2011). Education has been identified as one of the leading motivational factors for volunteering and the greatest benefit of the program (Boyer et al., 2002; Rohs et al., 2002; Schrock et al., 2000a, 2000b). The opportunity for informative and engaging horticultural education draws in and provides personal enhancement for volunteers. Thus, it is critical to maintain standards of training quality to not only ensure Master Gardeners are prepared to engage with community clientele (e.g., Meyer and Jarvis, 2003) but also to deliver an educational experience that provides value in exchange for volunteers’ commitment to the program.
Delaware Cooperative Extension is committed to maintaining the high 5-year Delaware Master Gardener retention rate of 84% for 2011–16 (R.A. Pelly, unpublished data). Close examination of a new training innovation on both volunteer learning outcomes and training perceptions is required to accomplish this goal.
In this study, we explored two research questions: “What differences, if any, exist between Master Gardener trainees’ learning when they receive instruction via face-to-face vs. VWC remote delivery,” and “How do Master Gardener trainees perceive the overall training when VWC was used for remote delivery?” Learning is operationalized as the trainees’ ability to use content from the session to answer application questions similar to those they encounter as a Master Gardener. Based on previous findings of equal learning in remote and face-to-face instruction (Nguyen, 2015; Storck and Sproull, 1994), we hypothesized there would be no significant difference between learning outcomes for participants in the remote vs. face-to-face sessions. The only variation between the remote and face-to-face conditions was delivery format; the session content and instructional strategies remained constant. We hypothesized that Master Gardeners’ perceptions of the overall training would align with the technical functionality of the system, based on learners’ ability or inability to access the elements of face-to-face communication (Kock, 2005). The findings from this project will inform the scale-up of remote delivery with VWC as an instructional mode for training a wide variety of extension volunteers.
Materials and methods
Delaware Master Gardener training includes 12 weeks (22 sessions) of lectures, garden tours, hands-on workshops, and trainee-led sessions. Instructors (university faculty and extension professionals) present on horticultural topics (Table 1) for ≈3 h per lecture session with text and images for visual support (Fig. 1). Trainees receive a physical copy of the lecture slides as a handout. In former trainings, lecturers presented the same material twice to two training cohorts in their respective counties. In Fall 2015, to reduce the time demands on these educators and overall cost of training, DE combined the previously county-based local lectures into a series of statewide training sessions, rotating the site of the live presentations and using Zoom VWC (Zoom Video Communications, San Jose, CA) to connect the other site. The new training model met the definition of distance education. The remote cohort was physically separate from the instructor and learners in the live cohort, with all communication mediated by the Zoom VWC interface.
Differences between face-to-face and remote session scores by session for all training sessions using video web conferencing in the Fall 2015 Delaware Master Gardener training with indication of which site and cohort received remote instruction.
In this study, we evaluate learning outcomes from 11 of the lecture training sessions delivered synchronously via Zoom VWC. Master Gardener trainees were grouped in two cohorts (A and B); one cohort received face-to-face instruction while the other received remote instruction, alternating by session such that all trainees experienced both conditions. The location of the live instructor rotated, such that Cohort A received remote instruction for six sessions (Sessions 1, 3, 4, 6, 8, and 10), and Cohort B received remote instruction for five sessions (Sessions 2, 5, 7, 9, and 11). Before instructional delivery, instructors each watched a study-created instructional video on best practices for remote delivery using VWC.
Cohort A represented one county, so participants in Cohort A attended all trainings at one site (Site 1). Cohort B represented two adjacent counties, so participants in Cohort B attended trainings at two sites (six sessions at Site 2, five at Site 3). VWC requires both video and audio streams, placing a heavy demand on Internet bandwidth, which is the amount of data that can be transmitted in a set time. Sites 1 and 3 had substantially higher bandwidth capacities [≈100 megabits (Mb) per second] than Site 2 (≈10 Mb per second).
The participants for this study were Master Gardener volunteer trainees (N = 30; Cohort A, n = 13; Cohort B, n = 17). Master Gardener candidates apply and are selected by Cooperative Extension staff based on their gardening experience and willingness to volunteer. Participants completed a pretraining survey, questions after each of the 11 sessions, and a posttraining survey. The University of Delaware and University of Virginia Institutional Review Boards approved all measures. We administered all measures via e-mailed links through Qualtrics (Provo, UT), an online survey platform. To be included in the final sample, participants had to answer session questions within 1 week of the session. Out of 33 potential participants in the training class, two trainees from Cohort B declined to participate in the study, and one from Cohort B consented for participation but did not answer session questions within the required 1 week.
We e-mailed the pretraining survey link following the orientation meeting and participant consent. The survey included questions about demographics and trainees’ gardening frequency as measured by the number of days per week they work in their own or others’ gardens. The survey also included an eight-item comfort/anxiety subscale of the attitudes toward computer technologies survey [ACTS (Kinzie et al., 1994)]. Questions such as “Computer technologies are confusing to me” gauged participants’ comfort with computer technologies.
After each of the 11 sessions, participants responded to two open-ended questions based on the material covered that day. Questions required participants to apply content knowledge to a horticultural situation they might encounter. For example, a question following a session on landscape management was, “You are on the tree committee for your community, which is planning a large tree planting project. Someone suggested using bare root trees for the project. Is that a good idea? Why or why not?” At the end of each session, volunteers received questions via an online link in an e-mail. To standardize the amount of time between training and responding to the questions, we asked trainees to answer questions within 48 h. Any responses submitted more than 1 week after the corresponding training session were removed from the analyses, as we felt those became knowledge-retention measures. To most accurately simulate providing horticultural advice in their future role, we instructed participants to answer questions without consulting additional resources.
To arrive at two questions per session, we wrote four questions, based on instructor-provided session content. Instructors evaluated their questions for content validity. Then, we pilot-tested the questions with 23 University of Delaware Botanic Garden volunteers and existing Master Gardeners who experienced similar training in previous years. We coded the pilot-test responses to develop a coding manual with model responses and point indicators for each question. The total possible points for each question ranged from three to five, depending on the number of details that could be included in a complete response. We chose two questions for each session for which there was sufficient consistency in interpretation and range in accuracy of responses during pilot testing. Session instructors reviewed the selected questions and coding manual model responses a final time to confirm content validity.
Two coders, blind to site and delivery mode, coded all responses to open-ended questions using point indicators identified in the coding manual (1 point for each correct detail in the answer, ranging from 3 to 5 total points). Coders included a primary author, who was not an instructor but contributed to writing the original questions and coding manual, and a horticultural expert, holding a master’s degree in horticulture. After coders independently scored responses, they discussed their codes and came to 100% agreement on discrepancies for final scores.
To equate questions with different numbers of possible points for analyses, we normalized each session question based on the potential range of scores for that question [(value − minimum)/(maximum − minimum)] to make all scores 0–1. We summed the two questions for each session for a total session score between 0 and 2 and averaged all session scores to find each participant’s overall session score. Additionally, based on his or her cohort’s delivery schedule, we computed a separate remote and face-to-face average score for each participant. After training completion, we provided an optional session and reviewed answers for all session questions to address any misunderstandings.
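The scoring procedure above can be sketched as follows; all raw scores and session values in this example are hypothetical, not the study’s data.

```python
# Sketch of the session scoring procedure described above; the raw
# scores below are hypothetical, not the study's data.

def normalize(value, minimum, maximum):
    """Rescale a raw question score to the 0-1 range."""
    return (value - minimum) / (maximum - minimum)

# One hypothetical session: two questions with different point ranges
# (total possible points per question ranged from 3 to 5, minimum 0).
session_raw = [(2, 0, 4), (3, 0, 5)]  # (raw score, minimum, maximum)

# Session score: sum of the two normalized question scores (range 0-2).
session_score = sum(normalize(v, lo, hi) for v, lo, hi in session_raw)

# Overall score: mean of all of a participant's session scores.
all_session_scores = [session_score, 1.4, 0.9]  # other sessions hypothetical
overall_score = sum(all_session_scores) / len(all_session_scores)
```

The same averaging, restricted to a participant’s remote sessions or face-to-face sessions, yields the per-mode scores used in the analyses.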
Participants completed a posttraining survey on the entire training experience after graduation from Master Gardener training. We included a combination of four Likert-scaled questions [possible responses ranged from 1.0 (not at all) to 5.0 (very much)] and four open-ended items to evaluate their overall satisfaction with the educational experience of the training program. We averaged responses to the four scaled questions to create a perceived educational quality rating score. The open-ended questions asked 1) about the best part of the training, 2) how to improve sessions, 3) which sessions or activities should be added or removed, and 4) additional comments. The first and second authors independently coded all responses and came to 100% consensus on any code applications that did not match. To specifically evaluate participants’ remote delivery experience, the survey also included five Likert-scaled questions [e.g., To what extent were you able to hear the instructor’s speech?; response options ranged from 1.0 (not at all) to 5.0 (the same as if it were in-person instruction)] and one open-ended question about participants’ perceptions of the face-to-face communication elements described by media naturalness theory (Kock, 2005) during remote delivery training sessions. We aggregated the five media naturalness items to create one mean technical functionality score with moderate internal reliability (α = 0.74).
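For readers unfamiliar with the internal reliability statistic reported above, Cronbach’s alpha can be computed as sketched below; the response matrix is hypothetical, not the study’s survey data.

```python
# Minimal sketch of Cronbach's alpha for a multi-item scale; the
# responses below are hypothetical, not the study's data.
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array-like, rows = respondents, columns = scale items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Five hypothetical media naturalness ratings from four respondents.
responses = [
    [2, 1, 2, 2, 1],
    [1, 1, 2, 1, 2],
    [3, 2, 3, 3, 2],
    [2, 2, 2, 3, 3],
]
alpha = cronbach_alpha(responses)
```

Higher alpha indicates the items covary more strongly and can reasonably be averaged into one score, as was done for the technical functionality rating.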
To investigate Master Gardeners’ learning in face-to-face vs. remote sessions, we considered the difference between each participant’s remote and face-to-face average score for analyses in which each participant has two dependent scores (a remote and face-to-face score). We considered the significance of delivery mode as a predictor of outcomes for analyses in which each participant has only one score (a remote or face-to-face score).
We used dependent samples t tests to compare each participant’s remote and face-to-face score both within cohort and overall. This method accounts for differences in individuals’ prior knowledge by pairing each person’s scores, taking the difference between them, and evaluating whether the average difference is significantly different from zero. We used linear regression to examine the influence of delivery mode for each training site when instruction was remotely delivered there and to examine individual sessions. We regressed each participant’s score on a binary dummy variable for delivery mode (remote = 0, face-to-face = 1) with education, gardening frequency, and technology comfort as covariates to control for differences between individuals. When separated by site or sessions, each individual has only one score, meeting the independence assumption of regression.
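The two analyses above could be run roughly as follows; all scores and covariate values here are hypothetical, and the study’s actual models included additional covariates and were fit in SPSS.

```python
# Hedged sketch of the paired t test and dummy-coded regression
# described above; all values are hypothetical, not the study's data.
import numpy as np
from scipy import stats

# Dependent (paired) samples t test: each participant contributes both
# a remote and a face-to-face average score.
remote = np.array([1.0, 0.9, 1.2, 1.1, 0.8])
face_to_face = np.array([1.1, 1.0, 1.3, 1.0, 1.0])
t_stat, p_value = stats.ttest_rel(face_to_face, remote)

# Linear regression of session score on a delivery-mode dummy variable
# (remote = 0, face-to-face = 1) plus a covariate, via least squares.
mode = np.array([0, 1, 0, 1, 0, 1])
education = np.array([3, 4, 2, 4, 3, 5])  # hypothetical covariate
scores = np.array([0.9, 1.2, 1.0, 1.3, 0.8, 1.1])
X = np.column_stack([np.ones_like(mode), mode, education])  # intercept first
coefficients, *_ = np.linalg.lstsq(X, scores, rcond=None)
mode_effect = coefficients[1]  # estimated face-to-face advantage
```

A positive coefficient on the delivery-mode dummy corresponds to higher scores under face-to-face instruction, matching the sign convention reported in the results.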
We compared the means for participants’ perceptions of the technical aspects and educational aspects of the program with a dependent sample t test. To explore the relationship between VWC and trainee perceptions of the overall training, we focused primarily on open-ended questions that did not directly ask about the technology. First, we identified all statements referring to VWC. Then, we deductively identified themes across these responses (Boyatzis, 1998) and came to 100% consensus on the application of four themes. We used SPSS Statistics (version 23.0; IBM Corp., Armonk, NY) for all quantitative analyses and Dedoose (version 7.0.16, 2016; Dedoose, Los Angeles, CA) for qualitative analyses.
The sample demographics of this study are similar to those of previous Master Gardener studies (Jeannette and Meyer, 2002; Schrock et al., 2000a). Chi-square tests of independence indicated Cohorts A and B had no significant differences in gender, education, age, or ethnicity. Participants were predominantly female (n = 18, 60.0%) and white (n = 27, 90.0%; n = 3, 10.0% other). The most frequent level of education was a master’s degree (n = 14, 46.7%), followed by a 4-year college degree (n = 7, 23.3%), some college or a 2-year degree (n = 6, 20.0%), and a doctoral degree (n = 3, 10.0%). On the 32-point technology comfort scale, trainees scored an average of 25.7 (sd = 6.0), indicating relatively high comfort with technology. An independent samples t test supported no significant difference between cohorts.
The majority of trainees reported gardening 4–6 times per week (n = 14, 46.7%), followed by 2–3 times per week (n = 9, 30.0%), 7 times per week (n = 4, 13.3%), and 1 time per week (n = 3, 10.0%). A chi-square test of independence supported no significant difference between cohorts.
The overall average face-to-face score (1.13) was higher than the overall average remote score (1.03). This result was statistically significant [P = 0.019 (Table 2)]. The difference between the two delivery modes of 0.1 is 5% of the scale from 0 to 2.0. The mean scores within each session were not consistently higher for the face-to-face or remote conditions. For several sessions, the mean scores differed by 0.03 or less (Sessions 3, 4, and 6). When we grouped participants by cohort, there was no significant difference between remote and face-to-face scores (Table 2). When we grouped sessions by remote delivery site, delivery mode significantly predicted scores only for those sessions delivered remotely to Site 2 (mean = 1.03, sd = 0.34) (Table 3). Trainees receiving face-to-face instruction when the remote group was at Site 2 scored 0.37 points higher than those receiving remote instruction, ≈1 sd higher. Delivery mode did not predict the average session scores when the remote delivery site was either Site 1 (mean = 1.08, sd = 0.22) or Site 3 (mean = 1.19, sd = 0.37).
Comparison of remote and face-to-face mean scores for Cohorts A and B in the Fall 2015 Delaware Master Gardener program.
Regression results of delivery mode as a predictor of session scores at the Fall 2015 Delaware Master Gardener training program’s three training sites.
Results of the regression analyses on each session suggest delivery mode significantly predicted session scores for 2 of the 11 sessions: Botany 1 (Session 2) and Integrated Pest Management (IPM) 2 (Session 9) (Table 4). The adjusted coefficient (β) represents the difference between delivery modes; a positive value indicates face to face was higher. Botany 1 session scores for trainees in the face-to-face condition were 0.32 points higher than those of trainees in the remote condition, about three-quarters of a standard deviation higher, and the model explained 19.4% of the observed score variance. Integrated Pest Management 2 scores for trainees in the face-to-face condition were 0.60 points higher than those of trainees in the remote condition, about two-thirds of a standard deviation higher, and the model explained 40.6% of the observed score variability for IPM 2 scores.
Regression results of delivery mode as a predictor of session scores at the Fall 2015 Delaware Master Gardener training program by individual sessions.
When asked about their perceptions of the educational quality of the training (Table 5), participants were most positive when rating the training overall and their increase in horticultural knowledge. While still generally positive, trainees were less confident when asked about using acquired knowledge to find accurate information to address horticultural questions and teach gardening practices to others. Confidence in teaching gardening practices was the only item whose responses included a 1, indicating a trainee was “not at all” comfortable teaching; this item also had a mode of 3, as opposed to 4 for the other items. Participants rated the VWC system poorly when they were specifically asked to evaluate the technical functionality of VWC during remote sessions in terms of its similarity to in-person instruction, referred to as media naturalness (Kock, 2005). Results of the dependent t test indicate the mean score for technical functionality, or media naturalness, of the remote delivery sessions (1.88) was significantly less (P < 0.001) than the mean rating of the perceived educational quality of the overall training (4.02) (Table 5). The Zoom web conferencing environment provides an image of the instructor (Fig. 1) in the bottom right-hand corner of the screen. The small size of this image could reduce access to facial expressions and body language.
Descriptive results of scaled questions about perceptions of training (respondents, N = 25) from posttraining survey administered at the end of the Fall 2015 Delaware Master Gardener training.
Out of 25 trainees who responded to the exit survey, 9 trainees (36%) explicitly referenced the VWC system in at least one of four open-ended questions about the overall training (described above) that did not specifically ask about the distance experience. Out of 100 responses (four for each trainee), 12 included references to the VWC system. Several themes emerged, including poor audio quality, poor visual access to the instructor/material, the instructors’ novice use of VWC, and a general dissatisfaction with the system (Table 6). Instructors’ novice use of the system included both references to a lack of rapport when the instructor was remote and the instructors’ discomfort lecturing via VWC.
Themes about video web conferencing from open-ended questions on the posttraining survey administered at the end of the Fall 2015 Delaware Master Gardener training with supporting examples of each theme.
The intensity of negative comments ranged widely, from extremely critical to somewhat constructive (Table 6). Additionally, several trainees made clear positive statements coupled with indication that the VWC system was not entirely successful (“I thoroughly enjoyed the program and learning, dispite [sic] some difficulty with sound. I wish we could continue meeting to learn”), suggesting a net positive feeling as opposed to a net negative feeling about the training.
Overall, evaluation of VWC in the 2015 Master Gardener training suggests trainees were more able to use content from the sessions to answer application questions when they received face-to-face instruction as opposed to remote instruction. While we hypothesized there would be no significant difference between learning outcomes from participants in the remote vs. face-to-face sessions based on earlier investigations of distance education for Master Gardener training (Jeannette and Meyer, 2002; Stack, 1997; VanDerZanden et al., 2002; Warmund and Schrock, 1999), the results indicate there was a reduction in learning outcomes for participants in the remote sessions. It is important to consider both the distance education format used in each study and the quality of its technical implementation. Previous studies measuring learning outcomes investigated ITV and online modules, while this study investigated a VWC environment. Additionally, technical implementation quality may contribute to the discrepancy found between the first hypothesis and previous research.
There was not a consistent pattern of lower remote delivery scores across the sessions. In fact, only two sessions (Botany 1 and IPM 2) reflected the overall result of lower scores following VWC, so there may have been something different about those sessions. Botany 1 and IPM 2 were taught by different instructors, and each was one part of a two-part lecture, providing some evidence that the instructor and the session content were not likely responsible for the difference. Although both sessions were remote for Cohort B, Cohort B did not perform significantly worse overall on remote sessions, suggesting there may be an alternative explanation. We identified one key similarity between Botany 1 and IPM 2; both sessions were broadcast remotely to trainees at Site 2. The bandwidth at Site 2 was 10 Mb per second, 10% of the bandwidth at Sites 1 and 3 (100 Mb per second). Further, when we examined session scores by remote delivery site, trainees performed significantly worse when they were remote at Site 2, but not at Sites 1 and 3. We suggest the VWC system likely performed particularly poorly for these sessions due to limited bandwidth. Although participants’ low evaluation of media naturalness for the entire VWC experience supports generally low technical functionality, we did not measure this for each session to link low media naturalness to specific sessions.
We also hypothesized Master Gardeners’ perceptions of the overall training would align with the technical functionality of the system. Our results did not support this hypothesis. In fact, despite a clear reduction in media naturalness, trainees’ perceptions of the training were predominantly positive, suggesting the low technical functionality of the environment did not substantially affect their overall opinions. The trainees’ open-ended responses expressed dissatisfaction when the technical system malfunctioned. However, it is likely positive aspects of the training outweighed any negative influences of VWC. The items with the highest responses were those pertaining to the overall training and increase in horticultural knowledge. As horticultural knowledge is consistently identified as the primary benefit of Master Gardener participation (Boyer et al., 2002; Rohs et al., 2002; Schrock et al., 2000a, 2000b), it seems the Master Gardener training met this desire.
The items with the lowest responses were those pertaining to trainee ability to perform their job tasks: answering gardening questions and teaching gardening practices. High instructional self-efficacy, the belief one is able to effectively teach others, has been positively related to Master Gardeners’ retention (Strong and Harder, 2011), suggesting future training should explicitly foster teaching practices. Additionally, there was some indication trainees felt distant from the instructors, expressed as a lack of rapport; low interaction between learner and instructor can reduce perceived learning (Chen and Willits, 1998), and this may have been the case for trainees’ perceptions of their instructional skills. As distance education literature supports the possibility of high-quality interaction between learners and instructors in VWC environments (Anderson, 2003), future implementation of VWC should provide practice sessions for instructors to increase their overall comfort engaging with remote learners to achieve this quality interaction. However, trainees’ uneasiness about their ability to perform instructional tasks may also be associated with other factors such as a lack of previous teaching experience, as is the case with classroom teachers (Wolters and Daugherty, 2007).
Limitations of this study beyond technical functionality include a small convenience sample; differences among the delivery sites, instructors, and session content; and the low frequency with which we measured perceptions of quality. Each cohort received remote delivery at a different location and worked with county-specific extension agents who facilitated sessions. The physical resources for VWC (e.g., speaker systems) may have differed in quality between sites beyond the documented differences in bandwidth. Personnel may also have had different levels of experience and training with the VWC system. Four instructors, who likely differed in their comfort with and beliefs about distance learning, taught the 11 sessions. Although instructors watched an instructional video about remote delivery best practices, they did not receive training or planning support for lectures delivered remotely. Although this was mitigated by the study design, in which the same content was presented to both the remote and face-to-face cohorts, it is possible the content of the sessions with mode discrepancies (Botany 1 and IPM 2) was particularly challenging for a remote delivery format. Finally, future research and trainings should measure trainees’ perceptions of technical and educational quality more frequently so that a session-specific relationship can be established.
In addition to measuring trainee perceptions, Cooperative Extension staff should continue to objectively monitor session quality in terms of both the VWC system’s technical performance and participant learning. Such monitoring could inform targeted review of content that may have been particularly affected by poor delivery. Master Gardeners have identified a desire for continued education and training (Moravec, 2006), suggesting they would be amenable to review and further educational support. It is critical for future Master Gardener training programs, both in DE and across the country, to ensure any new technical innovation is sufficiently supported in terms of infrastructure and staff to prevent significant reductions in functionality, a persistent problem in online or remote Master Gardener trainings (McGinnis, 2015; VanDerZanden et al., 2002; Warmund and Schrock, 1999). Because a previous investigation identified reduced retention among remote Master Gardener learners (Stack, 1997), remote training may have distal impacts on volunteer participation. Remote delivery is a viable strategy for improving the efficiency of training programs if it is consistently implemented with the appropriate technical infrastructure.
Anderson, T. 2003. Modes of interaction in distance education, p. 129–144. In: M.G. Moore and W.G. Anderson (eds.). Handbook of distance education. Lawrence Erlbaum Assoc., Mahwah, NJ.
Boyatzis, R.E. 1998. Transforming qualitative information: Thematic analysis and code development. Sage Publ., Thousand Oaks, CA.
Doerfert, D.L. 2011. National research agenda: American Association for Agricultural Education’s research priority areas for 2011–2015. Amer. Assn. Agr. Res., Lubbock, TX.
Langellotto, G.A. 2013. Annual report of the Oregon State University Master Gardener program. 10 Mar. 2016. <http://extension.oregonstate.edu/mg/sites/default/files/2012_mg_annual_report_final.pdf>
Moore, M.G. 1993. Theory of transactional distance, p. 22–38. In: D. Keegan (ed.). Theoretical principles of distance education. Routledge, New York, NY.
Russell, T.L. 2010. No significant difference. 20 Sept. 2015. <http://nosignificantdifference.org/>
Simonson, M., S. Smaldino, M. Albright, and S. Zvacek. 2009. Teaching and learning at a distance: Foundations of distance education. 4th ed. Prentice Hall, Englewood Cliffs, NJ.