Literature Review

Background

In the United States, utilities (whether privately owned or public) generally operate as regulated natural monopolies. Under this business and regulatory model, government grants a utility a franchise to operate a monopoly in a particular (generally geographically delineated) market, but exerts regulatory control over the operation and pricing of the utility (Posner, 1968, p. 52). One of the key factors considered by state regulatory agencies when setting the rate of return for electric utilities is reliability, which can be crudely expressed as the percentage of time that the utility’s customers, considered as a whole, experience normal service. This covers both routine maintenance and operation of the system and recovery after outages (New York Public Service Commission, 2016, pp. 10, 99). Electric utilities therefore have a strong and direct financial motivation to restore service to customers quickly after a major event.

In such events, personnel not normally assigned to field work on the electric system are assigned to perform damage surveys on the distribution system, freeing trained and licensed electrical workers to concentrate on repairs. Training for these damage assessors is critical, and the role is increasingly supported by technology ranging from handheld mobile devices to remotely controlled drones (Freeman et al., 2010, pp. 5-6; Kullmann, 2013; Cadman, 2015; Mena Report, 2016). Even so, the effectiveness of the assessors is often rated as inadequate (Kullmann, 2013).
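
To make the “percentage of normal service” notion concrete, the sketch below computes a service-availability figure in the spirit of the industry-standard ASAI index (IEEE Std 1366); the framing and all numbers are the current author’s illustration, not drawn from the sources cited above.

```python
# Illustrative sketch only: reliability as the fraction of customer-hours
# with normal service, in the spirit of the ASAI index. Numbers are hypothetical.

HOURS_PER_YEAR = 8760

def service_availability(customers_served: int,
                         customer_hours_interrupted: float) -> float:
    """Fraction of customer-hours in a year during which service was normal."""
    demanded = customers_served * HOURS_PER_YEAR
    return (demanded - customer_hours_interrupted) / demanded

# Example: 100,000 customers; outages cost 250,000 customer-hours in a year.
print(f"{service_availability(100_000, 250_000):.5%}")  # ~99.97146%
```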

Justification for Project Direction

There are multiple reasons why this specific training requirement can best be fulfilled using virtual reality.

Cost

Use of virtual reality can result in cost savings over conducting the same training in instructor-led courses, both because of reduced labor costs for instructors and reduced or eliminated travel time (Tretsiakova-McNally et al., 2017, pp. 7506, 7512; Rovaglio & Scheele, 2011; Lloréns et al., 2015). Since the majority of such training will be conducted by or on behalf of for-profit businesses, cost is likely to be a significant factor in the choice of modality.

Safety

Virtual reality training environments (and simulations in general) can clearly improve safety for students by permitting them to train in situations that only simulate real-world dangers. In this particular case, damage assessors can be expected to encounter downed and potentially live high-voltage electric lines, chemical spills, and other hazards. The use of VR can improve their skills before they are exposed to these hazards in reality (Barilli, 2012; Rovaglio & Scheele, 2011). This can be expected to reduce the incidence of injury not only during training itself but also during actual performance of the task, because realistic training in (simulated) dangerous situations improves performance.

Availability/Convenience

It is the nature of major outage events that they are often hard to predict accurately more than a few days in advance. For instance, hurricane warnings are typically received approximately 24 hours before gale-force winds are felt (Regnier, 2008), and even less warning may be available for other hazards such as tornadoes. Damage assessors may need to be trained or retrained on extremely short notice to be effective (and safe). A distributed VR training system would enable training “on demand” and as needed.

General Applicability Across Geographies and Utilities

The product would, in principle, be usable anywhere in the world that uses above-ground electric distribution. Since the skills being taught do not depend on technical knowledge of, or operation of, the equipment in use, customizing the VR environment to match a given utility’s portfolio of equipment could be done simply by swapping 3D models, as sketched below; behavior and instructional content would not change. The final product could be used or purchased at the local utility level, by government agencies such as the US Federal Emergency Management Agency (FEMA), or even by international agencies or non-governmental organizations such as the Electric Power Research Institute (EPRI).
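
A minimal sketch of this separation of behavior from models follows; the utility names, asset file names, and lesson steps are hypothetical placeholders, not part of any actual product design.

```python
# Sketch: per-utility customization by swapping 3D model assets only.
# The lesson logic and instructional content are shared; nothing below
# reflects a real utility's equipment catalog.

SHARED_LESSON_STEPS = ["locate_damage", "classify_damage", "report_damage"]

UTILITY_ASSET_PACKS = {
    "utility_a": {"pole": "pole_wood_40ft.dae", "transformer": "xfmr_padmount.dae"},
    "utility_b": {"pole": "pole_concrete_12m.dae", "transformer": "xfmr_polemount.dae"},
}

def build_scene(utility: str) -> dict:
    """Same lesson steps for every utility; only the models differ."""
    return {"models": UTILITY_ASSET_PACKS[utility], "steps": SHARED_LESSON_STEPS}

print(build_scene("utility_b")["models"]["pole"])  # pole_concrete_12m.dae
```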

Learner Selection

As mentioned above, the author has defined the learners as those damage assessors who do not perform this task in their day-to-day work and who may require additional training with very little warning before performing this critical task in the field, in a potentially hazardous environment.
Kullmann (2013) reports that only 19% of his survey respondents were able to complete damage assessments on time, and that 83% of respondents were not satisfied with the current operation of their damage assessment systems.

Given the importance of, and need for, training demonstrated by Kullmann, learner selection becomes very straightforward.

Training Modality and Technology Selection

There are several potential ways the required training can be conducted. Instructor-led classroom and workshop training would be possible, as would asynchronous “static” training (verbal directions [spoken and/or written], static images, verbally-given responses), video-based training, the provision of job aids or performance support software, and virtual-reality based online training (which could be synchronous or asynchronous and individual or group).

Instructor-led courses were eliminated from consideration because skilled instructors are precisely the electric system experts who will be needed for other roles in storm restoration (Jump, 2003).

“Static” (to coin a term) eLearning, involving verbal directions, information, and assessment, is certainly plausible and might be valuable, but since the course is meant to inculcate performance of specific tasks involving non-verbal actions, a more directly applicable method was sought.

A video-based course using recorded video would be a plausible method, but interaction with a virtual environment can build engagement (Dalgarno and Lee, 2010). Kopp and Burkle (2010) also make the point that a VR environment actually makes assessment easier than in a recorded-video course, in that assessment can be an organic part of the course’s functioning instead of a separate “evaluation instrument” as would be required in a non-interactive video course.

Of course, there are gradations between each of these Platonic ideals of course types (for instance, a static eLearning course could include video clips); the above taxonomy is necessarily simplified.

For these reasons a VR course was selected. As mentioned, only a pilot will be attempted in this project.

Assessment

Given the critical nature of the task being trained (damage assessment after major damage to the electric distribution system), it is extremely important that students completing the proposed course be demonstrably capable of carrying out the tasks that constitute the learning objectives. At the highest level, these are to correctly assess and report damage, and to perform assignments while protecting both the assessor’s own safety and that of the public.

The prototype/pilot version to be created for this phase of the course (Advanced Design Portfolio) will consist of Virtual Reality (VR) training in a shared environment, with an instructor guiding and assessing the students. The proposed final form of the course would be a self-guided VR e-learning simulation, with assessment also automated.

General Considerations and Assessment Design

Vaughan et al. (2016) provide a useful overview of issues involved in assessing VR learning. The current author gives limited credence to their reliance on “learning styles” and the Myers-Briggs system of personality typing; see, e.g., Allcock and Hulme (2010), Choi, Lee and Kang (2009), Rohrer and Pashler (2012), and Gardner and Martinko (1996). However, their analysis of, and experience with, adaptive systems and instant feedback to learners during the learning process are both apropos and valuable.

Ojados Gonzalez et al. (2017) report on a project with conceptual and practical similarities to the author’s current work: pilot testing of a simulator, with an emphasis on safety, for training operators of large agricultural vehicles. Their paper is of special interest because it specifically addresses the assessment of the lesson’s success. In this case, the test audience was asked to complete a post-lesson questionnaire after completing a simulated test on tractor equipment safety. The questionnaire gathered both demographic data and reactions, using both closed-ended and open-ended questions (free comments). Ojados Gonzalez and collaborators were evaluating student reactions to the lesson (Kirkpatrick’s Level 1) rather than undertaking the more challenging tasks of assessing whether the students actually operated the equipment in accordance with procedure, or whether safety actually improved (Kirkpatrick and Kirkpatrick, 2005).

Another example of investigators using surveys to evaluate a VR training environment can be found in Vora et al. (2002). This study of aircraft inspection training, the earliest of those considered in this writing, was largely concerned with the phenomena and principles inherent in VR learning and teaching, as opposed to simply studying the effectiveness of the lessons in increasing student proficiency. As in the Ojados Gonzalez paper, all data collection was via questionnaires (surveys) given after completion of the lessons.

The current author will be using a questionnaire to assess reactions as above, but will also attempt to assess knowledge transfer, behavior change, and effectiveness of the lesson being developed.

Yin et al. (2018) describe a technique the current author plans to use in the final (not the pilot) version of the course: automated scoring. In their endodontic-surgery VR simulator, they score students’ success based on spatial positioning within the VR environment, with voxel-level discrimination; their system is capable of discriminating between merely acceptable and optimal shapes and quantities of tissue removal. On the level of learning theory, their work involves (as they say) both cognitive and motor proficiency, again making it highly comparable to the current author’s project. They report that their automated assessment system very closely matches human experts’ assessment of the students’ work.
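
As a simplified illustration (not Yin et al.’s actual algorithm), voxel-level outcome scoring can be reduced to measuring the overlap between the volume a student removed and an ideal removal region, for example with a Dice coefficient:

```python
# Simplified illustration of voxel-based outcome scoring; each removed
# volume is represented as a set of (x, y, z) voxel coordinates.

def dice_overlap(student: set, ideal: set) -> float:
    """Dice coefficient: 1.0 means the student's removal matches the ideal."""
    if not student and not ideal:
        return 1.0
    return 2 * len(student & ideal) / (len(student) + len(ideal))

ideal_removal = {(x, y, z) for x in range(4) for y in range(4) for z in range(4)}
student_removal = {(x, y, z) for x in range(4) for y in range(4) for z in range(5)}  # over-cut
print(f"overlap score: {dice_overlap(student_removal, ideal_removal):.2f}")  # 0.89
```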

The application to the current project can be quite direct, since moving the VR avatar into danger zones would be assessed as failing a critical safety task, as in the sketch below.
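
A minimal sketch of such a check, assuming the VR environment exposes sampled avatar positions; the zone geometry and name are hypothetical:

```python
# Sketch of a danger-zone safety check over sampled avatar positions.
# Zones are axis-aligned boxes: ((xmin, ymin, zmin), (xmax, ymax, zmax)).

DANGER_ZONES = {
    "downed_conductor": ((10.0, 4.0, 0.0), (14.0, 30.0, 3.0)),  # hypothetical
}

def in_zone(pos, box) -> bool:
    lo, hi = box
    return all(lo[i] <= pos[i] <= hi[i] for i in range(3))

def safety_violations(avatar_path) -> list:
    """Names of every danger zone the avatar entered along its path."""
    return [name for pos in avatar_path
            for name, box in DANGER_ZONES.items() if in_zone(pos, box)]

# Any violation means the critical safety objective is scored as failed.
path = [(2.0, 5.0, 0.5), (12.0, 6.0, 0.5)]
print("FAIL" if safety_violations(path) else "PASS")  # FAIL
```

In the pilot, this judgment is made by the observing instructor; automating it is deferred to the proposed final version.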

Another medical study was performed by Middleton et al. (2017), in this case comparing the effectiveness of VR training versus benchtop training for arthroscopic surgeons. Proficiency was measured by motion analysis: wireless sensors attached to the surgeons’ elbows were tracked in 3D and used to determine how many movements, of what total length, were made in performing a simulated surgery. A secondary measure was a standard surgical assessment tool, in which trained surgeons rated video recordings of each student’s performance. While this level of detailed assessment is neither required nor possible in the current author’s project, the principles involved (performing a simulated task, with automated behavior tracking or human raters used to determine proficiency) certainly apply.
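
Those two motion metrics can be sketched as follows, assuming regularly sampled sensor positions; the movement-segmentation threshold is the current author’s hypothetical choice, not a parameter reported by Middleton et al.:

```python
# Sketch: total path length and discrete movement count from sampled
# 3D sensor positions. The stillness threshold (meters) is hypothetical.
import math

def motion_metrics(samples, still_threshold=0.005):
    """samples: ordered (x, y, z) positions; returns (path_length, n_movements)."""
    path_length, n_movements, moving = 0.0, 0, False
    for a, b in zip(samples, samples[1:]):
        step = math.dist(a, b)
        path_length += step
        if step > still_threshold:   # the sensor moved during this sample
            if not moving:
                n_movements += 1     # a new discrete movement begins
            moving = True
        else:
            moving = False
    return path_length, n_movements
```

Fewer and shorter movements are conventionally read as greater economy of motion, and hence proficiency.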

Their results indicate that benchtop training had substantially greater benefit but, as the authors point out, there are several limitations to the power of their study: it had a very small sample (17 students divided into two subgroups), and it compared only one specific simulation against one specific benchtop practice design. They also remind readers that the results of the two training modes have not been compared in the actual operating theater.

Selection of Delivery Modality

In the absence of substantial published literature on the topic, the author’s experience was used to identify several criteria for damage assessment training: cost, flexibility and adaptability, learner engagement, effectiveness of the training (Kirkpatrick’s Level 4; see Kirkpatrick and Kirkpatrick, 2005), and acceptability to the management of the hypothetical utility customer. In many of these categories, virtual reality (VR) technology was determined to provide significant benefit.

Several authors (Tretsiakova-McNally et al., 2017, pp. 7506, 7512; Rovaglio & Scheele, 2011; Lloréns et al., 2015) have determined that VR courses offer reductions in both the labor and the travel costs of education and training. The reduced travel time also provides “flexibility” in the sense intended: a student can take the course at the time and in the location required, with no or minimal hindrances other than access to the required technology.

There is extensive literature indicating that (properly scaffolded) VR learning can increase learner engagement; see, e.g., O’Connor et al. (2014), Dalgarno and Lee (2010), and Hsu et al. (2013).

In terms of training effectiveness, Hsu et al. (2013), along with Dalgarno and Lee (2010), argue that VR can be at least as effective as classroom training, and in some cases more so. Bertram et al. (2015) report on a study comparing instructor-led classroom training with virtual simulation training, with a no-training control group, and found equal effectiveness for both intervention groups.

Ayala Garcia et al. (2016) found that in VR training of electric lineworkers (highly applicable to the current project), VR training was measurably more effective than classroom and field training, resulting in both higher test scores and greater retention. Unfortunately, the authors of this study did not calculate p-values or otherwise perform statistical analysis beyond averaging scores.

Accepting this “at least equal effectiveness” conclusion, virtual reality is at the very least not disadvantaged relative to live classroom training.

The area in which VR training might be considered inferior to classroom training (for this specific application) is user acceptance. In the above-mentioned study by Bertram et al. (2015), while performance improvement was equal for in-person and virtual training, learners’ subjective reactions showed that they preferred the classroom intervention. This represents Kirkpatrick’s Level 1 of assessment (Kirkpatrick and Kirkpatrick, 2005). Another sense of the word “user” is the funding organization, in this case proposed to be a privately held utility company. Hsu et al. (2013) mention the likely perception among organizational leadership that a VR environment might not be taken seriously because of its similarity to a video game. They also note that, although VR training is likely in reality to produce cost savings, leaders may perceive its development costs as high.

Despite this potential disadvantage, VR technology was selected because of its several advantages. The decision was made to create at least the pilot using OpenSimulator (“OpenSim”), because it is Open Source software (which prevents the lesson from being locked into a single proprietary software or hosting vendor), and to host the pilot at Kitely, because of its extremely open policy of disclaiming ownership of any user content (Terms of Service, 2015).

Assessment Design

It is worth mentioning that, for business training, by far the most influential assessment model is the four-level model of Donald Kirkpatrick as it has been developed (Eseryel, 2002; Kirkpatrick and Kirkpatrick, 2005). As mentioned above, the author has in this project primarily considered Kirkpatrick’s Level 3 and Level 4 assessment: behavior change of the student (toward the designer’s desired behavior) and benefit to the business or organization.

As discussed above, Vaughan et al. (2016) survey the issues involved in assessing VR learning (setting aside their reliance on learning styles and Myers-Briggs typing); the adaptive systems and instant learner feedback they describe are potentially highly valuable, but beyond the scope of the current project. Likewise, the evaluations by Ojados Gonzalez et al. (2017) and Vora et al. (2002), described earlier, collected all of their data through post-lesson questionnaires, measuring student reactions rather than performance.

The current author will administer a questionnaire to assess subjective student reactions, but will also assess actual student performance via instructor observation, using a rubric to define the criteria for success: specifically, not violating safety rules and correctly completing damage reports.
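
A minimal sketch of such a rubric as a simple data structure follows; the criterion names are hypothetical examples of the two success conditions just named, not the finalized rubric:

```python
# Sketch: instructor's pass/fail rubric for the pilot. Criterion names
# are hypothetical illustrations of the safety and reporting conditions.

RUBRIC = {
    "no_safety_violations": "Never approached downed conductors or other hazards",
    "report_location_correct": "Damage location recorded to required accuracy",
    "report_classification_correct": "Damage type and severity correctly classified",
}

def passes(observations: dict) -> bool:
    """observations maps each criterion to the instructor's True/False mark."""
    return all(observations.get(criterion, False) for criterion in RUBRIC)

print(passes({"no_safety_violations": True,
              "report_location_correct": True,
              "report_classification_correct": False}))  # False
```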

Research was also done toward the final version of the lesson (as opposed to the pilot that will be the result of this semester’s work).

Yin et al. (2018) and Middleton et al. (2017) both report on automated scoring of surgical skill, in which students are scored based on spatial positioning within the VR environment, and both report that their automated assessment is able to very closely match human experts’ assessment of the students’ work. As noted above, the application to the current project can be quite direct, since moving the VR avatar into danger zones would be assessed as failing a critical safety task.

Ayala Garcia et al. (2016) created a VR training system for the electric power industry. Their “evaluation” (assessment) system involved a written test as well as a practical exam: performing, in physical rather than virtual reality, the cognitive/motor task for which the students had been trained.

It is important to emphasize that assessment plans must be centered in a specific environment and context (O’Connor et al., 2014). The “organizational audience” of the course (to coin a phrase) consists of businesses and, secondarily, government agencies. Potentially, this can include electric utilities throughout the United States (or internationally) as well as governmental agencies that handle disasters, such as FEMA. The “direct” audience would be the students: in general, utility employees who work in every role except electric line maintenance, repair, and construction, since those experts will have other, higher-priority roles during a system damage event (Kullmann, 2013; D. A. Stuart, personal communication, October 2017).

REFERENCES

Allcock, S. J., & Hulme, J. A. (2010). Learning styles in the classroom: Educational benefit or planning exercise? Psychology Teaching Review, 16(2), 67-79.

Ayala Garcia, A., Galvan Bobadilla, I., Arroyo Figueroa, G., Perez Ramirez, M., & Munoz Roman, J. (2016). Virtual reality training system for maintenance and operation of high-voltage overhead power lines. Virtual Reality, 20(1), 27.

Barilli, E. (2012). Virtual reality technology as a didactical and pedagogical resource in distance education for professional training. In Distance Education. London: IntechOpen. Retrieved from https://www.intechopen.com/books/distance-education/the-technology-of-virtual-reality-as-a-pedagogical-resource-for-professional-formation-in-the-distan

Bertram, J., Moskaliuk, J., & Cress, U. (2015). Virtual training: Making reality work? Computers in Human Behavior, 43, 284-292. doi:10.1016/j.chb.2014.10.032

Cadman, J. (2015). Learning from Sandy. Public Utilities Fortnightly, 153(9), 52-54.

Choi, I., Lee, S. J., & Kang, J. (2009). Implementing a case-based e-learning environment in a lecture-oriented anesthesiology class: Do learning styles matter in complex problem solving over time? British Journal of Educational Technology, 40(5), 933-947.

Dalgarno, B., & Lee, M. W. (2010). What are the learning affordances of 3-D virtual environments?. British Journal Of Educational Technology, 41(1), 10-32. doi:10.1111/j.1467-8535.2009.01038.x

Freeman, L. A., Stano, G. J., & Gordon, M. E. (2010, March 23). Best practices for storm response on U.S. distribution systems. Proceedings of DistribuTech 2010.

Gardner, W. L., & Martinko, M. J. (1996). Using the Myers-Briggs Type Indicator to study managers: A literature review and research agenda. Journal of Management, (1), 45.

Hsu, E. B., Li, Y., Bayram, J. D., Levinson, D., Yang, S., & Monahan, C. (2013, April 24). State of virtual reality based disaster preparedness and response training. PLOS Currents Disasters, Edition 1. doi:10.1371/currents.dis.1ea2b2e71237d5337fa53982a38b2aff

Jump, P. (2003). The response factor: Getting the lights back on is not a matter of good fortune. It takes preparation, technology, and commitment. Electric Perspectives, (3), 22.

Kirkpatrick, D. L., & Kirkpatrick, J. D. (2005). Transferring learning to behavior: Using the four levels to improve performance. San Francisco, CA: Berrett-Koehler Publishers.

Kopp, G., & Burkle, M. (2010). Using Second Life for Just-in-Time Training: Building Teaching Frameworks in Virtual Worlds. International Journal Of Advanced Corporate Learning, 3(3), 19-25. doi:10.3991/ijac.v3i3.1373

Kullmann, J. (2013). Survey: Damage assessment key to effective outage restoration. Electric Light & Power, (1), 51.

Lloréns, R., Noé, E., Colomer, C., & Alcañiz, M. (2015). Effectiveness, Usability, and Cost-Benefit of a Virtual Reality–Based Telerehabilitation Program for Balance Recovery After Stroke: A Randomized Controlled Trial. Archives Of Physical Medicine & Rehabilitation, 96(3), 418-425.e2. doi:10.1016/j.apmr.2014.10.019

Middleton, R. M., Alvand, A., Garfjeld Roberts, P., Hargrove, C., Kirby, G., & Rees, J. L. (2017). Simulation-based training platforms for arthroscopy: A randomized comparison of virtual reality learning to benchtop learning. Arthroscopy: The Journal of Arthroscopic and Related Surgery, 33, 996-1003. doi:10.1016/j.arthro.2016.10.021

O’Connor, E. A., & Domingo, J. (2017). A practical guide, with theoretical underpinnings, for creating effective virtual reality learning environments. Journal of Educational Technology Systems, 45(3), 343-364.

O’Connor, E., McDonald, F., & Ruggiero, M. (2014). Scaffolding Complex Learning: Integrating 21st Century Thinking, Emerging Technologies, and Dynamic Design and Assessment to Expand Learning and Communication Opportunities. Journal Of Educational Technology Systems, 43(2), 199-226.

Ojados Gonzalez, D., Martin-Gorriz, B., Ibarra Berrocal, I., Macian Morales, A., Adolfo Salcedo, G., & Miguel Hernandez, B. (2017). Development and assessment of a tractor driving simulator with immersive virtual reality for training to avoid occupational hazards. Computers and Electronics in Agriculture, 143, 111-118. doi:10.1016/j.compag.2017.10.008

Order Adopting a Ratemaking and Utility Revenue Model Policy Framework. (2016, May 19). New York Public Service Commission. Retrieved from http://documents.dps.ny.gov/public/Common/ViewDoc.aspx?DocRefId=%7BD6EC8F0B-6141-4A82-A857-B79CF0A71BF0%7D

Ozkeskin, E. E., & Tunc, T. (2010). Spherical Video Recording and Possible Interactive Educational Uses. International Journal on New Trends in Education and Their Implications, 1(1), 69-79.

PG&E testing safety drones to inspect electric and gas infrastructure. (2016). Mena Report.

Posner, R. (1968). Natural monopoly and its regulation. Stanford Law Review, 21, 548.

Regnier, E. (2008). Public evacuation decisions and hurricane track uncertainty. Management Science, (1), 16. doi:10.1287/mnsc.1070.0764

Rohrer, D., & Pashler, H. (2012). Learning styles: Where’s the evidence? Medical Education, 46(7), 634-635.

Rovaglio, M., & Scheele, T. (2011, May/June). Virtual reality improves training in process industries. Automation IT.

Terms of Service | Kitely. (2015, June 1) Retrieved from https://www.kitely.com/terms

Tretsiakova-McNally, S., Maranne, E., Verbecke, F., & Molkov, V. (2017). Mixed e-learning and virtual reality pedagogical approach for innovative hydrogen safety training of first responders. International Journal of Hydrogen Energy, 42, 7504-7512 (special issue on the 6th International Conference on Hydrogen Safety (ICHS 2015), 19-21 October 2015, Yokohama, Japan). doi:10.1016/j.ijhydene.2016.03.175

Vora, J., Nair, S., Gramopadhye, A., Duchowski, A. T., Melloy, B. J., & Kanki, B. (2002). Using virtual reality technology for aircraft visual inspection training: Presence and comparison studies. Applied Ergonomics, 33, 559-570.

Yin, M. S., Haddawy, P., Suebnukarn, S., & Rhienmora, P. (2018). Automated outcome scoring in a virtual reality simulator for endodontic surgery. Computer Methods and Programs in Biomedicine, 153, 53-59. doi:10.1016/j.cmpb.2017.10.001