ON-POINT INSTRUCTOR COURSE PHASE-I EVALUATION
Note:
This is information from an actual evaluation project. The Corporation, OP, OPIC and related terms are pseudonyms used to protect the privacy of the client and to safeguard sensitive information. No associations are claimed, nor should be assumed, from any images, data or information herein.
The Organization
The On-Point Instructor Corporation (OPIC) trains a highly selected group of instructors, producing instructors of instructors who are charged with training professionals to apply critical thinking and problem solving in the most extreme situations affecting our nation's security. The course is 5.5 months long and is run only twice a year. Each course accepts a maximum of four to six students from a total applicant pool of over 300. In the winter of 2020, OPIC received 17 applications for its limited seats but had no process in place to reduce the applicant pool to a manageable number of in-person interviews. The cadre quickly developed a scoresheet for the application and used it to determine which applicants were most eligible to attend the in-person selection.
Because OPIC is resource-limited, it can invite only 10-12 applicants to interview in person and compete for the extremely limited seats. The implementation of the scoresheet created a two-phase application process:
- Phase-I – Formal, standardized application process built around the newly developed scoresheet
- Phase-II – Invitation-only (based on Phase-I results) in-person selection process
BACKGROUND
After the initial use of the scoresheet, the OPIC cadre incorporated feedback from the assessment and selection board to improve the scoresheet, which is tailored to their standard application. The focus of this evaluation is Phase-I of the OPIC application process.
Stakeholders
OPIC leadership (the client) are the primary upstream stakeholders for this evaluation. Their main concern was determining the effectiveness of the process, and they were open to recommendations for improving the entire process.
Future OPIC applicants and current OPIC employees were the direct impactees of this evaluation; these personnel have a professional career stake in the effectiveness of the process. At the time of this evaluation there were only 76 OPIC graduates in total, and this pool can grow by at most 12 graduates per year.
OPIC graduates interact with security professionals around the world. The other corporations and professionals with whom the direct impactees interact are the indirect impactees for this evaluation.
EVALUATION REQUEST & PURPOSE
The client requested an evaluation of the Phase-I scoresheet to assess the effectiveness of the tool. The client's goal was to consistently select the top candidates for an in-person interview and assessment, relying on a fairly objective methodology and scoring system in Phase-I so that the more subjective assessment could be reserved for Phase-II.
The evaluation team consisted of three Master of Science in Organizational Performance and Workplace Learning (OPWL) candidates from Boise State University. The evaluation was completed as the culminating activity for the OPWL 530 Evaluation course.
Because the client intends to use the evaluation findings to identify areas for improvement, the evaluation team selected a formative, goal-based evaluation. This methodology is appropriate for determining how well Phase-I is designed to select the candidates with the highest potential to succeed in the course. The team also incorporated a goal-free approach to uncover any unexpected results of the recent addition of a scoresheet to Phase-I of the assessment.
METHODOLOGY
The evaluation team used the 10-step evaluation methodology (Chyung, 2019) to guide the work through three key phases: identification, planning, and implementation (see Figure 1).
Figure 1. 10-Step Evaluation Process and Associated Phases
Dimensions
In consultation with the client, the evaluation team identified the most pertinent program dimensions for investigation. The team proposed three key dimensions to analyze and, based on conversations with the client, assigned an importance weighting to each dimension (see Figure 2).
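To illustrate how importance weighting rolls up into an overall program rating, here is a minimal sketch. The dimension labels, weights, and rubric scores below are hypothetical placeholders (the actual weights appear in Figure 2 and the actual ratings in Table 11), and the weighted-sum formula is a common rubric-based evaluation convention rather than a calculation specified in this report.

```python
# Minimal sketch of importance-weighted dimension scoring.
# All labels, weights, and scores are hypothetical placeholders, not values
# taken from the OPIC evaluation data.

weights = {                                   # importance weights (sum to 1.0)
    "Applicant Characteristics": 0.40,
    "Application Interpretation": 0.35,
    "Clarity of the Application Process": 0.25,
}

scores = {                                    # rubric score per dimension (1 = lowest)
    "Applicant Characteristics": 1,
    "Application Interpretation": 2,
    "Clarity of the Application Process": 2,
}

# Weighted overall rating = sum of (weight x dimension score).
overall = sum(weights[d] * scores[d] for d in weights)
print(f"Weighted overall rating: {overall:.2f}")
```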
Data Collection Instruments
The evaluation approach conformed to the evidence-based processes of Chyung's (2019) 10-step evaluation procedure. The three dimensions investigated in this evaluation were most closely associated with the Resources and Activities categories of the Program Logic Model (PLM).
The data sources the evaluation team selected to investigate these dimensions are shown in Table 1.
Table 1 - Evaluation Dimensions
EVALUATION RESULTS
To evaluate Phase-I with respect to the three identified dimensions, the evaluation team conducted extant data reviews, surveys of 21 OPIC graduates and 2 non-graduates (67.6% response rate), and in-person interviews of 3 current OPIC cadre. The dimensional question, data collected, analysis rubric, and overall dimensional quality rating for each dimension are included in this section. For additional information, see the full report link at the bottom of this page.
Dimension 1 - Applicant Characteristics
Review of prior applicants’ applications
The evaluation team performed an extant data review of applications from prior applicants. The content of each prior application was scored against the applicant's selection or non-selection and, for selectees, their subsequent OPIC performance, as a measure of how far that content can be relied on to predict applicant success.
These results met the criteria of the Do Not Recommend category of the rubric (Table 2), as the checklist scores were less than eight, and data did not indicate high quality content.
Analysis of applicants’ characteristics per commanders’ letters and ranking
The evaluation team performed an extant data review and analysis of applicants' characteristics based on commanders' letters and wing ranking. The content of each commander's recommendation letter was scored against the applicant's selection or non-selection and, for selectees, their subsequent OPIC performance, as a measure of whether such letters (in their current form) can be endorsed as predictors of applicant success.
These results met the criteria of the Weak Endorsement category of the rubric (Table 2), as the checklist scores were less than eight, and data indicated mostly negative comments.
Score of Dimension 1
Based on the two sources of data for this dimension, the evaluation team's scoring of the ability of Phase-I to identify relevant applicant characteristics for success in the OPIC program led the team to a Do Not Recommend evaluation for this dimension, as indicated by the rubric (Table 4).
Dimension 2 - Application Interpretation
Interviews of OPIC cadre
The evaluation team conducted semi-scripted, in-person interviews with current OPIC cadre who had previously taken part in Phase-I and Phase-II selection. The interviews were designed to ascertain how familiar the cadre were with the entire assessment and selection process.
The interviews were paired with an experimental application evaluation exercise. The purpose of this exercise was to assess how the current cadre rank applicants without any job aid or tool to help them rank candidates objectively. The exercise used four applications whose authors varied in status with OPIC itself (e.g., graduate, non-graduate, non-select, distinguished graduate).
In a second exercise, the cadre ranked applications using the current scoresheet. Like the first exercise, this one assessed how the cadre rank applicants based on the application; the difference was the use of the job aid previously developed by OPIC cadre. This exercise used the same four applications as the first.
These results met the criteria of the Did Not Meet Expectations category of the rubric (Table 5), as the checklist scores were less than eight, and data indicated mostly negative comments.
Review of Recent Phase-I Scoresheets
The evaluation team performed an extant data review of the most recent scoresheets used for Phase-I. The purpose of this review was to assess whether the scores the cadre assigned to applicants aligned with the intent of the scoresheet and whether the applicants selected for Phase-II were the appropriate ones. The client provided 17 scoresheets corresponding to the 17 applicants for OPIC class 21B. The evaluation team scored the 17 applications using the scoresheet and then compared those scores to the completed scoresheets provided by the client.
These results met the criteria of the Do Not Recommend category of the rubric (Table 6), as the checklist scores were less than eight and the data did not indicate high-quality content. Of the 10 applicants selected, two received a Do Not Recommend rating during the Phase-II portion of the assessment. Those two individuals had scored 3 on the scoresheet, while at least two other applicants who were not selected for Phase-II had scored higher than 3.
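To make the comparison step concrete, the sketch below shows one way two sets of Phase-I scores could be checked for agreement on who would be invited to Phase-II. Every applicant ID, score, and seat count is a hypothetical placeholder rather than class 21B data, and the overlap check is an illustrative technique, not the checklist the evaluation team actually applied.

```python
# Minimal sketch of comparing two sets of Phase-I scoresheet scores and
# checking whether they would invite the same applicants to Phase-II.
# All applicant IDs, scores, and the seat count are hypothetical placeholders.

team_scores = {
    "applicant_01": 5, "applicant_02": 3, "applicant_03": 6, "applicant_04": 4,
    "applicant_05": 7, "applicant_06": 2, "applicant_07": 5, "applicant_08": 3,
}
cadre_scores = {
    "applicant_01": 4, "applicant_02": 3, "applicant_03": 6, "applicant_04": 5,
    "applicant_05": 6, "applicant_06": 3, "applicant_07": 4, "applicant_08": 2,
}

def invitees(scores: dict[str, int], seats: int) -> set[str]:
    """Return the applicants with the highest scores (ties broken by ID)."""
    ranked = sorted(scores, key=lambda a: (-scores[a], a))
    return set(ranked[:seats])

# How many applicants both scorings would send to Phase-II (4 seats here).
overlap = invitees(team_scores, 4) & invitees(cadre_scores, 4)
print(f"Invited under both scorings: {len(overlap)} of 4")
```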
Score of Dimension 2
While evaluating the data for Dimension 2, "Application Interpretation," the evaluation team discovered that applications were not graded consistently across the cadre without a scoresheet. This met the criteria of the Needs to Improve category (Table 7). Additionally, when presented with the scoresheet, the cadre agreed that it was a useful job aid but found the directions for scoring individual areas confusing. A review of the scoresheets used for the winter 2020 applicants showed that two of the 10 applicants selected for Phase-II had received scores below peers who were not selected; those two were subsequently recommended not to return to the assessment in the future.
Dimension 3 - Clarity of the Application Process
Web-Based Survey of Past Applicants
The evaluation team administered a web-based survey to former OPIC applicants regarding their perceptions of the application process and their experience with OPIC. Responses were received from 11 current OPIC cadre, nine OPIC graduates, and two OPIC non-graduate attendees; no responses were received from applicants who did not make it past Phase-I or Phase-II (see survey item #11, Appendix E). The survey collected general and professional demographics along with the observations, perceptions, and opinions of OPIC applicants about Phase-I and the selection process as a whole. Specific survey items were scored as a measure of the perceived clarity of the application process.
These results met the criteria of the Clear category of the rubric (Table 8), as indicated by survey item #9 (Appendix E) and other survey data.
Semi-Structured Interviews of Applicants
The evaluation team conducted in-person interviews of OPIC graduates. Qualitative data regarding the observations, perceptions, and opinions of OPIC graduates about Phase-I and the selection process as a whole were collected. These data were analyzed and scored as a measure of the application and selection process.
These results met the criteria of the Do Not Recommend category of the rubric (Table 9), as the checklist scores were less than eight, and data indicated mostly negative comments.
Score of Dimension 3
The evaluation team’s scoring of the clarity of the application process led the team to an Unclear Process evaluation for this dimension, as indicated by the rubric (Table 10).
Overall
OPIC's Phase-I process needs improvement in all evaluated dimensions in order to meet the client's expectations for the process.
Dimension 1: If the client desires to assess the five macro characteristics of applicants during Phase-I, they should consider using other sources of data to gain more in-depth information, such as: ASVAB scores for assessing cognitive capability; candidate surveys from direct supervisors and mid-level leadership in lieu of a Wing Commander's letter; and personality tests to determine character traits that are appropriate for an OPIC graduate.
Dimension 2: If the client desires the grading members of Phase-I to interpret the application objectively and score each application similarly, then this process must be improved as well. The current scoresheet does not provide a clear guide for scoring each area of the application accurately. In addition, it does not follow the format of the application, which causes confusion.
The extant data review of the applications also revealed an interesting finding: the service members requiring rank waivers were junior officers who had previously been enlisted in the same career field, so while the waiver may be required by regulation, awarding negative points for it does not seem practical. As for security clearance, a waiver may require additional work and prove an inconvenience, but it does not appear to be a relevant predictor of performance.
Dimension 3: If the client desires everyone applying for or participating in the Phase-I assessment to understand the process in its entirety, improvement is needed here as well. Marketing the process to leadership, graduates, and applicants is the best way to help individuals understand it. The marketing campaign can be delivered in whatever format is easiest to disseminate to the entire community, such as e-mail, PowerPoint presentations, or in-person recruiting briefings.
The assessed utility of OPIC Phase-I is depicted in Table 11 below.
REFERENCES
American Evaluation Association. (2018). American Evaluation Association guiding principles for evaluators. Retrieved March 14, 2021, from https://www.eval.org/About/Guiding-Principles
Chyung, S. Y. (2019). 10-Step evaluation for training and performance improvement. Sage.
Joint Committee on Standards for Educational Evaluation. (n.d.). Program evaluation standards. https://evaluationstandards.org/program/
Robertson, J., Blanchard, M., & McCraw, V. (2021). ‘OnPoint’ Instructor Course (OPWL 530 Evaluation Project Spring 2021) [MS Word].