SUMMARY REPORT
STAR GATE OPERATIONAL TASKING AND EVALUATION

1.0 EXECUTIVE SUMMARY

From 1986 to the first quarter of FY 1995, the DoD paranormal psychology program received more than 200 tasks from operational military organizations requesting that the program staff apply a paranormal psychological technique known as "remote viewing" (RV) to attain information unavailable from other sources. The operational tasking comprised "targets" identified with as little specificity as possible to avoid "telegraphing" the desired response. In 1994, the DIA Star Gate program office created a methodology for obtaining numerical evaluations from the operational tasking organizations of the accuracy and value of the products provided by the Star Gate program. By May 1, 1995, the three remote viewers assigned to the program office had responded, i.e., provided RV product, to 40 tasks from five operational organizations. Normally, RV product was provided by at least two viewers for each task. Ninety-nine accuracy scores and 100 value scores resulted from these product evaluations by the operational users. On a 6-point basis where "1" is the most accurate, accuracy scores cluster around "2's" and "3's" (55 of the entries), with 13 scores of "1". Value scores, on a 5-point basis with "1" the highest, cluster around "3's" and "4's" (80 of the entries); there are no "1's" and 11 scores of "2". After careful study of the RV products and detailed analysis of the resulting product evaluations for the 40 operational tasks, we conclude that the utility of RV for operational intelligence collection cannot be substantiated. This conclusion results from the fact that the operational utility to the Intelligence Community of the information provided by this paranormal RV process simply cannot be discerned.
Furthermore, this conclusion is supported by the results of interviews conducted with representatives of the operational organizations that provided tasking to the program. The ambiguous and subjective nature of the process actually creates a need for additional efforts of questionable operational return on the part of the intelligence analyst. Assuming that the subjective nature of the psychic process cannot be eliminated, one must determine whether the information provided justifies the required resource investment.

2.0 GENERIC DESCRIPTION OF OPERATIONAL TASKING

Over the period from 1986 to the first quarter of FY 1995, the Star Gate program received more than 200 tasks from operational military organizations. These tasks requested that the program staff apply their paranormal psychological technique known as "remote viewing" (RV) in the hope of attaining information unavailable from other sources. The operational tasking comprised "targets" which were "identified" in some manner, normally with as little specificity as possible (see discussion below) to avoid excessively "telegraphing" the desired response. However, until 1994, the results from this tasking were not evaluated by the tasking organizations by any numerical method that would identify the accuracy and value of the provided information (for a few cases in prior years, narrative comments were provided by some organizations). In 1994, this situation changed when the Program Office developed a methodology for obtaining numerical evaluations from the tasking organizations of the Star Gate inputs; this methodology is described briefly in Section 3.0. By May 1, 1995, 40 tasks assigned by five operational organizations had been evaluated under this process.1 Section 4.0 describes the numerical evaluations performed by evaluators from the tasking organizations.

Page 1
Approved For Release 2000/08/10 : CIA-RDP96-00791 R000200300002-2
The descriptions presented below regarding the tasking and the related targets refer principally to the operational tasks that were numerically evaluated. The process for a typical tasking, RV response and subsequent evaluation is as follows:

- The tasking organization provides information to the Star Gate Program Manager (PM) describing the problem to be addressed.
- The PM provides a Tasking Form delineating only the most rudimentary information to one or more of the three Star Gate RVs2 for their use during the RV session (a typical Tasking Form is presented in Figure 2-1). In addition, the RVs are apprised of the identity of the tasking organization.
- Subsequently, the RVs hold individual "viewing" sessions, recording their comments, observations, feelings, etc. and including line drawings or sketches of things, places, or other items "observed" during the session.
- The individual RV inputs are collected and provided to the tasking organization for review, with a request for completing a numerical evaluation of the individual RV inputs for accuracy and for value.
- Finally, for those organizations that comply with the request, the evaluation scores are returned to the Star Gate Program Office.

1 Evaluation of additional 1994-95 tasks continued after 5/1/95; three tasks since evaluated were reviewed. They caused only insignificant changes to the statistical information provided in Table 4-1 and did not alter any of the Conclusions and Recommendations in Section 7.0.
2 (U) All three RVs were full-time government employees.

FIGURE 2-1 TASKING SHEET

SOURCE NO: 079
DATE: 18 Jul 94
SUSPENSE: 18 Jul 94 1600 Hrs
1. PROJECT NUMBER: 94-252-0
2. METHOD/TECHNIQUE: Method of Choice
3. BACKGROUND:
4. ESSENTIAL ELEMENTS OF INFORMATION: Access and describe target.
5.
COMMENTS:

Twenty-six (26) of the 40 operational tasks originated from DIA in support of two joint Task Forces, Org. B and Org. C (see Section 4.0). Typical tasking targets for these organizations comprised the name of a person or thing (e.g., a vessel) with a generic request to describe the target, his/her/its activities, location, associations, etc., as appropriate. No specific information (e.g., what is the height/weight/age of the target?) was requested in the tasking. As noted above, the identity of the supported organization also was provided. For these tasks that identification provides the RVs with knowledge regarding the specific operational interests of these organizations. Thus, any information provided by the RVs which describes or relates to those interests "could be" relevant and, therefore, could be interpreted by the evaluators as having some level of "accuracy" and "value" depending upon the information described and the evaluator's interests and beliefs.

The tasking provided by the organization denoted as Org. A comprised targets that were "places" visited by "beacons", i.e., an individual from Org. A who visited and "viewed" the site of interest to assist the RV in "visualizing" and describing the site. Targets could be a general vista in or around a particular location, a particular facility at a selected location or, perhaps, a particular item at a location (in the one case where this type of target was used, the item was a particular kind of boat). Usually, no specifics regarding the type of target or its location were provided.

Tasking by Org. D comprised two generic types of targets that related to military interests/concerns current at the time of the tasking, e.g., North Korean (NK) capabilities and leadership.
The first type of target focused upon then-current military concerns, while the second type required "precognitive" (predictive) capabilities, since it required a prognosis of future intentions and actions.3 The tasking from Org. E was similar in scope, albeit quite different in context, from the tasks noted earlier for Org. B and Org. C, i.e., describe a person, his activities, location, etc.

3 Some operational tasks from the period Oct. 1990 to Jan. 1991 regarding Middle East issues were of a similar type, albeit these were not numerically evaluated. They would provide some data for an after-the-fact check of the accuracy of the RV predictions; see Section 7.0 for a discussion of this possibility.

3.0 EVALUATION MEASURES

The numerical evaluation measures that were given to the evaluators of the tasking organizations to score the accuracy and value of the Star Gate inputs were extracted from Defense Intelligence Agency Manual (DIAM) 58-13. These measures are shown in Table 3-1. Most of the stipulated measures include modifiers such as "may", "possibly", "high", "low", etc., which are subjective and open to individual interpretation by each evaluator. The DIAM 58-13 definitions for the ratings under "Value" are presented in Table 3-2; whether the individual evaluators reviewed these definitions prior to their scoring is unknown. There was no clarification of what was intended by the generic headings of "Accuracy" and "Value", e.g., in the evaluator's estimation how much of the RV's response to the tasking had to qualify for a particular measure (10%? 90%?) to be granted the related score?

Table 3-1 Numerical Evaluation Measures

Category                                                   Score
Accuracy - Is the information accurate?
  Yes (true)                                               1
  May be true                                              2
  Possibly true                                            3
  No                                                       4
  Possibly not true4                                       5
  Unsure                                                   6
Value - What is the value of the source's information?
  Major significance                                       1
  High value                                               2
  Of value                                                 3
  Low value                                                4
  No value                                                 5

As noted in Section 2.0, one series of tasks was evaluated by a narrative discussion only. While much of the final narrative evaluation for this series was complimentary, it lacked any real specifics regarding the usefulness or relevance of the Star Gate inputs, and much of the narrative was replete with modifiers and other hedges. A sanitized extract from the final evaluation report for these tasks is presented in Appendix A, illustrating the subjective, "uncertain" nature of the comments.

4 Note that Accuracy scores 5 and 6 actually rank "higher" than 4, since both imply that there may be something accurate in the information. Changing the scoring order to accommodate this observation causes insignificant changes to both the averages and the standard deviations shown in Table 4-1.

Table 3-2 - Value Rating Definitions from DIAM 58-13

MAJOR SIGNIFICANCE - Intelligence Information Report (IIR) provided information which will alter or significantly influence national policy, perceptions, or analysis; or provided unique or timely indications and warning of impending significant foreign military or political actions having a national impact.

HIGH VALUE - IIR(s) was best report to date or first report on this important topic, but did not significantly influence policy or change analyses.

OF VALUE - IIR(s) provided information which supplements, updates, confirms, or aids in the interpretation of information in data bases, intelligence production, policy research and analysis, or military operations and plans; most DoD HUMINT System reporting falls into this category.

LOW VALUE - IIR was not a good report because the information was not reported in a timely manner, or was of poor quality/of little substance.
Nevertheless, it satisfied some of the consumer's informational needs.

NO VALUE - IIR provided no worthwhile information to support data base maintenance, intelligence production, policy research and analysis, or military operations and planning; or its information had no utility, was erroneous, or misleading.

4.0 EVALUATION SUMMARY AND COMMENTS

Thirty-nine (39) of the 40 numerically evaluated operational tasks were performed in 1994 and one in 1995. The information provided by the Star Gate RVs for each task was evaluated by staff of the tasking organization. The complete compilation of evaluated scores is presented in Table 4-1, which includes a designation of the tasking organization and, where known, a numerical designator for the individual from that organization who signed the response to the evaluation request (in some instances, this was also an evaluator). Also presented are the individual and collective scores for Accuracy (A) and Value (V) for each of the three RVs and the related averages and standard deviations for the compiled scores. (Note that the total number of scoring entries for either Accuracy or Value is not equal to the maximum of 120, i.e., 3x40, since all three RVs did not participate in all tasks.) Table 4-2 presents the same scoring data by tasking organization.

Histograms of the scores from Table 4-1 are shown below. Note that "Accuracy" scores tend to cluster around 2's and 3's (55 of the 99 entries) while "Value" scores cluster around 3's and 4's (80 of the 100 entries). This is not too surprising, as the nonspecific, nebulous nature of the individual task/target requests permits the RV to "free associate" and permits the evaluator to pick and choose from the RV commentary
TABLE 4-1 NUMERICAL EVALUATIONS

[Per-task listing: document number, date, tasking organization (Org. A through Org. E), evaluator designator, and Accuracy/Value scores for each of the three remote viewers (1A/1V, 2A/2V, 3A/3V), covering the 40 evaluated tasks (Documents 225 through 287, dated 3/21/94 through 4/3/95).]

                      1A     1V     2A     2V     3A     3V     A      V
Score sums            106.5  130.0  76.0   83.0   113.5  135.0  296    348
Number of entries     37     37     25     26     37     37     99     100
Avg. score            2.9    3.5    3.0    3.2    3.1    3.6    3.0    3.5
Std. deviation        1.4    0.8    1.3    0.9    1.6    0.8    1.4    0.7

TABLE 4-2 NUMERICAL EVALUATIONS BY TASKING ORGANIZATION

[The same per-task scores as Table 4-1, grouped by tasking organization, with score sums, entry counts, and average scores for each organization. The per-organization averages are:]

Organization Average Scores
              1A     1V     2A     2V     3A     3V
Org. A        2.8    3.0    1.8    2.6    2.6    3.0
Org. B        3.6    4.2    4.0    3.2    4.5    4.3
Org. C        2.8    3.4    3.3    3.5    2.9    3.6
Org. D        1.8    3.2    2.2    2.5    1.8    3.8
Org. E        4.0    4.0    5.0    4.0    4.3    3.3
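The entry counts, averages, standard deviations, and score histograms reported for Table 4-1 are ordinary descriptive statistics over the score columns. A minimal sketch of the computation, using an illustrative list of scores rather than the actual table entries (the report does not state whether a population or sample standard deviation was used; population is assumed here):

```python
from collections import Counter
from statistics import mean, pstdev

# Illustrative Accuracy scores for one remote viewer on the DIAM 58-13
# 6-point scale (1 = "Yes (true)" ... 6 = "Unsure"). These are NOT the
# actual Table 4-1 entries, just sample values for the arithmetic.
accuracy_scores = [3.0, 2.0, 5.0, 3.0, 1.0, 2.0, 3.0, 4.0, 2.0, 3.0]

avg = round(mean(accuracy_scores), 1)    # column average, as in "Avg. score"
std = round(pstdev(accuracy_scores), 1)  # population std. deviation (assumed)

# Histogram of the kind shown after Table 4-1: tally of each score value.
histogram = Counter(int(s) for s in accuracy_scores)

print(f"entries={len(accuracy_scores)} avg={avg} std={std}")
print(sorted(histogram.items()))
```

For this sample list the script prints `entries=10 avg=2.8 std=1.1` and `[(1, 1), (2, 3), (3, 4), (4, 1), (5, 1)]`, i.e., the same clustering around 2's and 3's that the report notes for the real Accuracy scores.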
[Histograms of Evaluator Scoring: bar charts of the Accuracy scores (scoring factors 1-6) and the Value scores (scoring factors 1-5) from Table 4-1.]

anything that he thinks "may" or "possibly" is related to his problem (and score accordingly), regardless of how much of the RV commentary may satisfy the particular measure. If the Accuracy of the information is somewhat uncertain, its Value must be vaguer still, i.e., scored lower. This presumption is supported by review of the scored "pairs" for all cases, e.g., 1A and 1V; only rarely does the "V" score equal or exceed the "A" score for a specific RV and target. Note further that of the 100 "V" scores shown on Table 4-1, there are no "1" scores,5 while the 99 "A" scores include 13 "1's". Regarding the latter, a detailed review of the evaluator comments and/or the tasking suggests that the importance of these 1's is less than the score would imply in all but four cases since:

- the evaluator of Document 243 stated that the RV 3A score "...though vague, is probably correct."
- the tasking and targets for Documents 245, 247, 248, 249 and 2656 concern topics widely publicized in the open media during the same period, hence the "source" of the RV 1A and 3A comments, intended or not, is suspect, and
- for Documents 230, 239 and 244, the evaluator's supporting narrative is

5 The significance of this omission is further enhanced if one assumes that the evaluators were familiar with the definitions in Table 3-2, since even those 11 instances scored as #2 ("High value") merely require that the input be the "best report to date or first report on this important topic, but [it] did not significantly influence policy or change analyses."
6 (U) The evaluation of Document 265 is actually a second evaluation of the same RV inputs, provided many months after the first evaluation for Document 248 and probably done by a different evaluator.
between the RV(s) and the evaluator:

- has a very narrow information bandwidth, i.e., the RV-derived information cannot be embellished by a dialogue with the evaluator without substantially telegraphing the evaluator's needs and interests, thereby biasing any RV information subsequently derived, and
- is extremely "noisy" as a result of the unidentifiable beliefs, intentions, knowledge, biases, etc. that reside in the subconsciousness of the RV(s) and/or the evaluator.

As a result, the potential for self-deception on the part of the evaluator exists, i.e., he/she "reads" into the RV information a degree of validity that in truth is based upon fragmentary, generalized information and which may have little real applicability to his/her problem. The relevant question in the overall evaluation process is who and what is being evaluated, i.e., is the score a measure of the RV's paranormal capabilities or of the evaluator's views, beliefs and concepts?

One of the RVs expressed a concern to the author that the protocols that were followed in conducting the RV process in response to the operational tasking were not consistent with those that are generally specified for the study of paranormal phenomena. Whether the claimed discrepancy was detrimental to the information derived by the RVs, or to its subsequent evaluation or use, cannot be determined from the available data.

The operational tasking noted earlier concerning activities in North Korea, which required precognitive abilities on the part of the RVs, provides an opportunity for a post-analysis by comparing the RV predictions against subsequent realities. Additional comparative data of this type is available from operational tasking during the period 11/90 through 1/91 regarding the Middle East situation (this tasking was not numerically evaluated).
6.0 SUMMARY FROM USER INTERVIEWS

(U) Subsequent to the review and analysis of the numerically scored tasking described in the previous sections of this report, the author participated in interviews with representatives of all of the tasking organizations presented in Table 4-1 except Org. D. Only a brief summary of the results from those interviews is presented here; more detailed synopses are presented in Appendix B. In all cases except for Org. C, the interviewees were the actual personnel who had participated directly in the tasking and evaluation of the Star Gate program. For Org. C, the sole interviewee was the Chief of the Analysis Branch; the staff who defined the tasking and performed the evaluations comprised his lead analysts.

A brief summary of the salient points which appeared consistently throughout these interviews follows:

- the principal motivation for using Star Gate services was the hope that something useful might result; the problems being addressed were very difficult and the users were justifiably (and admittedly) "grasping at straws" for anything that might be beneficial.
- the information provided by the Star Gate program was never specific enough to cause any operational user to task other intelligence assets to specifically corroborate the Star Gate information.
- while the information that was provided did occasionally contain portions that were accurate, albeit general, it was - without exception - never specific enough to offer substantial intelligence value for the problem at hand.
- two of the operational user organizations would be willing to pay for this service if that was required and if it was not too expensive (although one user noted that his organization head would not agree).
However, the fact that Star Gate service was free acted as an incentive to obtain "it might be useful - who knows" support for the program from the user organizations.

The reader is referred to Appendix B for additional information resulting from these interviews. However, two inconsistencies noted during the discussion of the numerical evaluations in Section 4.0 were supported by information obtained from the interviews. On the average, the Org. C evaluators scored higher than those of Org. B. One cause for this discrepancy may be the fact that the Org. B evaluators were, in general, skeptical of the process, while the lead person at Org. C claimed to be a believer in parapsychology and, in addition, had the last say in any evaluations that were promulgated back to the Star Gate PM. This comment is in no way intended to impugn the honesty or motivation of any of these personnel, merely to point out that this difference in the belief structure of the staff at these two organizations may have resulted in the perceived scoring bias. As noted above, the subjectivity inherent in the entire process is impossible to eliminate or to account for in the results.

The higher average scoring, especially the Accuracy scores, from the Org. A evaluators appears to be explained by the procedure they used to task and evaluate the experiments they were performing with the Star Gate program. Namely, they used a staff member as a "beacon" to "assist" the RVs in "viewing" the beacon's location. Subsequently, the same Org. A staff member evaluated the RV inputs. However, since he/she had been at the site, he/she could interpret anything that appeared to be related to the actual site as accurate.
When asked if the information from the multiple RVs was sufficiently accurate and consistent such that a "blind" evaluator, i.e., one who did not know the characteristics of the site, would have been able to identify information from the RV inputs that they could interpret to be accurate, they all answered in the negative and agreed that the score would have been lower. Again the subjectivity of the process appears: the evaluator could interpret the admittedly general comments from any RV that seemed to relate to the actual site as "accurate". For example, consider an RV input "there is water nearby"; the evaluator knows this is true of almost anyplace, especially if one does not or cannot define what kind of water, i.e., is it a lake, a water line, a commode, a puddle?

7.0 CONCLUSIONS AND RECOMMENDATIONS

7.1 Conclusions

The single conclusion that can be drawn from an evaluation of the 40 operational tasks is that the value and utility to the Intelligence Community of the information provided by the process cannot be readily discerned. This conclusion was initially based solely upon the analysis of the numerical evaluations presented in Section 4.0, but strong confirmation was provided by the results of the subsequent interviews with the tasking organizations (Ref. Section 6.0 and Appendix B). While, if one believes in the validity of parapsychological phenomena, the potential for value exists in principle, there is, nonetheless, an alternative view of the phenomenology that would disavow any such value and, in fact, could claim that the ambiguous and subjective nature of the process actually creates a need for additional efforts with questionable operational return on the part of the intelligence analyst.
Normally, much of the data provided by the RV(s) is either wrong or irrelevant, although one cannot always tell which is which without further investigation. Whether this reality reduces or eliminates the overall value of the totality of the information can only be assessed by the intelligence analyst. It clearly complicates his/her problem in two ways: 1) it adds to the overburden of unrelated data which every analyst already receives on a daily basis, i.e., the receipt of information of dubious authenticity and accuracy is not an uncommon occurrence for intelligence analysts, and 2) since the analyst does not normally know which information is wrong or irrelevant, some of it is actually "disinformation" and can result in wasted effort as the analyst attempts to verify or discount those data from other sources.

The review of the operational tasking and its subsequent evaluation does not provide any succinct conclusions regarding the validity of the process (or the information provided by it). First and foremost, as discussed in Section 5.0, the entire process, from beginning to end, is highly subjective. Further, as noted in Section 3.0, the degree of consistency in applying the scoring measures, any guidance or training provided to the evaluators by any of the tasking organizations, and/or the motivation of the evaluators are either unknown or, in the case of the latter, may be highly polarized (see Appendix B). The lack of information regarding these items could account for some of the variability in the scores across organizations noted in Table 4-2, but this cannot be certified and is, at most, a suspicion.

Whether the information provided by the Star Gate source is of sufficient value to overcome the obvious detriment of accommodating the irrelevant information included therein is an open question.
More precisely, whether the Star Gate information is of sufficient value to continue this program - vis-a-vis other sources of information and other uses of resources - is an important question for the Intelligence Community to address, irrespective of one's personal views and/or beliefs regarding this field of endeavor, i.e., does the information provided justify the required resource investment? One method that might assist this evaluation is to develop a means for scoring the complete input from the RV process, i.e., evaluate all information and determine how much is truly relevant, how much is of undeterminable value, and how much is completely irrelevant. One could then analyze how much information is being handled to achieve the relevant information (along with some measure of the relevancy) and make judgments on its value vis-a-vis the investment in time and money. Other, less technical methods for adjudicating this issue also exist.

7.2 Recommendations

Considering the statements above, the only sensible recommendation in this author's mind is to bring some "scientific method" into this process (if it is continued). As evidenced by more than 20 years of research into paranormal psychology, much of it done by institutions of higher education or others with excellent credentials in related fields, validation of parapsychological phenomena may never be accredited in the sense that is understood in other scientific and technical fields of endeavor. Control, in any rigorous scientific sense, of the multitude of human and physical variables which could, and probably do, influence this process is difficult - perhaps impossible - for any except the most mundane types of experiments, e.g., blind "reading" of playing cards. Even these restricted experiments have led to controversy among those schooled in the related arts. One of the foundation precepts of scientific endeavor is the ability to obtain repeatable data from independent researchers.
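The scoring approach suggested above - classify every statement in an RV product and compute the proportion in each category - can be sketched in a few lines of code. The category names, the function name, and the example classifications are illustrative assumptions, not a methodology from the report.

```python
# Illustrative sketch of the proposed RV-product scoring scheme: each
# statement in an RV product is classified by the analyst into one of
# three categories; the resulting fractions support a judgment of how
# much material must be handled to obtain the relevant portion.
# Category names are assumptions for illustration only.

from collections import Counter

CATEGORIES = ("relevant", "undeterminable", "irrelevant")

def score_product(classified_statements):
    """Return the fraction of statements falling in each category.

    classified_statements: list of category labels, one per statement.
    """
    if not classified_statements:
        raise ValueError("empty product")
    counts = Counter(classified_statements)
    total = len(classified_statements)
    return {cat: counts.get(cat, 0) / total for cat in CATEGORIES}

# Example: a hypothetical 10-statement RV product as an analyst
# might classify it after investigation.
product = ["relevant"] * 2 + ["undeterminable"] * 3 + ["irrelevant"] * 5
print(score_product(product))
# {'relevant': 0.2, 'undeterminable': 0.3, 'irrelevant': 0.5}
```

The fractions alone do not settle the report's cost/benefit question, but they would make the "how much is handled per relevant item" ratio explicit and comparable across tasks.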
Given the subjective nature of RV activities, it is difficult to believe that this aspect of parapsychology will ever be achieved. As an admitted neophyte in this area of endeavor, I categorize the field as a kind of religion, i.e., you either have "faith" that it indeed is something real, albeit fleeting and unique, or you "disbelieve" and attribute all positive results to either chicanery or pure chance.(10) Thus, one must recognize at the start that any attempt to bring scientific method into the operational tasking aspects of this project may not succeed. Others with serious motives and intentions have attempted to do this, with the results noted above. However, as a minimum, one could try to assure that the scoring measures are succinctly defined and promulgated such that different organizations and evaluators would have a better understanding of what is intended and, perhaps, could be more consistent in their scoring. The use of independent, multiple evaluators on each task could aid in reducing some of the effects of the subjective nature of the evaluation process and the possible personal biases (intentional or otherwise) of the evaluators.

(10) Practitioners in the field, including those funded under government contracts, would argue with these observations, perhaps vehemently; some would argue further that the phenomenology has been verified beyond question already. This reviewer disagrees; albeit, these observations are not intended to discard the possibility of such phenomena.
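Whether multiple independent evaluators actually score consistently can itself be measured, e.g., with a chance-corrected agreement statistic such as Cohen's kappa. The sketch below is a generic two-rater implementation offered as an aid to the suggestion above; the scores in the example are invented for illustration and are not data from the report.

```python
# Cohen's kappa for two raters assigning categorical scores to the same
# set of tasks: observed agreement corrected for the agreement expected
# by chance given each rater's score frequencies.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Return kappa in [-1, 1]; 1.0 means perfect agreement."""
    assert len(rater_a) == len(rater_b) and rater_a, "need paired scores"
    n = len(rater_a)
    # Fraction of tasks on which the two raters gave the same score.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance from the marginal score frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n**2
    if expected == 1.0:
        return 1.0
    return (observed - expected) / (1 - expected)

# Example: two hypothetical evaluators scoring eight tasks on the
# report's 6-point accuracy scale ("1" most accurate).
a = [2, 3, 3, 2, 1, 4, 3, 2]
b = [2, 3, 2, 2, 1, 4, 4, 2]
print(round(cohens_kappa(a, b), 3))  # 0.652
```

A low kappa across evaluators would flag exactly the subjectivity problem the report describes, independent of whether any individual score is "right."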
Since, according to some parapsychologists, the time of the remote viewing is not relevant to the attainment of the desired information, controlled "blind tests" could be run by requesting tasking for which the accurate and valuable information is already known, to determine statistics on RV performance (clearly, one key issue in such tests is what information is given to the RV in the task description to avoid any semblance of compromise - not a casual problem). Controlled laboratory experiments in parapsychology have done this type of testing, and the results, usually expressed in terms of probability numbers that claim to validate the parapsychological results, have done little to quell the controversy that surrounds this field. Thus it may be naive and optimistic to believe that such additional testing would help resolve the question of the "value of the process" (or its utility for operational intelligence applications), but it might assist in either developing "faith" in those who use it or, conversely, "disbelief." Before additional operational tasks are conceived, some thought could be given to how and what one defines as a "target." Broad generic target descriptions permit unstructured discourse by the RV which - especially if there is knowledge (or even a hint) of the general area of interest - leads to data open to very subjective, perhaps illusionary, interpretation regarding both accuracy and value. If some specificity regarding the target could be defined such that the relevance and accuracy of the RV-derived data could be evaluated more readily, some of the uncertainties might be eliminated. In this context, note that in the cases where targets were more specific, e.g., the North Korean targets, the resulting scores were generally higher.
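The "probability numbers" such controlled blind tests produce are typically binomial tail probabilities: given n forced-choice trials with known answers and a chance hit rate p, how likely are k or more hits by chance alone? A minimal sketch follows; the trial counts and chance rate in the example are hypothetical, and nothing here reflects an analysis actually performed for the report.

```python
# Chance probability of k or more hits in n forced-choice blind trials,
# computed exactly from the binomial distribution (standard library only).
from math import comb

def p_value(hits, trials, chance=0.25):
    """P(X >= hits) for X ~ Binomial(trials, chance)."""
    return sum(
        comb(trials, k) * chance**k * (1 - chance) ** (trials - k)
        for k in range(hits, trials + 1)
    )

# Example: 10 hits in 20 four-alternative trials (chance rate 0.25);
# the mean chance outcome is only 5 hits.
print(round(p_value(10, 20, 0.25), 4))
```

As the report notes, a small p-value in such a test has historically persuaded believers and left skeptics unmoved; the computation itself is uncontroversial, while the experimental controls behind the trial data are where the argument lives.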
Finally, it was noted in Section 5.0 that some of the RV information obtained from operational tasks regarding North Korea (and others concerning the Middle East) depended upon the precognitive ability of the RVs in predicting events yet to occur. These data provide an opportunity for a post-analysis of the accuracy of these predictions by making a comparison with subsequent information regarding actual events (some data for this comparison might require access to classified information from other sources). Such a post-analysis would provide data for evaluating the ability of the RVs to perform precognitive tasks and the related operational value of the predictions. Performance of this post-analysis lies beyond the scope of this paper, but it is a topic for a subsequent study if any sponsor is interested.

SG1B

Next 1 Page(s) In Document Exempt

APPENDIX B
STAR GATE OPERATIONAL USER INTERVIEWS

STAR GATE OPERATIONAL USER INTERVIEW

ORGANIZATION: A
USER POC: #7
DATE: 3 August 1995

Operational Task: SG was asked to participate in a series of experiments to determine if their paranormal service could assist in locating someone who was at an unknown location and had no radio or other conventional method for communicating. Members of the user organization acted as "beacons" for the RVs by visiting sites unknown to the RVs at specified times. The RVs were requested to identify any information that would assist in determining the site location by "envisioning" what the beacons were seeing.
Motivation for Employing Star Gate: The previous head of the user's group was aware of the program from other sources and requested that SG participate in these experiments in the hope that some information might be obtained to assist in locating the sites and/or people, given the scenario above. This situation is similar to that noted in other user interviews, namely, the difficulty of obtaining relevant information from any other source renders the use of the paranormal approach a worthwhile endeavor from the user's perspective, "just in case" it provides something of value.

User Attitude: All of the interviewees were positive regarding the application of this phenomenology to their problem, albeit they all agreed that the RV information provided from the experiments performed to date was inadequate to define the utility of the phenomena and that additional experiments were needed.

Results - Value/Utility: For each user task, the evaluator was the same individual who had acted as the beacon, i.e., the person who had actually been at the candidate location. Each evaluator noted that some of the information provided by the RVs could be considered to be accurate. When asked if the accuracy of the information would be ranked as high if the evaluator did not know the specifics of the site, i.e., had not been the "beacon" - which is the real "operational situation" - all answered in the negative. Several interviewees indicated that their interpretation of the RV data led them to believe that the RVs had witnessed other items or actions the beacon was engaged in but not related to the site of interest. As a result of the experiments done to date, the user decided that the approach being pursued was not providing information of operational utility since it was too general.
However, the user was convinced of the possible value of the paranormal phenomena and was planning a new set of experiments using a substantially modified approach in the hope of obtaining useful results.

Future Use of SG Services: As inferred above, the user would continue to use SG-type services, albeit in a new set of experiments. The user would be willing to pay for this service if it were not too expensive and requested that they be contacted if the program were reinitiated. When advised that they could obtain services of this type from commercial sources, they noted that this would be difficult due to the highly classified nature of some of their activities.

STAR GATE OPERATIONAL USER INTERVIEW

ORGANIZATION: B
USER POC: #3, et al.
DATE: 14 July 1995

Operational Task: Most tasking requested information about future events, usually the time and/or place (or location) of a meeting. Some tasking requested additional information describing a person or a thing, e.g., a vessel. In one instance, after previous "blind" requests had yielded no useful information, the user met with the RVs and provided a picture and other relevant information about an individual in hope of obtaining useful information about his activities.

Motivation for Employing Star Gate: The SG PM briefed RV activities and his desire to expand the customer base. The user was willing to "try" using SG capabilities since there was no cost to the user and, given the very difficult nature of the user's business, "grasping at straws" in the hope of receiving some help is not unreasonable. Note that this organization had tasked the program in the '91 time frame but had not continued tasking in '92-'93 until briefed by the new Star Gate PM.

User Attitude: The DIA POC was openly skeptical, but was willing to try objectively. Members of the organization he supports (Org.
B) had varied levels of belief; one individual appeared very supportive, noting the successful use of psychics by law enforcement groups (based upon media reporting). Evaluation of the tasking was accomplished collectively by the DIA POC and three other Org. B members.

Results - Value/Utility: None of the information provided in response to any of the tasks was specific enough to be of value or to warrant tasking other assets. SG data was too vague and generic; information from individual RVs regarding the same task was conflicting, contained many known inaccuracies, and required too much personal interpretation to warrant subsequent action. The user would be more supportive of the process if the data provided were more specific and/or closely identified with known information. In one instance, a drawing was provided which appeared to have similarity with a known vessel, but the information was not adequate to act on.

Future Use of SG Services: The user would be willing to use SG-type services in the future. However, in the current budget environment, demonstrated value and utility are not adequate to justify funding from user resources. The user would not fund in any case unless the program could demonstrate a history of successful and useful product. The user believes that RVs working directly with his analysts on specific problems would be beneficial in spite of the obvious drawbacks. The individual quoted above suggested recruiting RVs from other sources, noting his belief that the government RVs may not be best qualified, i.e., have the best psychic capabilities.

STAR GATE OPERATIONAL USER INTERVIEW

ORGANIZATION: C
USER POC: #4
DATE: 26 July 1995

Operational Task: Most tasking requested information describing a person, a location, or a thing, e.g., a vessel.
Occasionally, the tasking would provide some relevant information about the target or "his/her/its" associates in hope of obtaining useful information about its activities.

Motivation for Employing Star Gate: Circa 1993, the SG PM briefed RV activities and his desire to expand the customer base. This desire, conjoined with the user's belief that it provided an alternate source of information, led to the subsequent tasking. The user was willing to "try" using SG capabilities since there was no cost to the user and, as noted in other interviews, given the very difficult nature of the user's business, "grasping at straws" in the hope of receiving some help is not unreasonable. This organization had tasked the program in the (circa) '86-'90 time frame but had terminated tasking since there was no feedback mechanism.

User Attitude: The user was a believer in the phenomena based upon his "knowledge of what the Soviets were doing" and his perceptions from the media regarding its use by law enforcement agencies. He noted that his lead analysts, who generated the tasking, were very skeptical, as was his management. The user insisted that the analysts be objective in spite of their skepticism. In general, numerical evaluation of the task was performed by the individual who had defined it.

Results - Value/Utility: This interviewee(1) claimed value and utility for the information provided by the RVs, noting that information regarding historical events was always more accurate than information requiring predictions. RVs were "fairly consistent" in identifying the "nature" of the target, e.g., is it a person or a thing, but not always. On occasions where RV inputs were corroborated, additional data were requested, but these data usually could not be corroborated. The user commented that all reports had some accurate information;(2) however, the SG data provided was either not specific enough and/or not timely enough to task other assets for additional information.
Some SG data was included in "target packages" given to field operatives; however, there was no audit trail, so there is no evidence regarding the accuracy or use of these data. The user also noted that classification prohibited data dissemination, as did concerns about the skepticism of others regarding the source and the potential for a subsequent negative impact on his organization.

Future Use of SG Services: The user desires to continue using SG-type service if the program continues. In addition, the user stated that he would be willing to pay for the service if necessary. However, subsequent discussion indicated that his management would not fund the activity unless the credibility could be demonstrated better and the phenomenology legitimized. The user went on to claim that only the sponsorship of a government agency could "legitimize" this activity and its application to operational problems. The user believes that RVs working directly with his analysts on specific problems would not be beneficial due to the skepticism of his analysts and the deleterious impact that would have on the RVs. The views provided by the user - note that none of the actual evaluators were present - appeared to be unique to him and his belief in the phenomenology, i.e., his remarks indicated that the use of this process was not actively supported by anyone else in his organization.

(1) Only one person provided all of the information at this review. Where the "user" or "interviewee" is cited, it reflects the remarks of that single individual.

(2) The user was unaware that the tasking organization and its primary mission were known to the RVs. Portions of the data provided by the RVs could have been predicted from this knowledge.
The numerical evaluations of the 19 tasks performed in 1994/95 certainly do not indicate, on the average, either a high degree of accuracy or value in the data provided.

STAR GATE OPERATIONAL USER INTERVIEW

ORGANIZATION: E
USER POC: #9
DATE: 7 July 1995

Operational Task: A request to assist in determining if a suspect was engaged in espionage activities, e.g., who is he meeting? where? about what? are these activities related to espionage or criminal actions? Tasking comprised a series of four sequential tasks; each time, a bit more information was provided to the RVs, including at one point the name of the suspect. (Note: this "sequential tasking" is unique. Each of the tasks assigned by other operational organizations was a "singular" or "stand-alone" event.)

Motivation for Employing Star Gate: The SG PMO briefed RV activities and his desire to expand the customer base. The user was willing to "try" using SG capabilities since there was no cost to the user and, given the very difficult nature of the user's business, "grasping at straws" in the hope of receiving some help is not unreasonable.

User Attitude: Pre-SG experience - the user (#9) had a perception of beneficial assistance allegedly provided to domestic police by parapsychologists; thereby he was encouraged to try using the SG capabilities and was hopeful of success. Post-SG experience - still very positive in spite of the lack of value or utility from SG efforts (see below). The user is "willing to try anything" to obtain assistance in working his very difficult problems.

Results - Value/Utility: None of the information provided in any of the four sequential tasks was specific enough to be of value or to warrant tasking his surveillance assets to collect on-site information as a result of SG information.
SG data was too generic and, while it may have contained accurate information, it required too much personal interpretation to warrant subsequent actions by his assets. Much of the SG information was clearly wrong, so there was no way to ascertain the validity of the rest. One major deficiency noted in the SG responses was the lack of any RV data regarding large fund transfers that the suspect was known to be engaged in and which the user believes would have been uppermost in the suspect's mind. The user would be more supportive of the process if the data provided were more specific and/or closely identified with known information.

Future Use of SG Services: The user would be willing to use SG-type services in the future. However, in the current budget environment, demonstrated value and utility are not adequate to justify funding from user resources. The user would be willing to have a joint activity whereby RVs work directly with his analysts on specific problems if: a) the user did not pay for RV services, and b) the commitment for joint RV services was long-term, i.e., several years.