R000200280002-5 16 May 1995
Decision Augmentation Theory:
Toward a Model
of
Anomalous Mental Phenomena
by
Edwin C. May, Ph.D.
Science Applications International Corporation
Menlo Park, CA
Jessica M. Utts, Ph.D.
University of California, Davis
Division of Statistics
Davis, CA
and
S. James P. Spottiswoode
Science Applications International Corporation (Consultant)
Menlo Park, CA
Abstract
Decision Augmentation Theory (DAT) holds that humans integrate information obtained by anoma-
lous cognition into the usual decision process. The result is that, to a statistical degree, such decisions
are biased toward volitional outcomes. We introduce our model and show that the domain over which it
is applicable is within a few standard deviations from chance. We contrast the theory's experimental
consequences with those of models that treat anomalous effects as due to a force. We derive mathemat-
ical expressions for DAT and for force-like models using two distributions, normal and binomial. DAT
is testable both retrospectively and prospectively, and we provide statistical power curves to assist in the
experimental design of such tests. We show that the experimental consequences of our theory are dif-
ferent from those of force-like models except for one special case.
Approved For Release 2000/08/10 : CIA-RDP96-00791 R000200280002-5
Introduction
We do not have positive definitions of the effects that generally fall under the heading of anomalous
mental phenomena.* In the crassest of terms, anomalous mental phenomena are what happens when
nothing else should, at least as nature is currently understood. In the domain of information acquisi-
tion, or anomalous cognition (AC), it is relatively straightforward to design an experimental protocol
(Honorton et al., 1990, Hyman and Honorton, 1986) to assure that no known sensory leakage of in-
formation can occur. In the domain of macroscopic anomalous perturbation (AP), however, it is often
very difficult.
We can divide anomalous perturbation into two categories based on the magnitude of the putative ef-
fect. Macro-AP includes phenomena that generally do not require sophisticated statistical analysis to
tease out weak effects from the data. Examples include inelastic deformations in strain gauge experi-
ments, the obvious bending of metal samples, and a host of possible "field phenomena" such as teleki-
nesis, poltergeist, teleportation, and materialization. Conversely, micro-AP covers experimental data
from noisy diodes, radioactive decay and other random sources. These data show small differences
from chance expectation and require statistical analysis.
One of the consequences of the negative definitions of anomalies is that experimenters must assure that
the observables are not due to "known" effects. Traditionally, two techniques have been employed to
guard against such interactions:
(1) Complete physical isolation of the target system.
(2) Counterbalanced control and effort periods.
Isolating physical systems from potential "environmental" effects is difficult, even for engineering spe-
cialists. It becomes increasingly problematical the more sensitive the AP device. For example, Hubbard,
Bentley, Pasturel, and Isaacs (1987) monitored a large number of sensors of environmental variables
that could mimic perturbational effects in an extremely isolated piezoelectric strain gauge. Among
these sensors were three-axis accelerometers, calibrated microphones, and electromagnetic and nu-
clear radiation monitors. In addition, the strain gauges were mounted in a government-approved en-
closure to assure no leakage (in or out) of electromagnetic radiation above a given frequency, and the
enclosure itself was levitated on an air suspension table. Finally, the entire setup was locked in a con-
trolled access room which was monitored by motion detectors. The system was so sensitive, for exam-
ple, that it was possible to identify the source of a perturbation of the strain gauge that was due to inno-
cent, gentle knocking on the door of the closed room. The financial and engineering resources to isolate
such systems rapidly become prohibitive.
The second method, which is commonly in use, is to isolate the target system within the constraints of
the available resources, and then construct protocols that include control and effort periods. Thus, we
trade complete isolation for a statistical analysis of the difference between the control and effort peri-
ods. The assumption implicit in this approach is that environmental influences of the target device will
be random and uniformly distributed in both the control and effort conditions, while anomalous effects
* The Cognitive Sciences Laboratory has adopted the term anomalous mental phenomena instead of the more widely known psi.
Likewise, we use the terms anomalous cognition and anomalous perturbation for ESP and PK, respectively. We have done so
because we believe that these terms are more naturally descriptive of the observables and are neutral with regard to mecha-
nisms. These new terms will be used throughout this paper.
will tend to occur in the effort periods. Our arguments in favor of an anomaly, then, are based on statis-
tical inference and we must consider, in detail, the consequences of such analyses.
Background
As the evidence for anomalous mental phenomena becomes more widely accepted (Bem and Honorton,
1994; Utts, 1991; Radin and Nelson, 1989) it is imperative to determine their underlying mecha-
nisms. Clearly, we are not the first to begin thinking of potential models. In the process of amassing
incontrovertible evidence of an anomaly, many theoretical approaches have been examined; in this sec-
tion we outline a few of them. It is beyond the scope of this paper, however, to provide an exhaustive
review of the theoretical models; a good reference to an up-to-date and detailed presentation is Stokes
(1987).
Brief Review of Models
Two fundamentally different types of models of anomalous mental phenomena have been developed:
those that attempt to order and structure the raw observations in experiments (i.e., phenomenological
models), and those that attempt to explain tHese phenomena in terms of modifications to existing physi-
cal theories (i.e., fundamental models). In the history of the physical sciences, phenomenological mod-
els, such as Snell's law of refraction or Ampere's law for the magnetic field due to a current, have
nearly always preceded fundamental models, such as quantum electrodynamics and Maxwell's theory.
In producing useful models of anomalies it may well be advantageous to start with phenomenological
models, of which DAT is an example.
Psychologists have contributed interesting phenomenological approaches. Stanford (1974a and 1974b)
proposed PSI-Mediated Instrumental Response (PMIR). PMIR states that an organism uses anoma-
lous mental phenomena to optimize its environment. For example, in one of Stanford's classic experi-
ments (Stanford, Zenhausern, Taylor, and Dwyer 1975) subjects were offered a covert opportunity to
stop a boring task prematurely if they exhibited unconscious anomalous perturbation by perturbing a
hidden random number generator. Overall, the experiment was significant in the unconscious tasks; it
was as if the participants were unconsciously scanning the extended environment for any way to provide
a more optimal situation than participating in a boring psychological task!
As an example of a fundamental model, Walker (1984) proposed a literal interpretation of quantum
mechanics and posited that since superposition of eigenstates holds, even for macrosystems, anoma-
lous mental phenomena might be due to macroscopic examples of quantum effects. These ideas
spawned a class of theories, the so-called observation theories, that were either based upon quantum
formalism conceptually or directly (Stokes, 1987). Jahn and Dunne (1986) have offered a "quantum
metaphor" which illustrates many parallels between these anomalies and known quantum effects. Un-
fortunately, these models either have free parameters with unknown values, or are merely hand waving
metaphors. Some of these models propose questionable extensions to existing theories. For example,
even though Walker's interpretation of quantum mechanical formalism might suggest wave-like prop-
erties of macrosystems, the physics data to date not only show no indication of such phenomena at room
temperature but provide considerable evidence to suggest that macrosystems lose their quantum coher-
ence above 0.5 Kelvins (Washburn and Webb, 1986) and no longer exhibit quantum wave-like behavior.
This is not to say that a comprehensive model of anomalous mental phenomena may not eventually
require quantum mechanics as part of its explanation, but it is currently premature to consider such
models as more than interesting speculation. The burden of proof is on the theorist to show why sys-
tems, which are normally considered classical (e.g., a human brain), are, indeed, quantum mechanical.
That is, what are the experimental consequences of a quantum mechanical system over a classical one?
Our Decision Augmentation Theory is phenomenological and is a logical and formal extension of Stan-
ford's elegant PMIR model. In the same manner as early models of the behavior of gases, acoustics, or
optics, DAT tries to subsume a large range of experimental measurements into a coherent lawful
scheme. Hopefully this process will lead the way to the uncovering of deeper mechanisms. In fact, DAT
leads to the idea that there may be only one underlying mechanism of all anomalous mental phenome-
na, namely a transfer of information from future to past.
Historical Evolution of Decision Augmentation
May, Humphrey, and Hubbard (1980) conducted a careful random number generator (RNG) experi-
ment which was distinguished by the extreme engineering and methodological care that was taken to
isolate any potentially known physical interactions with the source of randomness (D. Druckman and J.
A. Swets, page 189, 1988). It is beyond the scope of this paper to describe this experiment completely;
however, those specific details which led to the idea of Decision Augmentation are important for the
sake of historical completeness. The authors were satisfied that they had observed a genuine statistical
anomaly and additionally, because they had developed an accurate mathematical model of the random
device, they were assured that the deviations were not due to any known physical interactions. They
concluded, in their report, that some form of anomalous data selection had occurred and named it Psy-
choenergetic Data Selection.
Following a suggestion by Dr. David R. Saunders of MARS Measurement and Associates, we noticed in
1986 that the effect size in binary RNG studies varied on the average as one over the square root of the
number of bits in the sequence. This observation led to the development of the Intuitive Data Sorting
model that appeared to describe the RNG data to that date (May, Radin, Hubbard, Humphrey, and
Utts, 1985). The remainder of this paper describes the next step in the evolution of the theory which is
now named Decision Augmentation Theory.
Decision Augmentation Theory-A General Description
Since the case for AC-mediated information transfer is now well established (Bem and Honorton,
1994) it would be exceptional if we did not integrate this form of information gathering into the decision
process. For example, we routinely use real-time data gathering and historical information to assist in
the decision process. Why, then, should we not include AC in the decision process? DAT holds that AC
information is included along with the usual inputs that result in a final human decision that favors a
"desired" outcome. In statistical parlance, DAT says that a slight, systematic bias is introduced into the
decision process by AC.
This philosophical concept has the advantage of being quite general. To illustrate the point, we describe
how the "cosmos" determines the outcome of a well-designed, hypothetical experiment. To determine
the sequencing of conditions in an RNG experiment, suppose that the entry point into a table of ran-
dom numbers will be chosen by the square root of the barometric pressure as stated in the weather re-
port that will be published seven days hence in the New York Times. Since humans are notoriously bad at
predicting or controlling the weather, this entry point might seem independent of a human decision; but
why did we "choose" seven days in advance? Why not six or eight? Why the New York Times and not the
London Times? DAT would suggest that the selection of seven days, the New York Times, the barometric
pressure, and square root function were better choices, either individually or collectively, and that other
decisions would not have led to as significant an outcome. Other non-technical decisions may also be
biased by AC in accordance with DAT. When should we schedule a Ganzfeld session; who should be the
experimenter in a series; how should we determine a specific order in a tri-polar protocol? DAT ex-
plains anomalous mental phenomena as a process of judicious sampling from a world of events that are
unperturbed. In contrast, force-like models hold that some kind of mentally-mediated force perturbs
the world. As we will show below, these two types of models lead to quite different predictions.
It is important to understand the domain in which a model is applicable. For example, Newton's laws
are sufficient to describe the dynamics of mechanical objects in the domain where the velocities are very
much smaller than the speed of light, and where the quantum wavelength of the object is very small
compared to the physical extent of the object. If these conditions are violated, then different models
must be invoked (e.g., relativity and quantum mechanics, respectively). The domain in which DAT is
applicable is when experimental outcomes are in a statistical regime (i.e., a few standard deviations
from chance). In other words, could the measured effect occur under the null hypothesis? This is not a
sharp-edged requirement but DAT becomes less apropos the more a single measurement deviates from
mean-chance-expectation (MCE). We would not invoke DAT, for example, as an explanation of levita-
tion if one found the authors hovering near the ceiling! The source of the statistical variation is unre-
stricted and may be of classical or quantum origin, because a potential underlying mechanism for DAT
is precognition. By this means, experiment participants become statistical opportunists.
Development of a Formal Model
While DAT may have implications for anomalous mental phenomena in general, we develop the model
in the framework of understanding experimental results. In particular, we consider anomalous per-
turbation versus anomalous cognition in the form of decision augmentation in those experiments whose
outcomes are in the few-sigma, statistical regime.
We define four possible mechanisms for the results in such experiments:
(1) Mean Chance Expectation. The results are at chance. That is, the deviation of the dependent vari-
able meets accepted criteria for MCE. In statistical terms, we have measurements from an unper-
turbed parent distribution with unbiased sampling.
(2) Anomalous Perturbation. Nature is modified by some anomalous interaction. That is, we expect
an interaction of a "force" type. In statistical parlance, we have measurements from a perturbed
parent distribution with unbiased sampling.
(3) Decision Augmentation. Nature is unchanged but the measurements are biased. That is, AC in-
formation has "distorted" the sampling. In statistical terms, we have measurements from an unper-
turbed parent distribution with biased sampling.
(4) Combination. Nature is modified and the measurements are biased. That is, both anomalous ef-
fects are present. In statistical parlance, we have conducted biased sampling from a perturbed par-
ent distribution.
General Considerations and Definitions
Since the formal discussion of DAT is statistical, we will describe the overall context for the develop-
ment of the model from that perspective. Consider a random variable, X, that can take on continuous
values (e.g., the normal distribution) or discrete values (e.g., the binomial distribution). Examples of X
might be the hit rate in an RNG experiment, the swimming velocity of single cells, or the mutation rate
of bacteria. Let Y be the average of X computed over n values, where n is the number of items that are
collected as the result of a single decision, i.e., one trial. Often this may be equivalent to a single effort
period, but it also may include repeated efforts. The key point is that, regardless of the effort style, the
average value of the dependent variable is computed over the n values resulting from one decision
point. In the examples above, n is the sequence length of a single run in an RNG experiment, the num-
ber of swimming cells measured during the trial, or the number of bacteria-containing test tubes present
during the trial. As we will show below, force-like effects require that the Z-score, which is computed
from the Ys, increase as the square root of n. In contrast, informational effects will be shown to be inde-
pendent of n.
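This contrast can be illustrated with a small Monte Carlo sketch (ours, not from the original paper; the function name and parameter values are illustrative assumptions). A force-like mean shift of each of the n measurements makes Z grow as the square root of n, while a DAT-style bias of the sampling distribution of Z leaves it independent of n.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_z(n, eps_ap=0.0, mu_z=0.0, trials=5_000):
    """Average Z over many one-decision trials of n measurements each.

    eps_ap: per-measurement mean shift (force-like micro-AP model).
    mu_z:   bias of the sampling distribution of Z itself (DAT).
    """
    x = rng.normal(eps_ap, 1.0, size=(trials, n))  # n values per decision
    z = np.sqrt(n) * x.mean(axis=1)                # Z computed from Y
    return float((z + mu_z).mean())

for n in (100, 400, 1600):
    print(n, round(mean_z(n, eps_ap=0.01), 2),  # grows like 0.01 * sqrt(n)
             round(mean_z(n, mu_z=0.30), 2))    # stays near 0.30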
Assumptions for DAT
We assume that the parent distribution of a physical system remains unperturbed; however, the mea-
surements of the physical system are systematically biased by some AC-mediated informational pro-
cess.
Since the deviations seen in experiments in the statistical regime tend to be small in magnitude, it is safe
to assume that the measurement biases will also be small; therefore, we assume small shifts of the mean
and variance of the sampling distribution. Figure 1 shows the distributions for biased and unbiased
measurements.
The biased sampling distribution shown in Figure 1 is assumed to be normally distributed as:

Z ~ N(μz, σz²),

where μz and σz are the mean and standard deviation of the sampling distribution.
Figure 1. Sampling Distribution Under DAT.
Assumptions for an Anomalous Perturbation Model
DAT can be contrasted to force-like effects. With a few exceptions reported in the literature of "field"
phenomena, anomalous perturbation appears to be relatively "small." Thus, we begin with the assump-
tion that a putative anomalous force would give rise to a perturbational interaction, by which we mean
that, given an ensemble of entities (e.g., binary bits, cells), an anomalous force would act equally on each
member of the ensemble, on the average. We call this type of interaction micro-AP.
Figure 2 shows a schematic representation of probability density functions for a parent distribution un-
der the micro-AP assumption and an unperturbed parent distribution. In the simplest micro-AP model,
the perturbation induces a change in the mean of the parent distribution but does not affect its vari-
ance. We parameterize the mean shift in terms of a multiplier of the initial standard deviation. Thus,
we define an AP-effect size as:
εAP = (μ1 − μ0)/σ0,

where μ1 and μ0 are the means of the perturbed and unperturbed distributions, respectively, and where
σ0 is the standard deviation of the unperturbed distribution.
For the moment, we consider εAP as a parameter which, in principle, could be a function of a variety of
variables (e.g., psychological, physical, environmental, methodological). As we develop DAT for specif-
ic distributions and experiments, we will discuss this functionality of εAP.
Calculation of E(Z²)
We compute the expected value and variance of Z² for mean chance expectation and under the force-
like and information assumptions. We do this for the normal and binomial distributions. The details of
the calculations can be found in the Appendix; however, we summarize the results in this section. Table
1 shows the results assuming that the parent distribution is normal.
Figure 2. Parent Distribution for micro-AP
Table 1.
Normal Parent Distribution

Quantity    MCE    micro-AP            DAT
E(Z²)       1      1 + ε²AP·n          μz² + σz²
Var(Z²)     2      2(1 + 2ε²AP·n)      2(σz⁴ + 2μz²σz²)
Table 2 shows the results assuming that the parent distribution is binomial. In this calculation, p0 is the
binomial event probability and σ0 = √(p0(1 − p0)).

Table 2.
Binomial Parent Distribution

Quantity    MCE                        micro-AP                                DAT
E(Z²)       1                          1 + ε²AP(n − 1) + εAP(1 − 2p0)/σ0       μz² + σz²
Var(Z²)     2 + (1 − 6σ0²)/(nσ0²)      2(1 + 2ε²AP·n)                          2(σz⁴ + 2μz²σz²)

The variances shown assume p0 = 0.5 and n > 1. See the Appendix for other cases.
We wish to emphasize at this point that in the development of the mathematical model, the parameter
εAP for micro-AP, and the parameters μz and σz in DAT may all possibly depend upon n; however, for
the moment, we assume that they are all n-independent. We shall discuss the consequences of this as-
sumption below.
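As a cross-check, the normal-distribution column of Table 1 can be transcribed directly into code (a sketch; the function names are ours, not the paper's):

```python
def expected_z2(n, mechanism, eps_ap=0.0, mu_z=0.0, sigma_z=1.0):
    """E(Z^2) under each mechanism, normal parent distribution (Table 1)."""
    if mechanism == "MCE":
        return 1.0
    if mechanism == "micro-AP":
        return 1.0 + eps_ap**2 * n            # grows linearly in n
    if mechanism == "DAT":
        return mu_z**2 + sigma_z**2           # independent of n
    raise ValueError(mechanism)

def var_z2(n, mechanism, eps_ap=0.0, mu_z=0.0, sigma_z=1.0):
    """Var(Z^2) under each mechanism, normal parent distribution (Table 1)."""
    if mechanism == "MCE":
        return 2.0
    if mechanism == "micro-AP":
        return 2.0 * (1.0 + 2.0 * eps_ap**2 * n)
    if mechanism == "DAT":
        return 2.0 * (sigma_z**4 + 2.0 * mu_z**2 * sigma_z**2)
    raise ValueError(mechanism)
```

For instance, with the RNG-like values used later in the paper, expected_z2(10**4, "micro-AP", eps_ap=0.004) evaluates to about 1.16, while the DAT prediction stays at μz² + σz² at every n.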
Figure 3 displays these theoretical calculations for the three mechanisms graphically.
Figure 3. Predictions of MCE, micro-AP, and DAT.
Within the constraints mentioned above, this formulation predicts grossly different outcomes for these
models and, therefore, is ultimately capable of separating them, even for very small perturbations.
Retrospective Tests
It is possible to apply DAT retrospectively to any body of data that meet certain constraints. It is critical
to keep in mind the meaning of n, the number of measures of the dependent variable over which to
compute an average during a single trial following a single decision. In terms of their predictions for
experimental results, the crucial distinction between DAT and the micro-AP model is the dependence
of the results upon n; therefore, experiments which are used to test these theories must be those in
which n is manipulated and participants are held blind to its values. May, Spottiswoode, Utts and James
(1994) retrospectively apply DAT to as many data sets as possible, and examine the consequences of any
violations of these criteria.
Aside from these considerations, the application of DAT is straightforward. Having identified the unit
of analysis and Y, simply create a scatter diagram of points (Z², n) and compute a least squares fit to a
straight line. Tables 1 and 2 show that for the micro-AP model, the square of the effect size is the slope of
the resulting fit. A Student's t-test may be used to test the hypothesis that the effect size is zero, and thus
test for the validity of the micro-AP model. If the slope is zero, these same tables show that the intercept
may be interpreted as an AC strength parameter for DAT. A follow-on paper will describe these tech-
niques in detail (May, Spottiswoode, and Utts, 1994).
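A minimal version of this retrospective fit might look as follows (a NumPy-only sketch; the function name and return convention are ours). The returned t statistic for the slope should be referred to a Student's t distribution with len(n) − 2 degrees of freedom.

```python
import numpy as np

def retrospective_dat_test(n_values, z_values):
    """Least-squares fit of Z^2 on n, per the Tables 1 and 2 interpretation.

    Returns the slope (an estimate of eps_AP^2 under micro-AP), its
    t statistic, and the intercept (an estimate of mu_z^2 + sigma_z^2
    under DAT when the slope is consistent with zero).
    """
    n = np.asarray(n_values, dtype=float)
    z2 = np.asarray(z_values, dtype=float) ** 2
    slope, intercept = np.polyfit(n, z2, 1)        # ordinary least squares
    resid = z2 - (slope * n + intercept)
    dof = len(n) - 2
    s2 = resid @ resid / dof                       # residual variance
    se_slope = np.sqrt(s2 / ((n - n.mean()) ** 2).sum())
    return slope, slope / se_slope, intercept
```

A slope t statistic near zero, with a nonzero intercept, is the DAT signature; a significantly positive slope favors micro-AP.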
Prospective Tests
A prospective test of DAT could not only test whether anomalous effects occurred, but would also dif-
ferentiate between micro-AP and DAT. In such tests, n should certainly be a double-blind parameter
and take on at least two values. To check the prediction of a linear functional relationship
between n and E(Z²) that is suggested by the micro-AP model, the more values of n the better. It is not
possible to separate the micro-AP model from DAT at a single value of n.
In any prospective test, it is helpful to know the number of runs, N, that are necessary to determine with
95% confidence, which of the two models best fits the data. Figure 4 displays the problem graphically.
Figure 4. Model Predictions for the Power Calculation.
Under micro-AP, 95% of the values of Z² will be greater than the point indicated in Figure 4. Even if the
measured value of Z² is at this point, we would like the lower limit of the 95% confidence interval for this
value to be greater than the predicted value under the DAT model. Or:
EAP(Z²) − 1.645·σAP/√N − 1.96·σAC/√N > EAC(Z²).
Solving for N in the equality, we find:

N = [3.605·σAP / (EAP(Z²) − EAC(Z²))]².    (1)
Since σAP ≥ σAC, this value of N will always be a larger estimate than that derived from beginning with
DAT and calculating the confidence intervals in the other direction.
Suppose, from an earlier experiment, one can estimate a single-trial effect size for a specific value of n,
say n1. To determine whether the micro-AP model or DAT is the proper description of the mechanism,
we must conduct another study at an additional value of n, say n2. We use Equation 1 to compute how
many runs we must conduct at n2 to assure a separation of mechanism with 95% confidence, and we use
the variances shown in Tables 1 and 2 to compute σAP. Figure 5 shows the number of runs for an RNG-
like experiment as a function of effect size for three values of n2.
We chose n1 = 100 bits because it is typical of the numbers found in the RNG database and the values of
n2 shown are within easy reach of today's computer-based RNG devices. For example, assuming σz =
1.0 and assuming an effect size of 0.004, a value derived from a publication of PEAR data (Jahn, 1982),
then at n1 = 100, μz = 0.004 × √100 = 0.04 and EAC(Z²) = 1.0016. Suppose n2 = 10⁴; then EAP(Z²) =
1.160 and σAP = 1.625. Using Equation 1, we find N = 1368 runs, which can be approximately obtained
from Figure 5. That is, in this example, 1368 runs are needed to resolve the micro-AP model from DAT
at n2 = 10⁴ at the 95% confidence level. Since these runs are easily obtained in most RNG experiments,
an ideal prospective test of DAT, which is based on these calculations, would be to conduct 1500 runs
randomly counterbalanced between n = 10² and n = 10⁴ bits/trial. If the effect size at n = 10² is near
0.004, then we would be able to distinguish between micro-AP and DAT with 95% confidence.
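The arithmetic of this example can be packaged as a short power calculation (a sketch following Equation 1 and the normal-distribution entries of Table 1; the function and variable names are ours):

```python
import math

def runs_needed(eff, n1, n2, sigma_z=1.0):
    """N runs at n2 to separate micro-AP from DAT at 95% confidence (Eq. 1).

    eff is the single-trial effect size measured at sequence length n1.
    """
    mu_z = eff * math.sqrt(n1)                 # AC strength inferred at n1
    e_ac = mu_z**2 + sigma_z**2                # DAT: independent of n
    e_ap = 1.0 + eff**2 * n2                   # micro-AP: linear in n
    sigma_ap = math.sqrt(2.0 * (1.0 + 2.0 * eff**2 * n2))
    return math.ceil((3.605 * sigma_ap / (e_ap - e_ac)) ** 2)

print(runs_needed(0.004, n1=100, n2=10_000))   # 1368 runs, matching the text
print(runs_needed(0.3, n1=2, n2=10))           # ~140 runs (biological example)
```

Rounding up to the next whole run gives 1368 for the RNG example and 141 for the biological example, which the text quotes as approximately 140.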
Figure 5. Runs Required for RNG Effect Sizes (number of runs vs. AC effect size at n1 = 100 bits, for three values of n2).
Figure 6 shows similar relationships for effect sizes that are more typical of anomalous perturbation
experiments using biological target systems (May and Vilenskaya, 1994).
In this case, we chose n1 = 2 because it is easy to use two targets simultaneously. If we assume an effect
size of 0.3 and σz = 1.0, at n2 = 10 we compute EAC(Z²) = 1.180, EAP(Z²) = 1.900, σAP = 2.366, and N =
140, which can be approximately obtained from Figure 6.
We have included n2 = 100 in Figure 6, because this is within reach in cellular experiments although it is
probably not practical for most biological experiments.
Figure 6. Runs Required for Biological Effect Sizes (number of runs vs. effect size at n1 = 2 units, for n2 = 10, 20, and 100).
We chose n1 = 2 units for convenience. For example, in a plant study, the physiological responses can
easily be averaged over two plants, and n2 = 10 is within reason for a second data point. A unit could be a
test tube containing cells or bacteria; the collection of all ten test tubes would simultaneously have to be
the target to meet the constraints of a valid test.
The prospective tests we have described so far are conditional; that is, given an effect size, we provide a
protocol to test if the mechanism for the anomalies is micro-AP or DAT. An unconditional test does not
assume any effect size; all that is necessary is to collect data at a large number of different values of n,
and fit a straight line through the resulting Z²s. The mechanism is micro-AP if the slope is non-zero and
may be DAT if the slope is zero.
Stouffer's Z Tests
One consequence of DAT is that more decision points in an experiment lead to stronger results, because
an operator has more opportunity to exercise AC abilities. We derive a test criterion to determine wheth-
er a force-like interaction or an informational mechanism is a better description of the data.
Consider two experiments of M decisions at n1 and N decisions at n2, respectively. Regardless of the
mechanism, the Stouffer's Z for the first experiment is given by:
ZS(1) = (1/√M) Σj Zj = (√n1/√M) Σj ε1j = √(n1·M)·ε̄1,   (sums over j = 1, …, M)

where ε1j is the effect size for one decision and where ε̄1 is the average effect size over the M decisions.
Under the micro-AP assumption that the effect size, ε, is constant regardless of n, Stouffer's Z in the
second experiment is given by:

ZS(2) = √(n2·N/(n1·M)) · ZS(1).
Under the DAT assumption that the effect size is proportional to 1/√n, the Stouffer's Z in the second
experiment becomes:

ZS(2) = √(N/M) · ZS(1).
As in the other tests of DAT, if data are collected at two values of n, then a test between these Stouffer's
Z values may yield a difference between the competing mechanisms.
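The two scalings amount to a one-line prediction each, which can be sketched as follows (names are ours; the paper does not prescribe code):

```python
import math

def stouffer_z2(z1, n1, m, n2, n_dec, mechanism):
    """Predicted Stouffer's Z for a second experiment of n_dec decisions at
    sequence length n2, given Z_S(1) = z1 from m decisions at n1."""
    if mechanism == "micro-AP":            # effect size constant in n
        return z1 * math.sqrt((n2 * n_dec) / (n1 * m))
    if mechanism == "DAT":                 # effect size proportional to 1/sqrt(n)
        return z1 * math.sqrt(n_dec / m)
    raise ValueError(mechanism)
```

For n2 much larger than n1 the two predictions diverge sharply, which is why a second experiment at a well-separated n is so diagnostic.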
Discussion
We now address the possible n-dependence of the model parameters. A degenerate case arises if εAP is
proportional to 1/√n; if that were the case, we could not distinguish between the micro-AP model and
DAT by means of tests on the n dependence of results. If it were the case that in the analysis of the data
from a variety of experiments, participants, and laboratories, the slope of a Z² vs. n linear least-squares
fit were zero, then either εAP = 0 or εAP is proportional to 1/√n, the accuracy depending upon the
precision of the fit (i.e., errors on the zero slope). An attempt might be made to rescue the micro-AP
hypothesis by explaining the 1/√n dependence of εAP in the degenerate case as a fatigue or some other
time dependence effect. That is, it might be hypothesized that anomalous perturbation abilities would
decline as a function of n; however, it seems improbable that a human-based phenomenon would be so
widely distributed and constant and give the 1/√n dependency in differing protocols needed to imitate
DAT. We prefer to resolve the degeneracy by wielding Occam's razor: if the only type of anomalous
perturbation which fits the data is indistinguishable from AC, and given that we have ample demonstra-
tions of AC by independent means in the laboratory, then we do not need to invent an additional phe-
nomenon called anomalous perturbation. Except for this degeneracy, a zero slope for the fit allows us
to reject all micro-AP models, regardless of their n-dependencies.
DAT is not limited to experiments that capture data from a dynamic system. DAT may also be the mech-
anism in protocols which utilize quasi-static target systems. In a quasi-static target system, a random
process occurs only when a run is initiated; a mechanical dice thrower is an example. Yet, in a series of
unattended runs of such a device there is always a statistical variation in the mean of the dependent
variable that may be due to a variety of factors, such as Brownian motion, temperature, humidity, and
possibly the quantum mechanical uncertainty principle (Walker, 1974). Thus, the results obtained will
ultimately depend upon when the run is initiated. It is also possible that a second-order DAT mecha-
nism arises because of protocol selection: how, and by whom, the order in tri-polar protocols is determined. In
second-order DAT there may be individuals, other than the formal subject, whose decisions affect the
Approved For Release 2000/08/10 : CIA-RDP96-00791 R000200280002-5 12
Approved For Release 2000/08/10 : CIA-RDP96-00791 R000200280002-5
Decision Augmentation Theory: Toward a Model ol AMP 16 May 1995
experimental outcome and are modified by AC. Given the limited possibilities in this case, we might
expect less of an impact from DAT.
In surveying the range of anomalous mental phenomena, we reject the evidence for experimental
macro-AP because of poor artifact control, and we accept the evidence for precognition and micro-AP
because of the large number of studies and the positive results of the meta-analyses. We believe, therefore,
that DAT might be a general model for anomalous mental phenomena in that it reduces mechanisms for
laboratory phenomena to only one: the anomalous trans-temporal acquisition of information.
Acknowledgements
Since 1979, there have been many individuals who have contributed to the development of DAT. We
would first like to thank David Saunders, without whose remark this work would not have been. Beverly
Humphrey kept the philosophical integrity intact, at times under extreme duress. We are greatly
appreciative of Zoltán Vassy, to whom we owe the Z² formalism, and of George Hansen, Donald McCarthy,
and Scott Hubbard for their constructive criticisms and support.
References
Bem, D. J. and Honorton, C. (1994). Does psi exist? Replicable evidence for an anomalous process of
information transfer. Psychological Bulletin, 115, No. 1, 4-18.
Druckman, D. and Swets, J. A. (Eds.) (1988). Enhancing Human Performance: Issues, Theories, and
Techniques. Washington, D.C.: National Academy Press.
Honorton, C., Berger, R. E., Varvoglis, M. P., Quant, M., Derr, P., Schechter, E. I., and Ferrari, D. C.
(1990). Psi communication in the ganzfeld. Journal of Parapsychology, 54, 99-139.
Hubbard, G. S., Bentley, P. P., Pasturel, P. K., and Isaacs, J. (1987). A remote action experiment with a
piezoelectric transducer. Final Report - Objective H, Task 3 and 3a. SRI International Project
1291, Menlo Park, CA.
Hyman, R. and Honorton, C. (1986). A joint communiqué: The psi ganzfeld controversy. Journal of
Parapsychology, 50, 351-364.
Jahn, R. G. (1982). The persistent paradox of psychic phenomena: an engineering perspective.
Proceedings of the IEEE, 70, No. 2, 136-170.
Jahn, R. G. and Dunne, B. J. (1986). On the quantum mechanics of consciousness, with application to
anomalous phenomena. Foundations of Physics, 16, No. 8, 721-772.
May, E. C., Humphrey, B. S., and Hubbard, G. S. (1980). Electronic System Perturbation Techniques.
Final Report. SRI International, Menlo Park, CA.
May, E. C., Radin, D. I., Hubbard, G. S., Humphrey, B. S., and Utts, J. (1985). Psi experiments with
random number generators: an informational model. Proceedings of Presented Papers, Vol. 1, The
Parapsychological Association 28th Annual Convention, Tufts University, Medford, MA, 237-266.
May, E. C. and Vilenskaya, L. (1994). Overview of current parapsychology research in the former
Soviet Union. Subtle Energies, 3, No. 3, 45-67.
Radin, D. I. and Nelson, R. D. (1989). Evidence for consciousness-related anomalies in random
physical systems. Foundations of Physics, 19, No. 12, 1499-1514.
Stanford, R. G. (1974a). An experimentally testable model for spontaneous psi events: I. Extrasensory
events. Journal of the American Society for Psychical Research, 68, 34-57.
Stanford, R. G. (1974b). An experimentally testable model for spontaneous psi events: II. Psychokinetic
events. Journal of the American Society for Psychical Research, 68, 321-356.
Stanford, R. G., Zenhausern, R., Taylor, A., and Dwyer, M. A. (1975). Psychokinesis as psi-mediated
instrumental response. Journal of the American Society for Psychical Research, 69, 127-133.
Stokes, D. M. (1987). Theoretical parapsychology. In Advances in Parapsychological Research 5.
Jefferson, NC: McFarland & Company, Inc., 77-189.
Utts, J. (1991). Replication and meta-analysis in parapsychology. Statistical Science, 6, No. 4, 363-403.
Walker, E. H. (1974). Foundations of paraphysical and parapsychological phenomena. In Oteri, E.
(Ed.), Quantum Physics and Parapsychology: Proceedings of an International Conference. New York,
NY: Parapsychology Foundation, Inc., 1-53.
Walker, E. H. (1984). A review of criticisms of the quantum mechanical theory of psi phenomena.
Journal of Parapsychology, 48, 277-332.
Washburn, S. and Webb, R. A. (1986). Effects of dissipation and temperature on macroscopic quantum
tunneling in Josephson junctions. In Greenberger, D. M. (Ed.), New Techniques and Ideas in Quantum
Measurement Theory. New York, NY: New York Academy of Sciences, 66-77.
Appendix
Mathematical Derivations for the Decision Augmentation Theory
In this appendix we develop the formalism for the Decision Augmentation Theory (DAT). We consider
cases for mean chance expectation, force-like interactions, and informational processes under two
assumptions: normality and Bernoulli sampling. For each of these three models, we compute the
expected values of Z and Z², and the variance of Z².*
Mean Chance Expectation
Normal Distribution
We begin by considering a random variable, X, whose probability density function is normal (i.e.,
N(μ₀, σ₀²)†). After many unbiased measures from this distribution, it is possible to obtain reasonable
approximations to μ₀ and σ₀² in the usual way. Suppose n unbiased measures are used to compute a new
variable, Y, given by:

    Y_k = \frac{1}{n} \sum_{j=1}^{n} X_j.

Then Y is distributed as N(μ₀, σ_n²), where σ_n² = σ₀²/n. If Z is defined as

    Z = \frac{Y_k - \mu_0}{\sigma_n},

then Z is distributed as N(0, 1) and E(Z) is given by:

    E^{N}_{MCE}(Z) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} z\, e^{-0.5 z^2}\, dz = 0.    (1)

Since Var(Z) = 1 = E(Z²) − E²(Z),

    E^{N}_{MCE}(Z^2) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} z^2 e^{-0.5 z^2}\, dz = 1.    (2)

The Var(Z²) = E(Z⁴) − E²(Z²) = E(Z⁴) − 1. But
* We wish to thank Zoltán Vassy for originally suggesting the Z² formalism.
† Throughout this appendix, the notation N(μ, σ²) denotes the density \frac{1}{\sigma\sqrt{2\pi}} e^{-0.5 (x-\mu)^2 / \sigma^2}.
    E^{N}_{MCE}(Z^4) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} z^4 e^{-0.5 z^2}\, dz = 3.

So

    Var^{N}_{MCE}(Z^2) = 2.    (3)
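As a numerical sanity check (ours, not part of the original derivation), Equations 2 and 3 can be verified by simulation using only the Python standard library:

```python
import random
import statistics

random.seed(1)

# Under MCE, Z ~ N(0, 1); Equations 2 and 3 give E(Z^2) = 1 and Var(Z^2) = 2.
z2 = [random.gauss(0.0, 1.0) ** 2 for _ in range(100_000)]

mean_z2 = statistics.fmean(z2)    # should be near 1
var_z2 = statistics.variance(z2)  # should be near 2 (= E(Z^4) - 1 = 3 - 1)
```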
Bernoulli Sampling
Let the probability of observing a one under Bernoulli sampling be given by p₀. After n samples, the
discrete Z-score is given by:

    Z = \frac{k - n p_0}{\sigma_0 \sqrt{n}},

where

    \sigma_0 = \sqrt{p_0 (1 - p_0)}

and k is the number of observed ones (0 ≤ k ≤ n). The expected value of Z is given by:

    E^{B}_{MCE}(Z) = \frac{1}{\sigma_0 \sqrt{n}} \sum_{k=0}^{n} (k - n p_0)\, B_k(n, p_0),    (4)

where

    B_k(n, p_0) = \binom{n}{k} p_0^k (1 - p_0)^{n-k}.

The first term in Equation 4 is E(k), which, for the binomial distribution, is np₀. Thus

    E^{B}_{MCE}(Z) = \frac{1}{\sigma_0 \sqrt{n}} \left[ E(k) - n p_0 \right] = 0.    (5)
The expected value of Z² is given by:

    E^{B}_{MCE}(Z^2) = Var(Z) + E^2(Z) = \frac{Var(k - n p_0)}{n \sigma_0^2} + 0 = \frac{n \sigma_0^2}{n \sigma_0^2} = 1.    (6)
As in the normal case, the Var(Z²) = E(Z⁴) − E²(Z²) = E(Z⁴) − 1. But*
* Johnson, N. L. and Kotz, S., Discrete Distributions, John Wiley & Sons, New York, p. 51, (1969).
    E^{B}_{MCE}(Z^4) = \frac{1}{n^2 \sigma_0^4} \sum_{k=0}^{n} (k - n p_0)^4 B_k(n, p_0) = 3 + \frac{1}{n \sigma_0^2} (1 - 6 \sigma_0^2).

So,

    Var^{B}_{MCE}(Z^2) = 2 + \frac{1}{n \sigma_0^2} (1 - 6 \sigma_0^2) = 2 - \frac{2}{n}, \quad (p_0 = 0.5).    (7)
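Equation 7 can be checked by exact enumeration over the binomial distribution; the short sketch below (ours) computes the moments directly:

```python
from math import comb, sqrt

def bernoulli_z2_moments(n, p0=0.5):
    """Exact E(Z^2) and Var(Z^2) for Z = (k - n*p0) / (sigma0 * sqrt(n))."""
    sigma0 = sqrt(p0 * (1.0 - p0))
    e_z2 = e_z4 = 0.0
    for k in range(n + 1):
        b = comb(n, k) * p0**k * (1.0 - p0) ** (n - k)  # B_k(n, p0)
        z2 = ((k - n * p0) / (sigma0 * sqrt(n))) ** 2
        e_z2 += b * z2
        e_z4 += b * z2 * z2
    return e_z2, e_z4 - e_z2**2

# For p0 = 0.5, Equation 7 predicts Var(Z^2) = 2 - 2/n, e.g. 1.9 at n = 20.
e_z2, var_z2 = bernoulli_z2_moments(20)
```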
Force-Like Interactions
Normal Distribution
Under the perturbation assumption described in the text, we let the mean of the perturbed distribution
be given by μ₀ + ε_P σ₀, where ε_P is an anomalous-perturbation strength parameter that in the general
case may be a function of n and time. The parent distribution for the random variable, X, becomes
N(μ₀ + ε_P σ₀, σ₀²). As in the mean-chance-expectation case, the average of n independent values of X
is Y ~ N(μ₀ + ε_P σ₀, σ_n²). Let

    Y = \mu_0 + \varepsilon_P \sigma_0 + \Delta Y,

where

    \Delta Y = Y - (\mu_0 + \varepsilon_P \sigma_0).

For a mean of n samples, the Z-score is given by

    Z = \frac{Y - \mu_0}{\sigma_n} = \frac{\varepsilon_P \sigma_0 + \Delta Y}{\sigma_n} = \varepsilon_P \sqrt{n} + \zeta,

where ζ is distributed as N(0, 1) and is given by ΔY/σ_n. Then the expected value of Z is given by

    E^{N}_{AP}(Z) = E(\varepsilon_P \sqrt{n} + \zeta) = \varepsilon_P \sqrt{n} + E(\zeta) = \varepsilon_P \sqrt{n},    (8)

and the expected value of Z² is given by

    E^{N}_{AP}(Z^2) = E([\varepsilon_P \sqrt{n} + \zeta]^2) = n \varepsilon_P^2 + 2 \varepsilon_P \sqrt{n}\, E(\zeta) + E(\zeta^2) = 1 + n \varepsilon_P^2,    (9)

since E(ζ) = 0 and E(ζ²) = 1.

In general, Z² is distributed as a non-central χ² with one degree of freedom and non-centrality
parameter nε_P², χ²(1, nε_P²). Thus, the variance of Z² is given by*

    Var^{N}_{AP}(Z^2) = 2(1 + 2 n \varepsilon_P^2).    (10)
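Equations 9 and 10 can likewise be checked by simulating Z = ε_P√n + ζ; the values of n and ε_P below are arbitrary illustrative choices of ours:

```python
import random
import statistics

random.seed(2)

n, eps = 1000, 0.03          # arbitrary sequence length and AP strength
shift = eps * n ** 0.5       # E(Z) = eps * sqrt(n), per Equation 8

z2 = [(shift + random.gauss(0.0, 1.0)) ** 2 for _ in range(100_000)]

mean_z2 = statistics.fmean(z2)    # Equation 9 predicts 1 + n*eps^2 = 1.9
var_z2 = statistics.variance(z2)  # Equation 10 predicts 2*(1 + 2*n*eps^2) = 5.6
```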
Bernoulli Sampling
As before, let the probability of observing a one under mean chance expectation be given by p₀, and the
discrete Z-score be given by:

* Johnson, N. L. and Kotz, S., Continuous Univariate Distributions-2, John Wiley & Sons, New York, p. 134, (1970).
    Z = \frac{k - n p_0}{\sigma_0 \sqrt{n}},

where k is the number of observed ones (0 ≤ k ≤ n). Under the perturbation assumption, we let the
mean of the distribution of the single-bit probability be given by p₁ = p₀ + ε_AP σ₀, where ε_AP is an
anomalous-perturbation strength parameter. The expected value of Z is given by:

    E^{B}_{AP}(Z) = \frac{1}{\sigma_0 \sqrt{n}} \sum_{k=0}^{n} (k - n p_0)\, B_k(n, p_1),

where

    B_k(n, p_1) = \binom{n}{k} p_1^k (1 - p_1)^{n-k}.

The expected value of Z becomes

    E^{B}_{AP}(Z) = \frac{1}{\sigma_0 \sqrt{n}} \left[ \sum_{k=0}^{n} k\, B_k(n, p_1) - n p_0 \right] = \frac{p_1 - p_0}{\sigma_0} \sqrt{n} = \varepsilon_{AP} \sqrt{n}.    (11)

Since ε = E(Z)/√n, ε_AP is also the binomial effect size. The expected value of Z² is given by:

    E^{B}_{AP}(Z^2) = Var(Z) + E^2(Z) = \frac{Var(k - n p_0)}{n \sigma_0^2} + \varepsilon_{AP}^2 n = \frac{p_1 (1 - p_1)}{\sigma_0^2} + \varepsilon_{AP}^2 n.

Expanding in terms of p₁ = p₀ + ε_AP σ₀,

    E^{B}_{AP}(Z^2) = 1 + \varepsilon_{AP}^2 (n - 1) + \frac{\varepsilon_{AP}}{\sigma_0} (1 - 2 p_0).    (12)

If p₀ = 0.5 (i.e., a binary case) and n ≫ 1, then Equation 12 reduces to the E(Z²) in the normal case,
Equation 9.
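Because Equation 12 is an algebraic identity, it can be verified exactly by enumeration (a sketch of ours; the parameter values are arbitrary):

```python
from math import comb, sqrt

p0, eps, n = 0.5, 0.02, 50          # arbitrary illustrative values
sigma0 = sqrt(p0 * (1.0 - p0))
p1 = p0 + eps * sigma0              # perturbed single-bit probability

# Exact E(Z^2) when k ~ Binomial(n, p1) and Z = (k - n*p0)/(sigma0*sqrt(n)).
e_z2 = sum(
    comb(n, k) * p1**k * (1.0 - p1) ** (n - k)
    * ((k - n * p0) / (sigma0 * sqrt(n))) ** 2
    for k in range(n + 1)
)

# Equation 12.
predicted = 1.0 + eps**2 * (n - 1) + (eps / sigma0) * (1.0 - 2.0 * p0)
```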
We begin the calculation of Var(Z²) by using the equation for the jth moment of a binomial
distribution,

    M_j = \left[ \frac{d^j}{d\theta^j} (q + p e^{\theta})^n \right]_{\theta = 0}.

Since Var(Z²) = E(Z⁴) − E²(Z²), we must evaluate E(Z⁴). Or,

    E^{B}_{AP}(Z^4) = \frac{1}{n^2 \sigma_0^4} \sum_{k=0}^{n} (k - n p_0)^4 B_k(n, p_1).

Expanding n⁻²σ₀⁻⁴(k − np₀)⁴ using the appropriate moments, and subtracting E²(Z²), yields

    Var(Z^2) = C_0 + C_1 n + C_{-1} n^{-1}.    (13)
where

    C_0 = 2 - 36 \varepsilon_{AP}^2 + 10 \varepsilon_{AP}^4 + 8 \frac{\varepsilon_{AP}}{\sigma_0} (1 - 2 p_0) + 6 \frac{\varepsilon_{AP}^2}{\sigma_0^2},

    C_1 = 4 \varepsilon_{AP}^2 (1 - \varepsilon_{AP}^2) + 4 \frac{\varepsilon_{AP}^3}{\sigma_0} (1 - 2 p_0), and

    C_{-1} = -6 + 36 \varepsilon_{AP}^2 + \frac{1 - 7 \varepsilon_{AP}^2}{\sigma_0^2} + \frac{\varepsilon_{AP}}{\sigma_0^3} (1 - 2 p_0)(12 p_0^2 - 12 p_0 + 1).

Under the condition that ε_AP ≪ 1 (a frequent occurrence in many experiments), we ignore any terms of
higher order than ε_AP². Then the variance reduces to

    Var(Z^2) = 2 - 36 \varepsilon_{AP}^2 + 8 \frac{\varepsilon_{AP}}{\sigma_0} (1 - 2 p_0) + 6 \frac{\varepsilon_{AP}^2}{\sigma_0^2} + 4 \varepsilon_{AP}^2 n + \frac{1}{n} \left[ -6 + 36 \varepsilon_{AP}^2 + \frac{1 - 7 \varepsilon_{AP}^2}{\sigma_0^2} + \frac{\varepsilon_{AP}}{\sigma_0^3} (1 - 2 p_0)(12 p_0^2 - 12 p_0 + 1) \right].

We notice that when ε_AP = 0, the variance reduces to the mean-chance-expectation case for Bernoulli
sampling. When n ≫ 1, ε_AP ≪ 1, and p₀ = 0.5, the variance reduces to that derived under the normal
distribution assumption. Or,

    Var^{B}_{AP}(Z^2) \approx 2(1 + 2 n \varepsilon_{AP}^2).    (14)
Information Process
Normal Distribution
The primary assumption in this case is that the parent distribution remains unchanged (i.e., N(μ₀, σ₀²)).
It further assumes that because of an anomalous-cognition-mediated bias the sampling distribution is
distorted, leading to a Z-distribution of N(μ_ac, σ_ac²). In the most general case, μ_ac and σ_ac may be
functions of n and time.

The expected value of Z is given (by definition) by

    E^{N}_{AC}(Z) = \mu_{ac}.    (15)

The expected value of Z² is given by definition as

    E^{N}_{AC}(Z^2) = \mu_{ac}^2 + \sigma_{ac}^2.    (16)
The Var(Z²) can be calculated by noticing that

    \frac{Z^2}{\sigma_{ac}^2} \sim \chi^2\!\left(1, \frac{\mu_{ac}^2}{\sigma_{ac}^2}\right).

So the Var(Z²) is given by

    Var\!\left(\frac{Z^2}{\sigma_{ac}^2}\right) = 2 \left( 1 + 2 \frac{\mu_{ac}^2}{\sigma_{ac}^2} \right),

    Var^{N}_{AC}(Z^2) = 2 (\sigma_{ac}^4 + 2 \mu_{ac}^2 \sigma_{ac}^2).    (17)
Bernoulli Sampling
As in the normal case, the primary assumption is that the parent distribution remains unchanged, and
that because of a psi-mediated bias the sampling distribution is distorted, leading to a discrete
Z-distribution characterized by μ_ac(n) and σ_ac²(n). Thus, by definition, the expected values of Z and Z² are
given by

    E^{B}_{AC}(Z) = \mu_{ac},

    E^{B}_{AC}(Z^2) = \mu_{ac}^2 + \sigma_{ac}^2.

For any value of n, estimates of these parameters are calculated from N data points as

    \hat{\mu}_{ac} = \frac{1}{N} \sum_{j=1}^{N} Z_j, and

    \hat{\sigma}_{ac}^2 = \frac{N}{N-1} \left( \frac{1}{N} \sum_{j=1}^{N} Z_j^2 - \hat{\mu}_{ac}^2 \right).    (18)

The Var(Z²) for the discrete case is identical to the continuous case. Therefore,

    Var^{B}_{AC}(Z^2) = 2 (\sigma_{ac}^4 + 2 \mu_{ac}^2 \sigma_{ac}^2).    (19)
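The estimators in Equation 18 are the ordinary sample mean and unbiased sample variance; a small sketch of ours, with hypothetical Z-scores:

```python
import statistics

# Hypothetical Z-scores from N = 8 runs at a common n.
z = [0.3, -0.1, 0.8, 1.2, 0.4, -0.5, 0.9, 0.6]

N = len(z)
mu_hat = sum(z) / N
sigma2_hat = (N / (N - 1)) * (sum(zj * zj for zj in z) / N - mu_hat**2)

# The N/(N-1) factor makes this the usual unbiased sample variance.
assert abs(sigma2_hat - statistics.variance(z)) < 1e-9
```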
Applications of Decision Augmentation Theory 14 May 1995
Applications
of
Decision Augmentation Theory
by
Edwin C. May, Ph.D.
S. James P. Spottiswoode (Consultant)
Science Applications International Corporation
Menlo Park, CA
Jessica M. Utts, Ph.D.
University of California, Davis
Division of Statistics
Davis, CA
Christine L. James
Science Applications International Corporation
Abstract
Decision Augmentation Theory (DAT) provides an informational mechanism for a class of anomalous
mental phenomena which have hitherto been viewed as being caused by a force-like mechanism. Under
specifiable conditions, DAT's predictions for statistical anomalous perturbation databases are different
from those of all force-like mechanisms. For large random number generator databases, DAT predicts
a zero slope for a least-squares fit to the (Z², n) scatter diagram, where n is the number of bits resulting
from a single run and Z is the resulting Z-score. We find a slope of (1.73 ± 3.19) × 10⁻⁶ (t = 0.543, df
= 126, p = 0.295) for the historical binary random number generator database, which strongly suggests
that some informational mechanism is responsible for the anomaly. In a 2-sequence-length analysis of a
limited set of RNG data from the Princeton Engineering Anomalies Research laboratory, we find that a
force-like explanation misses the observed data by 8.6σ; however, the observed data are within 1.1σ of
the DAT prediction. We also apply DAT to one pseudorandom number generator study and find that its
predicted slope is not significantly different from the expected value for an informational mechanism.
We review and comment on six published articles that discussed DAT's earlier formalism (i.e., Intuitive
Data Sorting). We found two studies that support a force-like mechanism. Our analysis of Braud's 1990
hemolysis study confirms his finding in favor of an influence model over a selection one (p = 0.023), and
Braud and Schlitz (1989) demonstrated a force-like interaction in their remote staring experiment (p =
0.020). We provide six circumstantial arguments against an influence hypothesis. Our anomalous
cognition research suggests that the quality of the data is proportional to the total change of Shannon
entropy. We demonstrate that the change of Shannon entropy of a binary sequence from chance is
independent of sequence length; thus, we suggest that a fundamental argument supports DAT over
influence models. In our conclusion, we suggest that, except for one special case, the physical random
number generator database cannot be explained by any influence model, and that contradicting evidence
from two experiments on biological systems should inspire more investigations in a way that would
allow valid DAT analyses.
Introduction
May, Utts, and Spottiswoode (1994) proposed Decision Augmentation Theory as a general model of
anomalous mental phenomena.* DAT holds that anomalous cognition information is included along
with the usual inputs that result in a final human decision that favors a "desired" outcome. In statistical
parlance, DAT says that a slight, systematic bias is introduced into the decision process by anomalous
cognition.

This concept has the advantage of being quite general. We know of no experiment that is devoid of at
least one human decision; thus, DAT might be the underlying basis for anomalous mental phenomena.
May et al. (1994) mathematically developed this concept and constructed a retrospective test algorithm
that can be applied to existing databases. In this paper, we summarize the theoretical predictions of
DAT, review the criteria for valid retrospective tests, and analyze the historical random number generator
(RNG) database. In addition, we summarize the findings from one prospective test of DAT and
comment on the published criticisms of an earlier formulation, which was then called Intuitive Data
Sorting. We conclude with a discussion of the RNG results that provide a strong circumstantial argument
against a force-like explanation. As part of this review, we show that one biological-AP experiment
is better described by an influence model.
Review of Decision Augmentation Theory
Since the formal discussion of DAT is statistical, we will describe the overall context for the development
of the model from that perspective. Consider a random variable, X, that can take on continuous
values (e.g., the normal distribution) or discrete values (e.g., the binomial distribution). Examples of X
might be the hit rate in an RNG experiment, the swimming velocity of single cells, or the mutation rate
of bacteria. Let Y be the average of X computed over n values, where n is the number of items that are
collected as the result of a single decision: one trial. Often this may be equivalent to a single effort
period, but it also may include repeated efforts. The key point is that, regardless of the effort style, the
average value of the dependent variable is computed over the n values resulting from one decision
point. In the examples above, n is the sequence length of a single run in an RNG experiment, the number
of swimming cells measured during the trial, or the number of bacteria-containing test tubes present
during the trial. As we will show below, force-like effects require that the Z-score, which is computed
from the Ys, increase as the square root of n. In contrast, informational effects will be shown to be
independent of n.
Under DAT, we assume that the underlying parent distribution of a physical system remains unperturbed;
however, the measurements of the physical system are systematically biased by an AC-mediated
informational process. Since the deviations seen in actual experiments tend to be small in magnitude, it
is safe to assume that the measurement biases are small and that the sampling distribution will remain
normal; therefore, we assume the bias appears as small shifts of the mean and variance of the sampling
distribution as:

    Z \sim N(\mu_z, \sigma_z^2),
* The Cognitive Sciences Laboratory has adopted the term anomalous mental phenomena instead of the more widely known psi.
Likewise, we use the terms anomalous cognition and anomalous perturbation for ESP and PK, respectively. We have done so
because we believe that these terms are more naturally descriptive of the observables and are neutral in that they do not imply
mechanism. These new terms will be used throughout this paper.
where μ_z and σ_z are the mean and standard deviation of the sampling distribution. Under the null
hypothesis, μ_z = 0.0 and σ_z = 1.0.
Review of an Influence Model
For comparison's sake, we summarize a class of influence models. We begin with the assumption that a
putative anomalous force would give rise to a perturbational interaction, by which we mean that, given an
ensemble of entities (e.g., random binary bits), an anomalous force would act equally on each member
of the ensemble, on the average. We call this type of interaction micro-AP.

In the simplest micro-AP model, the perturbation induces a change in the mean of the parent distribution
but does not affect its variance. We parameterize the mean shift in terms of a multiplier of the
initial standard deviation. Thus:

    \mu_1 = \mu_0 + \varepsilon_{AP} \sigma_0,

where μ₁ and μ₀ are the means of the perturbed and unperturbed distributions, respectively, and where
σ₀ is the standard deviation of the unperturbed distribution. ε_AP can be considered the AP effect size.
Under the null hypothesis for binary RNG experiments, μ₁ = μ₀ = 0.5, σ₀ = 0.5, and ε_AP = 0.
The expected value and the variance of Z² for mean chance expectation and under the force-like and
information assumptions for the normal distribution are shown in Table 1. The details of the calculations
can be found in May, Utts, and Spottiswoode (1994).

Table 1. Normal Parent Distribution

    Quantity  | MCE | Micro-AP          | DAT
    E(Z²)     | 1   | 1 + ε_AP² n       | μ_z² + σ_z²
    Var(Z²)   | 2   | 2(1 + 2 ε_AP² n)  | 2(σ_z⁴ + 2 μ_z² σ_z²)

Figure 1 graphically displays these theoretical calculations for the three mechanisms.
Figure 1. Predictions of MCE, micro-AP, and DAT
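The model predictions in Table 1 can also be expressed as a small executable sketch (function and parameter names are ours, not the authors'):

```python
# Table 1 expressed as executable predictions; names are ours, not the authors'.
def predictions(model, n=1, eps_ap=0.0, mu_z=0.0, sigma_z=1.0):
    """Return (E(Z^2), Var(Z^2)) for a normal parent distribution."""
    if model == "MCE":
        return 1.0, 2.0
    if model == "micro-AP":
        return 1.0 + eps_ap**2 * n, 2.0 * (1.0 + 2.0 * eps_ap**2 * n)
    if model == "DAT":
        return (mu_z**2 + sigma_z**2,
                2.0 * (sigma_z**4 + 2.0 * mu_z**2 * sigma_z**2))
    raise ValueError(model)

# micro-AP's E(Z^2) grows linearly with n; DAT's does not depend on n at all.
e_ap, _ = predictions("micro-AP", n=1000, eps_ap=0.01)   # 1 + 0.0001 * 1000
e_dat, _ = predictions("DAT", mu_z=0.2, sigma_z=1.0)     # 0.04 + 1.0
```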
This formulation predicts grossly different outcomes for these models and, therefore, is ultimately
capable of separating them, even for very small effects. The important differences are in the slope and
intercept values. MCE gives a slope of zero and an intercept of one. DAT predicts a slope of zero but an
intercept greater than one, and micro-AP predicts an intercept of one but a slope greater than zero.
Monte Carlo Verification
The expressions shown in Table 1 are representations which arise from simple algebraic manipulations
of the basic mathematical assumptions of the models. To verify that these expressions give the expected
results, we used a published pseudorandom number generator (Lewis, 1975) with well-understood
properties to produce data that mimicked the results under the three models (i.e., MCE, micro-AP, and
DAT). Our standard implementation of the pseudo-RNG allows the integers in the range [0, 2¹⁵ − 1] as
potential seeds. For the sequence lengths 100, 500, 1000, and 5000, we computed Z-scores for all
possible seeds with an effect size of 0.0 to simulate MCE and an effect size of 0.03 to simulate micro-AP. To
simulate DAT, we used the fact that in the special case where the effect size varies as 1/√n, micro-AP
and DAT are equivalent. For this case we used effect sizes of 0.030, 0.0134, 0.0095, and 0.0042 for the
above sequence lengths, respectively. Figures 2a-c show the results of 100 trials, which were chosen
randomly from the appropriate Z-score data sets, at each of the sequence lengths for each of the models.
In each figure, MCE is indicated by a horizontal solid line at Z² = 1.
The slope of a least-squares fit computed under the MCE simulation was −(2.81 ± 2.49) × 10⁻⁶, which
corresponded to a p-value of 0.812 when tested against zero, and the intercept was 1.007 ± 0.005, which
corresponds to a p-value of 0.131 when tested against one. Under the micro-AP model, an estimate of
the effect size using the expression in Table 1 was ε_AP = 0.0288 ± 0.002, which is in good agreement with
0.03, the value that was used to create the data. Similarly, under DAT the slope was −(2.44 ± 57.10) ×
10⁻⁶, which corresponded to a p-value of 0.515 when tested against zero, and the intercept was
1.050 ± 0.001, which corresponds to a p-value of 2.4 × 10⁻⁴ when tested against one.

Thus, we are able to say that the Monte Carlo simulations confirm the simple formulation shown in
Table 1.
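The Monte Carlo procedure described above can be approximated with a short sketch; here we use Python's built-in Mersenne Twister generator rather than the Lewis (1975) pseudo-RNG and its seed scheme, so the numbers are illustrative only:

```python
import random
import statistics

random.seed(3)

def run_z(n, p):
    """Z-score of n Bernoulli(p) bits tested against p0 = 0.5."""
    k = sum(random.random() < p for _ in range(n))
    return (k - n * 0.5) / (0.5 * n ** 0.5)

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

lengths = [100, 500, 1000, 5000]
trials = 200

# micro-AP: a fixed effect size, so E(Z^2) = 1 + eps^2 * n grows with n.
xs, ys = [], []
for n in lengths:
    for _ in range(trials):
        xs.append(n)
        ys.append(run_z(n, 0.5 + 0.03 * 0.5) ** 2)   # p1 = p0 + eps * sigma0
slope_ap, intercept_ap = fit_line(xs, ys)

# DAT mimicked by eps proportional to 1/sqrt(n): the slope should vanish.
xs, ys = [], []
for n in lengths:
    eps = 0.3 / n ** 0.5
    for _ in range(trials):
        xs.append(n)
        ys.append(run_z(n, 0.5 + eps * 0.5) ** 2)
slope_dat, intercept_dat = fit_line(xs, ys)
```

Under micro-AP the fitted slope should approach ε² = 9 × 10⁻⁴, while under the 1/√n (DAT-equivalent) case the slope should be near zero with an intercept above one.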
Retrospective Tests
It is possible to apply DAT retrospectively to any body of data that meets certain constraints. It is critical
to keep in mind the meaning of n: the number of measures of the dependent variable over which to
compute an average during a single trial following a single decision. In terms of their predictions for
experimental results, the crucial distinction between DAT and the micro-AP model is the dependence
of the results upon n; therefore, experiments which are used to test these theories ideally should be
those in which experiment participants are blind to n, and where the distribution of n does not contain
extreme outliers.
Aside from these considerations, the application of DAT is straightforward. Having identified the unit
of analysis and n, simply create a scatter diagram of points (Z², n) and compute a weighted least-squares
fit to a straight line. Table 1 shows that for the micro-AP model, the slope of the resulting fit is the square
of the AP effect size. A Student's t-test may be used to test the hypothesis that the AP effect size is zero,
and thus test for the validity of the micro-AP model. If the slope is zero, these same tables show that the
intercept may be interpreted as a strength parameter for DAT. In other words, an intercept larger than
one would support the DAT model, while a slope greater than zero would support the micro-AP model.
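The retrospective procedure, a weighted least-squares fit of (Z², n) with a t-test on the slope, can be sketched as follows; the study values and weights here are hypothetical:

```python
from math import sqrt

# Hypothetical (z2, n, weight) triples for five studies; per the text the
# weight of a study's mean Z^2 is N/2.0, the reciprocal of its MCE variance.
studies = [(1.05, 100, 20.0), (0.98, 500, 10.0), (1.04, 1000, 30.0),
           (1.02, 2000, 15.0), (0.99, 5000, 25.0)]

def weighted_fit(data):
    """Weighted least-squares fit of z2 = a + b*n; returns (a, b, se_b)."""
    sw = sum(w for _, _, w in data)
    mx = sum(w * n for _, n, w in data) / sw
    my = sum(w * z2 for z2, _, w in data) / sw
    sxx = sum(w * (n - mx) ** 2 for _, n, w in data)
    sxy = sum(w * (n - mx) * (z2 - my) for z2, n, w in data)
    b = sxy / sxx
    return my - b * mx, b, sqrt(1.0 / sxx)  # se_b for known per-point variances

a, b, se_b = weighted_fit(studies)
t_slope = b / se_b   # micro-AP predicts b = eps_AP^2 > 0; DAT predicts b = 0
```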
If the DAT strength is presumed to be constant (i.e., μ_z and σ_z are constant), then an additional test is
possible. That is, in two experiments involving N decisions at n₁ and M decisions at n₂, respectively, DAT
predicts that the Stouffer's Zs of these experiments should be in the ratio of √(N/M), and
√(N/M) × √(n₂/n₁) for AP.
Figure 2. Z² vs. n for Monte Carlo Simulations of MCE, micro-AP, and DAT.
Historical Binary RNG Database
Radin and Nelson (1989) analyzed the complete literature (i.e., over 800 individual studies) of
consciousness-related anomalies in random physical systems. They demonstrated that a robust statistical
anomaly exists in that database. Although they analyzed this data from a number of perspectives, they
report an average Z/√n effect size of approximately 3 × 10⁻⁴, regardless of the analysis type. Radin
and Nelson did not report p-values, but they quote a mean Z of 0.645 and a standard deviation of 1.601
for 597 studies. We compute a single-mean t-score of 9.844, df = 596 (p = 3.7 × 10⁻²³).

We returned to the original publications of all the binary RNG studies from those listed by Radin and
Nelson and identified 128 studies in which we could compute, or were given, the average Z-score, the
number of runs, N, and the sequence length, n, which ranged from 16 to 10,000. For each of these
studies we computed:

    \overline{Z^2} = \bar{Z}^2 + \frac{N-1}{N} s_z^2.    (1)
Since we were unable to determine the standard deviations of the Z-scores from the literature, we
assumed that s_z = 1.0 for each study. We see from Table 1 that under mean chance expectation the
expected variance of each Z² is 2.0, so that the estimated standard deviation of \overline{Z^2} for a given study is
√(2.0/N).
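For a single hypothetical study, Equation 1 and the MCE error estimate can be computed as (the numbers are ours, for illustration only):

```python
from math import sqrt

# Hypothetical study: N = 40 runs, reported mean Z, and assumed s_z = 1.0.
N, z_bar, s_z = 40, 0.31, 1.0

z2_bar = z_bar**2 + ((N - 1) / N) * s_z**2   # Equation 1
se_z2 = sqrt(2.0 / N)                        # MCE standard deviation of mean Z^2
```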
Figure 3 shows a portion of the 128 data points (\overline{Z^2}, n). MCE is shown as a solid line (i.e., Z² = 1), and
the expected best-fit lines for two assumed AP effect sizes, 0.01 and 0.003, respectively, are shown as
short dashed lines. We calculated a weighted (i.e., using N/2.0 as the weights) least-squares fit to an a +
b·n straight line for the 128 data points and display it as a long-dashed line. For clarity, we have offset
and limited the Z² axis and have not shown the error bars for the individual points, but the weights and
all the data were used in the least-squares fit. We found an intercept of a = 1.036 ± 0.004. The 1-standard-error
bar for the intercept is small and is shown in Figure 3 in the center of the sequence range. The
t-score for the intercept being different from 1.0 (i.e., t = 9.1, df = 126, p = 4.8 × 10⁻²⁰) is in good
agreement with that derived from Radin and Nelson's analysis. Since we set the standard deviations for all
the Zs equal to one, and since Radin and Nelson showed that the overall standard deviation was 1.6, we
would expect that our analysis would be more conservative than theirs, because a larger standard deviation
would increase our computed value for the intercept.
The important result, however, was that the slope of the best-fit line was b = (1.73 ± 3.19) × 10⁻⁶
(t = 0.543, df = 126, p = 0.295), which is not significantly different from zero. Adding and subtracting
one standard error to the slope estimate produces an interval that encompasses zero. Even though a
very small AP effect size might fit the data at large sequence lengths, it is clear in Figure 3 what happens
at small sequence lengths; an ε_AP = 0.003 suggests a linear fit that is significantly below the actual fit.
Figure 3. Binary RNG Database: Slope and Intercept for Best-Fit Line.
The sequence lengths from this database are not symmetric, nor are they uniformly distributed; they
contain outliers (i.e., median = 64, average = 566). Figure 4 shows that the lower half of the data,
however, is symmetric and nearly uniformly distributed (i.e., median = 35, average = 34). Since the criterion
for a valid retrospective test is that n should be uniform, or at least not contain outliers, we analyzed
the two median halves independently. The intercept of the weighted best-fit line for the uniform lower
half is a = 1.022 ± 0.006 (t = 3.63, df = 62, p = 2.9 × 10⁻⁴), and the slope is b = (−0.034 ± 3.70) × 10⁻⁴
(t = −0.010, df = 62, p = 0.504). The fits for the upper half yield a = 1.064 ± 0.005 (t = 13.47, df = 62, p
= 1.2 × 10⁻⁴¹) and b = (−4.52 ± 2.38) × 10⁻⁶ (t = −1.903, df = 62, p = 0.969) for the intercept and
slope, respectively.
Since the best retrospective test for DAT is one in which the distribution of n contains no outliers, the
statistically zero slope for the fit to the lower half of the data is inconsistent with a simple AP model.
Although the same conclusion could be reached from the fits to the database in its entirety (i.e., Figure
3), we suggest caution in that this fit could possibly be distorted by the distribution of the sequence
lengths. That is, a few points at large sequence lengths can easily influence the slope. Since the slope for
the upper half of the data is statistically slightly negative, it is problematical to assign an imaginary AP
effect size to these data. More likely, the results are distorted by a few outliers in the upper half of the
data.
Figure 4. Historical Database: Distribution of Sequence Lengths < 64.
From these analyses, it appears that Z² does not linearly depend upon the sequence length; however, since the scatter is so large, even a linear model is not a good fit (i.e., χ² = 171.2, df = 125, p = 0.0038), where χ² is a goodness-of-fit measure given in general by:

\chi^2 = \sum_{j=1}^{\nu} \left( \frac{y_j - f_j}{\sigma_j} \right)^2,

where the σ_j are the errors associated with data point y_j, f_j is the value of the fitted function at point j, and ν is the number of data points.
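The χ² expression transcribes directly; a minimal sketch, with the function name ours:

```python
def chi_squared(y, f, sigma):
    """Goodness-of-fit: sum over data points of ((y_j - f_j) / sigma_j)^2."""
    return sum(((yj - fj) / sj) ** 2 for yj, fj, sj in zip(y, f, sigma))

# Two points with residuals of 0 and 2 error bars give chi^2 = 0 + 2^2 = 4.
value = chi_squared([1.0, 2.0], [1.0, 1.0], [1.0, 0.5])
```

A fit is "good" when χ² is comparable to the degrees of freedom, as the text's significance calculation illustrates.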
A "good" fit to a set of data should lead to a non-significant χ². The fit is not improved by using higher order polynomials (i.e., χ² = 170.8, df = 124; χ² = 174.1, df = 123; for quadratics and cubics, respectively). If, however, the AP effect size were any monotonic function of n other than the degenerate case where the AP effect size is exactly proportional to 1/√n, it would manifest as a non-zero slope in the regression analysis.
Within the limits of this retrospective analysis, we conclude for RNG experiments that we must reject all influence models which propose a shift of the mean of the parent distribution.
Princeton Engineering Anomalies Research Laboratory RNG Data
The historical database that we analyzed does not include the extensive RNG data from the Princeton Engineering Anomalies Research (PEAR) laboratory, since the total number of bits in their experiments exceeds the total amount in the entire historical database. For example, in a recent report Nelson, Dobyns, Dunne, and Jahn (1991) analyze 5.6 × 10⁶ trials, all at n = 200 bits. In this section, we apply DAT retrospectively to their published work where they have examined other sequence lengths; however, even in these cases, they report over five times as much data as in the historical database.
Jahn (1982) reported an initial RNG data set with a single operator at n = 200 and 2,000. Data were collected both in the automatic mode (i.e., a single button press produced 50 trials at n) and the manual mode (i.e., a single button press produced one trial at n). From a DAT perspective, data were actually collected at four values of n (i.e., 200, 2,000, 200 × 50 = 10,000, and 2,000 × 50 = 100,000). Unfortunately, data from these two modes were grouped together and reported only at 200 and 2,000 bits/trial. It would seem, therefore, that we would be unable to apply DAT to these data. Jahn, however, reports that the different modes "...give little indication of importance of such factors in the overall performance." This qualitative statement suggests that the micro-AP model is indeed not a good description for these data, because, under micro-AP, we would expect stronger effects (i.e., higher Z-scores) at the longer sequence lengths.
Nelson, Jahn, and Dunne (1986) describe an extensive RNG and pseudo-RNG database in the manual mode only (i.e., over 7 × 10⁶ trials); however, whereas Jahn provides the means and standard deviations for the hits, Nelson et al. report only the means. We are unable to apply DAT to these data, because any assumption about the standard deviations would be highly amplified by the massive data set.
As part of a cooperative agreement in 1987 between PEAR and the Cognitive Sciences Program at SRI International, we analyzed a set of RNG data from a single operator.* Since they supplied the raw data for each button press, we were able to analyze these data at two extreme values of n. We combined the individual trial Z-scores for the high and low aims, because our analysis is two-tailed, in that we examine Z².
Given that the data sets at n = 200 and 100,000 were independently significant (Stouffer's Z of 3.37 and 2.45, respectively), and given the wide separation between the sequence lengths, we used DAT as a retrospective test on these two data points.
* We thank R. Jahn, B. Dunne, and R. Nelson for providing this raw data for our analysis in 1987.
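Stouffer's Z, used above to combine the trial scores, is simply the sum of N independent Z-scores divided by √N; a minimal sketch (function name ours):

```python
import math

def stouffer_z(zs):
    """Stouffer's method: combine N independent Z-scores into a single Z."""
    return sum(zs) / math.sqrt(len(zs))

# Four trials each one standard deviation above chance combine to Z = 2.
combined = stouffer_z([1.0, 1.0, 1.0, 1.0])
```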
Because we are examining only two values of n, we do not compute a best-fit slope. Instead, as outlined
in May, Utts, and Spottiswoode (1994), we compare the micro-AP prediction to the actual data at a
single value of n.
At n = 200, 5,918 trials yielded Z = 0.044 ± 1.030 and Z² = 1.063 ± 0.019. We compute a proposed AP effect size Z/√n = 3.10 × 10⁻³. With this effect size, we computed what would be expected under the micro-AP model at n = 100,000. Using the theoretical expressions in Table 1, we computed Z² = 1.961 ± 0.099. The 1-sigma error is derived from the theoretical variance divided by the actual number of trials (597) at n = 100,000. The observed values were Z = 0.100 ± 0.997 and Z² = 1.002 ± 0.050. A t-test between the observed and expected values of Z² gives t = 8.643, df = 1192. Considering this t as equivalent to a Z, the data at n = 100,000 fail to meet what would be expected under the influence model by 8.6σ. Suppose, however, that the effect size observed at n = 100,000 (3.18 × 10⁻⁴) better represents the AP effect size. We computed the predicted value of Z² = 1.00002 ± 0.018 for n = 200. Using a t-test for the difference between the observed value and this predicted one gives t = 2.398, df = 11,834. The micro-AP model fails in this direction by more than 2.3σ. DAT predicts that Z² would be statistically equivalent at the two sequence lengths, and we find that to be the case (t = 1.14, df = 6513, p = 0.127).
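The comparison above can be sketched as follows. We take the force-model expectation for the mean of Z² to be 1 + nε², which reproduces the predictions quoted in the text (about 1.96 at n = 100,000 and 1.00002 at n = 200); the function names are ours, and the simple two-sample t ignores the exact degrees-of-freedom bookkeeping:

```python
import math

def predicted_z2(eps, n):
    """Force-model (micro-AP) expectation for the mean of Z^2: 1 + n*eps^2.
    (Consistent with the numbers quoted from Table 1 of May, Utts, and
    Spottiswoode, 1994.)"""
    return 1.0 + n * eps * eps

def t_between(m1, se1, m2, se2):
    """t-score for the difference of two means with standard errors se1, se2."""
    return (m1 - m2) / math.sqrt(se1 ** 2 + se2 ** 2)

eps_200 = 0.044 / math.sqrt(200)           # effect size inferred from the n = 200 data
expected = predicted_z2(eps_200, 100_000)  # ~1.96 under micro-AP
t = t_between(expected, 0.099, 1.002, 0.050)  # ~8.6-sigma discrepancy
```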
Jahn (1982) indicates in their RNG data that "Traced back to the elemental binary samples, these values imply directed inversion from chance behavior of about one or one and a half bits in every one thousand...." Assuming 1.5 excess bits/1,000, we calculate an AP effect size of 0.003, which is consistent with the observed value in their n = 200 data set. Since this was the value we used in our DAT analysis, we are forced to conclude that this data set from PEAR is inconsistent with the simple micro-AP model, and that Jahn's statement is not a good description of the anomaly.
We urge caution in interpreting these calculations. As is often the case in a retrospective analysis, some of the required criteria for a meaningful test are violated. These data were not collected when the operators were blind to the sequence length. Secondly, these data represent only a fraction of PEAR's database.
A Prospective Test of DAT
In developing a methodology for future tests, Radin and May (1986) worked with two operators who
had previously demonstrated strong ability in RNG studies. To create the binary sequences, they used a pseudo-RNG based on a shift-register algorithm by Kendell that has been shown to meet the general criteria for "randomness" (Lewis, 1975).
The operators were blind to which of nine different sequences (i.e., n = 101, 201, 401, 701, 1001, 2001,
4001, 7001, 10001 bits) were used in any given trial, and the program was such that the trials lasted for a
fixed time period and feedback was presented only after the trial was complete. Thus, the criteria for a
valid test of DAT had been met, except that the source of the binary bits was a pseudo-RNG.
We re-analyzed the combined data from this experiment with the current Z-score formalism of DAT.
For the 200 individual runs (i.e., 10 at each of the sequence lengths for each of the two participants) we found the best-fit line to yield a slope = 4.3 × 10⁻⁸ ± 1.6 × 10⁻⁶ (t = 0.028, df = 8, p = 0.489) and an intercept = 1.16 ± 0.06 (t = 2.89, df = 8, p = 0.01). The slope interval easily encompasses zero and is
* The original IDS analysis required the sequence lengths to be odd because of the logarithmic formalism.
Approved For Release 2000/08/10 : CIA-RDP96-00791 R000200280002-5 9
J00riggfidnForf P Ice en2 OgOm/O8/?aql: Ci&-RDP96-00791 R000200280002-5
so &
of
ST
0 0 Ru on on gory 14 May 1995
not significantly different from zero, and the intercept significance level (p = 0.01) is consistent with what Radin and May reported earlier.
Since the pseudo-RNG seeds and bit streams were saved for each trial, it was possible to determine if the experiment sequences exactly matched the ones produced by the shift-register algorithm; they did. Since their UNIX-based Sun Microsystems workstations were synchronized to the system clock, any momentary interruption of the clock would "crash" the machine, but no such crashes occurred. Therefore, we believe no force-like interaction occurred.
To explore the timing aspects of the experiment, Radin and May reran each run with pseudo-RNG seeds ranging from −5 to +5 clock ticks (i.e., 20 ms/tick) from the actual seed used in the run. We plot the resulting run effect sizes, which we computed from the experimental F-ratios (Rosenthal, 1991), for operator 531 in Figure 5. The estimated standard errors are the same for each seed shift and equal 0.057.
Figure 5. Seed Timing for Operator 531 (298 Runs). (Plot omitted; x-axis: relative seed position; y-axis: effect size.)
Radin and May erroneously concluded that the significant differences between zero and adjacent seed positions were meaningful, and that the DAT ability was effective within 20 milliseconds. In fact, the situation shown in Figure 5 is expected. Unlike true random number generators, in which slight changes in timing produce essentially the same sequence, pseudo-RNGs produce totally different sequences as a function of single-digit seed changes. Thus, it would be surprising if the seed-shift display produced anything but a spike at seed shift zero. We will return to this point in our analysis of some of the published remarks on our theory.
From this prospective test of DAT, we conclude that for pseudo-RNGs it is possible to select a proper
entry point into a bit stream to produce significant deviations from mean chance expectation that are
independent of sequence length.
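The seed-sensitivity point can be illustrated with any deterministic generator; the sketch below uses Python's Mersenne Twister as a stand-in for the experiment's shift-register algorithm (it is not the actual experimental generator):

```python
import random

def bit_stream(seed, n):
    """Deterministic pseudo-random bit stream for a given seed."""
    rng = random.Random(seed)
    return [rng.randint(0, 1) for _ in range(n)]

def agreement(a, b):
    """Fraction of positions on which two bit streams agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

# Same seed reproduces the stream exactly; a seed one tick away yields a
# statistically unrelated stream (~50% agreement), hence the zero-shift spike.
same = agreement(bit_stream(42, 1000), bit_stream(42, 1000))
shifted = agreement(bit_stream(42, 1000), bit_stream(43, 1000))
```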
The Literature: Review and Comment
We have identified six published articles that have commented upon the Intuitive Data Sorting theory,
the earlier name for DAT. In this section, we chronologically summarize and comment on each report.
Walker - September 1987
In his first of two criticisms of Intuitive Data Sorting (IDS), Walker (1987) suggested that his Monte Carlo simulations did not fit the predictions of the model. He generated a single deviant set of 100 bits (i.e., Z = 2.33, p = 0.01), and he inserted this same sequence as the first 100 bits of 400 otherwise randomly generated sequences ranging from 100 to 10⁶ bits in length. Walker's analysis of these sequences did not yield a least-squares slope of −0.5 as predicted under the IDS formalism. Walker concluded that the model was incorrect. Walker's sequences, however, are not the type that are generated in AP experiments or the type for which the IDS model is valid.
May et al. (1985) were explicit about the character of the sequences that fit the IDS model. Specifically, Walker quotes May et al.: "Using psi-acquired information, individuals are able to select locally deviant subsequences from a large random sequence." (Italics are used in the original May paper.) The very next sentence on page 249 of the reference says, "Such an ability, if mediated by precognition, would allow individuals (subjects or experimenters) to initiate a collection unit of continuous samples (this has been reported as a trial, a block, a run, etc.) in such a way as to optimize the final result." (Italics added here for emphasis.) Walker continued, "Indeed, the only way the subject can produce results that agree with the data is to wait for an extra-chance run that matches the experimental run length." In the final analysis, Walker actually supported our contention that individuals select deviant subsequences. Both from our text and the formalism in our 1985 paper, it is clear that what we meant by a "large random sequence" was large compared to the trial length, n.
In his second criticism of IDS in the same paper, Walker proposed that individuals would have to exhibit physiologically impossible control over timing (e.g., when to press a button). As evidence apparently in favor of such an exquisite timing ability, he referred to the data presented by Radin and May (1986) that we have discussed above. (Please see Figure 5.) Walker suggested that Radin and May's result, therefore, supported his quantum mechanical observer theory. It is beyond the scope of this paper to critique Walker's quantum mechanical models, but we would hope they do not depend upon his analysis of Radin and May's results. The enhanced hitting at zero seed and the suppressed values at ± one 20-ms clock tick that we show in Figure 5 are the expected result based upon the well-understood properties of pseudo-RNGs and do not represent the precision of the operator's reaction time.
We must consider how it is possible with normal human reactions to obtain significant scores, which can only happen in 20-ms windows. In typical visual reaction time measurements, Woodworth and Schlosberg (1960) found a standard deviation of 30 ms. If we assume these human reactions are typical of those for AC performance and are normally distributed, we compute a maximum probability of being within a 20-ms window (i.e., centered about the mean) of 23.5%. For the worst case, the operators must "hit" significant seeds less often than 23.5% of the time. Radin and May do not report the number of significant runs, so we provide a worst-case estimate. Given that they quote a p-value of 0.005 for 500 trials, we find that 39 trials must be independently significant. That is, the accumulated binomial probability is 0.005 for 39 hits in 500 trials with an event probability of 0.05. This corresponds to a hitting rate (i.e., 39/500) of only 7.8%, a value well within the capability of human reaction times. We recognize that it is not a requirement to hit only on significant seeds; however, all other seeds leading to positive Z-scores are less restrictive than the case we have presented.
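The worst-case estimate can be checked with an exact binomial tail sum; a sketch (function name ours) that reproduces the quoted accumulated probability of roughly 0.005:

```python
from math import comb

def binom_tail(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p), summed exactly."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(k, n + 1))

# Probability of 39 or more independently significant runs out of 500,
# with each run significant at the 0.05 level by chance.
tail = binom_tail(39, 500, 0.05)  # roughly 0.005
```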
The zero-center "spike" in Figure 5 misled Walker and others into thinking that exceptional timing was required to produce the observed deviations. As we have shown, this is not the case, and, therefore, Walker's second criticism of the IDS theory is not valid.
Bierman - 1988
Bierman (1988) attempted to test the IDS model with a gifted subject. His experimental design ap-
peared to meet most of the criteria for a valid test of the model; however, Bierman found no evidence of
an anomaly and stated that no conclusions could be drawn from his work. We encourage Bierman to
continue with this design and to be specific with what he would expect to see if DAT were the correct
mechanism compared to if it were not.
Braud and Schlitz - 1989
Braud and Schlitz (1989) conducted an electrodermal PK experiment specifically to test the IDS model.
They argued that if the mechanism of the effect were informational, then allowing participants more
opportunities to select locally deviant values of the dependent variable should yield stronger effects. In
their experiment, 12 electrodermal sampling epochs were either initiated individually by a press of a
button, or all 12 were determined as a result of the first button press. Braud and Schlitz hypothesized
that under IDS, they would expect to see a larger overall effect in the former condition. They found that the single button press data yielded a significant result, whereas the multiple press data scored at chance (t_single[31] = 2.14, p = 0.02; t_multiple[31] = −0.53). They correctly concluded, therefore, that their data were more consistent with an influence mechanism than with an informational one.
One implication of their result, which is supported by Braud's 1990 study (see below), is that perhaps there is something unique about biological systems that allows force-like interactions, whereas physical systems such as RNGs do not.
Vassy - 1990
Vassy (1990) used a similar timing argument to refute the IDS model as did Walker (1987). Vassy generated pseudo-RNG single bits at a rate of one each 8.7 ms. He argued that if IDS were operating, a subject would be more likely to identify bursts of ones rather than single ones, given the time between consecutive bits. While he found significant evidence for the primary task of "selecting" individual bits, he found no evidence that these hits were embedded in excess clusters of ones.
We compute that the maximum probability of a hit within an 8.7-ms window centered on the mean of the normal reaction curve with a standard deviation of 30 ms (Woodworth and Schlosberg, 1960) is 11.5%. Vassy quotes an overall Z-score for 100 runs of 2.39. From this, we compute a mean Z of 0.239 for each run of 36 bits. To obtain this result requires an excess hitting of 0.717 bits, which corresponds to an excess hitting rate of 2%. Given that 11.5% is the maximum one can expect with normal human reaction times, Vassy's results easily allow for individual bit selection and, thus, cannot be used to reject the DAT model on the basis of timing.
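The window-probability and excess-hitting arithmetic above can be verified directly; a sketch with function names ours:

```python
import math

def centered_window_prob(width_ms, sigma_ms):
    """Probability that a normal reaction time falls in a window of the given
    width centered on its mean (the maximum possible hit rate)."""
    half = width_ms / 2.0
    return math.erf(half / (sigma_ms * math.sqrt(2.0)))

p_hit = centered_window_prob(8.7, 30.0)  # ~0.115, the 11.5% quoted above

# Excess-hitting arithmetic for Vassy's result:
mean_z = 2.39 / math.sqrt(100)              # mean Z per run, from the overall Z
excess_bits = mean_z * 0.5 * math.sqrt(36)  # since Z = excess / (0.5 * sqrt(n))
excess_rate = excess_bits / 36              # ~2 percent
```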
Braud - 1990
In a cooperative effort with SRI International, Braud (1990) conducted a biological AP study with human red blood cells as the target system. The study was designed, in part, as a prospective test of DAT, so all conditions for a valid test were satisfied. Braud found that a significant number of individuals were independently able to "slow" the rate of hemolysis (i.e., the destruction of red blood cells in saline solution) in what he called the "protect" mode. Using data from the nine significant participants, Braud found support in favor of micro-AP over DAT. Figure 6 shows the results of our re-analysis of all of Braud's raw data using our more modern formalism of DAT.
Figure 6. DAT Analysis of Hemolysis Data. (Plot omitted; x-axis: number of test tubes from 0 to 10; legend: effort data, control data, predicted AP, predicted DAT.)
The solid line indicates the theoretical mean chance expectation. The squares are the mean values of Z² for the control data, and the error bars indicate the 1-standard error for the 32 trials in the study. We notice that the control data with eight test tubes are significantly below chance (t = −2.79, df = 62, p = 0.996). Compared to the chance line, the effort data are significant (t = 4.04, df = 31, p = 7.6 × 10⁻⁵) for eight test tubes and nearly so for n = 2 (t = 2.06, df = 31, p = 0.051). The x at n = 8 indicates the calculated value of the mean of Z² assuming that the effect size at n = 2 was entirely because of AP; similarly, the x at n = 2 indicates the calculated value assuming that the effect size, which was observed at n = 8, was totally due to AP. These AP predictions are not significantly different from the observed data (t = 0.156, p = 0.431, df = 62 and t = 0.906, p = 0.184, df = 62, at n = 2 and 8, respectively). Whereas DAT predicts no differences between the data at the end points for n, we find a significant difference (t = 2.033, p = 0.023, df = 62). That is, to a statistical degree, the data at n = 8 cannot be explained by selection alone. Thus, we concur with Braud's original conclusion; these results indicate a possible force-like relationship between mental intent and biological consequences.
It is difficult to conclude from our analysis of a single study with only 32 trials that AP is part of nature; nonetheless, this result is very important. Taken with the results of Braud and Schlitz (1989), the evidence of possible AP on biological systems is growing. May and Vilenskaya (1993) and Vilenskaya and May (1995) report that the preponderance of the research on anomalies in the Former Soviet Union is
the study of AP on biological systems. Their operators, as do ours, report internal experiences suggestive of a force-like connection between them and their biological targets.
Dobyns - 1993
Dobyns (1993) presents a method for comparing what he calls the "influence" and "selection" models, corresponding to what we have been calling micro-AP and DAT. He uses data from 490 "tripolar sets" of experimental runs at PEAR. For each set, there was a high-aim, a baseline, and a low-aim condition. The three values produced were then sorted into which one was actually highest, in the middle, and lowest for each set. The data were then summarized into a 3 × 3 matrix, where the rows represented the three intentions, and the columns represented the actual ordering. If every attempt had been successful, the diagonal of the matrix would consist of the number of tripolar sets, namely 490. We present the data portion of Dobyns' table from page 264 of the reference as our Table 2:
Table 2.
Scoring Data From Dobyns (1993)

                     Actual
Intention    High   Middle   Low    Total
High          180     167    143     490
Baseline      159     156    175     490
Low           151     167    172     490
Total         490     490    490
Dobyns computes an aggregate likelihood ratio of his predictions for the DAT and micro-AP models and concludes in favor of the influence model with a ratio of 28.9 to one.
However, there are serious problems with the methods used in Dobyns' paper. In this paper we outline only two of the difficulties. To fully explain them would require a level of technical discussion not suitable for a short summary such as this.
One problem is in the calculation of the likelihood ratio function using his Equation 6, which we reproduce from page 265 of the reference:

B(p|q) = \frac{p_1^{n_1}\, p_2^{n_2}\, p_3^{n_3}}{q_1^{n_1}\, q_2^{n_2}\, q_3^{n_3}},

where p and q are the predicted rank frequencies for each aim under the influence and selection models, respectively, and the n_i are the observed frequencies for each aim. We agree that this relationship correctly gives the likelihood ratio for comparing the two models for one row of Table 2. However, immediately following that equation, Dobyns writes, "The aggregate likelihood of the hypothesis over all three intentions may be calculated by repeating the individual likelihood calculation for each intention, and the total likelihood will simply be the product of factors such as (6) above for each of the three intentions."
That statement is incorrect. A combined likelihood is found by multiplying the individual likelihoods only if the random variables are independent of each other (DeGroot, 1986, p. 145). Clearly, the rows of the table are not independent. In fact, if you know any two of the rows, the third is determined exactly. The correct likelihood ratio needs to build that dependence into the formula.*
A second technical problem with the conclusion that the data support the influence model is that the
method itself strongly supports the influence model. As noted by Dobyns, "In fact, applying the test to
data sets that, by construction, contain no effect, yields strong odds (ranging, in a modest Monte Carlo
database, from 8.5 to over 100) in favor of the influence model (page 268)." The actual data in his paper
yielded odds of 28.9 to one in favor of the influence model; however, this value is well within the re-
ported limits from his "influence-less" Monte Carlo data.
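Dobyns' single-row likelihood ratio is easy to transcribe; the sketch below uses placeholder predicted frequencies p and q, since the actual model predictions are not reproduced in this paper. Multiplying such factors across rows, as Dobyns does, would be valid only if the rows were independent, which they are not:

```python
def row_likelihood_ratio(p, q, n):
    """Likelihood ratio (influence vs. selection) for ONE row of observed
    rank counts n, given each model's predicted rank frequencies p and q
    (Dobyns' Equation 6)."""
    ratio = 1.0
    for pi, qi, ni in zip(p, q, n):
        ratio *= (pi / qi) ** ni
    return ratio

# High-aim row of Table 2 with hypothetical predicted frequencies:
p_influence = [0.36, 0.33, 0.31]  # placeholder influence-model prediction
q_selection = [1 / 3, 1 / 3, 1 / 3]  # placeholder selection-model prediction
lr = row_likelihood_ratio(p_influence, q_selection, [180, 167, 143])
```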
Under DAT it is possible that AC-mediated selection might occur at the protocol level, but the primary way is through timing: initiating a run to capitalize upon a locally deviant subsequence. How this might work with dynamic RNG devices is clear; wait until such a deviant sequence is in your immediate future and initiate the run in time to capture it. With "static" devices, such as PEAR's random mechanical cascade device, how timing enters in is less obvious. Under closer inspection, however, even with this device there is a statistical variation among unattended control runs. That is, there is never a series of control runs that gives exactly the same mean. Physical effects, such as Brownian motion, temperature gradients, etc., can account for the observed variance in the absence of human operators. Thus, when a run is initiated to capture favorable local "environmental" factors remains the operative issue with regard to DAT, even for "static" devices. Dobyns does not consider this case at all in his analysis. If DAT enters in at the protocol-selection level, as it probably does, it is likely to be a second-order contribution because of the limited possibilities from which to select (i.e., six in the tripolar case).
Finally, a major problem with Dobyns' conclusion, which was pointed out when he first presented this paper at a conference (May, 1990), is that the likelihood ratio supports the influence model even for their pseudo-RNG data. Dobyns dismisses this finding (page 268) all too easily, given the preponderance of evidence that suggests that no influence occurs during pseudo-RNG studies (Radin and May, 1986).
Aside from the technical flaws in Dobyns' likelihood ratio arguments, and even ignoring the problem with the pseudo-RNG analysis, we reject his conclusions simply because they hold in favor of influence even in Monte Carlo-constructed unperturbed data.
Circumstantial Evidence Against an AP Model for RNG Data
Experiments with hardware RNG devices are not new. In fact, the title of Schmidt's very first paper on
the topic (1969) portended our conclusion, "Precognition of a Quantum Process." Schmidt lists PK as a
third option after two possible sources for precognition, and remarks, "The experiments done so far do
not permit a distinction (if such a distinction is at all meaningful) between the three possibilities." From
1969 onward, the RNG research has been strongly oriented toward a PK model. The term micro-PK,
itself, embeds the force concept further into the lexicon of RNG descriptions.
* Dobyns agrees on this point-private communication.
In this section, we examine a number of RNG experimental results that provide circumstantial evidence against the AP hypothesis. Any single piece of evidence could be easily dismissed; however, taken together, they demonstrate a substantial case against AP.
Internal Complexity of RNG Devices and Source Independence
Schmidt (1974) conducted the first experiment to explore potential dependencies upon the internal workings of his generators. Since by definition AP implies a force or influence, it seemed reasonable to expect that an influence should depend upon the details of the target system. In this study, one generator produced individual binary bits, which were derived from the β-decay of ⁹⁰Sr, while the other "binary" output was a majority vote from 100 bits, each of which was derived from a fast electronic diode. Schmidt reports individually significant effects with both generators, yet does not observe a significant difference between the generators.
This particular study is interesting, quite aside from the timing and majority vote issues; the binary streams were derived from fundamentally different physical sources. Radioactive β-decay is governed by the weak nuclear force, and electronic devices (e.g., noise diodes) are governed by the electromagnetic force. Schematically speaking, the electromagnetic force is approximately 1,000 times as strong as the weak nuclear force, and modern high-energy physics has shown them to be fundamentally different after about 10⁻¹⁰ seconds after the big bang (Raby, 1985). Thus, a putative AP-force would have to interact equally with these two forces; and since there is no mechanism known that will cause the electromagnetic and weak forces to interact with each other, it is unlikely that AP will turn out to be the first coupling mechanism. The lack of difference between β-decay and noise-diode generators was confirmed years later by May et al. (1980).
We have already commented upon one aspect of the timing issue with regard to Radin and May's (1986) experiment and the papers by Walker (1987) and Vassy (1990). May (1975) introduced a scheme to remove any first-order biases in binary generators that also is relevant to the timing issue. The output of his generator was a match or anti-match between the random bit stream and a target bit. One mode of the operation of the device, which May describes, included an oscillating target bit: one oscillation per bit at approximately a 1 MHz rate.* May and Honorton (1975) and Honorton and May (1975) reported significant effects with the RNG operating in this mode. Thus, significant effects can be seen even with devices that operate in the microsecond time domain, which is three orders of magnitude faster than any known physiological process.
Effects with Pseudorandom Number Generators
Pseudorandom number generators are, by definition, those that depend upon an algorithm, which is usually implemented on a computer. Radin (1985) analyzed all the pseudo-RNGs commonly in use and found that they require a starting value (i.e., a seed), which is often derived from the computer's system clock. As we noted above, Radin and May (1986) showed that the bit stream, which proved to be "successful" in a pseudo-RNG study, was bit-for-bit identical with the stream, which was generated later, but with the same seed. With that generator, at least, there was no change from the expected bit stream.
* Later, this technique was adopted by Jahn (1982) for use in the RNG devices at PEAR.
Perhaps it is possible that the seed generator (i.e., the system clock) was subjected to some AP interaction. We propose two arguments against this hypothesis:
(1) Even a one-cycle interruption of a computer's system clock will usually invoke a system crash, an event not often reported in pseudo-RNG experiments.
(2) Computers use crystal oscillators as the basis for their internal clocks. Crystal manufacturers usually quote errors in the stated oscillation frequency of the order of 0.001 percent. That translates to 500 cycles for a 50 MHz crystal, or to 10 µs in time. Assuming that the quoted error is a 1-σ estimate, and that a putative AP interaction acts within the ± 2-σ domain, then shifting the clock by this amount might account for only one seed shift in Radin and May's experiment. By Monte Carlo methods, we determined that, given a random entry into seed-space, the average number of ticks to reach a "significant" seed is 10; therefore, even if AP could shift the oscillators by 2-σ, it cannot account for the observed data.
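The crystal-drift arithmetic in point (2) can be checked directly:

```python
clock_hz = 50e6        # 50 MHz system clock
tolerance = 1e-5       # quoted crystal error of 0.001 percent
drift_cycles = clock_hz * tolerance   # 500 cycles of drift per second
drift_time = drift_cycles / clock_hz  # equivalently, 10 microseconds of timing error
```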
Since computers in pseudo-RNG experiments are not reported as "crashing" often, it is safe to assume
that pseudo-RNG results are only due to AC. In addition, since the results of pseudo-RNG studies are
statistically inseparable from those reported with true RNGs, it is also reasonable to assume that the
mechanisms are similarly AC-based.
Precognitive AC
Using the tools of modern meta-analysis, Honorton reviewed the precognition card-guessing database
(Honorton and Ferrari, 1989). This analysis included 309 separate studies reported by 62 investigators.
Nearly two million individual trials were contributed by more than 50,000 subjects. The combined ef-
fect size was 0.020 ± 0.002, which corresponds to an overall combined effect of 11.4σ. Two impor-
tant results emerge from Honorton's analysis. First, it is often stated by critics that the best results are
from studies with the least methodological controls. To check this hypothesis, Honorton devised an
eight-point quality measure (e.g., automated recording of data, proper randomization techniques) and
scored each study with regard to these measures. There was no significant correlation between study
quality and study score. Second, if researchers improved their experiments over time, one would expect
a significant correlation of study quality with date of publication. Honorton found r = 0.246, df = 307,
p = 2 × 10⁻⁵. In brief, Honorton concluded that a statistical anomaly exists in this database that cannot be
explained by poor study quality or a large variety of other hypotheses, including the file drawer; there-
fore, a potential mechanism underlying DAT has been verified.
SRI International's RNG Experiment
May, Humphrey, and Hubbard (1980) conducted an extensive RNG study at SRI International in 1979.
They applied state-of-the-art engineering and methodology to construct two true RNGs, one based on
the β-decay of ¹⁴⁷Pm and the other based on an MD-20 noise diode from Texas Instruments. It is be-
yond the scope of this paper to describe, in detail, the intricacies of this experiment; however, we will
discuss those aspects that are pertinent to this discussion.
Technical Details
Each of the two sources was battery operated and optically coupled to a Digital Equipment Corpora-
tion LSI-11/23 computer. Fail-safe circuitry would disable the sources if critical physical parameters
(e.g., battery voltages and currents, temperature) exceeded preset ranges. Both sources were subjected to
environmental testing which included extreme temperature cycles, vibration tests, E&M, and nuclear
gamma and neutron radiation tests. Both sources behaved as expected, and the critical parameters,
such as temperature, were monitored and their data stored along with the experimental data.
A source was sampled at a 1 kHz rate. After eight milliseconds, the resulting byte was sent to the comput-
er while the next byte was being obtained. In this way, a continuous stream of 1 ms data was presented to
the computer. May et al. had specified, in advance, that bit number 4 was the designated target bit.
Thus each byte provided 3 ms of bits prior to the target and 4 ms of bits after the target bit.
A trial was defined as a definitive outcome from a sequential analysis of bit four from each byte. In
exchange for not specifying the number of samples in advance, sequential analysis requires that the
Type I and Type II errors and the chance and extra-chance hitting rates be specified in advance. In May
et al.'s two-tailed analysis, α = β = 0.05, and the chance and extra-chance hitting rates were 0.50 and 0.52,
respectively. The expected number of samples to reach a definitive decision was approximately 3,000.
The outcome from a single trial could be in favor of a hitting rate of 0.52, of 0.48, or of chance (0.50), with
the usual risk of error in accordance with the specified Type I and Type II errors.
Each of seven operators participated in 100 trials of this type. For an operator's data to reach indepen-
dent statistical significance, the operator had to produce 16 successes in 100 trials, where a success was
defined as extra-chance hitting (i.e., the exact binomial probability of 16 or more successes in 100 trials
with an event probability of 0.10 is 0.04, where one less success is not significant). Two operators produced 16
and 17 successful trials, respectively.
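The exact binomial criterion quoted above can be checked directly; `binom_tail` is our helper name, not part of the original analysis code.

```python
from math import comb

def binom_tail(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p), computed exactly."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

p16 = binom_tail(16, 100, 0.10)  # about 0.04: significant at the 0.05 level
p15 = binom_tail(15, 100, 0.10)  # about 0.07: one fewer success is not
print(f"P(X >= 16) = {p16:.4f}, P(X >= 15) = {p15:.4f}")
```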
Temporal Analysis
We analyzed the 33 trials from the two independently significant operators from May et al.'s experi-
ment. Each of the 33 trials consisted of approximately 3,000 bits of data with -3 bits and +4 bits of 1
ms/bit temporal history surrounding the target bit. We argue that if the significance observed in the
target bits was because of AP, we would expect a large correlation with the target bit's immediate neigh-
bors, which are only ±1 ms away. As far as we know, there is no physiological process that can be
cognitively, or in any other way, manipulated within a millisecond. We might even expect a 100% cor-
relation under the complete AP model.
We computed the linear correlation coefficients between bits 3 and 4, 4 and 5, and 3 and 5. For binary
data:

Nφ² ~ χ²(df = 1),

where φ is the linear correlation coefficient and N is the number of samples. Since we examined three
different correlations for 33 trials, we computed 99 different values of Nφ². Four of them produced χ²s
that were significant, well within chance expectation. The complete distribution is shown in Figure 7.
We see that there is excellent agreement of the 99 correlations with the χ² distribution for one degree of
freedom, which is shown as a smooth curve.
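The χ²(1) behavior of Nφ² for independent binary sequences is easy to verify by simulation; the sequence length and simulation count below are illustrative, not those of the experiment.

```python
import math
import random

def n_phi_squared(x, y):
    """N * phi^2 for two equal-length binary sequences (phi = Pearson r)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum(a * b for a, b in zip(x, y)) / n - mx * my
    return n * (cov / math.sqrt(mx * (1 - mx) * my * (1 - my))) ** 2

rng = random.Random(7)
stats = []
for _ in range(300):
    x = [rng.randint(0, 1) for _ in range(1000)]
    y = [rng.randint(0, 1) for _ in range(1000)]
    stats.append(n_phi_squared(x, y))

# Under independence, N*phi^2 ~ chi^2(1): about 5% should exceed 3.841.
frac = sum(s > 3.841 for s in stats) / len(stats)
print(f"fraction significant at 0.05: {frac:.3f}")
```

Four significant values out of 99, as observed, is just what independence predicts (expected ≈ 5).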
We conclude, therefore, that there was no evidence beyond chance to suggest that the target bit neigh-
bors were affected even when the target bit analysis produced significant evidence for an anomaly. So,
knowing the physiological limitations of the human system, we further concluded that the observed
effects could not have arisen from a human-mediated force (i.e., AP).
Mathematical Model of the Noise Diode
Because of the unique construction parameters of Texas Instruments' MD-20 noise diode, May et al.
were able to construct a quantum mechanical model of the detailed workings of the device. This model
contained all known properties of the material and its construction parameters. For example, the band
gap energy in Si, the effective mass of an electron or hole in the semiconductor, and the impurity con-
centration were among the parameters for the model. The model was successful at calculating the
diode's known and measured behavior as a function of temperature. May et al. were able to simulate
their RNG experiment down to the quantum mechanical details of the noise source. They hoped that
by adjusting the model's parameters so that the computed output agreed with the experimental one,
that they could gain insight as to where the influence "entered" the device.
May et al. were not able to find a set of model parameters that mimicked their RNG data. For example,
changing the band gap energy for short periods of time, increasing or reducing the electron's effective
mass, or redistributing or changing the impurity content produced no unexpected changes in the device
output. The only device behavior that could be affected was its known dependence on temperature.
Because of the construction details of the physical RNG, this result could have been anticipated. The
changes that could be simulated in the model were all in the microsecond domain because of the details
of the device. Both with the RNG and in its model, the diode's multi-MHz output was filtered by a
100-kHz-wide bandwidth filter. Thus, any microsecond-scale changes would not pass through the filter. In
short, because of this filtering, the RNG was particularly insensitive to potential changes of the physical
parameters of the diode.
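The filtering argument can be illustrated with a first-order low-pass response. The actual filter order and shape in May et al.'s hardware are not specified in the text; only the 100-kHz bandwidth is, so this is a sketch of the idea that microsecond-scale (MHz) perturbations are strongly attenuated.

```python
import math

FC = 100e3  # filter bandwidth in Hz, from the text

def gain(f):
    """Magnitude response of an assumed first-order low-pass with cutoff FC."""
    return 1.0 / math.sqrt(1.0 + (f / FC) ** 2)

# A perturbation lasting ~1 microsecond has spectral content near 1 MHz,
# which even this gentle single-pole filter attenuates about tenfold;
# slower (kHz-scale) structure passes essentially untouched.
print(f"gain at 1 MHz: {gain(1e6):.3f}, gain at 1 kHz: {gain(1e3):.5f}")
```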
Yet solid statistical evidence for an anomaly was seen by May et al. Since the diode device was shown
mathematically and empirically to be insensitive to environmental and physical changes, these results
must have been a result of AC rather than AP. In fact, this observation, coupled with the bit-timing
argument described above, led May et al. to question force-like models in RNG studies
in general.
Figure 7. Observed and Theoretical Correlation Distributions.
Summary of Circumstantial Evidence Against AP
We have identified six circumstantial arguments that, when taken together, provide increasingly diffi-
cult requirements that must be met by a putative AP force. In summary, the RNG database demon-
strates that:
(1) Data are independent of internal complexity of the hardware RNG device.
(2) Data are independent of the physical mechanism producing the randomness (i.e., weak nuclear or
electromagnetic).
(3) Effects with pseudorandom generators are statistically equivalent to those observed with true
hardware generators.
(4) Reasonable AP models of mechanism do not fit the data.
(5) In one study, bits which are ± 1 ms from a "perturbed" target bit are themselves unperturbed.
(6) A detailed model of a diode noise source, which includes all known physics of the device, could not
simulate the observed data streams.
In addition, AC, which is a mechanism to describe the data, has been confirmed in non-RNG experi-
ments. We conclude, therefore, that an AP force that is consistent with the database must:
• Be equally coupled to the electromagnetic and weak nuclear forces.
• Be mentally mediated in times shorter than one millisecond.
• Follow a 1/√n law.
For these to be true, an AP force would be at odds with an extensive amount of verified physics and
common behavioral observables. We are not saying, therefore, that it cannot exist; rather, we are sug-
gesting that instead of having to force ourselves to invent a whole new science, we should look for ways
in which AP might fit into the present world view. In addition, we should invent information-based and
testable alternative mechanisms for the experimental observables.
Discussion and Conclusions
Our recent results in the study of anomalous cognition (May, Spottiswoode, and James, 1994) suggest
that the quality of AC is proportional to the change in Shannon entropy. Following Vassy (1990), we
compute the change in Shannon entropy for an extra-chance binary sequence of length n. The total
change of entropy is given by:

ΔS = S₀ − S,

where, for an unbiased binary sequence of length n, S₀ = n, and S is given by:

S = −n p₁ log₂ p₁ − n (1 − p₁) log₂(1 − p₁).

Let p₁ = 0.5 (1 + ε) and assume that ε, the effect size, is small compared to one (i.e., typical RNG effect
sizes are of the order of 3 × 10⁻⁴). Using the approximation:

ln(1 + ε) ≈ ε − ε²/2,

we find that S is given by:

S ≈ n − n ε² / (2 ln 2),
or that the total change of entropy for a biased binary sequence is given by:

ΔS = S₀ − S = n ε² / (2 ln 2).

Since our analysis of the historical RNG database shows that the effect size is proportional to 1/√n,
the total change of Shannon entropy becomes a constant that is independent of the sequence length:

ΔS = constant.
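This constancy is easy to verify numerically using the exact binary entropy rather than the small-ε approximation; the constant c below is an arbitrary illustrative choice, not a fitted value.

```python
import math

def binary_entropy(p):
    """Shannon entropy, in bits, of a single binary symbol with P(1) = p."""
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

c = 0.03                                  # assumed constant in eps = c / sqrt(n)
predicted = c * c / (2 * math.log(2))     # dS = n*eps^2 / (2 ln 2) = c^2 / (2 ln 2)

deltas = []
for n in (10_000, 40_000, 160_000):
    eps = c / math.sqrt(n)
    dS = n * (1.0 - binary_entropy(0.5 * (1 + eps)))  # exact S0 - S
    deltas.append(dS)
    print(f"n = {n:>6}: dS = {dS:.6e} (predicted {predicted:.6e})")
```

The exact ΔS stays fixed as n grows sixteen-fold, as the derivation predicts.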
We have seen in our other AC experiments (May, Spottiswoode, and James, 1994) that the quality of
the data is proportional to the change of the target entropy. In RNG experiments the quality of the data
is equivalent to the excess hitting, which according to DAT is mediated by AC and should be indepen-
dent of the sequence length. We have shown above that the quality of RNG data depends upon the
change of target entropy and is independent of the sequence length. Therefore we suggest that the
change of target entropy may account for successful AC and RNG experiments.
Braud's study of AP on red blood cells and Braud and Schlitz's study on electrodermal effects imply that
there is something unique about living systems. Before we would be willing to declare that AP is a valid
mechanism for biological experiments, more than two, albeit well designed and executed, studies are
needed.
When DAT is applied to the RNG database, a simple force-like perturbational model fails, by many
orders of magnitude, as a viable candidate for the mechanism. In addition, when viewed along with the
collective circumstantial arguments against a force-like explanation, it is clear that another model is
required. Any new model must explain why quadrupling the number of bits in the sequence length fails
to produce a Z-score twice as large.
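The contrast can be made concrete with a line of arithmetic, since Z = ε√n; the baseline effect size and sequence length below are illustrative choices only.

```python
import math

# Force-like model: eps is constant, so Z = eps * sqrt(n) grows with sqrt(n).
# DAT-like model: eps is proportional to 1/sqrt(n), so Z is flat.
n0, eps0 = 10_000, 3e-4
c = eps0 * math.sqrt(n0)  # DAT constant chosen to match the force model at n0

zs_force = [eps0 * math.sqrt(n) for n in (n0, 4 * n0)]
zs_dat = [(c / math.sqrt(n)) * math.sqrt(n) for n in (n0, 4 * n0)]
print(f"force model: Z = {zs_force[0]:.3f} -> {zs_force[1]:.3f} (doubles)")
print(f"DAT model:   Z = {zs_dat[0]:.3f} -> {zs_dat[1]:.3f} (unchanged)")
```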
Given that one possible information mechanism (i.e., precognitive AC) can be, and has been, indepen-
dently confirmed in the laboratory, and given the weight of the empirical, yet circumstantial, arguments
taken together against AP, we conclude that the anomalous results from the RNG studies arise not be-
cause of a mentally mediated force, but rather because of a human ability to be a mental opportunist by
making AC-mediated decisions to capitalize on the locally deviant circumstances.
Generally, we suggest that future studies be designed in such a way that the criteria, as outlined in this
paper and in May, Utts, and Spottiswoode (1994), conform to a valid DAT analysis. Our discipline has
evolved to the point where we can no longer be satisfied with yet one more piece of evidence of a statisti-
cal anomaly. We must identify the sources of variance as suggested by May, Spottiswoode, and James
(1994); limit them as much as possible; and apply models, such as DAT, which can begin to shed light on
the physical, physiological, and psychological mechanisms of anomalous mental phenomena.
References
Bierman, D. J. (1988). Testing the IDS model with a gifted subject. Theoretical Parapsychology, 6, 31-36.
Braud, W. G. and Schlitz, M. J. (1989). Possible role of Intuitive Data Sorting in electrodermal
biological psychokinesis (Bio-PK). The Journal of the American Society for Psychical Research, 83,
No. 4, 289-302.
Braud, W. G. (1990). Distant mental influence of rate of hemolysis of human red blood cells. The Journal
of the American Society for Psychical Research, 84, No. 1, 1-24.
DeGroot, Morris H. (1985). Probability and Statistics, 2nd Edition. Reading, MA: Addison-Wesley
Publishing Co.
Dobyns, Y. H. (1993). Selection versus influence in remote REG anomalies. Journal of Scientific
Exploration. 7, No. 3, 259-270.
Honorton, C. and May, E. C. (1975). Volitional control in a psychokinetic task with auditory and visual
feedback. Research in Parapsychology, 1975, 90-91.
Honorton, C. and Ferrari, D. C. (1989). "Future telling": A meta-analysis of forced-choice precognition
experiments, 1935-1987. Journal of Parapsychology, 53, 281-308.
Jahn, R. G. (1982). The persistent paradox of psychic phenomena: An engineering perspective.
Proceedings of the IEEE, 70, No. 2, 136-170.
Lewis, T. G. (1975). Distribution Sampling for Computer Simulation. Lexington, MA: Lexington
Books.
May, E. C. (1975). PSIFI: A physiology-coupled, noise-driven random generator to extend PK studies.
Research in Parapsychology, 1975, 20-22.
May, E. C. and Honorton, C. (1975). A dynamic PK experiment with Ingo Swann. Research in
Parapsychology, 1975, 88-89.
May, E. C., Humphrey, B. S., and Hubbard, G. S. (1980). Electronic System Perturbation Techniques. Final
Report. SRI International, Menlo Park, CA.
May, E. C., Radin, D. I., Hubbard, G. S., and Humphrey, B. S. (1985). Psi experiments with random
number generators: An informational model. Proceedings of Presented Papers, Vol. I, The
Parapsychological Association 28th Annual Convention, Tufts University, Medford, MA, 237-266.
May, E. C. (1990). As chair for the session at the annual meeting of the Society for Scientific
Exploration in which this original work was presented, I pointed out the problem of the likelihood
ratio for the pseudo-random-number-generator data from the floor of the convention.
May, E. C., Spottiswoode, S. James P., and James, C. L. (1994). Shannon entropy as an intrinsic target
property: Toward a reductionist model of anomalous cognition. Submitted to The Journal of
Parapsychology.
May, E. C., Utts, J. M., and Spottiswoode, S. J. (1994). Decision augmentation theory: Toward a model of
anomalous mental phenomena. Submitted to The Journal of Parapsychology.
May, E. C. and Vilenskaya, L. (1994). Overview of current parapsychology research in the former Soviet
Union. Subtle Energies, 3, No. 3, 45-67.
Nelson, R. D., Jahn, R. G., and Dunne, B. J. (1986). Operator-related anomalies in physical systems and
information processes. Journal of the Society for Psychical Research, 53, No. 803, 261-285.
Nelson, R. D., Dobyns, Y. H., Dunne, B. J., and Jahn, R. G. (1991). Analysis of Variance of REG
Experiments: Operator Intention, Secondary Parameters, Database Structures. PEAR
Laboratory Technical Report 91004, School of Engineering, Princeton University.
Raby, S. (1985). Supersymmetry and cosmology. In Supersymmetry, Supergravity, and Related Topics:
Proceedings of the XVth GIFT International Seminar on Theoretical Physics, Sant Feliu de
Guixols, Girona, Spain. Singapore: World Scientific Publishing Co. Pte. Ltd., 226-270.
Radin, D. I. (1985). Pseudorandom number generators in psi research. Journal of Parapsychology, 49,
No. 4, 303-328.
Radin, D. I. and May, E. C. (1986). Testing the Intuitive Data Sorting model with pseudorandom
number generators: A proposed method. The Proceedings of Presented Papers of the 29th Annual
Convention of the Parapsychological Association, Sonoma State University, Rohnert Park, CA,
539-554.
Radin, D. I. and Nelson, R. D. (1989). Evidence for consciousness-related anomalies in random
physical systems. Foundations of Physics, 19, No. 12, 1499-1514.
Rosenthal, R. (1991). Meta-Analytic Procedures for Social Research. Applied Social Research Methods
Series, Vol. 6. Newbury Park, CA: Sage Publications.
Schmidt, H. (1969). Precognition of a quantum process. Journal of Parapsychology, 33, No. 2, 99-108.
Schmidt, H. (1974). Comparison of PK action on two different random number generators. Journal of
Parapsychology, 38, No. 1, 47-55.
Vassy, Z. (1990). Experimental study of precognitive timing: Indications of a radically noncausal
operation. Journal of Parapsychology, 54, 299-320.
Vilenskaya, L. and May, E. C. (1994). Anomalous mental phenomena research in Russia and the
Former Soviet Union: A follow up. Submitted to the 1994 Annual Meeting of the
Parapsychological Association.
Walker, E. H. (1987). A comparison of the intuitive data sorting and quantum mechanical observer
theories. Journal of Parapsychology, 51, 217-227.
Woodworth, R. S. and Schlosberg, H. (1960). Experimental Psychology. Rev. ed. New York: Holt.