Approved For Release 2000/08/08 : CIA-RDP96-00789R003200180001-9. 22 April 1994
Decision Augmentation Theory:
Toward a Model
of
Anomalous Mental Phenomena
by
Edwin C. May, Ph.D.
Science Applications International Corporation
Menlo Park, CA
Jessica M. Utts, Ph.D.
University of California, Davis
Department of Statistics
Davis, CA
and
S. James P. Spottiswoode
Science Applications International Corporation (Consultant)
Menlo Park, CA
Abstract
Decision Augmentation Theory (DAT) holds that humans integrate information obtained by anomalous cognition into the usual decision process. The result is that, to a statistical degree, such decisions are biased toward volitional outcomes. We introduce DAT and define the domain for which the model is applicable. In anomalous mental phenomena research, DAT is applicable to the understanding of effects that are within a few sigma of chance. We contrast the experimental consequences of DAT with those of models that treat anomalous perturbation as a causal force. We derive mathematical expressions for DAT and causal models for two distributions, normal and binomial. DAT is testable both retrospectively and prospectively, and we provide statistical power curves to assist in the experimental design of such tests. We show that the experimental consequences of DAT are different from those of causal models except for one degenerate case.
Introduction
We do not have positive definitions of the effects that generally fall under the heading of anomalous mental phenomena (AMP).* In the crassest of terms, AMP is what happens when nothing else should, at least as nature is currently understood. In the domain of information acquisition, or anomalous cognition (AC), it is relatively straightforward to design an experimental protocol (Honorton et al., 1990; Hyman and Honorton, 1986) to assure that no known sensory leakage of information can occur. In the domain of causation, or anomalous perturbation (AP), however, it is very difficult, if not impossible (May, Humphrey, and Hubbard, 1980; Hubbard, Bentley, Pasturel, and Isaacs, 1987), thus making the interpretation of results equally difficult.
We can divide AP into two categories based on the magnitude of the putative effect. Macro-AP includes phenomena that generally do not require sophisticated statistical analysis to tease out weak effects from the data. Examples include inelastic deformations in strain gauge experiments, the obvious bending of metal samples, and a host of possible "field phenomena" such as telekinesis, poltergeist, teleportation, and materialization. Conversely, micro-AP covers experimental data from noisy diodes, radioactive decay, and other random sources. These data show small differences from chance expectation and require statistical analysis.
One of the consequences of the negative definitions of AMP is that experimenters must assure that the observables are not due to "known" effects. Traditionally, two techniques have been employed to guard against such interactions:
(1) Complete physical isolation of the AP target system.
(2) Counterbalanced control and effort periods.
Isolating physical systems from potential "environmental" effects is difficult, even for engineering specialists. It becomes increasingly problematical the more sensitive the macro-AP device. For example, Hubbard, Bentley, Pasturel, and Isaacs (1987) monitored a large number of sensors of environmental variables that could mimic AP effects in an extremely isolated piezoelectric strain gauge. Among these were three-axis accelerometers, calibrated microphones, and electromagnetic and nuclear radiation monitors. In addition, the sensors were mounted in a government-approved enclosure to assure no leakage (in or out) of electromagnetic radiation above a given frequency, and the enclosure itself was levitated on an air suspension table. Finally, the entire setup was locked in a controlled-access room which was monitored by motion detectors. The system was so sensitive, for example, that it was possible to identify the source of a perturbation of the strain gauge that was due to innocent, gentle knocking on the door of the closed room. The financial and engineering resources to isolate such systems rapidly become prohibitive.
The second method, which is commonly in use, is to isolate the target system within the constraints of the available resources, and then construct protocols that include control and effort periods. Thus, we trade complete isolation for a statistical analysis of the difference between control and effort periods. The assumption implicit in this approach is that environmental influences on the device will be random
* The Cognitive Sciences Laboratory has adopted the term anomalous mental phenomena instead of the more widely known psi. Likewise, we use the terms anomalous cognition and anomalous perturbation for ESP and PK, respectively. We have done so because we believe that these terms are more naturally descriptive of the observables and are neutral with regard to mechanisms. These new terms will be used throughout this paper.
and uniformly distributed in both the control and effort conditions, while AP will tend to occur in the effort periods. Our arguments in favor of an anomaly, then, are based on statistical inference, and we must consider, in detail, the consequences of such analyses, one of which implies a generalized model for AMP.
Background
As the evidence for AMP becomes more widely accepted (Bem and Honorton, 1994; Utts, 1991; Radin and Nelson, 1989), it is imperative to determine the underlying mechanisms of the phenomena. Clearly, we are not the first to begin thinking of potential models. In the process of amassing incontrovertible evidence of an anomaly, many theoretical approaches have been examined; in this section we outline a few of them. It is beyond the scope of this paper, however, to provide an exhaustive review of the theoretical models of AMP; a good reference to an up-to-date and detailed presentation is Stokes (1987).
Brief Review of Models
Two fundamentally different types of models have been developed: those that attempt to order and structure the raw observations in AMP experiments (i.e., phenomenological), and those that attempt to explain AMP in terms of modifications to existing physical theories (i.e., fundamental). In the history of the physical sciences, phenomenological models, such as Snell's law of refraction or Ampère's law for the magnetic field due to a current, have nearly always preceded fundamental models of the phenomena, such as quantum electrodynamics and Maxwell's theory. In producing useful models of AMP it may well be advantageous to start with phenomenological models, of which DAT is an example.
Psychologists have contributed interesting phenomenological approaches. Stanford (1974a and 1974b) proposed Psi-Mediated Instrumental Response (PMIR) as a descriptive model. PMIR states that an organism uses AMP to optimize its environment. For example, in one of Stanford's classic experiments (Stanford, Zenhausern, Taylor, and Dwyer, 1975), subjects were offered a covert opportunity to stop a boring task prematurely if they exhibited unconscious AP by perturbing a hidden random number generator. Overall, the experiment was significant in the unconscious tasks; it was as if the participants were unconsciously scanning the extended environment for any way to provide a more optimal situation than participating in a boring psychological task!
As an example of a fundamental model, Walker (1984) proposed a literal interpretation of quantum mechanics in that, since superposition of eigenstates holds even for macrosystems, AMP might be due to macroscopic examples of quantum phenomena. These concepts spawned a class of theories, the so-called observation theories, that were based either upon quantum formalism conceptually or directly (Stokes, 1987). Jahn and Dunne (1986) have offered a "quantum metaphor" which illustrates many parallels between AMP and known quantum effects. Unfortunately, these models either have free parameters with unknown values, or are merely hand-waving metaphors, and therefore have not led to testable predictions. Some of these models propose questionable extensions to existing theories. For example, even though Walker's interpretation of quantum mechanical formalism might suggest wave-like properties of macrosystems, the physics data to date not only show no indication of such phenomena at room temperature but provide considerable evidence to suggest that macrosystems lose their quantum
coherence above 0.5 kelvin (Washburn and Webb, 1986) and no longer exhibit quantum wave-like behavior.
This is not to say that a comprehensive model of AMP will not eventually require quantum mechanics as part of its explanation, but it is currently premature to consider such models as more than interesting speculation. The burden of proof is on the theorist to show why systems which are normally considered classical (e.g., a human brain) are, indeed, quantum mechanical. That is, what are the experimental consequences of a quantum mechanical system over a classical one?
Our Decision Augmentation Theory is phenomenological and is a logical and formal extension of Stanford's elegant PMIR model. In the same manner as early models of the behavior of gases, acoustics, or optics, it tries to subsume a large range of experimental measurements into a coherent, lawful scheme. Hopefully this process will lead the way to the uncovering of deeper mechanisms. In fact, DAT leads to the idea that there may be only one underlying mechanism of all AMP effects, namely a transfer of information between events separated by negative time intervals.
Historical Evolution of Decision Augmentation
May, Humphrey, and Hubbard (1980) conducted a careful random number generator (RNG) experiment. What makes this experiment unique is the extreme engineering and methodological care that was taken in order to isolate any potentially known physical interactions with the source of randomness. It is beyond the scope of this paper to describe this experiment completely; however, those specific details which led to the idea of Decision Augmentation are important for the sake of historical completeness.

May, Humphrey, and Hubbard were satisfied, in that RNG study, that they had observed a genuine statistical anomaly. In addition, because of an accurate mathematical model of the random device and the engineering details of the experiment, they were equally satisfied that the deviations were not due to any known physical interactions. They concluded, in their report, that some form of AMP-mediated data selection had occurred. At the time, they named it Psychoenergetic Data Selection.

Following a suggestion by Dr. David R. Saunders of MARS Measurement and Associates, we noticed in 1986 that the effect size in binary RNG studies varied on the average as one over the square root of the number of bits in the sequence. This observation led to the development of the Intuitive Data Sorting model that appeared to describe the RNG data to that date (May, Radin, Hubbard, Humphrey, and Utts, 1985). The remainder of this paper describes the next step in the evolution process. We now call the model Decision Augmentation Theory (DAT).
Decision Augmentation Theory: A General Description
Since the case for AC-mediated information transfer is now well established, it would be exceptional if we did not integrate this form of information gathering into the decision process. For example, we routinely use real-time data gathering and historical information to assist in the decision process. Perhaps what is called intuition may play an important role. Why, then, should we not include AC information? DAT holds that AC information is included along with the usual inputs that result in a final human decision that favors a "desired" outcome. In statistical parlance, DAT says that a slight, systematic bias is introduced into the decision process by AC.
This philosophical concept has the advantage of being quite general. We know of no experiment that is devoid of at least one human decision; thus, DAT might be the underlying basis for AMP. To illustrate the point, we describe how the "cosmos" determines the outcome of a well-designed, hypothetical experiment. To determine the sequencing of an RNG experiment, suppose that the entry point into a table of random numbers will be chosen by the square root of the barometric pressure as stated in the weather report that will be published seven days hence in the New York Times. Since humans are notoriously bad at predicting or controlling the weather, this entry point might seem independent of a human decision; but why did we "choose" seven days in advance? Why not six or eight? Why the New York Times and not the London Times? DAT would suggest that the selection of seven days, the New York Times, the barometric pressure, and the square root function were optimal choices, either individually or collectively, and that other decisions would not lead to as significant an outcome.
Other nontechnical decisions may also be biased by AC in accordance with DAT. When should we schedule a Ganzfeld session; who should be the experimenter in a series; how should we determine a specific order in a tripolar protocol?
It is important to understand the domain in which a model is applicable. For example, Newton's laws are sufficient to describe the dynamics of mechanical objects in the domain where the velocities are very much smaller than the speed of light, and where the quantum wavelength of the object is very small compared to the physical extent of the object. If these conditions are violated, then different models must be invoked (e.g., relativity and quantum mechanics, respectively).
The domain in which DAT is applicable is when experimental outcomes are in a statistical regime (i.e., a few standard deviations from chance). In other words, could the measured effect occur under the null hypothesis? This is not a sharp-edged requirement, and DAT becomes less apropos the more a single measurement deviates from mean chance expectation (MCE). We would not invoke DAT, for example, as an explanation of levitation if one found the authors hovering near the ceiling!

All this may be interesting philosophy, but DAT can be formulated mathematically and subjected to rigorous examination.
Development of a Formal Model
While DAT may have implications for AMP in general, we develop the model in the framework of understanding experimental results. In particular, we consider AP vs. AC in the form of DAT in those experiments whose outcomes are in the few-sigma, statistical regime.
We define four possible mechanisms for the results in such experiments:
(1) Mean Chance Expectation. The results are at chance. That is, the deviation of the dependent variable meets accepted criteria for MCE. In statistical parlance, we have measurements from an unperturbed parent distribution with unbiased sampling.
(2) Anomalous Perturbation. Nature is modified by some anomalous interaction. That is, we expect a causal interaction of a "force" type. In statistical parlance, we have measurements from a perturbed parent distribution with unbiased sampling.
(3) Decision Augmentation. Nature is unchanged but the measurements are biased. That is, AC information has "distorted" the sampling. In statistical parlance, we have measurements from an unperturbed parent distribution with biased sampling.
(4) Combination. Nature is modified and the measurements are biased. That is, both AP and AC are present. In statistical parlance, we have conducted biased sampling from a perturbed parent distribution.
General Considerations
Since the formal discussion of DAT is statistical, we will describe the overall context for the development of the model from that perspective. Consider a random variable, X, that can take on continuous values (e.g., the normal distribution) or discrete values (e.g., the binomial distribution). Examples of X might be the hit rate in an RNG experiment, the swimming velocity of cells, or the mutation rate of bacteria. Let Y be the average computed over n values of X, where n is the number of items that are collectively subjected to an AMP influence as the result of a single decision (one trial). Often this may be equivalent to a single effort period, but it also may include repeated efforts. The key point is that, regardless of the effort style, the average value of the dependent variable is computed over the n values resulting from one decision point. In the examples above, n is the sequence length of a single run in an RNG experiment, the number of swimming cells measured during the trial, or the number of bacteria-containing test tubes present during the trial.
Assumptions for DAT
We assume that the parent distribution of a physical system remains unperturbed; however, the measurements of the physical system are systematically biased by some AC-mediated informational process.

Since the deviations seen in experiments in the statistical regime tend to be small in magnitude, it is safe to assume that the measurement biases might also be small; therefore, we assume small shifts of the mean and variance of the sampling distribution. Figure 1 shows the distributions for biased and unbiased measurements.
The biased sampling distribution shown in Figure 1 is assumed to be normally distributed as:

Z ~ N(μ_z, σ_z),
Figure 1. Sampling Distribution Under DAT.
where the notation means that Z is distributed as a normal distribution with a mean of μ_z and a standard deviation of σ_z.
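This assumption is easy to illustrate numerically. The following minimal sketch (Python; the values of μ_z and σ_z are illustrative choices of ours, not values from the paper) contrasts the biased sampling distribution with the unbiased one:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative bias parameters (our choice, not values from the paper).
mu_z, sigma_z = 0.3, 1.0

# Under MCE the trial Z-scores follow N(0, 1); under DAT they follow
# the biased sampling distribution N(mu_z, sigma_z).
z_mce = rng.normal(0.0, 1.0, size=200_000)
z_dat = rng.normal(mu_z, sigma_z, size=200_000)

# DAT predicts E(Z^2) = sigma_z^2 + mu_z^2 (here 1.09) versus 1 under MCE.
print(np.mean(z_mce**2), np.mean(z_dat**2))
```

The two printed averages should straddle the small, systematic bias that DAT posits: about 1.00 for the unbiased case and about 1.09 for the biased one.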
Assumptions for an AP Model
For comparison's sake, we develop a model for AP interactions. With a few exceptions reported in the poltergeist literature, AP appears to be a relatively "small" effect in laboratory experiments. That is, we do not readily observe anomalous and obvious mental interactions with the environment. Thus, we begin with the assumption that a putative AP force would give rise to a perturbational interaction. What we mean is that, given an ensemble of entities (e.g., binary bits, cells), a force acts, on the average, equally on each member of the ensemble. We call this type of interaction perturbational AP (PAP).
Figure 2 shows a schematic representation of probability density functions for a parent distribution under the PAP assumption and an unperturbed parent distribution. In the PAP model, the perturbation induces a change in the mean of the parent distribution but does not affect its variance. We parameterize the mean shift in terms of a multiplier of the initial standard deviation. Thus, we define an AP effect size as:
ε_AP = (μ_1 - μ_0) / σ_0,

where μ_1 and μ_0 are the means of the perturbed and unperturbed distributions, respectively, and where σ_0 is the standard deviation of the unperturbed distribution.
For the moment, we consider ε_AP as a parameter which, in principle, could be a function of a variety of variables (e.g., psychological, physical, environmental, methodological). As we develop DAT for specific distributions and experiments, we will discuss this functionality of ε_AP.
Calculation of E(Z²)
We compute the expected value and variance of Z² under MCE, PAP, and DAT for the normal and binomial distributions. The details of the calculations can be found in the Appendix; however, we summarize the results in this section. Table 1 shows the results assuming that the parent distribution is normal.
Figure 2. Parent Distribution for Perturbational AP.
Table 1.
Normal Parent Distribution

Quantity    MCE    PAP                    DAT
E(Z²)       1      1 + ε_AP²·n            σ_z² + μ_z²
Var(Z²)     2      2(1 + 2ε_AP²·n)        2(σ_z⁴ + 2μ_z²σ_z²)
Table 2 shows the results assuming that the parent distribution is binomial. In this calculation, p_0 is the binomial event probability and σ_0 = √(p_0(1 - p_0)).

Table 2.
Binomial Parent Distribution

Quantity    MCE                          PAP                                       DAT
E(Z²)       1                            1 + ε_AP²(n - 1) + ε_AP(1 - 2p_0)/σ_0     σ_z² + μ_z²
Var(Z²)     2 + (1 - 6σ_0²)/(nσ_0²)      2(1 + 2ε_AP²·n)*                          2(σ_z⁴ + 2μ_z²σ_z²)

* The variance shown assumes p_0 = 0.5 and n ≫ 1. See the Appendix for other cases.
We wish to emphasize at this point that, in the development of the mathematical model, the parameter ε_AP for PAP and the parameters μ_z and σ_z in DAT may all possibly depend upon n; however, for the moment, we assume that they are all n-independent. We shall discuss the consequences of this assumption below.
Figure 3 displays these theoretical calculations for the three mechanisms graphically.
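These scaling differences can be checked with a short Monte Carlo sketch (Python; the effect-size and bias parameters are illustrative values of our own choosing). It shows E(Z²) growing linearly with n under PAP while remaining flat under DAT, as the tables predict:

```python
import numpy as np

rng = np.random.default_rng(1)
eps_ap = 0.1              # assumed PAP effect size (illustrative)
mu_z, sigma_z = 0.5, 1.0  # assumed DAT bias parameters (illustrative)
trials = 50_000

results = {}
for n in (10, 100):
    # PAP: each of the n samples is shifted by eps_ap standard deviations,
    # so Z = sqrt(n) * mean(X) has E(Z^2) = 1 + eps_ap^2 * n.
    x = rng.normal(eps_ap, 1.0, size=(trials, n))
    z_pap = np.sqrt(n) * x.mean(axis=1)
    # DAT: the parent is unperturbed; the trial-level Z-score itself is
    # biased, so E(Z^2) = sigma_z^2 + mu_z^2, independent of n.
    z_dat = rng.normal(mu_z, sigma_z, size=trials)
    results[n] = (np.mean(z_pap**2), np.mean(z_dat**2))
    print(n, results[n])
```

With these parameters the PAP column should move from about 1.1 at n = 10 to about 2.0 at n = 100, while the DAT column stays near 1.25 at both values of n.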
Figure 3. Predictions of MCE, PAP, and DAT.
Within the constraints mentioned above, this formulation predicts grossly different outcomes for these
models and, therefore, is ultimately capable of separating them, even for very small perturbations.
Retrospective Tests
It is possible to apply DAT retrospectively to any body of data that meets certain constraints. It is critical to keep in mind the meaning of n, the number of measures of the dependent variable over which to compute an average during a single trial following a single decision. In terms of their predictions for experimental results, the crucial distinction between DAT and the PAP model is the dependence of the results upon n; therefore, experiments which are used to test these theories must be those in which experiment participants are blind to n. In a follow-on to this theory-definition paper, we will retrospectively apply DAT to as many data sets as possible, and examine the consequences of any violations of these criteria.
Aside from these considerations, the application of DAT is straightforward. Having identified the units of analysis and n, simply create a scatter diagram of points (Z², n) and compute a least-squares fit to a straight line. Tables 1 and 2 show that, for the PAP model, the square of the AP effect size is the slope of the resulting fit. A Student's t-test may be used to test the hypothesis that the AP effect size is zero, and thus test the validity of the PAP model. If the slope is zero, these same tables show that the intercept may be interpreted as an AC strength parameter for DAT. The follow-on paper will describe these techniques in detail.
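The retrospective fit itself is a few lines of code. This sketch (Python with NumPy/SciPy) uses synthetic data generated under DAT with illustrative parameters of our own choosing, since the real data sets appear only in the follow-on paper:

```python
import numpy as np
from scipy import stats

# Synthetic retrospective data set generated under DAT: the trial Z-scores
# follow N(mu_z, sigma_z) with no dependence on n (parameters illustrative).
rng = np.random.default_rng(2)
mu_z, sigma_z = 0.2, 1.0
n_values = np.repeat([10, 100, 1_000, 10_000], 250).astype(float)
z2 = rng.normal(mu_z, sigma_z, size=n_values.size) ** 2

# Least-squares line through the (n, Z^2) scatter. Under PAP the slope
# estimates eps_AP^2; under DAT the slope is ~0 and the intercept
# estimates sigma_z^2 + mu_z^2.
fit = stats.linregress(n_values, z2)
t_slope = fit.slope / fit.stderr  # Student's t for H0: slope = 0
print(fit.slope, fit.intercept, t_slope)
```

Because the data here were generated under DAT, the fitted slope is consistent with zero and the intercept is close to σ_z² + μ_z² = 1.04; data generated under PAP would instead show a slope near ε_AP².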
Prospective Tests
A prospective test of DAT will not only test the AMP hypothesis against mean chance expectation, but will also test for a PAP contribution. In such tests, n should certainly be a double-blind parameter and take on at least two values. If one wanted to check the prediction of a linear functional relationship between n and E(Z²) that is suggested by the PAP model, the more values of n the better. It is not possible to separate the PAP model from DAT at a single value of n.

In any prospective test, it is helpful to know the number of runs, N, that are necessary to determine, with 95% confidence, which of the two models best fits the data. Figure 4 displays the problem graphically.
Figure 4. Model Predictions for the Power Calculation.
Under PAP, 95% of the values of Z² will be greater than the point indicated in Figure 4. Even if the measured value of Z² is at this point, we would like the lower limit of the 95% confidence interval for this value to be greater than the predicted value under the DAT model. Or:

E_AP(Z²) - 1.645·σ_AP/√N - 1.96·σ_AP/√N ≥ E_AC(Z²).
Solving for N in the equality, we find:

N = [3.605·σ_AP / (E_AP(Z²) - E_AC(Z²))]².    (1)
Since σ_AP ≥ σ_AC, this value of N will always be larger than the estimate derived from beginning with DAT and calculating the confidence intervals in the other direction.
Suppose, from an earlier experiment, one can estimate a single-trial effect size for a specific value of n, say n_1. To determine whether the PAP model or DAT is the proper description of the mechanism, we must conduct another study at an additional value of n, say n_2. We use Equation 1 to compute how many runs we must conduct at n_2 to assure a separation of mechanism with 95% confidence, and we use the variances shown in Tables 1 and 2 to compute σ_AP. Figure 5 shows the number of runs for an RNG-like experiment as a function of effect size for three values of n_2.
We chose n_1 = 100 bits because it is typical of the numbers found in the RNG database, and the values of n_2 shown are within easy reach of today's computer-based RNG devices. For example, assuming σ_z = 1.0 and assuming an effect size of 0.004, one we derived from a publication of PEAR data (Jahn, 1982), then at n_1 = 100, μ_z = 0.004 × √100 = 0.04 and E_AC(Z²) = 1.0016. Suppose n_2 = 10⁴. Then E_AP(Z²) = 1.160 and σ_AP = 1.625. Using Equation 1, we find N = 1368 runs, which can be approximately obtained from Figure 5. That is, in this example, 1368 runs are needed to resolve the PAP model from DAT at n_2 = 10⁴ at the 95% confidence level. Since these runs are easily obtained in most RNG experiments, an ideal prospective test of DAT, which is based on these calculations, would be to conduct 1500 runs randomly counterbalanced between n = 10² and n = 10⁴ bits/trial. If the effect size at n = 10² is near 0.004, then we would resolve the AP vs. AC question with 95% confidence.
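This power calculation, and the biological one below, can be reproduced directly from Equation 1 with the normal-approximation variances of Table 1. A sketch (Python; the function name and packaging are ours):

```python
import math

def runs_needed(eff, n1, n2, sigma_z=1.0):
    """Runs at n2 needed to separate PAP from DAT at 95% confidence
    (Equation 1, with the normal-approximation variances of Table 1)."""
    mu_z = eff * math.sqrt(n1)            # AC bias matching the n1 data
    e_ac = sigma_z**2 + mu_z**2           # E(Z^2) under DAT (n-independent)
    e_ap = 1.0 + eff**2 * n2              # E(Z^2) under PAP at n2
    sigma_ap = math.sqrt(2.0 * (1.0 + 2.0 * eff**2 * n2))
    return (3.605 * sigma_ap / (e_ap - e_ac)) ** 2

print(runs_needed(0.004, 100, 10_000))  # RNG example: ~1367 (text: 1368)
print(runs_needed(0.3, 2, 10))          # biological example: ~140
```

The small difference from the text's N = 1368 comes from rounding σ_AP to 1.625 before squaring; at full precision the formula gives approximately 1367 runs.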
Figure 5. Runs Required for RNG Effect Sizes
Figure 6 shows similar relationships for effect sizes that are more typical of biological AP as reported in the Former Soviet Union (May and Vilenskaya, 1994).
Similarly, for biologically oriented AP experiments, we chose n_1 = 2 because the use of two simultaneous AP targets is easily accomplished. If we assume an effect size of 0.3 and σ_z = 1.0, at n_2 = 10 we compute E_AC(Z²) = 1.180, E_AP(Z²) = 1.900, σ_AP = 2.366, and N = 140, which can be approximately obtained from Figure 6.

We have included n_2 = 100 in Figure 6 because this is within reach in cellular experiments, although it is probably not practical for most biological AP experiments.
We chose n_1 = 2 units for convenience. For example, in a plant study, the physiological responses can easily be averaged over two plants, and n_2 = 10 is within reason for a second data point. A unit could be a test tube containing cells or bacteria; the collection of all ten test tubes would simultaneously have to be the target of the AP effort to meet the constraints of a valid test.
The prospective tests we have described so far are conditional; that is, given an effect size, we provide a protocol to test whether the mechanism for AMP is PAP or DAT. An unconditional test does not assume any effect size; all that is necessary is to collect data at a large number of different values of n, and fit a straight line through the resulting Z²s. The mechanism is PAP if the slope is nonzero and may be DAT if the slope is zero.
Discussion
We now address the possible n-dependence of the model parameters. A degenerate case arises if ε_AP is proportional to 1/√n; if that were the case, we could not distinguish between the PAP model and DAT by means of tests on the n-dependence of results. If it turns out that, in the analysis of the data from a variety of experiments, participants, and laboratories, the slope of a Z² vs. n linear least-squares fit is zero, then either ε_AP = 0.0 or ε_AP is exactly proportional to 1/√n, depending upon the precision of the fit (i.e., the errors on the zero slope). An attempt might be made to rescue the PAP hypothesis by explaining the 1/√n dependence of ε_AP in the degenerate case as a fatigue or other time-dependent effect. That is, it might
Figure 6. Runs Required for Biological AP Effect Sizes.
be hypothesized that human participants might become AP-tired as a function of n; however, it seems improbable that a human-based phenomenon would be so widely distributed and constant and give exactly the 1/√n dependency in differing protocols needed to imitate DAT. We prefer to resolve the degeneracy by wielding Occam's razor: if the only type of AP which fits the data is indistinguishable from AC, and given that we have ample demonstrations of AC by independent means in the laboratory, then we do not need to invent an additional phenomenon called AP. Except for this degeneracy, a zero slope for the fit allows us to reject all PAP models, regardless of their n-dependencies.
DAT is not limited to experiments that capture data from a dynamic system. DAT may also be the mechanism in protocols which utilize quasi-static target systems. In a quasi-static target system, a random process occurs only when a run is initiated; a mechanical dice thrower is an example. Yet, in a series of unattended runs of such a device, there is always a statistical variation in the mean of the dependent variable that may be due to a variety of factors, such as Brownian motion, temperature, humidity, and possibly the quantum mechanical uncertainty principle (Walker, 1974). Thus, the results obtained will ultimately depend upon when the run is initiated. It is also possible that a second-order DAT mechanism arises because of protocol selection: how and who determines the order in tripolar protocols. In second-order DAT, there may be individuals, other than the formal subject, whose decisions affect the experimental outcome and are modified by AC.
Finally, we would like to close with a clear statement of what is meant by DAT: the decisions on which experimental outcomes depend are augmented by AC to capitalize upon the unperturbed statistical fluctuations of the target system. In our follow-on paper, we will examine retrospective applications to a variety of data sets.
Acknowledgements
Since 1979, there have been many individuals who have contributed to the development of DAT. We would first like to thank David Saunders, without whose remark this work would not have been. Beverly Humphrey kept the philosophical integrity intact, at times under extreme duress. We are greatly appreciative of Zoltán Vassy, to whom we owe the Z-score formalism, and of George Hansen, Donald McCarthy, and Scott Hubbard for their constructive criticisms and support.
References
Bem, D. J. and Honorton, C. (1994). Does psi exist? Replicable evidence for an anomalous process of
information transfer. Psychological Bulletin, 115, No. 1, 4-18.
Honorton, C., Berger, R. E., Varvoglis, M. P., Quant, M., Derr, P., Schechter, E. I., and Ferrari, D. C.
(1990). Psi communication in the ganzfeld. Journal of Parapsychology, 54, 99-139.
Hubbard, G. S., Bentley, P. P., Pasturel, P. K., and Isaacs, J. (1987). A remote action experiment with a
piezoelectric transducer. Final Report - Objective H, Task 3 and 3a. SRI International Project
1291, Menlo Park, CA.
Hyman, R. and Honorton, C. (1986). A joint communiqué: The psi ganzfeld controversy. Journal of
Parapsychology, 50, 351-364.
Jahn, R. G. (1982). The persistent paradox of psychic phenomena: an engineering perspective.
Proceedings of the IEEE, 70, No. 2, 136-170.
Jahn, R. G. and Dunne, B. J. (1986). On the quantum mechanics of consciousness, with application to
anomalous phenomena. Foundations of Physics, 16, No. 8, 721-772.
May, E. C., Humphrey, B. S., and Hubbard, G. S. (1980). Electronic System Perturbation Techniques.
Final Report. SRI International, Menlo Park, CA.
May, E. C., Radin, D. I., Hubbard, G. S., Humphrey, B. S., and Utts, J. (1985). Psi experiments with
random number generators: an informational model. Proceedings of Presented Papers, Vol. I, The
Parapsychological Association 28th Annual Convention, Tufts University, Medford, MA, 237-266.
May, E. C. and Vilenskaya, L. (1994). Overview of current parapsychology research in the former
Soviet Union. Subtle Energies, 3, No. 3, 45-67.
Radin, D. I. and Nelson, R. D. (1989). Evidence for consciousness-related anomalies in random
physical systems. Foundations of Physics, 19, No. 12, 1499-1514.
Stanford, R. G. (1974a). An experimentally testable model for spontaneous psi events I. Extrasensory
events. Journal of the American Society for Psychical Research, 68, 34-57.
Stanford, R. G. (1974b). An experimentally testable model for spontaneous psi events II. Psychokinetic
events. Journal of the American Society for Psychical Research, 68, 321-356.
Stanford, R. G., Zenhausern, R., Taylor, A., and Dwyer, M. A. (1975). Psychokinesis as psi-mediated
instrumental response. Journal of the American Society for Psychical Research, 69, 127-133.
Stokes, D. M. (1987). Theoretical parapsychology. In Advances in Parapsychological Research 5.
McFarland & Company, Inc., Jefferson, NC, 77-189.
Utts, J. (1991). Replication and meta-analysis in parapsychology. Statistical Science, 6, No. 4, 363-403.
Walker, E. H. (1974). Foundations of paraphysical and parapsychological phenomena. Proceedings of
an International Conference: Quantum Physics and Parapsychology. Oteri, E., Ed. Parapsychology
Foundation, Inc., New York, NY, 1-53.
Walker, E. H. (1984). A review of criticisms of the quantum mechanical theory of psi phenomena.
Journal of Parapsychology, 48, 277-332.
Washburn, S. and Webb, R. A. (1986). Effects of dissipation and temperature on macroscopic quantum
tunneling in Josephson junctions. In New Techniques and Ideas in Quantum Measurement Theory.
Greenberger, D. M., Ed. New York Academy of Sciences, New York, NY, 66-77.
Appendix
Mathematical Derivations for the Decision Augmentation Theory
In this appendix we develop the formalism for Decision Augmentation Theory (DAT). We consider
cases for mean chance expectation (MCE), anomalous perturbation (AP), and anomalous cognition
(AC) under two assumptions: normality and Bernoulli sampling. For each of these three models,
we compute the expected values of $Z$ and $Z^2$, and the variance of $Z^2$.*
Mean Chance Expectation (MCE)
Normal Distribution
We begin by considering a random variable, $X$, whose probability density function is normal, i.e., $N(\mu_0, \sigma_0^2)$.† After many unbiased measures from this distribution, it is possible to obtain reasonable approximations to $\mu_0$ and $\sigma_0^2$ in the usual way. Suppose $n$ unbiased measures are used to compute a new variable, $Y$, given by:

$$Y = \frac{1}{n}\sum_{j=1}^{n} X_j.$$

Then $Y$ is distributed as $N(\mu_0, \sigma_y^2)$, where $\sigma_y^2 = \sigma_0^2/n$. If $Z$ is defined as

$$Z = \frac{Y - \mu_0}{\sigma_y},$$

then $Z$ is distributed as $N(0, 1)$ and $E(Z)$ is given by:

$$E^{N}_{MCE}(Z) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} z\, e^{-z^2/2}\, dz = 0. \qquad (1)$$

Since $\mathrm{Var}(Z) = 1 = E(Z^2) - E^2(Z)$, then

$$E^{N}_{MCE}(Z^2) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} z^2 e^{-z^2/2}\, dz = 1. \qquad (2)$$

The $\mathrm{Var}(Z^2) = E(Z^4) - E^2(Z^2) = E(Z^4) - 1$. But
* We wish to thank Zoltán Vassy for originally suggesting the $Z^2$ formalism.
† Throughout this appendix, this notation means: $N(a, \sigma^2) \equiv \dfrac{1}{\sigma\sqrt{2\pi}}\, e^{-(x-a)^2/2\sigma^2}.$
$$E^{N}_{MCE}(Z^4) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} z^4 e^{-z^2/2}\, dz = 3.$$

So

$$\mathrm{Var}^{N}_{MCE}(Z^2) = 2. \qquad (3)$$
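As an illustrative numerical check of our own (not part of the original derivation), a short Monte Carlo reproduces the normal MCE moments $E(Z) = 0$, $E(Z^2) = 1$, and $\mathrm{Var}(Z^2) = 2$; the choices of $n = 50$ and 20,000 runs are arbitrary.

```python
import random
import statistics

random.seed(0)

n, trials = 50, 20000  # samples per run and number of runs (arbitrary choices)

# Z = (Y - mu0)/sigma_y with mu0 = 0, sigma0 = 1, so Z = sum(X_j)/sqrt(n).
zs = [sum(random.gauss(0.0, 1.0) for _ in range(n)) / n ** 0.5
      for _ in range(trials)]
z2 = [z * z for z in zs]

print(f"E(Z)     ~ {statistics.fmean(zs):+.3f}")   # Equation 1 gives 0
print(f"E(Z^2)   ~ {statistics.fmean(z2):.3f}")    # Equation 2 gives 1
print(f"Var(Z^2) ~ {statistics.variance(z2):.3f}") # Equation 3 gives 2
```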
Bernoulli Sampling
Let the probability of observing a one under Bernoulli sampling be given by $p_0$. After $n$ samples, the discrete $Z$-score is given by:

$$Z = \frac{k - np_0}{\sigma_0\sqrt{n}},$$

where

$$\sigma_0 = \sqrt{p_0(1 - p_0)},$$

and $k$ is the number of observed ones ($0 \le k \le n$). The expected value of $Z$ is given by:

$$E^{B}_{MCE}(Z) = \frac{1}{\sigma_0\sqrt{n}}\sum_{k=0}^{n}(k - np_0)\,B_k(n, p_0), \qquad (4)$$

where

$$B_k(n, p_0) = \binom{n}{k} p_0^k (1 - p_0)^{n-k}.$$

The first term in Equation 4 is the $E(k)$ which, for the binomial distribution, is $np_0$. Thus

$$E^{B}_{MCE}(Z) = \frac{1}{\sigma_0\sqrt{n}}\sum_{k=0}^{n}(k - np_0)\,B_k(n, p_0) = 0. \qquad (5)$$

The expected value of $Z^2$ is given by:

$$E^{B}_{MCE}(Z^2) = \mathrm{Var}(Z) + E^2(Z) = \frac{\mathrm{Var}(k - np_0)}{n\sigma_0^2} + 0 = \frac{n\sigma_0^2}{n\sigma_0^2} = 1. \qquad (6)$$

As in the normal case, the $\mathrm{Var}(Z^2) = E(Z^4) - E^2(Z^2) = E(Z^4) - 1$. But*
* Johnson, N. L. and Kotz, S. (1969). Discrete Distributions. John Wiley & Sons, New York, p. 51.
$$E^{B}_{MCE}(Z^4) = \frac{1}{n^2\sigma_0^4}\sum_{k=0}^{n}(k - np_0)^4\, B_k(n, p_0) = 3 + \frac{1}{n\sigma_0^2}(1 - 6\sigma_0^2).$$

So,

$$\mathrm{Var}^{B}_{MCE}(Z^2) = 2 + \frac{1}{n\sigma_0^2}(1 - 6\sigma_0^2) = 2 - \frac{2}{n} \quad (p_0 = 0.5). \qquad (7)$$
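Because the binomial moments are available in closed form, Equation 7 can be verified exactly by summing over the binomial pmf. This is our own verification sketch; the parameter pairs $(n, p_0)$ below are arbitrary.

```python
import math

def var_z2_mce(n: int, p0: float) -> float:
    """Exact Var(Z^2) under MCE, computed by summing over the binomial pmf."""
    s0 = math.sqrt(p0 * (1.0 - p0))
    m2 = m4 = 0.0
    for k in range(n + 1):
        z2 = ((k - n * p0) / (s0 * math.sqrt(n))) ** 2
        w = math.comb(n, k) * p0 ** k * (1.0 - p0) ** (n - k)
        m2 += w * z2          # accumulates E(Z^2)
        m4 += w * z2 * z2     # accumulates E(Z^4)
    return m4 - m2 * m2

for n, p0 in [(20, 0.5), (50, 0.3)]:
    s0sq = p0 * (1.0 - p0)
    predicted = 2.0 + (1.0 - 6.0 * s0sq) / (n * s0sq)  # Equation 7
    print(n, p0, round(var_z2_mce(n, p0), 6), round(predicted, 6))
```

The exact sum and Equation 7 agree to floating-point precision, including the $2 - 2/n$ value at $p_0 = 0.5$.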
Anomalous Perturbation (AP)
Normal Distribution
Under the perturbation assumption described in the text, we let the mean of the perturbed distribution be given by $\mu_0 + \varepsilon_{ap}\sigma_0$, where $\varepsilon_{ap}$ is an AP strength parameter and, in the general case, may be a function of $n$ and time. The parent distribution for the random variable, $X$, becomes $N(\mu_0 + \varepsilon_{ap}\sigma_0, \sigma_0^2)$. As in the MCE case, the average of $n$ independent values of $X$ is $Y \sim N(\mu_0 + \varepsilon_{ap}\sigma_0, \sigma_y^2)$. Let

$$Y = \mu_0 + \varepsilon_{ap}\sigma_0 + \delta_y,$$

where

$$\delta_y = Y - (\mu_0 + \varepsilon_{ap}\sigma_0).$$

For a mean of $n$ samples, the $Z$-score is given by

$$Z = \frac{Y - \mu_0}{\sigma_y} = \frac{\varepsilon_{ap}\sigma_0 + \delta_y}{\sigma_y} = \varepsilon_{ap}\sqrt{n} + \xi,$$

where $\xi$ is distributed as $N(0, 1)$ and is given by $\delta_y/\sigma_y$. Then the expected value of $Z$ is given by

$$E^{N}_{AP}(Z) = E(\varepsilon_{ap}\sqrt{n} + \xi) = \varepsilon_{ap}\sqrt{n}, \qquad (8)$$

and the expected value of $Z^2$ is given by

$$E^{N}_{AP}(Z^2) = E\left([\varepsilon_{ap}\sqrt{n} + \xi]^2\right) = n\varepsilon_{ap}^2 + E(\xi^2) + 2\varepsilon_{ap}\sqrt{n}\,E(\xi) = 1 + \varepsilon_{ap}^2 n, \qquad (9)$$

since $E(\xi) = 0$ and $E(\xi^2) = 1$.

In general, $Z^2$ is distributed as a noncentral $\chi^2$ with 1 degree of freedom and noncentrality parameter $n\varepsilon_{ap}^2$, i.e., $\chi^2(1, n\varepsilon_{ap}^2)$. Thus, the variance of $Z^2$ is given by*

$$\mathrm{Var}^{N}_{AP}(Z^2) = 2(1 + 2n\varepsilon_{ap}^2). \qquad (10)$$
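Equations 8 through 10 are the standard moments of a noncentral $\chi^2$ with one degree of freedom; a quick simulation of ours (the values of $n$ and $\varepsilon_{ap}$ are arbitrary) confirms them numerically.

```python
import random
import statistics

random.seed(3)

n, eps, trials = 100, 0.05, 50000  # arbitrary illustrative values
lam = n * eps ** 2                 # noncentrality parameter n * eps_ap^2

# Z = eps_ap * sqrt(n) + xi, with xi ~ N(0, 1)
zs = [eps * n ** 0.5 + random.gauss(0.0, 1.0) for _ in range(trials)]
z2 = [z * z for z in zs]

print(f"E(Z)     ~ {statistics.fmean(zs):.3f} (Eq. 8 : {eps * n ** 0.5:.3f})")
print(f"E(Z^2)   ~ {statistics.fmean(z2):.3f} (Eq. 9 : {1 + lam:.3f})")
print(f"Var(Z^2) ~ {statistics.variance(z2):.3f} (Eq. 10: {2 * (1 + 2 * lam):.3f})")
```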
Bernoulli Sampling
As before, let the probability of observing a one under MCE be given by $p_0$, and the discrete $Z$-score be given by:

* Johnson, N. L. and Kotz, S. (1970). Continuous Univariate Distributions-2. John Wiley & Sons, New York, p. 134.
$$Z = \frac{k - np_0}{\sigma_0\sqrt{n}},$$

where $k$ is the number of observed ones ($0 \le k \le n$). Under the perturbation assumption, we let the mean of the distribution of the single-bit probability be given by $p_1 = p_0 + \varepsilon_{ap}\sigma_0$, where $\varepsilon_{ap}$ is an AP strength parameter. The expected value of $Z$ is given by:

$$E^{B}_{AP}(Z) = \frac{1}{\sigma_0\sqrt{n}}\sum_{k=0}^{n}(k - np_0)\,B_k(n, p_1),$$

where

$$B_k(n, p_1) = \binom{n}{k} p_1^k (1 - p_1)^{n-k}.$$

The expected value of $Z$ becomes

$$E^{B}_{AP}(Z) = \frac{1}{\sigma_0\sqrt{n}}\left[\sum_{k=0}^{n} k\,B_k(n, p_1) - np_0\right] = \frac{(p_1 - p_0)\sqrt{n}}{\sigma_0} = \varepsilon_{ap}\sqrt{n}. \qquad (11)$$

Since $\varepsilon_{ap} = E(Z)/\sqrt{n}$, $\varepsilon_{ap}$ is also the binomial effect size. The expected value of $Z^2$ is given by:

$$E^{B}_{AP}(Z^2) = \mathrm{Var}(Z) + E^2(Z) = \frac{\mathrm{Var}(k - np_1)}{n\sigma_0^2} + \varepsilon_{ap}^2 n = \frac{p_1(1 - p_1)}{\sigma_0^2} + \varepsilon_{ap}^2 n.$$

Expanding in terms of $p_1 = p_0 + \varepsilon_{ap}\sigma_0$,

$$E^{B}_{AP}(Z^2) = 1 + \varepsilon_{ap}^2(n - 1) + \frac{\varepsilon_{ap}}{\sigma_0}(1 - 2p_0). \qquad (12)$$

If $p_0 = 0.5$ (i.e., a binary case) and $n \gg 1$, then Equation 12 reduces to the $E(Z^2)$ in the normal case, Equation 9.

We begin the calculation of $\mathrm{Var}(Z^2)$ by using the equation for the $j$th moment of a binomial distribution:

$$M_j = \left[\frac{d^j}{dt^j}\left(q + p\,e^t\right)^n\right]_{t=0}, \quad q = 1 - p.$$

Since $\mathrm{Var}(Z^2) = E(Z^4) - E^2(Z^2)$, we must evaluate $E(Z^4)$. Or,

$$E^{B}_{AP}(Z^4) = \frac{1}{n^2\sigma_0^4}\sum_{k=0}^{n}(k - np_0)^4\, B_k(n, p_1).$$

Expanding $(k - np_0)^4$, using the appropriate moments, and subtracting $E^2(Z^2)$, yields

$$\mathrm{Var}^{B}_{AP}(Z^2) = C_0 + C_1 n + C_2 n^{-1}. \qquad (13)$$
where

$$C_0 = 2 - 36\varepsilon_{ap}^2 + 10\varepsilon_{ap}^4 + 8\frac{\varepsilon_{ap}}{\sigma_0}(1 - 2p_0)(1 - 2\varepsilon_{ap}^2) + 6\frac{\varepsilon_{ap}^2}{\sigma_0^2},$$

$$C_1 = 4\varepsilon_{ap}^2(1 - \varepsilon_{ap}^2) + 4\frac{\varepsilon_{ap}^3}{\sigma_0}(1 - 2p_0), \text{ and}$$

$$C_2 = -6 + 36\varepsilon_{ap}^2 - 6\varepsilon_{ap}^4 + \frac{1 - 7\varepsilon_{ap}^2}{\sigma_0^2} + \frac{\varepsilon_{ap}}{\sigma_0^3}(1 - 2p_0)(12p_0^2 - 12p_0 + 1) + 12\frac{\varepsilon_{ap}^3}{\sigma_0}(1 - 2p_0).$$
Under the condition that $\varepsilon_{ap} \ll 1$ (a frequent occurrence in the perturbation approximation for AP), we ignore any terms of higher order than $\varepsilon_{ap}^2$. Then the variance reduces to

$$\mathrm{Var}^{B}_{AP}(Z^2) = \left[2 - 36\varepsilon_{ap}^2 + 8\frac{\varepsilon_{ap}}{\sigma_0}(1 - 2p_0) + 6\frac{\varepsilon_{ap}^2}{\sigma_0^2}\right] + 4\varepsilon_{ap}^2 n + \frac{1}{n}\left[-6 + 36\varepsilon_{ap}^2 + \frac{1 - 7\varepsilon_{ap}^2}{\sigma_0^2} + \frac{\varepsilon_{ap}}{\sigma_0^3}(1 - 2p_0)(12p_0^2 - 12p_0 + 1)\right].$$

We notice that when $\varepsilon_{ap} = 0$, the variance reduces to the MCE case for Bernoulli sampling. When $n \gg 1$, $\varepsilon_{ap} \ll 1$, and $p_0 = 0.5$, the variance reduces to that derived under the normal distribution assumption. Or,

$$\mathrm{Var}^{B}_{AP}(Z^2) = 2(1 + 2n\varepsilon_{ap}^2). \qquad (14)$$
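Equations 12 through 14 can be checked against an exact pmf computation; this is our own sketch, and the values of $n$, $p_0$, and $\varepsilon_{ap}$ are arbitrary. With $p_0 = 0.5$ and large $n$, the exact variance approaches the normal-case value $2(1 + 2n\varepsilon_{ap}^2)$.

```python
import math

def ap_moments(n: int, p0: float, eps: float):
    """Exact E(Z^2) and Var(Z^2) when k ~ Binomial(n, p1), p1 = p0 + eps*sigma0."""
    s0 = math.sqrt(p0 * (1.0 - p0))
    p1 = p0 + eps * s0
    m2 = m4 = 0.0
    for k in range(n + 1):
        z2 = ((k - n * p0) / (s0 * math.sqrt(n))) ** 2
        w = math.comb(n, k) * p1 ** k * (1.0 - p1) ** (n - k)
        m2 += w * z2
        m4 += w * z2 * z2
    return m2, m4 - m2 * m2

n, p0, eps = 200, 0.5, 0.05
s0 = math.sqrt(p0 * (1.0 - p0))
ez2, var = ap_moments(n, p0, eps)
eq12 = 1 + eps ** 2 * (n - 1) + (eps / s0) * (1 - 2 * p0)  # Equation 12
print(f"E(Z^2):   exact {ez2:.4f} vs Eq. 12 {eq12:.4f}")
print(f"Var(Z^2): exact {var:.3f} vs Eq. 14 approx {2 * (1 + 2 * n * eps ** 2):.3f}")
```

The exact second moment matches Equation 12 to floating-point precision, while the exact variance differs from the Equation 14 approximation only by the ignored higher-order and $1/n$ terms.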
Anomalous Cognition (AC)
The primary assumption for AC is that the parent distribution remains unchanged, i.e., $N(\mu_0, \sigma_0^2)$. It further assumes that, because of an AC-mediated bias, the sampling distribution is distorted, leading to a $Z$-distribution given by $N(\mu_{ac}, \sigma_{ac}^2)$. In the most general case, $\mu_{ac}$ and $\sigma_{ac}$ may be functions of $n$ and time. The expected value of $Z$ is given by (by definition)

$$E^{N}_{AC}(Z) = \mu_{ac}. \qquad (15)$$

The expected value of $Z^2$ is given by definition as

$$E^{N}_{AC}(Z^2) = \mu_{ac}^2 + \sigma_{ac}^2. \qquad (16)$$

The $\mathrm{Var}(Z^2)$ can be calculated by noticing that

$$\frac{Z^2}{\sigma_{ac}^2} \sim \chi^2\left(1, \frac{\mu_{ac}^2}{\sigma_{ac}^2}\right),$$

a noncentral $\chi^2$ with 1 degree of freedom and noncentrality parameter $\mu_{ac}^2/\sigma_{ac}^2$. So the $\mathrm{Var}(Z^2)$ is given by

$$\mathrm{Var}\left(\frac{Z^2}{\sigma_{ac}^2}\right) = 2\left(1 + 2\frac{\mu_{ac}^2}{\sigma_{ac}^2}\right),$$

or

$$\mathrm{Var}^{N}_{AC}(Z^2) = 2(\sigma_{ac}^4 + 2\mu_{ac}^2\sigma_{ac}^2). \qquad (17)$$
As in the normal case, the primary assumption is that the parent distribution remains unchanged, and that because of a psi-mediated bias the sampling distribution is distorted, leading to a discrete $Z$-distribution characterized by $\mu_{ac}(n)$ and $\sigma_{ac}^2(n)$. Thus, by definition, the expected values of $Z$ and $Z^2$ are given by

$$E^{B}_{AC}(Z) = \mu_{ac}, \text{ and}$$

$$E^{B}_{AC}(Z^2) = \mu_{ac}^2 + \sigma_{ac}^2. \qquad (18)$$

For any value of $n$, estimates of these parameters are calculated from $N$ data points as

$$\hat{\mu}_{ac} = \frac{1}{N}\sum_{j=1}^{N} z_j, \text{ and}$$

$$\hat{\sigma}_{ac}^2 = \frac{\sum_{j=1}^{N} z_j^2 - N\hat{\mu}_{ac}^2}{N - 1}.$$

The $\mathrm{Var}(Z^2)$ for the discrete case is identical to the continuous case. Therefore

$$\mathrm{Var}^{B}_{AC}(Z^2) = 2(\sigma_{ac}^4 + 2\mu_{ac}^2\sigma_{ac}^2). \qquad (19)$$
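These estimators, together with Equation 19, can be exercised on synthetic data. This is our own illustration; the values $\mu_{ac} = 0.3$, $\sigma_{ac} = 0.9$, and $N = 50{,}000$ are hypothetical.

```python
import random
import statistics

random.seed(4)

# Hypothetical AC-distorted z-scores drawn from N(mu_ac, sigma_ac^2).
mu_ac, s_ac, N = 0.3, 0.9, 50000
zs = [random.gauss(mu_ac, s_ac) for _ in range(N)]

mu_hat = statistics.fmean(zs)                                   # estimate of mu_ac
var_hat = (sum(z * z for z in zs) - N * mu_hat ** 2) / (N - 1)  # estimate of sigma_ac^2

predicted = 2 * (var_hat ** 2 + 2 * mu_hat ** 2 * var_hat)      # Equation 19
observed = statistics.variance([z * z for z in zs])

print(f"Var(Z^2): predicted {predicted:.3f}, observed {observed:.3f}")
```

The predicted and empirical variances of $Z^2$ agree to within sampling error, as Equation 19 requires.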