Imagine you have a given stochastic process, a function that makes random
decisions, and that you want to calculate the average of a given quantity
coming out of this process. One run of the process gives you one sample of the
desired quantity. To estimate your average, you could then make a list of
samples, and run the process again and again until the variance of this list is
low enough. But what value of this "low" threshold is good when you don't even
know the mean *a priori*?

Student's t-test provides an answer to this question using the unbiased estimator of the standard deviation. (If you do the math, you will note that it is not as simple as replacing the actual variance by its estimator in a Chebyshev inequality.) Here is how it looks in Python:

```
from math import sqrt

import numpy
from scipy import stats


def student_test(samples, prec, confidence=0.9):
    """
    Student's t-test.

    Parameters
    ----------
    samples : list
        List of numbers generated from your stochastic process.
    prec : scalar
        Desired distance between the empirical and real mean.
    confidence : scalar
        Desired probability of correct answer.

    Returns
    -------
    result : bool
        True if and only if your samples pass Student's t-test.
    """
    assert 0. < confidence < 1.
    n = len(samples)
    if n < 2:  # the unbiased estimator needs at least two samples
        return False
    sigma = numpy.std(samples, ddof=1)  # unbiased estimator of the std dev
    quantile = stats.t.ppf(confidence, n - 1)
    return quantile * sigma / sqrt(n) < prec
```
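As a quick sanity check, we can look directly at the half-width `quantile * sigma / sqrt(n)` that the test compares against `prec`. The sketch below (the normal distribution, its parameters and the fixed seed are arbitrary choices for illustration) shows that it is small for a few hundred well-behaved samples:

```python
from math import sqrt

import numpy
from scipy import stats

rng = numpy.random.RandomState(0)  # fixed seed for reproducibility
samples = rng.normal(loc=10., scale=2., size=400)

n = len(samples)
sigma = numpy.std(samples, ddof=1)  # unbiased estimator, as in student_test
half_width = stats.t.ppf(0.9, n - 1) * sigma / sqrt(n)

print("half-width = %.3f" % half_width)  # shrinks roughly like 1 / sqrt(n)
```

With 400 samples of standard deviation around 2, the half-width is already close to a tenth, so `student_test(samples, prec=0.2)` would pass here.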

This function will return `True` when the empirical mean of your list of
samples is less than `prec` away from the theoretical one (statistically,
this holds under the hypothesis that all samples are i.i.d. and either follow
a normal distribution or are numerous enough for the law of large numbers to
apply). The `confidence` parameter tunes the tolerance to failure: when
samples are drawn from a normal distribution, it sets exactly the probability
that the test returns a correct answer. Of course, lower values of `prec` or
higher values of `confidence` tend to require more samples.
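To get a feel for the cost of a higher `confidence`, here is a small sketch (the sample count `n = 30` is an arbitrary choice for illustration) of how the Student quantile used in `student_test` grows with the confidence level:

```python
from scipy import stats

n = 30  # illustrative sample count
for confidence in (0.8, 0.9, 0.99, 0.999):
    quantile = stats.t.ppf(confidence, n - 1)
    print("confidence=%.3f -> quantile=%.2f" % (confidence, quantile))
```

Since the test requires `quantile * sigma / sqrt(n) < prec`, a larger quantile must be compensated by a larger `n` before the test passes.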

```
def run_until_good_mean_estimate(f):
    """
    Generate samples from a stochastic process `f` until there are enough
    samples to provide a good estimate of the mean of `f()`.

    Parameters
    ----------
    f : function
        Function with no argument returning a real number.

    Returns
    -------
    samples : list
        List of return values from the stochastic process.
    """
    prec, samples = 1., []
    while not student_test(samples, prec):
        samples.append(f())
        mu = numpy.mean(samples)
        prec = abs(mu) / 100.  # abs() keeps prec positive for negative means
    print("Mean f(): %.2f +/- %.2f" % (mu, prec))
    return samples
```

Here, we feed back one percent of the empirical mean as the precision
parameter for Student's t-test. As a rule of thumb, when a better value for
this parameter is not known *a priori*, this should yield a result with two
significant digits of precision.
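Putting it all together, here is a sketch of an end-to-end run on a toy stochastic process (the Gaussian with mean 42, the fixed seed, and the repeated function definitions are only there so the snippet runs standalone):

```python
from math import sqrt
import random

import numpy
from scipy import stats


def student_test(samples, prec, confidence=0.9):
    # Same test as above, repeated so this snippet is self-contained
    n = len(samples)
    if n < 2:
        return False
    sigma = numpy.std(samples, ddof=1)
    quantile = stats.t.ppf(confidence, n - 1)
    return quantile * sigma / sqrt(n) < prec


def run_until_good_mean_estimate(f):
    # Same driver as above, repeated so this snippet is self-contained
    prec, samples = 1., []
    while not student_test(samples, prec):
        samples.append(f())
        mu = numpy.mean(samples)
        prec = abs(mu) / 100.
    print("Mean f(): %.2f +/- %.2f" % (mu, prec))
    return samples


random.seed(0)  # fixed seed so the run is reproducible
samples = run_until_good_mean_estimate(lambda: random.gauss(42., 5.))
```

On this example, a few hundred samples are enough for the empirical mean to settle near 42 with the expected two significant digits.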
