Often a set of data is collected, or an experiment carried out, not simply with a view to
summarising the results and estimating suitable parameters but rather in order to test an idea. This
idea or hypothesis may arise by purely theoretical reasoning, or it may be suggested by the results
of earlier experiments.
The way statistics sets up a hypothesis for testing is a little strange. We start with what is called the null hypothesis: the assumption that there is no effect of, for example, the experimental treatment or the difference in conditions. We test this against an alternative hypothesis, which is the hypothesis we are attempting to support with our data. Generally we hope that our data depart sufficiently from the expectations of the null hypothesis that we can reject it and so accept our alternative hypothesis. For example, under the null hypothesis we expect no effect of a drug on heart rate. Our data show an increase. If that increase is sufficiently large, we may conclude that the null hypothesis was wrong: the drug does have an effect, causing an increase in heart rate.
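The drug example can be sketched numerically. Below is a minimal paired t-test in Python with invented heart-rate numbers (the data and the sample size are assumptions for illustration, not real measurements): under the null hypothesis the mean of the before/after differences is zero, and a sufficiently large t statistic leads us to reject that.

```python
import math

# Hypothetical resting heart rates (bpm) for the same ten subjects,
# measured before and after taking the drug (invented data).
before = [72, 68, 75, 71, 69, 74, 70, 73, 66, 71]
after  = [78, 74, 79, 75, 73, 80, 76, 77, 72, 75]

# Under the null hypothesis the drug has no effect, so the mean
# of the paired differences should be zero.
diffs = [a - b for a, b in zip(after, before)]
n = len(diffs)
mean_d = sum(diffs) / n
var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)
t = mean_d / math.sqrt(var_d / n)  # paired t statistic, n - 1 df

# The two-sided 5% critical value for t with 9 degrees of freedom
# is about 2.262; a larger |t| means we reject the null hypothesis.
print(f"t = {t:.2f}, reject null: {abs(t) > 2.262}")  # t = 15.00, reject null: True
```

With these made-up numbers the observed increase is far larger than chance variation would plausibly produce, so the null hypothesis of "no effect" is rejected.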
(We do not always hope for a difference; sometimes we hope to show that there is no effect. For example, a tobacco company may wish to show that smoking its cigarettes does not cause an increase in a certain type of cancer. Rather than hoping to reject the null hypothesis, we may hope to be able to "fail to reject" it.)