If a statistical variable under study can be measured on a ratio or at least an
interval scale, it has well-defined units. With a suitable assumption about the
variable's distribution (normality, for example), the variable can be tested
parametrically. The t-test and the z-test are well-known parametric methods;
each places its own restrictions on the data.
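As a minimal sketch of a parametric test, here is a two-sample t-test using SciPy (the sample values are hypothetical, chosen only for illustration):

```python
# Sketch: two-sample t-test with SciPy, assuming interval/ratio data
# drawn from (approximately) normal populations.
from scipy import stats

# Two small samples on an interval scale (made-up measurements).
group_a = [20.1, 19.8, 20.5, 20.0, 19.9, 20.3]
group_b = [21.2, 21.0, 21.5, 20.9, 21.3, 21.1]

t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
# A small p-value suggests the two population means differ.
```

A small p-value here would lead us to reject the hypothesis that the two population means are equal.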
If a statistical variable is only of the nominal or ordinal type, it does not qualify
for parametric testing. Sometimes we may have obtained a sample that is not drawn
from a well-defined population. In that case there are no population parameters to
fall back on, since the population itself is nonexistent or poorly defined. Here we
conduct a non-parametric test, which is essentially distribution-free: it imposes no
requirement of normality or homogeneity on the data. This also makes such tests
robust to a few outliers, whose influence is largely ignored.
An advantage of non-parametric tests is that they often give quick answers with
little computational work. However, because they typically discard information
(for example, by working with ranks rather than raw values), it is harder to
quantify the observed differences.
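The robustness to outliers can be seen in a rank-based test such as the Mann-Whitney U test. A sketch with SciPy, using made-up tie-free samples: replacing the largest value with a wild outlier does not change the ranks, so the test result is unchanged.

```python
# Sketch: Mann-Whitney U test (rank-based, distribution-free) with SciPy.
from scipy import stats

group_a = [11, 12, 13, 14, 15, 16]
group_b = [21, 22, 23, 24, 25, 26]
u_clean, p_clean = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")

# Replace the largest value with an extreme outlier. Since 500 is still
# the largest observation, the pooled ranks are identical.
group_b_outlier = [21, 22, 23, 24, 25, 500]
u_out, p_out = stats.mannwhitneyu(group_a, group_b_outlier, alternative="two-sided")

print(p_clean == p_out)  # the outlier has no effect on the test
```

A t-test on the same data would shift noticeably, because the outlier inflates the second group's mean and variance.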
Many critical statistical procedures such as regression, hypothesis testing and
ANOVA assume the population is normally distributed. This means that if we can
"normalize" the data, we can use these powerful statistical analysis tools. Tests
that do not require an assumption of normality exist, but they are not as sensitive
as those based on the normal distribution. Parametric tests are numerous and more
powerful than their non-parametric counterparts. The control charts for the mean,
range, etc. used in statistical quality control are also based on the normal
distribution.
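One common way to "normalize" right-skewed data is a log transform. A sketch with simulated lognormal data (synthetic, for illustration only), using the Shapiro-Wilk test to check normality before and after:

```python
# Sketch: log-transforming skewed data, then checking normality with
# the Shapiro-Wilk test. The data here are simulated, not real.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
skewed = rng.lognormal(mean=0.0, sigma=1.0, size=200)  # right-skewed sample

_, p_raw = stats.shapiro(skewed)          # expect a tiny p: clearly non-normal
_, p_log = stats.shapiro(np.log(skewed))  # log of lognormal data is normal
print(f"raw p = {p_raw:.2e}, log-transformed p = {p_log:.3f}")
```

After the transform the normality assumption is plausible, and the parametric toolbox above becomes available.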