In every elementary statistics textbook on inference, you will find the following question:

How does one draw an inference about a correlation estimate between two variables?

An inverse hyperbolic tangent transformation (Fisher's z-transformation) is applied to the correlation coefficient, and the transformed variable is shown to be approximately normally distributed. A confidence interval is computed on the transformed scale and then mapped back to the original correlation scale.
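That recipe can be sketched in a few lines of Python. This is a minimal illustration using only the standard library; the function name `fisher_ci` is mine, and the `1/sqrt(n - 3)` standard error is the standard one for Fisher's z-interval:

```python
import math
from statistics import NormalDist

def fisher_ci(r, n, conf=0.95):
    """Confidence interval for a correlation coefficient r
    computed from a sample of size n, via Fisher's arctanh
    transformation."""
    # On the transformed scale, z = atanh(r) is approximately
    # normal with standard error 1 / sqrt(n - 3).
    z = math.atanh(r)
    se = 1.0 / math.sqrt(n - 3)
    zcrit = NormalDist().inv_cdf((1 + conf) / 2)
    # Build the interval on the transformed scale, then map it
    # back to the correlation scale with tanh.
    return math.tanh(z - zcrit * se), math.tanh(z + zcrit * se)
```

Because the endpoints are mapped back through `tanh`, the interval always stays inside (-1, 1), something a naive normal interval on r itself cannot guarantee.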

If one is curious, a natural follow-up question would be:

Given any random variable X from a specific family of distributions, does there exist a single transformation, Y = g(X), such that Y has a nearly normal distribution?
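One family where such a g exists exactly is the lognormal: if X is lognormal, then log X is normal by construction. A quick numerical check of this, as a standard-library-only sketch (the `skewness` helper is ad hoc, written just for this demonstration):

```python
import math
import random
from statistics import fmean, pstdev

def skewness(xs):
    """Sample skewness: roughly zero for symmetric (e.g. normal) data."""
    m, s = fmean(xs), pstdev(xs)
    return sum((v - m) ** 3 for v in xs) / (len(xs) * s ** 3)

rng = random.Random(42)
# Lognormal draws: exponentiate standard normal variates.
x = [math.exp(rng.gauss(0, 1)) for _ in range(5000)]

print(skewness(x))                         # heavily right-skewed
print(skewness([math.log(v) for v in x]))  # close to zero after g = log
```

For most families, of course, no transformation achieves exact normality, which is what makes the general question interesting.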

This seemingly straightforward question turns out to be anything but, once you start thinking seriously about it. A paper titled "Transformation Theory: How Normal is a Family of Distributions?" by Brad Efron, the pioneer of the bootstrap, answers the question in a masterly fashion.

The paper is 15 pages long and tackles the question in great detail. One of its surprising findings is that you can say a lot about the existence of a transformation without actually knowing the transformation. The key mathematical object used throughout the discussion is a diagnostic function, motivated in terms of a local transformation to normality: it measures how quickly the local transformation to normality changes as the parameter of the distribution changes.

The author defines the following six classes of distributions:

  • NTF : Normal Transformation Family

  • STF : Symmetric Transformation Family

  • GTF : General Transformation Family

  • NSTF : Normal Scaled Transformation Family

  • SSTF : Symmetric Scaled Transformation Family

  • GSTF : General Scaled Transformation Family

For each of the above families, the diagnostic function is used to explore various aspects of the family, and the paper shows how to recover all the relevant components of the transformation from the diagnostic function. The paper matters for one reason in particular: one can use the bootstrap distribution to recover all the components that go into the general scaled transformation family (GSTF). Why is it necessary to know the components of the GSTF? Because taking them into account yields better confidence interval estimates.

Many R packages have impressive math behind their functions. If you ever use the boot.ci function to estimate nonparametric confidence intervals, there is a fair chance you will like this paper. One of the powerful techniques that boot.ci offers is BCa (the bias-corrected and accelerated bootstrap), which is used to compute better confidence intervals. If you are curious about the math behind BCa, going through this paper would be helpful.
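To give a rough feel for the machinery, here is a hedged, standard-library-only Python sketch of the BCa construction: a bias correction z0 from the bootstrap distribution, an acceleration a from the jackknife, and adjusted percentiles in place of the naive ones. The function name `bca_interval` and the simple percentile indexing are my own simplifications; boot.ci's actual implementation is considerably more careful.

```python
import random
from statistics import NormalDist

def bca_interval(data, stat, n_boot=2000, alpha=0.05, seed=0):
    """Sketch of a BCa (bias-corrected and accelerated) bootstrap
    confidence interval for stat(data)."""
    rng = random.Random(seed)
    n = len(data)
    theta_hat = stat(data)

    # Bootstrap replicates of the statistic.
    boots = sorted(
        stat([data[rng.randrange(n)] for _ in range(n)])
        for _ in range(n_boot)
    )

    nd = NormalDist()
    # Bias correction z0: how far the observed statistic sits from
    # the center of the bootstrap distribution.
    prop = sum(b < theta_hat for b in boots) / n_boot
    prop = min(max(prop, 1.0 / n_boot), 1 - 1.0 / n_boot)
    z0 = nd.inv_cdf(prop)

    # Acceleration a: jackknife estimate of the statistic's skewness.
    jack = [stat(data[:i] + data[i + 1:]) for i in range(n)]
    jbar = sum(jack) / n
    num = sum((jbar - j) ** 3 for j in jack)
    den = 6.0 * sum((jbar - j) ** 2 for j in jack) ** 1.5
    a = num / den if den != 0 else 0.0

    # Adjusted percentile levels replace alpha/2 and 1 - alpha/2.
    def adjusted(level):
        z = nd.inv_cdf(level)
        return nd.cdf(z0 + (z0 + z) / (1 - a * (z0 + z)))

    def pick(level):
        idx = int(adjusted(level) * n_boot)
        return boots[min(max(idx, 0), n_boot - 1)]

    return pick(alpha / 2), pick(1 - alpha / 2)
```

For symmetric, unbiased statistics, z0 and a are near zero and the result collapses to the ordinary percentile interval; the adjustments only bite when the bootstrap distribution is biased or skewed.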