
Contraharmonic mean


In mathematics, a contraharmonic mean is a function complementary to the harmonic mean. The contraharmonic mean is a special case of the Lehmer mean, $L_p$, where $p = 2$.

Definition


The contraharmonic mean of a set of positive real numbers[1] is defined as the arithmetic mean of the squares of the numbers divided by the arithmetic mean of the numbers:

$$C(x_1, x_2, \dots, x_n) = \frac{\tfrac{1}{n}\left(x_1^2 + x_2^2 + \cdots + x_n^2\right)}{\tfrac{1}{n}\left(x_1 + x_2 + \cdots + x_n\right)} = \frac{x_1^2 + x_2^2 + \cdots + x_n^2}{x_1 + x_2 + \cdots + x_n}.$$
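
As a quick illustration of this definition, a minimal sketch in Python (the function name contraharmonic_mean is chosen here for illustration):

```python
def contraharmonic_mean(values):
    """Mean of the squares divided by the mean of the values."""
    if not values or any(v <= 0 for v in values):
        raise ValueError("expects a non-empty list of positive numbers")
    return sum(v * v for v in values) / sum(values)

# C(1, 2, 3) = (1 + 4 + 9) / (1 + 2 + 3) = 14/6 ≈ 2.333
print(contraharmonic_mean([1, 2, 3]))
```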

Two-variable formulae


From the formulas for the arithmetic mean and harmonic mean of two variables we have:

$$C(a, b) = \frac{a^2 + b^2}{a + b} = (a + b) - \frac{2ab}{a + b} = 2A(a, b) - H(a, b),$$

where $A(a, b) = \tfrac{a + b}{2}$ is the arithmetic mean and $H(a, b) = \tfrac{2ab}{a + b}$ is the harmonic mean.

Notice that for two variables the average of the harmonic and contraharmonic means is exactly equal to the arithmetic mean:

A(H(a, b), C(a, b)) = A(a, b)

As a gets closer to 0, H(a, b) also gets closer to 0. The harmonic mean is very sensitive to low values. On the other hand, the contraharmonic mean is sensitive to larger values, so as a approaches 0, C(a, b) approaches b (so their average remains A(a, b)).

There are two other notable relationships between 2-variable means. First, the geometric mean of the arithmetic and harmonic means is equal to the geometric mean of the two values:

$$G(A(a, b), H(a, b)) = \sqrt{\frac{a + b}{2} \cdot \frac{2ab}{a + b}} = \sqrt{ab} = G(a, b)$$

The second relationship is that the geometric mean of the arithmetic and contraharmonic means is the root mean square:

$$G(A(a, b), C(a, b)) = \sqrt{\frac{a + b}{2} \cdot \frac{a^2 + b^2}{a + b}} = \sqrt{\frac{a^2 + b^2}{2}} = R(a, b)$$
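
These two identities are easy to verify numerically; a minimal sketch in Python (the values of a and b are arbitrary):

```python
import math

a, b = 3.0, 7.0
A = (a + b) / 2                # arithmetic mean: 5.0
H = 2 * a * b / (a + b)        # harmonic mean: 4.2
C = (a * a + b * b) / (a + b)  # contraharmonic mean: 5.8

# Geometric mean of A and H equals the geometric mean of a and b
print(math.sqrt(A * H), math.sqrt(a * b))                # both sqrt(21) ≈ 4.583
# Geometric mean of A and C equals the root mean square of a and b
print(math.sqrt(A * C), math.sqrt((a * a + b * b) / 2))  # both sqrt(29) ≈ 5.385
```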

The contraharmonic mean of two variables can be constructed geometrically using a trapezoid.[2]

Additional constructions


The contraharmonic mean can be constructed on a circle similar to the way the Pythagorean means of two variables are constructed.[3] The contraharmonic mean is the remainder of the diameter on which the harmonic mean lies.[4]

History


The contraharmonic mean was discovered by the Greek mathematician Eudoxus in the 4th century BCE.[5]

Properties


It is easy to show that this satisfies the characteristic properties of a mean of some list of values $x_1, x_2, \dots, x_n$:

$$\min(x_1, \dots, x_n) \le C(x_1, \dots, x_n) \le \max(x_1, \dots, x_n)$$

$$C(t x_1, t x_2, \dots, t x_n) = t \, C(x_1, x_2, \dots, x_n) \quad \text{for } t > 0$$

The first property implies the fixed point property, that for all k > 0,

C(k, k, ..., k) = k

The contraharmonic mean is higher in value than the arithmetic mean and also higher than the root mean square:

$$H(x) \le G(x) \le L(x) \le A(x) \le R(x) \le C(x),$$

where x is a list of values, H is the harmonic mean, G is the geometric mean, L is the logarithmic mean, A is the arithmetic mean, R is the root mean square and C is the contraharmonic mean. Unless all values of x are the same, the ≤ signs above can be replaced by <.
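
For two values the whole chain can be checked numerically; a sketch in Python (the two-variable form of the logarithmic mean is used, and the inputs are arbitrary):

```python
import math

a, b = 2.0, 8.0
H = 2 * a * b / (a + b)                    # harmonic mean: 3.2
G = math.sqrt(a * b)                       # geometric mean: 4.0
L = (b - a) / (math.log(b) - math.log(a))  # logarithmic mean: ≈ 4.328
A = (a + b) / 2                            # arithmetic mean: 5.0
R = math.sqrt((a * a + b * b) / 2)         # root mean square: ≈ 5.831
C = (a * a + b * b) / (a + b)              # contraharmonic mean: 6.8

print(H <= G <= L <= A <= R <= C)  # True
```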

The name contraharmonic may be due to the fact that when taking the mean of only two variables, the contraharmonic mean is as high above the arithmetic mean as the arithmetic mean is above the harmonic mean (i.e., the arithmetic mean of the two variables is equal to the arithmetic mean of their harmonic and contraharmonic means).

Relationship to arithmetic mean and variance


The contraharmonic mean of a random variable is equal to the sum of the arithmetic mean and the ratio of the variance to the arithmetic mean:[6]

$$C = \mu + \frac{\sigma^2}{\mu} = \frac{\mu^2 + \sigma^2}{\mu}$$

Since the variance is always ≥ 0, the contraharmonic mean is always greater than or equal to the arithmetic mean.
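
The same identity holds for a list of numbers when the population (biased) variance is used; a minimal sketch in Python (the data are arbitrary):

```python
import statistics

x = [2, 3, 5, 7, 11]
m = statistics.mean(x)         # arithmetic mean: 5.6
var = statistics.pvariance(x)  # population variance (divides by n): 10.24

contraharmonic = sum(v * v for v in x) / sum(x)
print(contraharmonic, m + var / m)  # both ≈ 7.4286
```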

The ratio of the variance and the mean was proposed as a test statistic by Clapham.[7] This statistic equals the contraharmonic mean minus the arithmetic mean.

Other relationships


Any integer contraharmonic mean of two different positive integers is the hypotenuse of a Pythagorean triple, while any hypotenuse of a Pythagorean triple is a contraharmonic mean of two different positive integers.[8]
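
A small brute-force check of this correspondence, sketched in Python (the search limit is arbitrary):

```python
def contraharmonic(a, b):
    return (a * a + b * b) / (a + b)

# Integer contraharmonic means of pairs of distinct positive integers,
# each checked against being the hypotenuse of a Pythagorean triple.
for a in range(1, 40):
    for b in range(a + 1, 40):
        c = contraharmonic(a, b)
        if c == int(c):
            c = int(c)
            is_hypotenuse = any(p * p + q * q == c * c
                                for p in range(1, c) for q in range(p, c))
            print(a, b, c, is_hypotenuse)  # is_hypotenuse is always True
```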

It is also related to Katz's statistic[9]

$$J_n = \sqrt{\frac{n}{2}} \left( \frac{s^2}{m} - 1 \right),$$

where m is the mean, s² the variance and n is the sample size.

$J_n$ is asymptotically normally distributed with a mean of zero and variance of 1.

Uses in statistics


The problem of a size-biased sample was discussed by Cox in 1969 in connection with the sampling of fibres. The expectation of a size-biased sample is equal to its contraharmonic mean,[10] and the contraharmonic mean is also used to estimate bias fields in multiplicative models, rather than the arithmetic mean as used in additive models.[11]

The contraharmonic mean can be used to average the intensity values of neighbouring pixels in an image, so as to reduce noise and make the image clearer to the eye.[12]
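
A minimal sketch of such a filter in Python with NumPy (the window size and the order parameter Q are illustrative; with Q = 0 the filter reduces to the arithmetic mean of the window, and with Q = 1 to its plain contraharmonic mean):

```python
import numpy as np

def contraharmonic_filter(image, size=3, Q=1.0):
    """Replace each pixel by the order-Q contraharmonic mean of its neighbourhood."""
    img = image.astype(float)
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    eps = 1e-12  # guards against division by zero in all-dark windows
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            window = padded[i:i + size, j:j + size]
            out[i, j] = (window ** (Q + 1)).sum() / ((window ** Q).sum() + eps)
    return out

# Example: the dark (pepper-type) pixel in the centre is smoothed away by a positive Q.
patch = np.array([[10, 12, 11], [11, 0, 12], [9, 13, 11]], dtype=float)
print(contraharmonic_filter(patch, size=3, Q=1.5))
```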

The probability of a fibre being sampled is proportional to its length. Because of this the usual sample mean (arithmetic mean) is a biased estimator of the true mean. To see this, consider the length-weighted distribution

$$g(x) = \frac{x f(x)}{m},$$

where f(x) is the true population distribution and m is its mean. Taking the expectation of x under g gives the contraharmonic mean rather than the usual (arithmetic) mean of the population.[13] This problem can be overcome by taking instead the expectation of the harmonic mean (1/x). Under g, the expectation of 1/x is

$$\operatorname{E}\left[\frac{1}{x}\right] = \frac{1}{m}$$

and it has variance

$$\operatorname{Var}\left[\frac{1}{x}\right] = \frac{m \operatorname{E}_f\left[\frac{1}{x}\right] - 1}{m^2},$$

where E is the expectation operator and E_f denotes expectation with respect to f. Asymptotically E[1/x] is distributed normally.
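
A small simulation of this effect, sketched in Python with NumPy (the gamma population and the sample sizes are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
population = rng.gamma(shape=3.0, scale=2.0, size=100_000)  # true mean ≈ 6

# Length-biased sampling: probability of selection proportional to the value.
weights = population / population.sum()
sample = rng.choice(population, size=50_000, p=weights)

contraharmonic = (population ** 2).sum() / population.sum()
print(population.mean())        # ≈ 6, the true mean
print(sample.mean())            # ≈ 8, close to the contraharmonic mean (biased upward)
print(contraharmonic)           # ≈ 8
print(1 / np.mean(1 / sample))  # ≈ 6, the harmonic-mean correction recovers the true mean
```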

The asymptotic efficiency of length-biased sampling, compared with random sampling, depends on the underlying distribution. If f(x) is log-normal the efficiency is 1, while if the population is gamma distributed with index b, the efficiency is b/(b − 1). This distribution has been used in modelling consumer behaviour[14] as well as quality sampling.

The contraharmonic mean has been used in image analysis,[15] and recently, in the form of its inverse, alongside the exponential distribution in transport planning.[16]


References

  1. ^ See "Means of Complex Numbers" (PDF). Texas College Mathematics Journal. 1 (1). January 1, 2005. Archived from the original (PDF) on September 9, 2006.
  2. ^ Umberger, Shannon. "Construction of the Contraharmonic Mean in a Trapezoid". University of Georgia.
  3. ^ Nelsen, Roger B. Proofs Without Words: Exercises in Visual Thinking. p. 56. ISBN 0-88385-700-6.
  4. ^ Slaev, Valery A.; Chunovkina, Anna G.; Mironovsky, Leonid A. (2019). Metrology and Theory of Measurement. De Gruyter. p. 217. ISBN 9783110652505.
  5. ^ Antoine, C. (1998). Les Moyennes. Paris: Presses Universitaires de France.
  6. ^ Kingley, Michael C.S. (1989). "The distribution of hauled out ringed seals: an interpretation of Taylor's law". Oecologia (79): 106–110.
  7. ^ Clapham, Arthur Roy (1936). "Overdispersion in grassland communities and the use of statistical methods in plant ecology". The Journal of Ecology (14): 232.
  8. ^ Pahikkala, Jussi (2010). "On contraharmonic mean and Pythagorean triples". Elemente der Mathematik. 65 (2): 62–67.
  9. ^ Katz, L. (1965). Unified treatment of a broad class of discrete probability distributions. Proceedings of the International Symposium on Discrete Distributions. Montreal.
  10. ^ Zelen, Marvin (1972). Length-biased sampling and biomedical problems. Biometric Society Meeting. Dallas, Texas.
  11. ^ Banerjee, Abhirup; Maji, Pradipta (2013). Rough Sets for Bias Field Correction in MR Images Using Contraharmonic Mean and Quantitative Index. IEEE Transactions on Medical Imaging.
  12. ^ Mitra, Sabry (October 2021). "Contraharmonic Mean Filter". Kajian Ilmiah Informatika dan Komputer. 2 (2): 75–79.
  13. ^ Sudman, Seymour (1980). Quota sampling techniques and weighting procedures to correct for frequency bias.
  14. ^ Keillor, Bruce D.; D'Amico, Michael; Horton, Veronica (2001). "Global Consumer Tendencies". Psychology and Marketing. 18 (1): 1–19.
  15. ^ Pathak, Monika; Singh, Sukhdev (2014). "Comparative analysis of image denoising techniques". International Journal of Computer Science and Engineering Technology. 5 (2): 160–167.
  16. ^ Amreen, Mohammed; Venkateswarlu, Bandi (2024). "A New Way for Solving Transportation Issues Based on the Exponential Distribution and the Contraharmonic Mean". Journal of Applied Mathematics and Informatics. 42 (3): 647–661.