Zipf's law

Probability mass function
[Figure: Zipf PMF for N = 10 on a log–log scale. The horizontal axis is the index k. The function is defined only at integer values of k; the connecting lines do not indicate continuity.]

Cumulative distribution function
[Figure: Zipf CDF for N = 10. The horizontal axis is the index k. The function is defined only at integer values of k; the connecting lines do not indicate continuity.]

Parameters  s ≥ 0 (real), N ∈ {1, 2, ...} (integer)
Support  k ∈ {1, 2, ..., N}
pmf  1/(H_{N,s} k^{s}), where H_{N,s} is the Nth generalized harmonic number
CDF  H_{k,s}/H_{N,s}
Mean  H_{N,s−1}/H_{N,s}
Mode  1
Entropy  (s/H_{N,s}) Σ_{k=1}^{N} (ln k)/k^{s} + ln(H_{N,s})
MGF  (1/H_{N,s}) Σ_{n=1}^{N} e^{nt}/n^{s}
CF  (1/H_{N,s}) Σ_{n=1}^{N} e^{int}/n^{s}
Zipf's law (/ˈzɪf/) is an empirical law, formulated using mathematical statistics, stating that many types of data studied in the physical and social sciences can be approximated with a Zipfian distribution, one of a family of related discrete power-law probability distributions. The law is named after the American linguist George Kingsley Zipf (1902–1950), who popularized it and sought to explain it (Zipf 1935, 1949), though he did not claim to have originated it.^{[1]} The French stenographer Jean-Baptiste Estoup (1868–1950) appears to have noticed the regularity before Zipf.^{[2]} It was also noted in 1913 by the German physicist Felix Auerbach (1856–1933).^{[3]}
Motivation
Zipf's law states that given some corpus of natural language utterances, the frequency of any word is inversely proportional to its rank in the frequency table. Thus the most frequent word will occur approximately twice as often as the second most frequent word, three times as often as the third most frequent word, etc.: the rank-frequency distribution is an inverse relation. For example, in the Brown Corpus of American English text, the word "the" is the most frequently occurring word, and by itself accounts for nearly 7% of all word occurrences (69,971 out of slightly over 1 million). True to Zipf's law, the second-place word "of" accounts for slightly over 3.5% of words (36,411 occurrences), followed by "and" (28,852 occurrences). Only 135 vocabulary items are needed to account for half the Brown Corpus.^{[4]}
The same relationship occurs in many other rankings unrelated to language, such as the population ranks of cities in various countries, corporation sizes, income rankings, ranks of the number of people watching the same TV channel,^{[5]} and so on. The appearance of the distribution in rankings of cities by population was first noticed by Felix Auerbach in 1913.^{[3]} Empirically, a data set can be tested to see whether Zipf's law applies by checking the goodness of fit of an empirical distribution to the hypothesized power-law distribution with a Kolmogorov–Smirnov test, and then comparing the (log) likelihood ratio of the power-law distribution to alternative distributions like an exponential distribution or a lognormal distribution.^{[6]} When Zipf's law is checked for cities, a better fit has been found with exponent s = 1.07; i.e. the nth largest settlement is 1/n^{1.07} the size of the largest settlement. While Zipf's law holds for the upper tail of the distribution, the entire distribution of cities is lognormal and follows Gibrat's law.^{[7]} Both laws are consistent because a lognormal tail can typically not be distinguished from a Pareto (Zipf) tail.
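For illustration, the following Python sketch (assuming NumPy, and using made-up toy counts rather than real data) computes the Kolmogorov–Smirnov distance between an observed rank-frequency distribution and a hypothesized Zipf model. The full procedure of Clauset et al. also estimates the exponent and bootstraps a p-value, which is omitted here.

    import numpy as np

    def zipf_pmf(s, N):
        # Zipf pmf f(k; s, N) = k^-s / H_{N,s} for k = 1..N.
        w = np.arange(1, N + 1, dtype=float) ** -s
        return w / w.sum()

    def ks_distance(counts, s):
        # Kolmogorov-Smirnov distance between the observed rank distribution
        # (counts sorted in descending order) and a Zipf(s, N) model.
        counts = np.sort(np.asarray(counts, dtype=float))[::-1]
        empirical_cdf = np.cumsum(counts) / counts.sum()
        model_cdf = np.cumsum(zipf_pmf(s, len(counts)))
        return np.abs(empirical_cdf - model_cdf).max()

    # Toy counts, loosely 1/k-shaped (made up for demonstration only).
    toy_counts = [1000, 480, 350, 240, 190, 170, 140, 120, 115, 100]
    print(ks_distance(toy_counts, s=1.0))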
Theoretical review
Zipf's law is most easily observed by plotting the data on a log–log graph, with the axes being log (rank order) and log (frequency). For example, the word "the" (as described above) would appear at x = log(1), y = log(69971). It is also possible to plot reciprocal rank against frequency, or reciprocal frequency or inter-word interval against rank.^{[1]} The data conform to Zipf's law to the extent that the plot is linear.
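Such a rank-frequency plot can be produced with a short script. A minimal sketch, assuming NumPy and Matplotlib are installed and a plain-text corpus is available in a hypothetical file corpus.txt:

    import collections
    import numpy as np
    import matplotlib.pyplot as plt

    # Assumed input: any plain-text corpus saved as "corpus.txt" (hypothetical file).
    text = open("corpus.txt", encoding="utf-8").read().lower()
    counts = collections.Counter(text.split())

    # Frequencies in descending order, paired with ranks 1..V.
    freqs = np.array(sorted(counts.values(), reverse=True))
    ranks = np.arange(1, len(freqs) + 1)

    # Under Zipf's law the points fall on an approximately straight line.
    plt.loglog(ranks, freqs, marker=".", linestyle="none")
    plt.xlabel("rank (log scale)")
    plt.ylabel("frequency (log scale)")
    plt.show()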
Formally, let:
 N be the number of elements;
 k be their rank;
 s be the value of the exponent characterizing the distribution.
Zipf's law then predicts that out of a population of N elements, the frequency of elements of rank k, f(k; s, N), is:

f(k; s, N) = (1/k^{s}) / (Σ_{n=1}^{N} 1/n^{s})
Zipf's law holds if the number of elements with a given frequency is a random variable with power-law distribution p(f) ∝ f^{−(1+1/s)}.^{[8]}
It has been claimed that this representation of Zipf's law is more suitable for statistical testing, and in this way it has been analyzed in more than 30,000 English texts. The goodness-of-fit tests yield that only about 15% of the texts are statistically compatible with this form of Zipf's law. Slight variations in the definition of Zipf's law can increase this percentage up to close to 50%.^{[9]}
In the example of the frequency of words in the English language, N is the number of words in the English language and, if we use the classic version of Zipf's law, the exponent s is 1. f(k; s,N) will then be the fraction of the time the kth most common word occurs.
The law may also be written:

f(k; s, N) = 1/(k^{s} H_{N,s})

where H_{N,s} = Σ_{n=1}^{N} (1/n^{s}) is the Nth generalized harmonic number.
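A minimal sketch of this formula, assuming NumPy, computing f(k; s, N) via the generalized harmonic number and confirming that the frequencies sum to 1:

    import numpy as np

    def generalized_harmonic(N, s):
        # H_{N,s} = sum_{n=1}^{N} 1/n^s
        return (np.arange(1, N + 1, dtype=float) ** -s).sum()

    def zipf_frequency(k, s, N):
        # f(k; s, N) = 1 / (k^s * H_{N,s})
        return 1.0 / (k ** s * generalized_harmonic(N, s))

    N, s = 10, 1.0
    probs = [zipf_frequency(k, s, N) for k in range(1, N + 1)]
    print(probs[0], sum(probs))  # largest share at rank 1; total is ~1.0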
The simplest case of Zipf's law is a "1/f function." Given a set of Zipfian-distributed frequencies, sorted from most common to least common, the second most common frequency will occur 1/2 as often as the first. The third most common frequency will occur 1/3 as often as the first. The fourth most common frequency will occur 1/4 as often as the first. The nth most common frequency will occur 1/n as often as the first. However, this cannot hold exactly, because items must occur an integer number of times; there cannot be 2.5 occurrences of a word. Nevertheless, over fairly wide ranges, and to a fairly good approximation, many natural phenomena obey Zipf's law.
Mathematically, the classic version of Zipf's law cannot hold exactly for infinitely many elements, since the sum of all relative frequencies is then the harmonic series, which diverges:

Σ_{n=1}^{∞} (1/n) = ∞

In human languages, word frequencies have a very heavy-tailed distribution, and can therefore be modeled reasonably well by a Zipf distribution with an s close to 1.
As long as the exponent s exceeds 1, it is possible for such a law to hold with infinitely many words, since if s > 1 then

ζ(s) = Σ_{n=1}^{∞} (1/n^{s}) < ∞

where ζ is Riemann's zeta function.
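A quick numerical check of this convergence, assuming SciPy is available (scipy.special.zeta evaluates the zeta function):

    import numpy as np
    from scipy.special import zeta

    s = 1.5
    n = np.arange(1, 1_000_001, dtype=float)

    # For s > 1 the partial sums approach the finite limit zeta(s)...
    print((n ** -s).sum(), zeta(s))

    # ...whereas for s = 1 the harmonic sum keeps growing, roughly like ln N.
    print((1.0 / n).sum(), np.log(1e6))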
Put more plainly: ranked data following Zipf's law with s = 1 behave like a 1/n sequence. The most common item has relative frequency 1/1, the second most common 1/2, the third 1/3, and in general the item of rank n has relative frequency 1/n, however far down the ranking one goes.
Statistical explanation
Although Zipf's law holds for most languages, even for non-natural languages like Esperanto,^{[10]} the reason is still not well understood.^{[11]} However, it may be partially explained by the statistical analysis of randomly generated texts. Wentian Li has shown that in a document in which each character has been chosen randomly from a uniform distribution of all letters (plus a space character), the "words" follow the general trend of Zipf's law (appearing approximately linear on a log–log plot).^{[12]} Vitold Belevitch, in a paper titled On the Statistical Laws of Linguistic Distribution, offered a mathematical derivation. He took a large class of well-behaved statistical distributions (not only the normal distribution) and expressed them in terms of rank. He then expanded each expression into a Taylor series. In every case Belevitch obtained the remarkable result that a first-order truncation of the series resulted in Zipf's law. Further, a second-order truncation of the Taylor series resulted in Mandelbrot's law.^{[13]}^{[14]}
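Li's random-text observation is easy to reproduce in a few lines; a sketch, with the alphabet size and text length chosen arbitrarily for illustration:

    import collections
    import random
    import string

    random.seed(0)
    alphabet = string.ascii_lowercase[:5] + " "  # 5 letters plus space (arbitrary choice)

    # Draw each character uniformly at random and split the stream into "words".
    text = "".join(random.choices(alphabet, k=1_000_000))
    counts = collections.Counter(w for w in text.split() if w)

    # rank * frequency stays roughly level when frequency ~ 1/rank.
    for rank, (word, freq) in enumerate(counts.most_common(10), start=1):
        print(rank, word, freq, rank * freq)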
The principle of least effort is another possible explanation: Zipf himself proposed that neither speakers nor hearers using a given language want to work any harder than necessary to reach understanding, and the process that results in approximately equal distribution of effort leads to the observed Zipf distribution.^{[15]}^{[16]}
Similarly, preferential attachment (intuitively, "the rich get richer" or "success breeds success"), which results in the Yule–Simon distribution, has been shown to fit word frequency versus rank in language^{[17]} and population versus city rank^{[18]} better than Zipf's law. It was originally derived to explain population versus rank in species by Yule, and applied to cities by Simon.
Related laws
Zipf's law in fact refers more generally to frequency distributions of "rank data," in which the relative frequency of the nth-ranked item is given by the zeta distribution, 1/(n^{s}ζ(s)), where the parameter s > 1 indexes the members of this family of probability distributions. Indeed, Zipf's law is sometimes synonymous with "zeta distribution," since probability distributions are sometimes called "laws". This distribution is sometimes called the Zipfian distribution.
A generalization of Zipf's law is the Zipf–Mandelbrot law, proposed by Benoît Mandelbrot, whose frequencies are:

f(k; N, q, s) = [constant] / (k + q)^{s}
The "constant" is the reciprocal of the Hurwitz zeta function evaluated at s. In practice, as is easily observed in distribution plots for large corpora, the observed distribution can be modelled better as a sum of separate distributions, for different subsets or subtypes of words, that follow different parameterizations of the Zipf–Mandelbrot distribution. In particular, the closed class of functional words exhibits s lower than 1, while open-ended vocabulary growth with document size and corpus size requires s greater than 1 for convergence of the generalized harmonic series.^{[1]}
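A minimal sketch of these frequencies for the infinite-support case, assuming SciPy (whose scipy.special.zeta(s, a) computes the Hurwitz zeta function Σ_{n≥0} (n + a)^{−s}), so that the normalizing constant is a reciprocal Hurwitz zeta value:

    from scipy.special import zeta

    def zipf_mandelbrot(k, q, s):
        # f(k; q, s) = (k + q)^-s / zeta(s, 1 + q), taking N -> infinity.
        # scipy.special.zeta(s, a) is the Hurwitz zeta sum_{n>=0} (n + a)^-s,
        # so zeta(s, 1 + q) equals sum_{k>=1} (k + q)^-s, the required constant.
        return (k + q) ** -s / zeta(s, 1 + q)

    # With q = 0 this reduces to the zeta (classic Zipf) distribution 1/(k^s zeta(s)).
    print(zipf_mandelbrot(1, q=0.0, s=2.0))  # 1/zeta(2) ~ 0.6079
    # Probabilities sum to ~1 (a small slowly-converging tail is truncated here).
    print(sum(zipf_mandelbrot(k, q=2.7, s=1.5) for k in range(1, 200_000)))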
Zipfian distributions can be obtained from Pareto distributions by an exchange of variables.^{[8]}
The Zipf distribution is sometimes called the discrete Pareto distribution^{[19]} because it is analogous to the continuous Pareto distribution in the same way that the discrete uniform distribution is analogous to the continuous uniform distribution.
The tail frequencies of the Yule–Simon distribution are approximately

f(k; ρ) ≈ [constant] / k^{ρ+1}

for any choice of ρ > 0.
In the parabolic fractal distribution, the logarithm of the frequency is a quadratic polynomial of the logarithm of the rank. This can markedly improve the fit over a simple power-law relationship.^{[20]} Like fractal dimension, it is possible to calculate Zipf dimension, which is a useful parameter in the analysis of texts.^{[21]}
It has been argued that Benford's law is a special bounded case of Zipf's law,^{[20]} with the connection between the two laws being explained by their both originating from scale-invariant functional relations in statistical physics and critical phenomena.^{[22]} The ratios of probabilities in Benford's law are not constant, but the leading digits of data satisfying Zipf's law with s = 1 satisfy Benford's law.
Benford's law as an approximate Zipf law (the third column is the local Zipf-like exponent, which tends toward 1):

n   P(n) = log10(1 + 1/n)   −log(P(n)/P(n−1)) / log(n/(n−1))
1   0.30103000               —
2   0.17609126               0.7735840
3   0.12493874               0.8463832
4   0.09691001               0.8830605
5   0.07918125               0.9054412
6   0.06694679               0.9205788
7   0.05799195               0.9315169
8   0.05115252               0.9397966
9   0.04575749               0.9462848
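The table above can be reproduced directly; a minimal sketch using only the Python standard library:

    import math

    def P(n):
        # Benford probability of leading digit n: log10(1 + 1/n).
        return math.log10(1 + 1 / n)

    for n in range(1, 10):
        if n == 1:
            print(n, f"{P(n):.8f}")
        else:
            # Local Zipf-like exponent, drifting toward 1 as n grows.
            expo = -math.log(P(n) / P(n - 1)) / math.log(n / (n - 1))
            print(n, f"{P(n):.8f}", f"{expo:.7f}")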
Applications
In information theory, a symbol (event, signal) of probability p contains log_2(1/p) bits of information. Hence, Zipf's law for natural numbers, Pr(x) ≈ 1/x, is equivalent to the number x containing log_2(x) bits of information. To add information from a symbol of probability p into information already stored in a natural number x, we should go to x' such that log_2(x') ≈ log_2(x) + log_2(1/p), or equivalently x' ≈ x/p. For instance, in the standard binary system we would have x' = 2x + s, which is optimal for the uniform distribution Pr(s = 0) = Pr(s = 1) = 1/2. Using the x' ≈ x/p rule for a general probability distribution is the basis of the asymmetric numeral systems family of entropy coding methods used in data compression, whose state distribution is also governed by Zipf's law.
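A toy sketch of the binary special case described above: appending a bit s to the state x via x' = 2x + s adds exactly one bit, i.e. log_2(1/p) with p = 1/2. This illustrates the x' ≈ x/p state-growth rule only; it is not an asymmetric-numeral-systems implementation.

    import math

    def append_bit(x, s):
        # Store one equiprobable bit s in state x: x' = 2x + s,
        # so log2(x) grows by about 1 bit, matching log2(1/p) for p = 1/2.
        return 2 * x + s

    def pop_bit(x):
        # Inverse step: recover the most recently stored bit and the prior state.
        return x // 2, x % 2

    state = 1
    for bit in [1, 0, 1, 1, 0]:
        state = append_bit(state, bit)
    print(state, math.log2(state))  # 54, ~5.75 bits held in the state

    decoded = []
    while state > 1:
        state, bit = pop_bit(state)
        decoded.append(bit)
    print(decoded[::-1])  # [1, 0, 1, 1, 0] recovered in order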
See also[edit]
 Bradford's law
 Benford's law
 Demographic gravitation
 Frequency list
 Gibrat's law
 Heaps' law
 Hapax legomenon
 Lorenz curve
 Lotka's law
 Pareto distribution
 Pareto principle, a.k.a. the "80–20 rule"
 Principle of least effort
 Price's law
 Rank-size distribution
 King effect
 Stigler's law of eponymy
 1% rule (Internet culture)
References
 ^ ^{a} ^{b} ^{c} Powers, David M. W. (1998). "Applications and explanations of Zipf's law". Association for Computational Linguistics: 151–160.
 ^ Christopher D. Manning, Hinrich Schütze, Foundations of Statistical Natural Language Processing, MIT Press (1999), ISBN 978-0-262-13360-9, p. 24
 ^ ^{a} ^{b} Auerbach F. (1913) Das Gesetz der Bevölkerungskonzentration. Petermann’s Geographische Mitteilungen 59, 74–76
 ^ Fagan, Stephen; Gençay, Ramazan (2010), "An introduction to textual econometrics", in Ullah, Aman; Giles, David E. A., Handbook of Empirical Economics and Finance, CRC Press, pp. 133–153, ISBN 9781420070361. P. 139: "For example, in the Brown Corpus, consisting of over one million words, half of the word volume consists of repeated uses of only 135 words."
 ^ M. Eriksson, S. M. Hasibur Rahman, F. Fraille, M. Sjöström, "Efficient Interactive Multicast over DVB-T2 – Utilizing Dynamic SFNs and PARPS", 2013 IEEE International Conference on Computer and Information Technology (BMSB'13), London, UK, June 2013. Suggests a heterogeneous Zipf-law TV channel-selection model
 ^ Clauset, A., Shalizi, C. R., & Newman, M. E. J. (2009). "Power-Law Distributions in Empirical Data". SIAM Review, 51(4), 661–703. doi:10.1137/070710111
 ^ Eeckhout J. (2004), Gibrat's law for (All) Cities. American Economic Review 94(5), 1429–1451.
 ^ ^{a} ^{b} Adamic, Lada A. (2000) "Zipf, Power-laws, and Pareto – a ranking tutorial", originally published at http://www.parc.xerox.com/istl/groups/iea/papers/ranking/ranking.html
 ^ MorenoSánchez, I; FontClos, F; Corral, A (2016). "LargeScale Analysis of Zipf’s Law in English Texts". PLoS ONE. doi:10.1371/journal.pone.0147073.
 ^ Bill Manaris; Luca Pellicoro; George Pothering; Harland Hodges (13 February 2006). "Investigating Esperanto's statistical proportions relative to other languages using neural networks and Zipf's law" (PDF). Artificial Intelligence and Applications. Innsbruck, Austria. pp. 102–108.
 ^ Léon Brillouin, La science et la théorie de l'information, 1959, réédité en 1988, traduction anglaise rééditée en 2004
 ^ Wentian Li (1992). "Random Texts Exhibit Zipf's-Law-Like Word Frequency Distribution". IEEE Transactions on Information Theory. 38 (6): 1842–1845. doi:10.1109/18.165464.
 ^ Neumann, Peter G. "Statistical metalinguistics and Zipf/Pareto/Mandelbrot", SRI International Computer Science Laboratory, accessed and archived 29 May 2011.
 ^ Belevitch V (18 December 1959). "On the statistical laws of linguistic distributions". Annales de la Société Scientifique de Bruxelles. I. 73: 310–326.
 ^ Zipf GK (1949). Human Behavior and the Principle of Least Effort. Cambridge, Massachusetts: Addison-Wesley. p. 1.
 ^ Ramon Ferrer i Cancho & Ricard V. Sole (2003). "Least effort and the origins of scaling in human language". Proceedings of the National Academy of Sciences of the United States of America. 100 (3): 788–791. PMC 298679 . PMID 12540826. doi:10.1073/pnas.0335980100.
 ^ http://arxiv.org/pdf/1412.4846.pdf
 ^ http://arxiv.org/pdf/1506.08535.pdf
 ^ N. L. Johnson; S. Kotz & A. W. Kemp (1992). Univariate Discrete Distributions (second ed.). New York: John Wiley & Sons, Inc. ISBN 0471548979., p. 466.
 ^ ^{a} ^{b} Johan Gerard van der Galien (2003-11-08). "Factorial randomness: the Laws of Benford and Zipf with respect to the first digit distribution of the factor sequence from the natural numbers". Retrieved 8 July 2016.
 ^ Ali Eftekhari (2006) Fractal geometry of texts. Journal of Quantitative Linguistics 13(2–3): 177–193.
 ^ L. Pietronero, E. Tosatti, V. Tosatti, A. Vespignani (2001) Explaining the uneven distribution of numbers in nature: The laws of Benford and Zipf. Physica A 293: 297–304.
Further reading
Primary:
 George K. Zipf (1949) Human Behavior and the Principle of Least Effort. Addison-Wesley.
 George K. Zipf (1935) The Psychobiology of Language. Houghton Mifflin. (see citations at http://citeseer.ist.psu.edu/context/64879/0 )
Secondary:
 Alexander Gelbukh and Grigori Sidorov (2001) "Zipf and Heaps Laws' Coefficients Depend on Language". Proc. CICLing-2001, Conference on Intelligent Text Processing and Computational Linguistics, February 18–24, 2001, Mexico City. Lecture Notes in Computer Science N 2004, ISSN 0302-9743, ISBN 3-540-41687-0, Springer-Verlag: 332–335.
 Damián H. Zanette (2006) "Zipf's law and the creation of musical context," Musicae Scientiae 10: 3–18.
 Frans J. Van Droogenbroeck (2016) "Handling the Zipf distribution in computerized authorship attribution"
 Kali R. (2003) "The city as a giant component: a random graph approach to Zipf's law," Applied Economics Letters 10: 717–720.
 Gabaix, Xavier (August 1999). "Zipf's Law for Cities: An Explanation" (PDF). Quarterly Journal of Economics. 114 (3): 739–67. ISSN 00335533. doi:10.1162/003355399556133.
 Axtell, Robert L. (2001); "Zipf distribution of US firm sizes", Science, 293 (5536): 1818, American Association for the Advancement of Science.
 Ramu Chenna, Toby Gibson; "Evaluation of the Suitability of a Zipfian Gap Model for Pairwise Sequence Alignment", International Conference on Bioinformatics Computational Biology: 2011.
 Shyklo A. (2017); Simple Explanation of Zipf's Mystery via New RankShare Distribution, Derived from Combinatorics of the Ranking Process, Available at SSRN: https://ssrn.com/abstract=2918642.
External links
 Strogatz, Steven (2009-05-29). "Guest Column: Math and the City". The New York Times. Retrieved 2009-05-29. An article on Zipf's law applied to city populations.
 Seeing Around Corners (Artificial societies turn up Zipf's law)
 PlanetMath article on Zipf's law
 Distributions de type "fractal parabolique" dans la Nature (French, with English summary)
 An analysis of income distribution
 Zipf List of French words
 Zipf list for English, French, Spanish, Italian, Swedish, Icelandic, Latin, Portuguese and Finnish from Gutenberg Project and online calculator to rank words in texts
 Citations and the Zipf–Mandelbrot law
 Zipf's Law examples and modelling (1985)
 Complex systems: Unzipping Zipf's law (2011)
 Benford’s law, Zipf’s law, and the Pareto distribution by Terence Tao.