Similarity of Symbol Frequency Distributions with Heavy Tails

Quantifying the similarity between symbolic sequences is a traditional problem in information theory that requires comparing the frequencies of symbols in different sequences. In numerous modern applications, ranging from DNA through music to texts, the distribution of symbol frequencies is characterized by heavy-tailed distributions (e.g., Zipf’s law). The large number of low-frequency symbols in these distributions poses major difficulties for the estimation of the similarity between sequences; e.g., they hinder an accurate finite-size estimation of entropies. Here, we show analytically how the systematic (bias) and statistical (fluctuation) errors in these estimations depend on the sample size N and on the exponent γ of the heavy-tailed distribution. Our results are valid for the Shannon entropy (α=1), its corresponding similarity measures (e.g., the Jensen-Shannon divergence), and also for measures based on the generalized entropy of order α. For small values of α, including α=1, the errors decay more slowly than the 1/N decay observed in short-tailed distributions. For α larger than a critical value α*=1+1/γ≤2, the 1/N decay is recovered. We show the practical significance of our results by quantifying the evolution of the English language over the last two centuries using a complete α spectrum of measures. We find that frequent words change more slowly than less frequent words and that α=2 provides the most robust measure for quantifying language change.
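
The abstract compares sequences through entropies of order α and the corresponding Jensen-Shannon-type divergences. As a rough illustration only, and not the authors' code, the Python sketch below computes a Tsallis/Havrda-Charvát-type entropy of order α and the associated Jensen-Shannon-type divergence between the word-frequency distributions of two toy sequences; the paper's exact definitions and normalizations may differ, and the helper names (frequencies, generalized_entropy, generalized_jsd) are purely illustrative.

```python
# Hypothetical sketch of an order-alpha entropy and a Jensen-Shannon-type
# divergence between two symbol frequency distributions.  Uses the common
# Tsallis/Havrda-Charvat form, which recovers the Shannon entropy (natural
# log) in the limit alpha -> 1; the paper's normalization may differ.
from collections import Counter

import numpy as np


def frequencies(sequence):
    """Maximum-likelihood symbol frequencies of a symbolic sequence."""
    counts = Counter(sequence)
    total = sum(counts.values())
    return {s: c / total for s, c in counts.items()}


def generalized_entropy(p, alpha):
    """Entropy of order alpha of a frequency vector p (entries sum to 1)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if np.isclose(alpha, 1.0):          # Shannon limit (in nats)
        return -np.sum(p * np.log(p))
    return (np.sum(p ** alpha) - 1.0) / (1.0 - alpha)


def generalized_jsd(p_dict, q_dict, alpha):
    """Jensen-Shannon-type divergence of order alpha between two
    frequency dictionaries (symbol -> relative frequency)."""
    symbols = sorted(set(p_dict) | set(q_dict))
    p = np.array([p_dict.get(s, 0.0) for s in symbols])
    q = np.array([q_dict.get(s, 0.0) for s in symbols])
    m = 0.5 * (p + q)
    return (generalized_entropy(m, alpha)
            - 0.5 * generalized_entropy(p, alpha)
            - 0.5 * generalized_entropy(q, alpha))


if __name__ == "__main__":
    # Toy comparison of two short "texts"; real applications use much
    # larger samples, where the finite-size bias discussed in the
    # abstract becomes the dominant concern for small alpha.
    text_a = "the cat sat on the mat".split()
    text_b = "the dog sat on the log".split()
    for alpha in (1.0, 2.0):
        d = generalized_jsd(frequencies(text_a), frequencies(text_b), alpha)
        print(f"alpha = {alpha}: D_alpha = {d:.4f}")
```

For heavy-tailed frequency distributions, plug-in estimates like those above are strongly biased at finite sample size N for small α, including the Shannon case α=1; this is exactly the finite-size effect quantified in the paper, and according to the abstract α=2 is the most robust choice for tracking language change.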

Bibliographic Details
Main Authors: Martin Gerlach, Francesc Font-Clos, Eduardo G. Altmann
Format: Article
Language: English
Published: American Physical Society, 2016-04-01
Series: Physical Review X
ISSN: 2160-3308
Online Access: http://doi.org/10.1103/PhysRevX.6.021009
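
To make the finite-size behavior described in the abstract above concrete, the following hypothetical numerical check (not taken from the paper) draws samples of increasing size N from a Zipf-like distribution with exponent γ=1 and compares naive plug-in estimates of the order-α entropy for α=1 and α=2. Under the abstract's result, α*=1+1/γ=2 for this γ, so the Shannon (α=1) estimate should converge more slowly than 1/N while the α=2 estimate should not; the function names (zipf_probs, plugin_entropy) and the finite support of 100,000 symbols are arbitrary modeling choices made for this sketch.

```python
# Hypothetical finite-size check: plug-in entropy estimates for samples
# drawn from a Zipf-like distribution p_r ~ r**(-gamma), at increasing N.
import numpy as np

rng = np.random.default_rng(0)


def zipf_probs(n_symbols, gamma):
    """Normalized power-law (Zipf) probabilities over a finite support."""
    ranks = np.arange(1, n_symbols + 1)
    weights = ranks ** (-float(gamma))
    return weights / weights.sum()


def plugin_entropy(counts, alpha):
    """Naive plug-in estimate of the order-alpha entropy from symbol counts."""
    p = counts[counts > 0] / counts.sum()
    if np.isclose(alpha, 1.0):
        return -np.sum(p * np.log(p))
    return (np.sum(p ** alpha) - 1.0) / (1.0 - alpha)


gamma = 1.0                         # Zipf exponent typical of word frequencies
probs = zipf_probs(100_000, gamma)
true_h2 = (np.sum(probs ** 2) - 1.0) / (1.0 - 2.0)

# Expect the alpha=1 estimate to still be drifting at these sample sizes,
# while the alpha=2 error is already small and shrinking roughly like 1/N.
for N in (10**3, 10**4, 10**5):
    counts = rng.multinomial(N, probs)
    h1 = plugin_entropy(counts, 1.0)
    h2_err = abs(plugin_entropy(counts, 2.0) - true_h2)
    print(f"N={N:>7}: H_1 estimate = {h1:.3f}, |H_2 error| = {h2_err:.5f}")
```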