Scale Invariance, Power Laws, and Regular Variation (Part III)

This is part III in a series on scale invariance, power laws, and regular variation, so you should definitely click on over to parts I and II if you haven’t read those yet.

In part II, we showed that the class of regularly varying distributions formalizes the notion of “approximately scale-invariant,” just as power-law tails formalize the notion of scale-invariant. The fact that regularly-varying distributions are exactly those distributions that are asymptotically scale-free suggests that, in a sense, they should admit analysis (at least asymptotically) as if they were simply power-law distributions. In fact, this can be formalized explicitly: regularly varying distributions can be analyzed nearly as if they were power-law (Pareto) distributions as far as the tail is concerned. This makes them remarkably easy to work with, and it highlights that the added generality of the class of regularly-varying distributions, as compared to Pareto distributions specifically, comes without much added complexity.

Regularly-varying distributions are approximately power laws

To begin, it is important to formalize exactly what we mean when we say that regularly-varying distributions have tails that are approximately power-law. In order to do this, we need to first introduce the concept of a “slowly-varying function.”

Definition 6 A function {L : {\mathbb R}_+ \rightarrow {\mathbb R}_+} is said to be slowly varying if {\lim_{x \rightarrow \infty} \frac{L(xy)}{L(x)} = 1} for all {y > 0.}

Slowly-varying functions are simply regularly varying functions of index zero. So, intuitively, they can be thought of as functions that grow or decay asymptotically more slowly than any power of {x}; for example, {\log x}, {\log\log x}, etc. This can be formalized as follows.
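To make the definition concrete, here is a minimal numerical sketch (in Python, with only the standard library; the helper name `sv_ratio` is just for illustration). It checks the ratio {L(xy)/L(x)} for {L(x) = \log x}, which creeps toward {1} as {x} grows, and contrasts it with a genuine power, for which the ratio never moves.

```python
import math

def sv_ratio(L, x, y):
    """L(x*y)/L(x); tends to 1 as x -> infinity when L is slowly varying."""
    return L(x * y) / L(x)

# log is slowly varying: the ratio drifts toward 1 as x grows
for x in (1e2, 1e4, 1e8):
    print(sv_ratio(math.log, x, 5.0))

# a power x**0.5 is NOT slowly varying: the ratio equals 5**0.5 for every x
print(sv_ratio(math.sqrt, 1e8, 5.0))
```

Note how slow the convergence is for {\log x}: the ratio is still noticeably above {1} even at {x = 10^8}, which is why slowly-varying corrections can matter in practice even though they vanish in the limit.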

Lemma 7 If the function {L : {\mathbb R}_+ \rightarrow {\mathbb R}_+} is slowly varying, then

\displaystyle \lim_{x \rightarrow \infty} x^{\rho} L(x) = \left\{ \begin{array}{cl} 0 & \text{ for } \rho < 0 \\ \infty & \text{ for } \rho > 0 \end{array} \right..
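As a quick sanity check on Lemma 7 (again a Python sketch, standard library only, with {L(x) = \log x} chosen purely for illustration): the product {x^{\rho} L(x)} drifts to {0} for {\rho < 0} and to {\infty} for {\rho > 0}, although the convergence can be quite slow.

```python
import math

def power_times_log(rho, x):
    """x**rho * log(x): vanishes for rho < 0, blows up for rho > 0."""
    return x**rho * math.log(x)

# rho = -0.5: decreasing toward 0; rho = +0.5: increasing toward infinity
for x in (1e2, 1e6, 1e12):
    print(power_times_log(-0.5, x), power_times_log(0.5, x))
```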

This allows us to state (and prove) the following representation theorem for regularly-varying distributions.

Theorem 8 A function {f: {\mathbb R}_+ \rightarrow {\mathbb R}_+} is a regularly varying function with index {\rho} if and only if {f(x)=x^{\rho} L(x),} where {L(x)} is a slowly varying function.

Proof: Suppose that {f \in RV(\rho).} Define {L(x) = \frac{f(x)}{x^{\rho}}.} It is enough to show that {L} is slowly varying, which follows easily:

\displaystyle \lim_{x \rightarrow \infty} \frac{L(xy)}{L(x)} = \lim_{x \rightarrow \infty} \frac{f(xy)}{f(x)} \frac{x^{\rho}}{(xy)^{\rho}} = y^{\rho} \cdot y^{-\rho} = 1.

To prove the other direction, we need to show that given {f(x)=x^{\rho} L(x),} where {L(x)} is a slowly-varying function, {f \in RV(\rho).} For {y > 0,}

\displaystyle \lim_{x \rightarrow \infty}\frac{f(xy)}{f(x)} = \lim_{x \rightarrow \infty} \frac{(xy)^{\rho}}{x^{\rho}} \frac{L(xy)}{L(x)} = y^{\rho},

which implies, by definition, that {f \in RV(\rho).} \Box

The key implication of Theorem 8 is that regularly-varying distributions can be thought of as distributions with approximately power-law tails in a very strong sense. That is, they differ from a power-law tail only by a slowly-varying function {L(x)}, which can intuitively be treated as a constant. This intuition is the key to working with regularly-varying distributions, and leads to many beautiful properties.
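Theorem 8 is easy to see in action numerically. The following Python sketch (assuming {L(x) = \log x} as the slowly-varying factor, a choice made for illustration only) takes {f(x) = x^{\rho} \log x} with {\rho = -2} and checks that {f(xy)/f(x)} approaches {y^{\rho}}, exactly as it would for a pure power law:

```python
import math

rho = -2.0
f = lambda x: x**rho * math.log(x)   # regularly varying with index rho

# f(x*y)/f(x) should approach y**rho as x -> infinity (here y = 3)
y = 3.0
for x in (1e2, 1e4, 1e8):
    print(f(x * y) / f(x), "target:", y**rho)
```

The slowly-varying factor contributes the ratio {\frac{\log(xy)}{\log x}}, which is the piece that "can intuitively be treated as a constant": it converges to {1} and so washes out of the limit.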

Perhaps one of the most appealing aspects of working with power-law and Pareto distributions is that manipulating them, whether to calculate moments, conditional probabilities, convolutions, or other such quantities, requires only the integration or differentiation of polynomials, which is often quite straightforward. This is in stark contrast to distributions such as the Normal and LogNormal, which can be very difficult to work with in this way.

Properties of regularly-varying distributions

One of the nicest properties of regularly varying distributions is that, in a sense, you can treat them as if they are simply polynomials when integrating or differentiating them — as long as you only care about the tail — and so they are not much more difficult to work with than Pareto distributions.

The properties of regularly-varying functions with respect to integration and differentiation are typically referred to as Karamata’s theorem. I’ll start with the theorem regarding integration of regularly-varying functions, since its statement is a bit cleaner.

Theorem 9 (Karamata’s theorem)

  1. For {\rho > -1,} {f \in RV(\rho)} if and only if

    \displaystyle \int_{0}^{x} f(t) dt \ \sim \ \frac{x f(x)}{\rho+1}.

  2. For {\rho < -1,} {f \in RV(\rho)} if and only if

    \displaystyle \int_{x}^{\infty} f(t) dt \ \sim \ \frac{x f(x)}{-(\rho+1)}.


To interpret Karamata’s theorem, one can simply think about what would happen if {f(t)=t^\rho}. In that case, for {\rho > -1},

\displaystyle \int_0^x f(t) dt = \frac{x^{\rho+1}}{\rho+1} = \frac{x f(x)}{\rho+1},

while for {\rho < -1},

\displaystyle \int_x^\infty f(t)dt = \frac{x^{\rho+1}}{-(\rho+1)} = \frac{x f(x)}{-(\rho+1)}.

Thus, the theorem highlights that, asymptotically, the integrals behave as if the function were a polynomial as far as the tail is concerned (the {=} is replaced by a {\sim}). However, there are added constraints on {\rho} that determine when this property holds.
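The same running example can be pushed through Karamata’s theorem numerically. In this Python sketch, the closed form of the tail integral for {f(t) = t^{-2} \log t}, namely {\int_x^\infty f(t)\,dt = (\log x + 1)/x} (obtained by integration by parts), is part of the illustration rather than the theorem; Karamata predicts {x f(x)/(-(\rho+1)) = (\log x)/x}, and the ratio of the two tends to {1}.

```python
import math

rho = -2.0
f = lambda t: t**rho * math.log(t)   # f in RV(rho) with rho < -1

def tail_integral(x):
    # exact value of int_x^inf t**(-2) * log(t) dt, by integration by parts
    return (math.log(x) + 1.0) / x

def karamata_estimate(x):
    # Karamata: int_x^inf f(t) dt ~ x*f(x) / (-(rho + 1))
    return x * f(x) / (-(rho + 1.0))

# the ratio is 1 + 1/log(x), which tends to 1 as x -> infinity
for x in (10.0, 1e3, 1e6):
    print(tail_integral(x) / karamata_estimate(x))
```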

Not surprisingly, regularly-varying distributions also asymptotically behave as if they were polynomials with respect to differentiation. In particular, if {f(x)=x^{\rho}}, then {f'(x)=\rho x^{\rho-1}}, and so {\rho f(x) = x f'(x)}. The following result shows that exactly this relationship holds for regularly-varying distributions with {=} replaced by {\sim}.

Theorem 10 Suppose that the function {f} is absolutely continuous with derivative {f'.} If {f \in RV(\rho)} and {f'} is eventually monotone, then {xf'(x) \sim \rho f(x).} Moreover, if {\rho \neq 0,} then {|f'(x)| \in RV(\rho-1).}
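Continuing with the same illustrative example {f(x) = x^{-2} \log x} (a Python sketch, with the derivative computed by hand): Theorem 10 predicts {x f'(x)/f(x) \rightarrow \rho = -2}, and indeed the discrepancy is exactly the slowly-varying correction {1/\log x}.

```python
import math

rho = -2.0
f  = lambda x: x**rho * math.log(x)
fp = lambda x: x**(rho - 1.0) * (rho * math.log(x) + 1.0)   # exact derivative of f

# Theorem 10: x * f'(x) / f(x) -> rho as x -> infinity
for x in (10.0, 1e3, 1e6):
    print(x * fp(x) / f(x))
```

Here {f'} is eventually monotone and {\rho \neq 0}, so the hypotheses of the theorem are satisfied, and {|f'(x)| = x^{-3}(2\log x - 1)} is indeed regularly varying with index {\rho - 1 = -3}.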

Regularly-varying distributions have many other appealing properties too, which we’ll discuss in our upcoming book. But, hopefully these already highlight that they can be quite easy to work with, despite their generality.

