# Muckenhoupt's proof of the Hardy inequality in dimension 1

Let $\mu$, $\nu$ be nonnegative measurable functions on $(0,+\infty)$,
which we call *weights*. We say Hardy’s inequality holds for the
weights $\mu$ and $\nu$ if the following statement is true: there
exists a finite constant $\lambda > 0$ such that for any locally
integrable function $f \colon [0,+\infty) \to [0,+\infty)$ it holds
that
\begin{equation}
\label{eq:Hardy}
\lambda \int_0^\infty \left( \int_0^x f(y) \d y \right)^2 \mu(x) \d x
\leq
\int_0^\infty f(x)^2 \nu(x) \d x.
\end{equation}
Notice that both sides of the inequality are well defined for all such
$f$, even if they may be $+\infty$ in some cases. The supremum of all
the constants $\lambda > 0$ satisfying the above is called the
*Hardy constant* for the weights $\mu$, $\nu$ (and it must be
finite except in the uninteresting case in which $\mu$ is $0$ almost
everywhere). It is easy to see that the inequality in fact must hold
also for the Hardy constant, so the Hardy constant is really the best
constant one can have in the above inequality. We keep denoting it by
$\lambda$ below. Inequality \eqref{eq:Hardy} can also be written
equivalently as
\begin{equation}
\label{eq:Hardy2}
\lambda \int_0^\infty u(x)^2 \mu(x) \d x
\leq
\int_0^\infty (u'(x))^2 \nu(x) \d x,
\end{equation}
for all absolutely continuous functions $u \colon [0,+\infty) \to \R$
with $u(0)=0$. When we write it in this way it is clearer that Hardy’s
inequality is also a close relative of the Poincaré
inequality. Another clearly equivalent form is
\begin{equation}
\label{eq:Hardy3}
\lambda \int_0^\infty (u(x)-u(0))^2 \mu(x) \d x
\leq
\int_0^\infty (u'(x))^2 \nu(x) \d x,
\end{equation}
for all absolutely continuous functions $u \colon [0,+\infty) \to \R$
(without restriction on the value at $0$).

A famous result by Tomaselli, Talenti and Artola says the following:

**Theorem 1**.
Hardy’s inequality holds for $\mu$ and $\nu$ if and only if
\begin{equation}
\label{eq:B}
B := \sup_{r > 0} \left( \int_r^\infty \mu(x) \d x \right)
\left( \int_0^r \frac{1}{\nu(x)} \d x \right) < +\infty.
\end{equation}
(Here, it is understood that $1/\nu(x) = +\infty$ whenever
$\nu(x) = 0$. Also, in the product inside the supremum we take
$0 \cdot \infty$ to mean $0$.) The best constant $\lambda$ in
Hardy's inequality satisfies
\begin{equation*}
\frac{1}{4 B} \leq \lambda \leq \frac{1}{B}.
\end{equation*}

This result gives an explicit way to check whether the inequality holds, and gives the optimal constant up to a factor of $4$! There's a very clean proof of it by Muckenhoupt which I discuss here (in that paper you can also find references to the papers of Tomaselli, Talenti and Artola, which are not so well known; in fact Theorem 1 is sometimes known as Muckenhoupt's theorem).

This kind of inequality crops up often in kinetic theory; for example, we used it in relation to the Smoluchowski equation, and a discrete version in a study of the Becker-Döring equation. Of course, there are many generalisations of the inequality \eqref{eq:Hardy}: with different exponents, in domains other than $(0,+\infty)$, in higher dimensions, for $\mu$, $\nu$ measures instead of functions, and discrete versions. Many of these can be found in this nice book by Opic and Kufner. More recently, a nonlocal version has been proved by Frank and Seiringer, which happens to be very interesting for some equations in mathematical biology and other applied domains.

The name “Hardy's inequality” or “Hardy's inequality with weights” is used for different versions of it, and is not completely standard across the literature. There seems to be some agreement that the inequality
\begin{equation}
\label{eq:HardyHardy}
\int_0^\infty \left( \frac{1}{x} \int_0^x f(y) \d y \right)^p \d x
\leq
\left(\frac{p}{p-1}\right)^p \int_0^\infty f(x)^p \d x,
\end{equation}
which holds for all $1 < p < +\infty$ and all nonnegative measurable functions $f \colon (0,+\infty) \to \R$, is the “basic” Hardy inequality, and it was indeed proved by Hardy, with contributions from other mathematicians. Notice that in the case $p=2$, \eqref{eq:HardyHardy} is a particular case of \eqref{eq:Hardy} with $\mu(x) = 1/x^2$, $\nu(x) = 1$. The value given for the constant is the optimal one in this case.
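Just to see \eqref{eq:HardyHardy} in action for $p = 2$, here is a quick numerical sanity check (a sketch of mine, not taken from any of the papers above) with the test function $f(x) = e^{-x}$. An integration by parts plus Frullani's integral gives the left hand side exactly as $2 \log 2 \approx 1.386$, comfortably below the right hand side, which equals $2$:

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule for the integral of f over [a, b]."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
    return h * s

# Test function f(x) = exp(-x), so that the inner integral is 1 - exp(-x).
# Left side of the p = 2 Hardy inequality; the exact value is 2 log 2.
lhs = trapezoid(lambda x: (-math.expm1(-x) / x) ** 2, 1e-9, 500.0, 1_000_000)
# Right side: (p/(p-1))^p = 4 times the integral of f(x)^2 = exp(-2x).
rhs = 4 * trapezoid(lambda x: math.exp(-2 * x), 0.0, 50.0, 100_000)

print(lhs, rhs)  # roughly 1.384 and 2.0, and indeed lhs <= rhs
```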

The proof given by Muckenhoupt goes like this:

**Proof.**
We will use the form \eqref{eq:Hardy2} of the inequality, since it is
a bit more convenient for this argument.
First, in order to see that \eqref{eq:B} is a necessary condition,
assume that Hardy's inequality \eqref{eq:Hardy2} holds for some
finite $\lambda > 0$. If $1/\nu$ is integrable on $[0,R]$ for a
given $R > 0$ then we define
\begin{equation*}
u(x) := \int_0^{\min\{x, R\}} \frac{1}{\nu(y)} \d y,
\end{equation*}
which is constant from $x=R$ to $+\infty$; note that
$u'(x) = 1/\nu(x)$ for $0 < x < R$ and $u'(x) = 0$ for $x > R$.
Bounding the left hand side of \eqref{eq:Hardy2} from below by the
same integral over the region $(R, \infty)$ we obtain
\begin{equation*}
\lambda \left( \int_R^\infty \mu(x) \d x \right)
\left( \int_0^R \frac{1}{\nu(x)} \d x \right)^2
\leq
\int_0^R \frac{1}{\nu(x)} \d x,
\end{equation*}
that is,
\begin{equation}
\label{eq:1}
\left( \int_R^\infty \mu(x) \d x \right)
\left( \int_0^R \frac{1}{\nu(x)} \d x \right)
\leq
\frac{1}{\lambda}.
\end{equation}
On the other hand, assume that for a given $R > 0$ we have
$\int_0^R \frac{1}{\nu(x)} \d x = +\infty$. Then for any function
$\overline{\nu}$ such that $\nu(x) \leq \overline{\nu}(x)$ for all
$x$, and such that $1/\overline{\nu}$ is integrable over compact
sets of $[0,+\infty)$, we define
\begin{equation*}
u(x) :=
\begin{cases}
\int_0^x \frac{1}{\overline{\nu}(y)} \d y &\qquad \text{for
$0 \leq x < R$},
\\
\int_0^R \frac{1}{\overline{\nu}(y)} \d y &\qquad \text{for
$R \leq x$},
\end{cases}
\end{equation*}
much in the same way as before. With the same estimate we see then
that
\begin{equation*}
\lambda \left( \int_R^\infty \mu(x) \d x \right)
\left( \int_0^R \frac{1}{\overline{\nu}(x)} \d x \right)^2
\leq
\int_0^R \frac{\nu(x)}{\overline{\nu}(x)^2} \d x
\leq
\int_0^R \frac{1}{\overline{\nu}(x)} \d x.
\end{equation*}
Dividing by $\int_0^R \frac{1}{\overline{\nu}}$, and using that we
can choose $\overline{\nu}$ so that $\int_0^R \frac{1}{\overline{\nu}}$
is as large as we like, we deduce that
\begin{equation}
\label{eq:2}
\int_R^\infty \mu(x) \d x = 0.
\end{equation}
We have proved that for all $R > 0$, either \eqref{eq:1} holds or
\eqref{eq:2} holds, so \eqref{eq:B} is a necessary condition for
Hardy’s inequality to hold, and $B \leq 1/\lambda$.
Now, assume that \eqref{eq:B} holds, and let us show Hardy's
inequality \eqref{eq:Hardy2} holds. Take a generic positive function
$\alpha$ on $(0,+\infty)$, to be fixed later. For all $x \geq 0$ we have, by the
Cauchy-Schwarz inequality,
\begin{equation*}
u(x)^2 = \left( \int_0^x u'(y) \d y \right)^2
\leq \beta(x) \int_0^x (u'(y))^2 \nu(y) \alpha(y) \d y,
\end{equation*}
where
\begin{equation*}
\beta(x) := \int_0^x \frac{1}{\nu(y) \alpha(y)} \d y.
\end{equation*}
So
\begin{multline*}
\int_0^\infty u(x)^2 \mu(x) \d x \leq \int_0^\infty \int_0^x
\mu(x) \beta(x) (u'(y))^2 \nu(y) \alpha(y) \d y \d x
\\
= \int_0^\infty (u'(y))^2 \nu(y) \alpha(y) \int_y^\infty \mu(x)
\beta(x) \d x \d y.
\end{multline*}
In order to prove our inequality it is then enough to find a
function $\alpha$ such that
\begin{equation*}
\alpha(y) \int_y^\infty \mu(x) \beta(x) \d x
\leq 4B
\qquad \text{for all $y > 0$.}
\end{equation*}
Now there’s a clever choice of $\alpha$, which I wouldn’t know how
to find if I didn’t know it already! It is
\begin{equation*}
\alpha(y)^2 := \int_0^y \frac{1}{\nu(x)} \d x,
\end{equation*}
which gives
\begin{equation*}
\beta(x) = \int_0^x \frac{1}{\nu(y) \alpha(y)} \d y
= \int_0^x \frac{(\alpha(y)^2)'}{\alpha(y)} \d y
= 2 \int_0^x \alpha'(y) \d y = 2 \alpha(x),
\end{equation*}
so we need to show that
\begin{equation}
\label{eq:3}
2 \alpha(y) \int_y^\infty \mu(x) \alpha(x) \d x
\leq 4B
\qquad \text{for all $y > 0$.}
\end{equation}
Call $M(x) := \int_x^\infty \mu(y) \d y$. Since \eqref{eq:B} is just
$M(x) \alpha(x)^2 \leq B$,
\begin{equation*}
\int_y^\infty \mu(x) \alpha(x) \d x
\leq
\sqrt{B} \int_y^\infty \frac{\mu(x)}{\sqrt{M(x)}} \d x
= - 2 \sqrt{B} \int_y^\infty \left({\sqrt{M(x)}}\right)' \d x
\leq 2 \sqrt{B} \sqrt{M(y)}.
\end{equation*}
Using this on the left of \eqref{eq:3} and again that
$\sqrt{M(x)} \alpha(x) \leq \sqrt{B}$ gives \eqref{eq:3}.
∎
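To get a feeling for the final estimate, one can check it numerically in a concrete case (this is just an illustration of mine, not part of Muckenhoupt's argument): for $\mu(x) = \nu(x) = e^{-x}$ we have $B = 1$ and $\alpha(y) = \sqrt{e^y - 1}$, and the quantity $2 \alpha(y) \int_y^\infty \mu(x) \alpha(x) \d x$ appearing in \eqref{eq:3} indeed stays below $4B = 4$, approaching it as $y \to \infty$:

```python
import math

def trapezoid(f, a, b, n=200_000):
    """Composite trapezoidal rule for the integral of f over [a, b]."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
    return h * s

# mu(x) = nu(x) = exp(-x), so alpha(y)^2 = integral of exp(x) over (0, y)
# equals exp(y) - 1, and one checks that B = 1.
def key_estimate(y, X=60.0):
    """2 alpha(y) * integral of mu(x) alpha(x) over (y, inf), truncated at X."""
    alpha_y = math.sqrt(math.expm1(y))
    tail = trapezoid(lambda x: math.exp(-x) * math.sqrt(math.expm1(x)), y, X)
    return 2 * alpha_y * tail

for y in [0.5, 1.0, 5.0, 10.0]:
    print(y, key_estimate(y))  # increases towards 4B = 4, staying below it
```

(The truncation at $X = 60$ is harmless here since the integrand decays like $e^{-x/2}$.)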

## Some examples

Muckenhoupt’s theorem gives us very good estimates of the Hardy constant in many cases. As an example, consider exponential weights:
\begin{equation*}
\mu(x) = \nu(x) = e^{-x}, \qquad x > 0.
\end{equation*}
One easily sees that $B=1$ in this case, so Theorem 1 says that
\begin{equation*}
\int_0^\infty (u(x)-u(0))^2 e^{-x} \d x
\leq 4 \int_0^\infty (u'(x))^2 e^{-x} \d x
\end{equation*}
for all absolutely continuous functions $u \colon [0, +\infty) \to \R$. In fact, the Hardy constant $\lambda = 1/4$ is actually optimal here! We can show this by considering essentially the same test function which was mentioned in the proof of the theorem: for $0 < \alpha < 1/2$ take
\begin{equation*}
u(x) := e^{\alpha x}, \qquad x \geq 0.
\end{equation*}
Plugging this into the inequality and letting $\alpha \to 1/2$ shows that one cannot do better than $\lambda = 1/4$. (We could have taken exactly the test function given in the proof, but this one is slightly easier to work with.) So in fact the lower bound on $\lambda$ in Muckenhoupt's theorem is optimal here, and the test function used in the proof even gives a proof of that! I don't know when this holds for other weights, that is, when the lower bound in Theorem 1 happens to be optimal---drop me a line if you do!
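If you want to watch the limit happen numerically, here is a short computation (a sketch of mine): for these weights and $u(x) = e^{\alpha x}$, a direct calculation shows the ratio of the right hand side to the left hand side of \eqref{eq:Hardy3} is exactly $(1-\alpha)/2$, which tends to $1/4$ as $\alpha \to 1/2$:

```python
import math

def trapezoid(f, a, b, n=200_000):
    """Composite trapezoidal rule for the integral of f over [a, b]."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
    return h * s

def ratio(alpha, X=200.0):
    """Ratio (right side)/(left side) of Hardy's inequality for
    mu = nu = exp(-x) and the test function u(x) = exp(alpha*x)."""
    lhs = trapezoid(lambda x: (math.exp(alpha * x) - 1) ** 2 * math.exp(-x), 0.0, X)
    rhs = trapezoid(lambda x: (alpha * math.exp(alpha * x)) ** 2 * math.exp(-x), 0.0, X)
    return rhs / lhs

for a in [0.3, 0.4, 0.45, 0.49]:
    print(a, ratio(a))  # agrees with (1 - a) / 2, decreasing towards 1/4
```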

For another example, consider
\begin{equation*}
\mu(x) = \nu(x) = e^{-x^2}, \qquad x > 0.
\end{equation*}
It is easy to see that the constant $B$ is finite in this case too, by using that as $r \to +\infty$,
\begin{equation*}
\int_r^\infty e^{-x^2} \d x \sim \frac{1}{2 r} e^{-r^2},
\qquad
\int_0^r e^{x^2} \d x \sim \frac{1}{2 r} e^{r^2},
\end{equation*}
but it is harder to estimate the best constant in this case, since the integrals cannot be calculated explicitly. In general, for the weights
\begin{equation*}
\mu(x) = \nu(x) = e^{-x^k}, \qquad x > 0,
\end{equation*}
one can see that Hardy’s inequality holds for all $k \geq 1$, but not for $0 < k < 1$.
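Since the integrals have no elementary closed form in the Gaussian case, one can at least estimate $B$ numerically (a rough sketch of mine, using $\int_r^\infty e^{-x^2} \d x = \frac{\sqrt{\pi}}{2} \operatorname{erfc}(r)$, with `erfc` from the Python standard library). The product stays bounded, and for large $r$ it behaves like $1/(4r^2)$, in line with the asymptotics above:

```python
import math

def trapezoid(f, a, b, n=200_000):
    """Composite trapezoidal rule for the integral of f over [a, b]."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
    return h * s

def B_of_r(r):
    """The product in Muckenhoupt's condition for mu = nu = exp(-x^2)."""
    tail = 0.5 * math.sqrt(math.pi) * math.erfc(r)        # integral of exp(-x^2) over (r, inf)
    head = trapezoid(lambda x: math.exp(x * x), 0.0, r)   # integral of 1/nu = exp(x^2) over (0, r)
    return tail * head

for r in [0.1, 0.7, 1.0, 3.0, 6.0]:
    print(r, B_of_r(r))  # bounded, with a maximum somewhere around r = 0.7
```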