
Regularization prescriptions and convex duality: density estimation and Rényi entropies

Ivan Mizera
University of Alberta
Department of Mathematical and Statistical Sciences, Edmonton, Alberta, Canada

Linz, October 2008

Joint work with Roger Koenker (University of Illinois at Urbana-Champaign)

Gratefully acknowledging the support of the Natural Sciences and Engineering Research Council of Canada

Density estimation (say)

A useful heuristic: maximum likelihood. Given the data points $X_1, X_2, \ldots, X_n$, solve

$$\prod_{i=1}^{n} f(X_i) \to \max_f$$

or equivalently

$$-\sum_{i=1}^{n} \log f(X_i) \to \min_f$$

under the side conditions $f \ge 0$, $\int f = 1$.

Note that useful...

[Figure: a density estimate fitted to data]


Dirac catastrophe!
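The catastrophe is easy to reproduce numerically: take any density feasible for the unconstrained problem, for instance an equal-weight Gaussian mixture centered at the data points, and let the bandwidth shrink. A minimal sketch (the data and the mixture form are arbitrary illustrative choices, not from the talk):

```python
import numpy as np

def mixture_loglik(x, sigma):
    # Equal-weight Gaussian mixture with one component per data point:
    # a density that is feasible for the unconstrained ML problem.
    diffs = x[:, None] - x[None, :]
    dens = np.exp(-diffs**2 / (2 * sigma**2)).sum(axis=1) \
        / (len(x) * sigma * np.sqrt(2 * np.pi))
    return np.log(dens).sum()

x = np.array([0.1, 0.7, 1.3, 2.9, 4.2])
lls = [mixture_loglik(x, s) for s in (1.0, 0.1, 0.01, 0.001)]
# The log-likelihood grows without bound as sigma -> 0: the Dirac catastrophe.
assert lls[0] < lls[1] < lls[2] < lls[3]
```

The likelihood can be driven to infinity by spikes at the data, so the unconstrained problem has no maximizer among genuine densities.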

Preventing the disaster for the general case

• Sieves (...)

• Regularization, either in constrained form

$$-\sum_{i=1}^{n} \log f(X_i) \to \min_f \quad \text{subject to } J(f) \le \Lambda,\ f \ge 0,\ \int f = 1$$

or in penalized form

$$-\sum_{i=1}^{n} \log f(X_i) + \lambda J(f) \to \min_f \quad \text{subject to } f \ge 0,\ \int f = 1$$

$J(\cdot)$ - a penalty (penalizing complexity, lack of smoothness, etc.); for instance

$$J(f) = \int |(\log f)''| = TV((\log f)') \quad \text{or also} \quad J(f) = \int |(\log f)'''| = TV((\log f)'')$$

Good (1971), Good and Gaskins (1971), Silverman (1982), Leonard (1978), Gu (2002), Wahba, Lin, and Leng (2002). See also: Eggermont and LaRiccia (2001), Ramsay and Silverman (2006).
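On a grid, the TV penalty can be evaluated by finite differences. A sketch (the grid and the helper `tv_penalty` are illustrative assumptions, not the authors' code):

```python
import numpy as np

# Finite-difference evaluation of the penalty J(f) = int |(log f)''|,
# i.e. the total variation of (log f)', on a uniform grid.
def tv_penalty(f_vals, dx):
    g = np.log(f_vals)
    g2 = np.diff(g, 2) / dx**2          # second differences approximate (log f)''
    return np.sum(np.abs(g2)) * dx      # Riemann sum for int |(log f)''| dx

x = np.linspace(-3, 3, 2001)
dx = x[1] - x[0]
gauss = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
# For the standard normal, (log f)'' is identically -1, so J(f) over [-3, 3] is 6.
assert abs(tv_penalty(gauss, dx) - 6.0) < 1e-2
```

Note that $J$ vanishes on log-linear pieces, which is why this penalty favors piecewise-exponential estimates.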

See also in particular:

Roger Koenker and Ivan Mizera (2007), Density estimation by total variation regularization
Roger Koenker and Ivan Mizera (2006), The alter egos of the regularized maximum likelihood density estimators: deregularized maximum-entropy, Shannon, Rényi, Simpson, Gini, and stretched strings
Roger Koenker, Ivan Mizera, and Jungmo Yoon (200?), What do kernel density estimators optimize?
Roger Koenker and Ivan Mizera (2008), Primal and dual formulations relevant for the numerical estimation of a probability density via regularization
Roger Koenker and Ivan Mizera (200?), Quasi-concave density estimation

Preventing the disaster for special cases

• Shape constraint: monotonicity

$$-\sum_{i=1}^{n} \log f(X_i) \to \min_f \quad \text{subject to } f \text{ decreasing},\ f \ge 0,\ \int f = 1$$

Grenander (1956), Jongbloed (1998), Groeneboom, Jongbloed, and Wellner (2001), ...

• Shape constraint: (strong) unimodality

$$-\sum_{i=1}^{n} \log f(X_i) \to \min_f \quad \text{subject to } -\log f \text{ convex},\ f \ge 0,\ \int f = 1$$

Eggermont and LaRiccia (2000), Walther (2000), Rufibach and Dümbgen (2006), Pal, Woodroofe, and Meyer (2006)

Note

Shape constraint: no regularization parameter to be set...
... but of course, we need to believe that the shape is plausible.

Regularization via the TV penalty vs. the log-concavity shape constraint: the differential operator is the same, only the constraint is somewhat different.

TV penalty: $\int |(\log f)''| \le \Lambda$; in the dual, $|(\log f)''| \le \Lambda$
Log-concavity: $(\log f)'' \le 0$

Only the functional analysis may be a bit more difficult...
... so let us do the shape-constrained case first.

The hidden charm of log-concave distributions

A density $f$ is called log-concave if $-\log f$ is convex.
(Usual conventions: $-\log 0 = \infty$, convex where finite, ...)

Schoenberg 1940s, Karlin 1950s (monotone likelihood ratio)
Karlin (1968) - a monograph about their mathematics
Barlow and Proschan (1975) - reliability
Flinn and Heckman (1975) - social choice
Caplin and Nalebuff (1991a,b) - voting theory
Devroye (1984) - how to simulate from them
Mizera (1994) - M-estimators

Uniform, Normal, Exponential, Logistic, Weibull, Gamma... - all log-concave

If $f$ is log-concave, then
- it is unimodal ("strongly")
- the convolution with any unimodal density is unimodal
- the convolution with any log-concave density is log-concave
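The last closure property can be spot-checked numerically (a check, not a proof; the Gaussian/Laplace pair is an arbitrary illustrative choice):

```python
import numpy as np

# Numerical spot-check: the convolution of two log-concave densities is
# again log-concave. Unnormalized shapes suffice for a convexity check.
x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]
gauss = np.exp(-x**2 / 2)            # log-concave
laplace = np.exp(-np.abs(x))         # log-concave
conv = np.convolve(gauss, laplace, mode="same") * dx
core = slice(500, 1501)              # stay away from truncation at the grid edges
second_diff = np.diff(np.log(conv[core]), 2)
assert np.all(second_diff <= 1e-8)   # (log h)'' <= 0 on the interior grid
```

The discrete analogue holds exactly: the convolution of two log-concave sequences is log-concave, so only floating-point noise enters the tolerance.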

A convex problem

Let $g = -\log f$; let $K$ be the cone of convex functions.

The original problem is transformed:

$$\sum_{i=1}^{n} g(X_i) \to \min_g \quad \text{subject to } g \in K,\ \int e^{-g} = 1$$

then, absorbing the constraint into the objective:

$$\sum_{i=1}^{n} g(X_i) + \int e^{-g} \to \min_g \quad \text{subject to } g \in K$$

and generalized: let $\psi$ be convex and nonincreasing (like $e^{-x}$):

$$\sum_{i=1}^{n} g(X_i) + \int \psi(g) \to \min_g \quad \text{subject to } g \in K$$
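A crude numerical sketch of the transformed problem, with $g$ piecewise linear on a grid, convexity imposed through nonnegative second differences, and a general-purpose solver standing in for the authors' dedicated algorithms (grid, sample size, and solver are all illustrative assumptions):

```python
import numpy as np
from scipy.optimize import minimize

# Sketch: minimize (1/n) sum g(X_i) + int exp(-g) over convex g,
# with g represented by its values on a grid and X_i snapped to grid points.
rng = np.random.default_rng(0)
X = np.sort(rng.normal(size=30))
grid = np.linspace(X.min() - 1, X.max() + 1, 41)
dx = grid[1] - grid[0]
idx = np.searchsorted(grid, X)        # nearest-grid-point evaluation (a simplification)

def objective(g):
    return g[idx].mean() + np.sum(np.exp(-g)) * dx

cons = [{"type": "ineq", "fun": lambda g: np.diff(g, 2)}]  # convexity: 2nd differences >= 0
g0 = 0.5 * grid**2                    # start from the convex Gaussian potential
res = minimize(objective, g0, constraints=cons, method="SLSQP",
               options={"maxiter": 500})

g_hat = res.x
f_hat = np.exp(-g_hat)
assert objective(g_hat) <= objective(g0) + 1e-9   # the solver did not get worse
assert np.min(np.diff(g_hat, 2)) >= -1e-5         # g_hat is (numerically) convex
assert abs(np.sum(f_hat) * dx - 1.0) < 0.1        # optimality forces int exp(-g) ~ 1
```

The last assertion reflects the role of the penalty term: shifting $g$ by a constant shows that, at the optimum, $\int e^{-g} = 1$ holds automatically.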

Primal and dual

Recall: $K$ is the cone of convex functions; $\psi$ is convex and nonincreasing. The strong Fenchel dual of

$$\frac{1}{n}\sum_{i=1}^{n} g(X_i) + \int \psi(g)\,dx \to \min_g, \quad g \in K \qquad (P)$$

is

$$-\int \psi^*(-f)\,dx \to \max_f, \quad f = \frac{d(P_n - G)}{dx},\ G \in K^* \qquad (D)$$

Extremal relation: $f = -\psi'(g)$.

Remarks

$\psi^*(y) = \sup_{x \in \operatorname{dom}\psi} (yx - \psi(x))$ is the conjugate of $\psi$.

If primal solutions $g$ are sought in some space, then dual solutions $G$ are sought in a dual space: for instance, if $g \in C(X)$ and $X$ is compact, then $G \in C(X)^*$, the space of (signed) Radon measures on $X$.

The equality $f = \frac{d(P_n - G)}{dx}$ is thus a feasibility constraint (for other $G$, the dual objective is $-\infty$).

$K^*$ is the dual cone to $K$ - a collection of (signed) Radon measures such that $\int g\,dG \ge 0$ for any convex $g$.

Dual: good for computation...
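The conjugate in the maximum likelihood instance can be verified numerically: for $\psi(x) = e^{-x}$ one has $\psi^*(-f) = f\log f - f$, which is where the entropy-type dual objective comes from. A sketch (the grid-based maximization and its bounds are arbitrary choices):

```python
import numpy as np

# Grid-based check: for psi(x) = exp(-x), the conjugate value psi*(-f), f > 0,
# equals f*log(f) - f, so -int psi*(-f) dx recovers an entropy-type objective.
xs = np.linspace(-10.0, 10.0, 200001)
psi_vals = np.exp(-xs)

def conj_at(y):
    # psi*(y) = sup_x (y*x - psi(x)), approximated over the grid
    return np.max(y * xs - psi_vals)

for f in (0.25, 0.5, 1.0, 2.0):
    assert abs(conj_at(-f) - (f * np.log(f) - f)) < 1e-6
```

The closed form follows from first-order conditions: the supremum is attained at $x = -\log f$.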

Dual: good not only for computation

Couldn't we have heavy-tailed distributions here too?
... possibly going beyond log-concavity?

Recall: the strong Fenchel dual of

$$\frac{1}{n}\sum_{i=1}^{n} g(X_i) + \int \psi(g)\,dx \to \min_g, \quad g \in K \qquad (P)$$

is

$$-\int \psi^*(-f)\,dx \to \max_f, \quad f = \frac{d(P_n - G)}{dx},\ G \in K^* \qquad (D)$$

Extremal relation: $f = -\psi'(g)$.

Instance: maximum likelihood, α = 1

For $\psi(x) = e^{-x}$, we have

$$\frac{1}{n}\sum_{i=1}^{n} g(X_i) + \int e^{-g} \to \min_g, \quad g \in K \qquad (P)$$

$$-\int f \log f\,dx \to \max_f, \quad f = \frac{d(P_n - G)}{dx},\ G \in K^* \qquad (D)$$

... a maximum entropy formulation.

Extremal relation: $f = e^{-g}$
$g$ required convex → $f$ log-concave

How about entropies alternative to the Shannon entropy?

Rényi system

Rényi (1961, 1965): entropies defined with the help of

$$\frac{1}{1-\alpha} \log\left(\int f^{\alpha}(x)\,dx\right),$$

with the Shannon entropy being the limiting form for $\alpha = 1$.

Various entropies correspond to various known divergences:
α = 1: Shannon entropy, Kullback-Leibler divergence
α = 2: Rényi-Simpson-Gini entropy, Pearson's χ²
α = 1/2: Hellinger distance
α = 0: reversed Kullback-Leibler

New heuristics:

ψ and ψ* for various α

[Figure: the functions ψ and their conjugates ψ* for α = 2, 1, 1/2, 0]

Some properties for all α

The density estimators with Rényi entropies, as defined above:

• are supported by the convex hull of the data
• have expected value equal to the sample mean of the data
• have a primal solution $g$ that is a polyhedral convex function (that is, it is determined by its values at the data points $X_i$, and is the maximal convex function minorizing those)
• and the estimates are well-defined: the minimum of the

Instance: α = 2

$$-\int f^2(y)\,dy \to \max_f, \quad f = \frac{d(P_n - G)}{dy},\ G \in K^* \qquad (D)$$

$$\frac{1}{n}\sum_{i=1}^{n} g(X_i) + \frac{1}{2}\int g^2\,dx \to \min_g, \quad g \in K \qquad (P)$$

Minimum Pearson χ², maximum Rényi-Simpson-Gini entropy.
Extremal relation: $f = -g$
$g$ required convex → $f$ concave
That yields a class more restrictive than log-concave.

But perhaps for others...

Replacing $g$ by $-f$ gives

$$-\frac{1}{n}\sum_{i=1}^{n} f(X_i) + \frac{1}{2}\int f^2\,dx \to \min, \quad \text{subject to } g = -f \in K$$

the objective function of the "least squares estimator" of Groeneboom, Jongbloed, and Wellner (2001). A folk tune (in the penalized context): Aidu and Vapnik (1989), Terrell (1990).

... and more generally, the primal form for α > 1 is equivalent to the objective function of the "minimum density power divergence estimators", introduced by Basu, Harris,

De profundis: α = 0

Not explicitly a member of the Rényi family - nevertheless, a limit:

$$\int \log f\,dy \to \max_f, \quad f = \frac{d(P_n - G)}{dy},\ G \in K^* \qquad (D)$$

$$\frac{1}{n}\sum_{i=1}^{n} g(X_i) - \int \log g\,dx \to \min_{g \in C(X)}, \quad g \in K \qquad (P)$$

Empirical likelihood (Owen, 2001)
Extremal relation: $g = 1/f$
The primal thus estimates the "sparsity function".
$g$ required convex → $1/f$ convex
- that would yield a very nice family of functions...

The hierarchy of ρ-convex functions

Hardy, Littlewood, and Pólya (1934): means of order ρ
Avriel (1972): ρ-convex functions

ρ < 0: $f^{\rho}$ convex; ρ = 0: log-concave; ρ > 0: $f^{\rho}$ concave

The class of ρ-convex densities grows with decreasing ρ: if $\rho_1 < \rho_2$, then every $\rho_2$-convex density is also $\rho_1$-convex.

Every ρ-convex density is quasi-concave: it has convex upper level sets. Our α corresponds to ρ = α − 1 - that is:

if we do the estimating prescription whose dual involves
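Membership in the ρ = −1/2 class (that is, $f^{-1/2}$ convex) can be spot-checked numerically for the normal and Student t densities, consistent with the hierarchy above (the grid and degrees of freedom are arbitrary choices):

```python
import numpy as np

# Spot-check of -1/2-convexity (rho = -1/2: f^(-1/2) is convex).
# Unnormalized shapes suffice, since positive constants do not affect convexity.
x = np.linspace(-6, 6, 1201)
shapes = {
    "normal": np.exp(-x**2 / 2),          # log-concave, hence also -1/2-convex
    "t, 1 df": (1 + x**2) ** (-1.0),      # Cauchy: algebraic tails
    "t, 3 df": (1 + x**2 / 3) ** (-2.0),
}
for name, f in shapes.items():
    h = f ** (-0.5)
    # discrete convexity: all second differences of f^(-1/2) are nonnegative
    assert np.all(np.diff(h, 2) >= -1e-10), name
```

For the Cauchy shape, $f^{-1/2} \propto (1+x^2)^{1/2}$, whose second derivative $(1+x^2)^{-3/2}$ is positive everywhere, so the check passes exactly, not just numerically.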

So the winner is: α = 1/2

"Moderate progress within the limits of law" - the "Hellinger selector":

$$\int \sqrt{f}\,dx \to \max_f, \quad \text{subject to } f = \frac{d(P_n - G)}{dx},\ G \in K^* \qquad (D)$$

$$\frac{1}{n}\sum_{i=1}^{n} g(X_i) + \int \frac{1}{g}\,dx \to \min_{g \in C(X)}, \quad g \in K \qquad (P)$$

Extremal relation: $f = g^{-2}$
$g$ required convex → $f^{-1/2}$ convex ($f$ is −1/2-convex)
- all log-concave densities
- all of the t family
The primal thus estimates $f^{-1/2}$ (... rootosparsity).

Weibull, n = 200; left Shannon, right Hellinger
[Figure]

Another Weibull, n = 200; left Shannon, right Hellinger
[Figure]

Four points at the vertices of the square
[Figure]

Student data on criminal fingers
[Figure]

Once again, but with logarithmic contours
[Figure]

Simulated data: uniform distribution
[Figure]

A panoramic view
[Figure]

Computation

Main problem: enforcing convexity in the optimization.

Easy in dimension 1; in dimension 2, the most promising way seems to be to employ a finite-difference scheme: estimate the Hessian, the matrix of second derivatives, by finite differences... and then enforce this matrix to be positive semidefinite. That means semidefinite programming... but with a (slightly) nonlinear objective function.

In dimension two, one can express the semidefiniteness of the matrix by a rotated quadratic cone... and the reciprocal value can also be tricked in that way. Thus the Hellinger selector turns out to be computationally easier than (Shannon) maximum likelihood...

We acknowledge using a Danish commercial implementation called Mosek by Erling Andersen, and an open-source code by
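The two-dimensional trick can be stated concretely: a symmetric 2×2 matrix [[a, b], [b, c]] is positive semidefinite iff a ≥ 0, c ≥ 0, and ac ≥ b², which is a rotated quadratic cone condition. A numerical cross-check against the eigenvalue criterion (random test matrices are an arbitrary choice):

```python
import numpy as np

# 2x2 PSD as a rotated quadratic cone: [[a, b], [b, c]] >= 0
# iff a >= 0, c >= 0 and a*c >= b^2 (i.e. 2*(a/sqrt(2))*(c/sqrt(2)) >= b^2).
def psd_by_cone(a, b, c):
    return a >= 0 and c >= 0 and a * c >= b * b

def psd_by_eig(a, b, c):
    return np.linalg.eigvalsh(np.array([[a, b], [b, c]])).min() >= -1e-12

rng = np.random.default_rng(1)
for _ in range(1000):
    a, b, c = rng.uniform(-2, 2, size=3)
    assert psd_by_cone(a, b, c) == psd_by_eig(a, b, c)
```

This is why the Hessian constraint in dimension two fits second-order cone solvers such as Mosek, without general semidefinite programming machinery.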

Summary

• We can estimate a density restricted to a broader domain than log-concave - including also heavy-tailed distributions.

• Generalizing the formulation dual to maximum likelihood within the family of Rényi entropies indexed by α, we obtain an interesting family of divergence-based primal/dual estimators.

• Each yields estimates in its corresponding ρ-convex class, in a natural way.

• Our choice is α = 1/2, which in the dual picks a feasible density closest to the uniform, on the convex hull of the data, in Hellinger distance.

• This yields −1/2-convex densities, which include all log-concave densities but also the t family, that is, algebraic tails; seemingly all practically important quasi-concave densities.

Duality heuristics

Recall: penalized estimation, discretized setting.

Primal:

$$-\frac{1}{n}\sum_{i=1}^{n} g(x_i) + J(-Dg) + \int \psi(g) \to \min_g$$

where (typically) $J(-Dg) = \lambda \int |g^{(k)}|_p^p$

Dual:

$$-\int \psi^*(f) - J^*(h) \to \max_{f,h}, \quad f = \frac{d(P_n + Dh)}{dx} \ge 0$$

where $\psi^*$ is again the conjugate to $\psi$ and $J^*$ is the conjugate to $J$.

Instances

Silverman (1982), Leonard (1978): p = 2, k = 3
Gu (2002), Wahba, Lin, and Leng (2002): p = 2, k = 2
Davies and Kovac (2004), Hartigan (2000), Hartigan and Hartigan (1985): p = 1, k = 1
Koenker and Mizera (2006a,b,c): p = 1, k = 1, 2, 3

Recall: the conjugate of a norm is the indicator of the unit ball in the dual norm. If $J(-Dg) = \lambda \int |g'|$, then the dual is equivalent to

$$-\int \psi^*(f) \to \max_{f,h}, \quad f = \frac{d(P_n + Dh)}{dx} \ge 0, \quad \|h\| \le \lambda$$

If $\psi(u) = e^u$ (which means that $\psi^*(u) = u \log u$, up to a term linear in $u$), then the primal is a maximum likelihood prescription.
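The norm-conjugate fact can also be checked numerically in one dimension: $\sup_x (yx - \lambda|x|)$ is 0 inside the dual ball $|y| \le \lambda$ and grows without bound outside it (the grid radius T stands in for infinity; all constants are arbitrary choices):

```python
import numpy as np

# The conjugate of lam*|.| is the indicator of the interval [-lam, lam]:
# sup_x (y*x - lam*|x|) = 0 if |y| <= lam, and +infinity otherwise
# (on a finite grid of radius T, "infinity" shows up as growth linear in T).
lam, T = 0.7, 1000.0
xs = np.linspace(-T, T, 400001)

def conj(y):
    return np.max(y * xs - lam * np.abs(xs))

assert abs(conj(0.3)) < 1e-9                  # strictly inside the ball: 0
assert abs(conj(-0.7)) < 1e-9                 # on the boundary: still 0
assert conj(1.2) >= (1.2 - lam) * T - 1e-6    # outside: grows linearly in T
```

This is exactly why the TV-penalized dual above collapses to the box constraint $\|h\| \le \lambda$.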

Stretching ("tauting") strings
[Figure]

"tube" may be somewhat ambiguous...
[Figure]

...but nevertheless, there is one that matches
[Figure]
