www.oeaw.ac.at

**On the optimal order of integration in Hermite spaces with finite smoothness**

**J. Dick, C. Irrgeher, G. Leobacher, F. Pillichshammer**

**RICAM-Report 2016-28**

### On the optimal order of integration in Hermite spaces with finite smoothness

### Josef Dick^{∗}, Christian Irrgeher, Gunther Leobacher and Friedrich Pillichshammer^{†}

### August 1, 2016

**Abstract**

We study the numerical approximation of integrals with respect to the standard Gaussian measure for integrands which lie in certain Hermite spaces of functions.

The decay rate of the associated sequence of coefficients is specified by a single parameter which determines the smoothness class, and for certain integer values of this parameter the inner product can be expressed via $L_2$ norms of the derivatives of the function.

We map higher order digital nets from the unit cube to a suitable subcube of $\mathbb{R}^s$ via a linear transformation and show that such rules achieve, apart from powers of $\log N$, the optimal rate of convergence of the integration error. Numerical examples illustrate the performance of these quadrature rules and show their power compared to other quadrature rules.

**Keywords:** Numerical integration, worst-case error, higher order digital nets, Hermite
polynomials

**2010 MSC:** 65D30, 65D32, 65Y20

**1** **Introduction**

In this paper we study numerical integration of functions over the $s$-dimensional real space $\mathbb{R}^s$, i.e., integrals of the form
$$I_s(f) = \int_{\mathbb{R}^s} f(\boldsymbol{x})\,\varphi_s(\boldsymbol{x})\,\mathrm{d}\boldsymbol{x}, \qquad (1)$$
where $\varphi_s$ denotes the density of the $s$-dimensional standard Gaussian measure,
$$\varphi_s(\boldsymbol{x}) = \frac{1}{(2\pi)^{s/2}} \exp\left(-\frac{\boldsymbol{x}\cdot\boldsymbol{x}}{2}\right) \quad \text{for } \boldsymbol{x}\in\mathbb{R}^s.$$

∗This research was supported under Australian Research Council’s Discovery Projects funding scheme (project number DP150101770).

†The authors are supported by the Austrian Science Fund (FWF): Projects F5508-N26 (Leobacher), F5509-N26 (Irrgeher and Pillichshammer) and F5506-N26 (Irrgeher), respectively, which are parts of the Special Research Program “Quasi-Monte Carlo Methods: Theory and Applications”.

We assume that the integrands $f$ belong to a certain reproducing kernel Hilbert space $\mathcal{H}_{s,\alpha}$ of smoothness $\alpha$ whose construction is based on Hermite polynomials and which is therefore called a Hermite space of smoothness $\alpha$. The exact definition of this space, which was introduced by Irrgeher and Leobacher [8], will be given in Section 2.

In order to approximate $I_s(f)$ we use linear algorithms of the form
$$A_{N,s}(f) = \sum_{i=1}^{N} w_i f(\boldsymbol{x}_i),$$
which are based on nodes $\boldsymbol{x}_1, \ldots, \boldsymbol{x}_N \in \mathbb{R}^s$ and real weights $w_1, \ldots, w_N$, and study the worst-case absolute error $e(A_{N,s}, \mathcal{H}_{s,\alpha})$ of $A_{N,s}$ over the unit ball of the Hermite space, i.e.,
$$e(A_{N,s}, \mathcal{H}_{s,\alpha}) = \sup_{\substack{f \in \mathcal{H}_{s,\alpha} \\ \|f\|_{s,\alpha} \le 1}} |I_s(f) - A_{N,s}(f)|.$$
The $N$-th minimal worst-case error $e(N, \mathcal{H}_{s,\alpha})$ is the infimum of $e(A_{N,s}, \mathcal{H}_{s,\alpha})$ over all linear algorithms $A_{N,s}$ that use $N$ function values.

For $F, G : D \subseteq \mathbb{N} \to \mathbb{R}$ we write $F(N) \lesssim G(N)$ if there exists some $c > 0$ such that $F(N) \le c\,G(N)$ for all $N \in D$. If the constant $c$ depends on some parameter, say $s$, then we may indicate this by writing $\lesssim_s$. We also use the symbol $\gtrsim$ the other way round, with the obvious meaning.

Our main result states that $e(N, \mathcal{H}_{s,\alpha})$ is, up to some $\log N$-factors, of the exact order of magnitude $N^{-\alpha}$. More precisely, we show that
$$\frac{1}{N^{\alpha}} \;\lesssim_{s,\alpha}\; e(N, \mathcal{H}_{s,\alpha}) \;\lesssim_{s,\alpha}\; \frac{(\log N)^{s \frac{2\alpha+3}{4} - \frac{1}{2}}}{N^{\alpha}}. \qquad (2)$$

For the upper bound we present an explicit algorithm.

The paper is organized as follows: In the next section we will introduce the function space setting under consideration. We recall the definition of Hermite polynomials, give the definition of Hermite spaces and discuss their smoothness properties. Section 3 is devoted to the numerical integration problem. After some further introductory words we will prove the lower bound from (2) in Subsection 3.1 (see Theorem 1). The upper bound from (2) will be presented in Subsections 3.2 (Theorem 2) and 3.3 (Corollary 1).

In Section 4 we numerically compute the worst-case error of the presented algorithm as well as of two other types of quadrature rules and compare their performances.

**2** **Hermite spaces of functions of finite smoothness**

For $k \in \mathbb{N}_0$ the $k$-th Hermite polynomial is given by
$$H_k(x) = \frac{(-1)^k}{\sqrt{k!}}\, \exp(x^2/2)\, \frac{\mathrm{d}^k}{\mathrm{d}x^k} \exp(-x^2/2),$$
which is sometimes also called the normalized probabilistic Hermite polynomial, since
$$\int_{\mathbb{R}} H_k(x)^2 \varphi(x)\,\mathrm{d}x = 1,$$
where $\varphi$ is the standard normal density, $\varphi(x) = \frac{1}{\sqrt{2\pi}} \exp(-x^2/2)$. For example,
$$H_0(x) = 1, \quad H_1(x) = x, \quad H_2(x) = \frac{1}{\sqrt{2}}(x^2 - 1), \quad H_3(x) = \frac{1}{\sqrt{6}}(x^3 - 3x), \quad \ldots$$
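These normalized polynomials are conveniently evaluated through the three-term recurrence $H_{j+1}(x) = \bigl(x H_j(x) - \sqrt{j}\,H_{j-1}(x)\bigr)/\sqrt{j+1}$, which follows from the classical probabilistic Hermite recurrence after dividing by $\sqrt{(j+1)!}$. A minimal sketch (the function name is ours, not from the paper):

```python
import math

def hermite_norm(k, x):
    """Normalized probabilistic Hermite polynomial H_k(x) = He_k(x)/sqrt(k!),
    evaluated with the three-term recurrence
    H_{j+1}(x) = (x*H_j(x) - sqrt(j)*H_{j-1}(x)) / sqrt(j+1)."""
    if k == 0:
        return 1.0
    h_prev, h = 1.0, x  # H_0(x), H_1(x)
    for j in range(1, k):
        h_prev, h = h, (x * h - math.sqrt(j) * h_prev) / math.sqrt(j + 1)
    return h
```

For small $k$ this reproduces the examples above, e.g. $H_2(x) = (x^2-1)/\sqrt{2}$.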

Here we follow the definition given in [1], but we remark that there are slightly different ways to introduce Hermite polynomials (see, e.g., [13]). For $s \ge 2$, $\boldsymbol{k} = (k_1, \ldots, k_s) \in \mathbb{N}_0^s$, and $\boldsymbol{x} = (x_1, \ldots, x_s) \in \mathbb{R}^s$ we define $s$-dimensional Hermite polynomials by
$$H_{\boldsymbol{k}}(\boldsymbol{x}) = \prod_{j=1}^{s} H_{k_j}(x_j).$$

It is well known (see [1]) that the sequence of Hermite polynomials $\{H_{\boldsymbol{k}}\}_{\boldsymbol{k} \in \mathbb{N}_0^s}$ forms an orthonormal basis of the function space $L^2(\mathbb{R}^s, \varphi_s)$ of Gauss square-integrable functions. We know that for all $\boldsymbol{k} \in \mathbb{N}_0^s$ the bound
$$|H_{\boldsymbol{k}}(\boldsymbol{x}) \sqrt{\varphi_s(\boldsymbol{x})}| \le 1 \quad \text{for all } \boldsymbol{x} \in \mathbb{R}^s \qquad (3)$$
holds, which is a slightly weaker version of Cramér's bound (cf. Sansone [12]). The next lemma states a stronger bound on the Hermite polynomials.

**Lemma 1.** *For all $\boldsymbol{k} \in \mathbb{N}_0^s$ and for all $\boldsymbol{x} \in \mathbb{R}^s$ we have*
$$|H_{\boldsymbol{k}}(\boldsymbol{x}) \sqrt{\varphi_s(\boldsymbol{x})}| \le \prod_{j=1}^{s} \min\left(1, \frac{\sqrt{\pi}}{k_j^{1/12}}\right). \qquad (4)$$

The proof of this lemma will be deferred to the appendix. From this lemma it follows that
$$\sigma_s(\boldsymbol{k}) := \|H_{\boldsymbol{k}} \sqrt{\varphi_s}\|_{\infty} \le \prod_{j=1}^{s} \min\left(1, \frac{\sqrt{\pi}}{k_j^{1/12}}\right). \qquad (5)$$
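Bounds (3) and (5) can be probed numerically in dimension $s = 1$: propagating the weighted values $H_j(x)\sqrt{\varphi(x)}$ through the same recurrence as $H_j$ itself keeps everything bounded and avoids overflow for large $j$. A small sketch under these conventions (the function name is ours):

```python
import math

def weighted_hermite_max(k, grid):
    """Approximate sup_x |H_k(x) sqrt(phi(x))| over a grid of x-values by
    running the Hermite recurrence directly on the weighted values
    w_j(x) = H_j(x) * sqrt(phi(x)); the weight enters only through w_0."""
    best = 0.0
    for x in grid:
        w_prev = math.exp(-x * x / 4.0) / (2.0 * math.pi) ** 0.25  # H_0*sqrt(phi)
        w = x * w_prev                                             # H_1*sqrt(phi)
        if k == 0:
            best = max(best, abs(w_prev))
            continue
        for j in range(1, k):
            w_prev, w = w, (x * w - math.sqrt(j) * w_prev) / math.sqrt(j + 1)
        best = max(best, abs(w))
    return best
```

On a moderate grid the observed maxima indeed stay below the right-hand side of (4) for every $k$ we tried.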

For every square-integrable $f : \mathbb{R}^s \to \mathbb{R}$ the $\boldsymbol{k}$-th Hermite coefficient of $f$ is defined as
$$\hat{f}(\boldsymbol{k}) = \int_{\mathbb{R}^s} f(\boldsymbol{x}) H_{\boldsymbol{k}}(\boldsymbol{x}) \varphi_s(\boldsymbol{x})\,\mathrm{d}\boldsymbol{x}.$$

**Definition 1.** Let $s \in \mathbb{N}$ and let $r : \mathbb{N}_0^s \to (0, \infty)$ be a function satisfying
$$\sum_{\boldsymbol{k} \in \mathbb{N}_0^s} r(\boldsymbol{k}) \sigma_s(\boldsymbol{k})^2 < \infty.$$
Then the Hermite space corresponding to $r$ is the Hilbert space
$$\mathcal{H}_r := \left\{ f : \mathbb{R}^s \to \mathbb{R} \,:\, f \text{ is continuous},\ \int_{\mathbb{R}^s} f(\boldsymbol{x})^2 \varphi_s(\boldsymbol{x})\,\mathrm{d}\boldsymbol{x} < \infty,\ \|f\|_r < \infty \right\},$$
where $\|f\|_r^2 := \sum_{\boldsymbol{k} \in \mathbb{N}_0^s} r(\boldsymbol{k})^{-1} \hat{f}(\boldsymbol{k})^2$. The inner product in $\mathcal{H}_r$ is thus given by
$$\langle f, g \rangle_r = \sum_{\boldsymbol{k} \in \mathbb{N}_0^s} \frac{1}{r(\boldsymbol{k})}\, \hat{f}(\boldsymbol{k})\, \hat{g}(\boldsymbol{k}).$$

This definition of a Hermite space is slightly more general than that given in [8]. There it was required that $\sum_{\boldsymbol{k} \in \mathbb{N}_0^s} r(\boldsymbol{k}) < \infty$. From Lemma 1 it follows that $\sum_{\boldsymbol{k} \in \mathbb{N}_0^s} r(\boldsymbol{k}) < \infty$ implies $\sum_{\boldsymbol{k} \in \mathbb{N}_0^s} r(\boldsymbol{k}) \sigma_s(\boldsymbol{k})^2 < \infty$.

To see that $\mathcal{H}_r$ is indeed closed under this norm one needs to show that for $f \in \mathcal{H}_r$ the Hermite series for $f$ converges to a continuous function. But, applying the Cauchy-Schwarz inequality,
$$\sum_{\boldsymbol{k} \in \mathbb{N}_0^s} |\hat{f}(\boldsymbol{k}) H_{\boldsymbol{k}}(\boldsymbol{x}) \varphi_s(\boldsymbol{x})^{1/2}| \le \left( \sum_{\boldsymbol{k} \in \mathbb{N}_0^s} r(\boldsymbol{k}) \sigma_s(\boldsymbol{k})^2 \right)^{\!1/2} \left( \sum_{\boldsymbol{k} \in \mathbb{N}_0^s} r(\boldsymbol{k})^{-1} \hat{f}(\boldsymbol{k})^2\, \frac{H_{\boldsymbol{k}}(\boldsymbol{x})^2 \varphi_s(\boldsymbol{x})}{\sigma_s(\boldsymbol{k})^2} \right)^{\!1/2} \le \left( \sum_{\boldsymbol{k} \in \mathbb{N}_0^s} r(\boldsymbol{k}) \sigma_s(\boldsymbol{k})^2 \right)^{\!1/2} \|f\|_r < \infty.$$
Thus $\sum_{\boldsymbol{k} \in \mathbb{N}_0^s} \hat{f}(\boldsymbol{k}) H_{\boldsymbol{k}} \varphi_s^{1/2}$ is a series of continuous functions which converges uniformly, so its limit is continuous. Therefore also $\sum_{\boldsymbol{k} \in \mathbb{N}_0^s} \hat{f}(\boldsymbol{k}) H_{\boldsymbol{k}} = \varphi_s^{-1/2} \sum_{\boldsymbol{k} \in \mathbb{N}_0^s} \hat{f}(\boldsymbol{k}) H_{\boldsymbol{k}} \varphi_s^{1/2}$ is continuous.

We are now going to define the Hermite spaces of smoothness $\alpha$, which are characterized by a special choice of the $r(\boldsymbol{k})$ for $\boldsymbol{k} \in \mathbb{N}_0^s$. Let $s, \alpha \in \mathbb{N}$. For all $\boldsymbol{k} \in \mathbb{N}_0^s$ we define
$$r_{s,\alpha}(\boldsymbol{k}) = \prod_{j=1}^{s} r_{\alpha}(k_j) \qquad (6)$$
with
$$r_{\alpha}(k) = \begin{cases} 1 & \text{if } k = 0, \\ \left(\sum_{\tau=0}^{\alpha} \beta_{\tau}(k)\right)^{-1} & \text{if } k \ge 1, \end{cases}$$
where, for integers $\tau \ge 0$,
$$\beta_{\tau}(k) = \begin{cases} \dfrac{k!}{(k-\tau)!} & \text{if } k \ge \tau, \\ 0 & \text{otherwise.} \end{cases}$$

Note that we have
$$r_{\alpha}(k) = \left(\sum_{\tau=0}^{\min(\alpha,k)} \frac{k!}{(k-\tau)!}\right)^{-1} \le \frac{(k-\min(\alpha,k))!}{k!} = \begin{cases} \dfrac{1}{k!} & \text{if } 1 \le k \le \alpha, \\[4pt] \dfrac{(k-\alpha)!}{k!} & \text{if } k \ge \alpha. \end{cases}$$
It is easily shown that $\lim_{k\to\infty} r_{\alpha}(k)\,k^{\alpha} = 1$. Hence $r_{\alpha}(k) \asymp_{\alpha} \frac{1}{k^{\alpha}}$ for $k \in \mathbb{N}$. Thus $\sum_{\boldsymbol{k} \in \mathbb{N}_0^s} r_{s,\alpha}(\boldsymbol{k}) \sigma_s(\boldsymbol{k})^2 < \infty$ for all $\alpha \in \mathbb{N}$, and we may consider the associated Hermite space.
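A direct implementation of the weights in (6) makes the asymptotic behaviour $r_\alpha(k) \asymp_\alpha k^{-\alpha}$ easy to check numerically. A sketch (all function names are ours):

```python
from math import prod

def beta_tau(tau, k):
    """beta_tau(k) = k!/(k - tau)! (a falling factorial) if k >= tau, else 0."""
    if k < tau:
        return 0
    return prod(range(k - tau + 1, k + 1))  # empty product = 1 when tau = 0

def r_alpha(k, alpha):
    """One-dimensional weight r_alpha(k): 1 for k = 0, otherwise the
    reciprocal of sum_{tau=0}^{alpha} beta_tau(k)."""
    if k == 0:
        return 1.0
    return 1.0 / sum(beta_tau(t, k) for t in range(alpha + 1))

def r_s_alpha(kvec, alpha):
    """Product weight r_{s,alpha}(k) = prod_j r_alpha(k_j) from (6)."""
    return prod(r_alpha(kj, alpha) for kj in kvec)
```

For instance, $r_2(2) = 1/(1 + 2 + 2) = 1/5$, and $r_2(k)\,k^2 \to 1$ as $k$ grows.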

**Definition 2.** We call the Hermite space $\mathcal{H}_{s,\alpha}$ corresponding to $r_{s,\alpha}$ as defined in (6) a *Hermite space with smoothness* $\alpha$. We write $\|\cdot\|_{s,\alpha}$ and $\langle \cdot, \cdot \rangle_{s,\alpha}$ for the norm and inner product, respectively, of $\mathcal{H}_{s,\alpha}$.

The name *Hermite space with smoothness $\alpha$* will be justified below. In the following we recall some commonly used conventions for operations with multi-indices.

• We denote the partial derivative by $\partial_{x_i} := \frac{\partial}{\partial x_i}$ for any $i = 1, \ldots, s$.

• We denote the mixed partial derivatives with respect to $\boldsymbol{x}$ by
$$\partial_{\boldsymbol{x}}^{\boldsymbol{\tau}} := \frac{\partial^{|\boldsymbol{\tau}|}}{\partial \boldsymbol{x}^{\boldsymbol{\tau}}} = \frac{\partial^{\tau_1} \cdots \partial^{\tau_s}}{\partial x_1^{\tau_1} \cdots \partial x_s^{\tau_s}}$$
for any $\boldsymbol{\tau} = (\tau_1, \ldots, \tau_s) \in \mathbb{N}_0^s$, where $|\boldsymbol{\tau}| = \tau_1 + \cdots + \tau_s$.

• For vectors $\boldsymbol{n} = (n_1, \ldots, n_s)$ and $\boldsymbol{k} = (k_1, \ldots, k_s)$ we use the following notation:
$$\boldsymbol{n}! = \prod_{j=1}^{s} n_j!, \qquad \binom{\boldsymbol{n}}{\boldsymbol{k}} = \prod_{j=1}^{s} \binom{n_j}{k_j}, \qquad |\boldsymbol{n}| = \sum_{j=1}^{s} |n_j|, \qquad \boldsymbol{n} \cdot \boldsymbol{k} = \sum_{j=1}^{s} n_j k_j.$$
Furthermore, $\boldsymbol{n} \ge \boldsymbol{k}$ means that $n_j \ge k_j$ for all $j \in \{1, 2, \ldots, s\}$.

For $f \in \mathcal{H}_{s,\alpha}$ we have the Hermite expansion, see [8],
$$f(\boldsymbol{x}) = \sum_{\boldsymbol{k} \in \mathbb{N}_0^s} \hat{f}(\boldsymbol{k}) H_{\boldsymbol{k}}(\boldsymbol{x}) \quad \text{for all } \boldsymbol{x} \in \mathbb{R}^s,$$
and for any $\boldsymbol{\tau} \in \mathbb{N}_0^s$ with $\boldsymbol{\tau} \le \boldsymbol{\alpha}$ we have that
$$\partial_{\boldsymbol{x}}^{\boldsymbol{\tau}} f \sim \sum_{\boldsymbol{k} \ge \boldsymbol{\tau}} \hat{f}(\boldsymbol{k}) \sqrt{\frac{\boldsymbol{k}!}{(\boldsymbol{k} - \boldsymbol{\tau})!}}\; H_{\boldsymbol{k} - \boldsymbol{\tau}}.$$

Using an analogous expression for $g$, we obtain using Parseval's theorem that
$$\begin{aligned}
\langle f, g \rangle_{s,\alpha} &= \sum_{\boldsymbol{k} \in \mathbb{N}_0^s} \frac{1}{r_{s,\alpha}(\boldsymbol{k})}\, \hat{f}(\boldsymbol{k})\, \hat{g}(\boldsymbol{k}) = \sum_{\boldsymbol{k} \in \mathbb{N}_0^s} \prod_{j=1}^{s} \left( \sum_{\tau=0}^{\alpha} \beta_{\tau}(k_j) \right) \hat{f}(\boldsymbol{k})\, \hat{g}(\boldsymbol{k}) \\
&= \sum_{\boldsymbol{k} \in \mathbb{N}_0^s} \sum_{\boldsymbol{\tau} \in \{0,\ldots,\alpha\}^s} \left( \prod_{j=1}^{s} \beta_{\tau_j}(k_j) \right) \hat{f}(\boldsymbol{k})\, \hat{g}(\boldsymbol{k}) = \sum_{\boldsymbol{\tau} \in \{0,\ldots,\alpha\}^s} \sum_{\boldsymbol{k} \ge \boldsymbol{\tau}} \frac{\boldsymbol{k}!}{(\boldsymbol{k} - \boldsymbol{\tau})!}\, \hat{f}(\boldsymbol{k})\, \hat{g}(\boldsymbol{k}) \\
&= \sum_{\boldsymbol{\tau} \in \{0,\ldots,\alpha\}^s} \int_{\mathbb{R}^s} \partial_{\boldsymbol{x}}^{\boldsymbol{\tau}} f(\boldsymbol{x})\, \partial_{\boldsymbol{x}}^{\boldsymbol{\tau}} g(\boldsymbol{x})\, \varphi_s(\boldsymbol{x})\,\mathrm{d}\boldsymbol{x}.
\end{aligned}$$
Thus the inner product of $\mathcal{H}_{s,\alpha}$ can also be written as
$$\langle f, g \rangle_{s,\alpha} = \sum_{\boldsymbol{\tau} \in \{0,\ldots,\alpha\}^s} \int_{\mathbb{R}^s} \partial_{\boldsymbol{x}}^{\boldsymbol{\tau}} f(\boldsymbol{x})\, \partial_{\boldsymbol{x}}^{\boldsymbol{\tau}} g(\boldsymbol{x})\, \varphi_s(\boldsymbol{x})\,\mathrm{d}\boldsymbol{x}.$$
In other words, for our special function $r_{s,\alpha}$ the corresponding Hermite space is a Sobolev-type space of functions on $\mathbb{R}^s$ with smoothness $\alpha$.

**Remark 1.** Hermite spaces have already been introduced in [8] with the stronger requirement of summability of the corresponding sequence. The authors there consider the sequence $\tilde{r}_{s,\alpha}(\boldsymbol{k}) = \boldsymbol{k}^{-\alpha}$, which is asymptotically the same as the choice of $r_{s,\alpha}$ in this paper. But due to the stronger (and unnecessary) requirement of summability in [8] there is the restriction $\alpha > 1$, which is relaxed to $\alpha \ge 1$ here.

Besides the case of polynomially decaying coefficients, Hermite spaces with *exponentially decaying coefficients* were also considered. Multivariate integration for such
Hermite spaces has been analyzed in [7]. It is also shown there that the elements of those
function spaces are analytic.

The results in [8] and [7] make heavy use of the fact that Hermite spaces are reproducing kernel Hilbert spaces with canonical kernel
$$K_{s,\alpha}(\boldsymbol{x}, \boldsymbol{y}) = \sum_{\boldsymbol{k} \in \mathbb{N}_0^s} r(\boldsymbol{k}) H_{\boldsymbol{k}}(\boldsymbol{x}) H_{\boldsymbol{k}}(\boldsymbol{y}) \quad \text{for all } \boldsymbol{x}, \boldsymbol{y} \in \mathbb{R}^s. \qquad (7)$$
The eigenfunctions of the reproducing kernel are the Hermite polynomials and the eigenvalues are precisely the numbers $r(\boldsymbol{k})$. It is a curious fact that we do not make direct use of this here.

**3** **Integration**

We are interested in numerical approximation of the values of integrals
$$I_s(f) = \int_{\mathbb{R}^s} f(\boldsymbol{x}) \varphi_s(\boldsymbol{x})\,\mathrm{d}\boldsymbol{x} \quad \text{for } f \in \mathcal{H}_{s,\alpha}.$$
Without loss of generality, see, e.g., [11, Section 4.2] or [14], we can restrict ourselves to approximating $I_s(f)$ by means of *linear algorithms* of the form
$$A_{N,s}(f) = \sum_{i=1}^{N} w_i f(\boldsymbol{x}_i) \quad \text{for } f \in \mathcal{H}_{s,\alpha} \qquad (8)$$
with integration nodes $\boldsymbol{x}_1, \ldots, \boldsymbol{x}_N \in \mathbb{R}^s$ and weights $w_1, \ldots, w_N \in \mathbb{R}$. An important subclass of linear algorithms are quasi-Monte Carlo algorithms, which are obtained by choosing the weights $w_i = 1/N$ for all $1 \le i \le N$.

For $f \in \mathcal{H}_{s,\alpha}$ let
$$\mathrm{err}(f) := I_s(f) - A_{N,s}(f).$$
The *worst-case error* of the algorithm $A_{N,s}$ is then defined as the worst performance of $A_{N,s}$ over the unit ball of $\mathcal{H}_{s,\alpha}$, i.e.,
$$e(A_{N,s}, \mathcal{H}_{s,\alpha}) = \sup_{\substack{f \in \mathcal{H}_{s,\alpha} \\ \|f\|_{s,\alpha} \le 1}} |\mathrm{err}(f)|. \qquad (9)$$
Moreover, we define the $N$-th minimal worst-case error,
$$e(N, \mathcal{H}_{s,\alpha}) = \inf_{A_{N,s}} e(A_{N,s}, \mathcal{H}_{s,\alpha}),$$
where the infimum is taken over all linear algorithms using $N$ function evaluations.

Numerical integration in the Hermite space has already been studied in [8]. There it has been shown that for every $N \in \mathbb{N}$ there exist points $\boldsymbol{x}_1, \ldots, \boldsymbol{x}_N \in \mathbb{R}^s$ such that the worst-case error of the quasi-Monte Carlo (QMC) algorithm $Q_{N,s}(f) = \frac{1}{N} \sum_{i=1}^{N} f(\boldsymbol{x}_i)$ satisfies
$$e(Q_{N,s}, \mathcal{H}_{s,\alpha}) \lesssim_{s,\alpha} \frac{1}{\sqrt{N}}.$$
This result, which is [8, Corollary 3.9], has been shown by means of an averaging argument. The convergence rate, however, is very weak and does not depend on the smoothness $\alpha$: even very large smoothness does not yield an improved rate. The aim of this paper is to improve this error estimate.

**3.1** **Lower bound on the worst-case error**

First we prove a lower bound on the integration error.

**Theorem 1.** *Let $s, \alpha \in \mathbb{N}$. Then for all $N \in \mathbb{N}$ the $N$-th minimal worst-case error for integration in the Hermite space $\mathcal{H}_{s,\alpha}$ is bounded from below by*
$$e(N, \mathcal{H}_{s,\alpha}) \gtrsim_{s,\alpha} \frac{1}{N^{\alpha}}.$$

*Proof.* It is sufficient to prove the result for dimension $s = 1$, since higher-dimensional integration is at least as hard as integration in one dimension. Let $\mathcal{P} = \{x_1, x_2, \ldots, x_N\}$ denote the set of quadrature points used in the algorithm $A_N = A_{N,1}$. (In the following we drop the index for the dimension to simplify the notation.) For $m \in \mathbb{N}$ we define $\mathbb{D}_m = \{1, 2, \ldots, 2^m\}$. For $i \in \mathbb{D}_m$ let
$$h_{i,m}(x) = \begin{cases} (2^m x - i + 1)^{\alpha} (i - 2^m x)^{\alpha} & \text{for } \dfrac{i-1}{2^m} \le x < \dfrac{i}{2^m}, \\[4pt] 0 & \text{otherwise.} \end{cases}$$
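These fooling functions are simple polynomial bumps supported on dyadic intervals; the following sketch (function name ours) evaluates $h_{i,m}$ via the local coordinate $u = 2^m x - i + 1 \in [0,1)$ on the support:

```python
def h_bump(x, i, m, alpha):
    """Bump h_{i,m}(x) = (2^m x - i + 1)^alpha * (i - 2^m x)^alpha on the
    dyadic interval [(i-1)/2^m, i/2^m), and 0 elsewhere."""
    u = (2 ** m) * x - i + 1  # local coordinate on the support
    if 0 <= u < 1:
        return u ** alpha * (1 - u) ** alpha
    return 0.0
```

Each bump attains its maximum $4^{-\alpha}$ at the midpoint of its interval and vanishes at every quadrature point outside its open support.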

Let $\mathrm{supp}^{\circ}(h_{i,m})$ denote the open support of the function $h_{i,m}$ and note that
$$\mathrm{supp}^{\circ}(h_{i,m}) \subseteq \left[\frac{i-1}{2^m}, \frac{i}{2^m}\right] \subseteq [0,1]$$
for all $i \in \mathbb{D}_m$, $m \in \mathbb{N}_0$. Note further that for fixed $m$ the functions $h_{i,m}$, $i \in \mathbb{D}_m$, are orthogonal in $L^2(\mathbb{R}, \varphi)$.

Let $t \in \mathbb{N}$ be such that $2^{t-1} \le 2N < 2^t$. Define
$$h(x) = \sum_{\substack{i \in \mathbb{D}_t \\ \mathcal{P} \cap \mathrm{supp}^{\circ}(h_{i,t}) = \emptyset}} h_{i,t}(x).$$
By definition we have $h(x_i) = 0$ for all $i \in \{1, 2, \ldots, N\}$ and hence also $A_N(h) = 0$.

Moreover,
$$\begin{aligned}
\int_{\mathbb{R}} h_{i,m}(x) \varphi(x)\,\mathrm{d}x &= \int_{(i-1)/2^m}^{i/2^m} h_{i,m}(x) \varphi(x)\,\mathrm{d}x = \int_{(i-1)/2^m}^{i/2^m} (2^m x - i + 1)^{\alpha} (i - 2^m x)^{\alpha} \varphi(x)\,\mathrm{d}x \\
&= \frac{1}{2^m} \int_0^1 z^{\alpha} (1-z)^{\alpha}\, \varphi\!\left(\frac{z+i-1}{2^m}\right) \mathrm{d}z \ \ge\ \frac{1}{2^m} \int_0^1 z^{\alpha} (1-z)^{\alpha}\, \varphi(1)\,\mathrm{d}z = \frac{1}{2^m}\, \frac{(\alpha!)^2}{(2\alpha+1)!}\, \frac{1}{\sqrt{2\pi \mathrm{e}}},
\end{aligned}$$
where we used the value $B(\alpha+1, \alpha+1) = \frac{(\alpha!)^2}{(2\alpha+1)!}$ of the beta function. Thus we have
$$\begin{aligned}
\int_{\mathbb{R}} h(x) \varphi(x)\,\mathrm{d}x &= \sum_{\substack{i \in \mathbb{D}_t \\ \mathcal{P} \cap \mathrm{supp}^{\circ}(h_{i,t}) = \emptyset}} \int_{\mathbb{R}} h_{i,t}(x) \varphi(x)\,\mathrm{d}x \ \ge \sum_{\substack{i \in \mathbb{D}_t \\ \mathcal{P} \cap \mathrm{supp}^{\circ}(h_{i,t}) = \emptyset}} \frac{1}{2^t}\, \frac{(\alpha!)^2}{(2\alpha+1)!}\, \frac{1}{\sqrt{2\pi \mathrm{e}}} \\
&\ge \frac{2^t - N}{2^t}\, \frac{(\alpha!)^2}{(2\alpha+1)!}\, \frac{1}{\sqrt{2\pi \mathrm{e}}} \ >\ \frac{1}{2}\, \frac{(\alpha!)^2}{(2\alpha+1)!}\, \frac{1}{\sqrt{2\pi \mathrm{e}}}, \qquad (10)
\end{aligned}$$
where we also used that $\mathcal{P} \cap \mathrm{supp}^{\circ}(h_{i,t})$ is empty for at least $2^t - N$ many indices $i \in \mathbb{D}_t$. It remains to estimate the norm of the function $h$ from above. Using orthogonality we have that
$$\|h\|_{\alpha}^2 = \sum_{\tau=0}^{\alpha} \int_{\mathbb{R}} \left(h^{(\tau)}(x)\right)^2 \varphi(x)\,\mathrm{d}x = \sum_{\substack{i \in \mathbb{D}_t \\ \mathcal{P} \cap \mathrm{supp}^{\circ}(h_{i,t}) = \emptyset}} \sum_{\tau=0}^{\alpha} \int_{\mathbb{R}} \left(h_{i,t}^{(\tau)}(x)\right)^2 \varphi(x)\,\mathrm{d}x. \qquad (11)$$

In order to analyze the integrals in (11) we define $g : \mathbb{R} \to \mathbb{R}$ as
$$g(x) = \begin{cases} x^{\alpha} (1 - x)^{\alpha} & \text{for } x \in (0,1), \\ 0 & \text{otherwise.} \end{cases}$$

Then
$$\begin{aligned}
\int_{\mathbb{R}} \left(h_{i,t}^{(\tau)}(x)\right)^2 \varphi(x)\,\mathrm{d}x &= \int_{(i-1)/2^t}^{i/2^t} \left( \frac{\mathrm{d}^{\tau}}{\mathrm{d}x^{\tau}}\, g(2^t x - i + 1) \right)^{\!2} \varphi(x)\,\mathrm{d}x \\
&\le \frac{2^{2\tau t}}{\sqrt{2\pi}} \int_{(i-1)/2^t}^{i/2^t} \left( g^{(\tau)}(2^t x - i + 1) \right)^2 \mathrm{d}x = \frac{2^{2\tau t}}{2^t \sqrt{2\pi}} \int_0^1 \left( g^{(\tau)}(z) \right)^2 \mathrm{d}z
\end{aligned}$$
(note that $\int_0^1 (g^{(\tau)}(z))^2\,\mathrm{d}z < \infty$ for all $\tau = 0, 1, \ldots, \alpha$), and further

$$\sum_{\tau=0}^{\alpha} \int_{\mathbb{R}} \left(h_{i,t}^{(\tau)}(x)\right)^2 \varphi(x)\,\mathrm{d}x \le c_{\alpha}\, 2^{2\alpha t - t},$$
where $c_{\alpha} = \sqrt{2/\pi}\, \max_{\tau=0,\ldots,\alpha} \int_0^1 \left( g^{(\tau)}(z) \right)^2 \mathrm{d}z < \infty$ depends only on $\alpha$. Thus we have
$$\|h\|_{\alpha}^2 \lesssim_{\alpha} \sum_{\substack{i \in \mathbb{D}_t \\ \mathcal{P} \cap \mathrm{supp}^{\circ}(h_{i,t}) = \emptyset}} 2^{2\alpha t - t} \le 2^{2\alpha t}. \qquad (12)$$

Combining (10), the fact that $A_N(h) = 0$, and (12) we finally obtain
$$e(A_N, \mathcal{H}_{\alpha}) \ge \frac{|I(h) - A_N(h)|}{\|h\|_{\alpha}} \gtrsim_{\alpha} \frac{1}{2^{\alpha t}} \gtrsim_{\alpha} \frac{1}{N^{\alpha}}.$$

**Remark 2.** We conjecture that the lower estimate in Theorem 1 can be improved to $N^{-\alpha} (\log N)^{(s-1)/2}$.

**3.2** **A relation to integration in the ANOVA space**

**Definition 3.** The *ANOVA space of smoothness $\alpha$* defined over $[0,1)^s$ (also known as the unanchored Sobolev space) is given by
$$\mathcal{H}_{s,\alpha}^{\mathrm{sob}}([0,1)^s) := \bigotimes_{j=1}^{s} \left\{ g : [0,1) \to \mathbb{R} \,:\, g^{(r)} \text{ absolutely continuous for } r \in \{0, \ldots, \alpha-1\},\ g^{(\alpha)} \in L^2[0,1) \right\} \qquad (13)$$
with inner product
$$\langle g, h \rangle_{\mathrm{sob},s,\alpha} := \sum_{u \subseteq \{1,\ldots,s\}} \sum_{\boldsymbol{\tau}_u \in \{0,\ldots,\alpha-1\}^{|u|}} \int_{[0,1]^{s-|u|}} \left( \int_{[0,1]^{|u|}} \partial_{\boldsymbol{z}}^{(\boldsymbol{\tau}_u, \boldsymbol{\alpha}_{-u})} g(\boldsymbol{z})\,\mathrm{d}\boldsymbol{z}_u \right) \left( \int_{[0,1]^{|u|}} \partial_{\boldsymbol{z}}^{(\boldsymbol{\tau}_u, \boldsymbol{\alpha}_{-u})} h(\boldsymbol{z})\,\mathrm{d}\boldsymbol{z}_u \right) \mathrm{d}\boldsymbol{z}_{-u},$$
where $\boldsymbol{z}_u$ denotes the $|u|$-dimensional vector with components $z_j$ for $j \in u$ and $\boldsymbol{z}_{-u}$ denotes the $(s - |u|)$-dimensional vector with the components $z_j$ for $j \notin u$. Moreover, $(\boldsymbol{\tau}_u, \boldsymbol{\alpha}_{-u})$ denotes the $s$-dimensional vector whose $j$-th component is $\alpha$ for $j \notin u$ and $\tau_j$ for $j \in u$, where $\boldsymbol{\tau}_u = (\tau_j)_{j \in u}$. The norm is $\|\cdot\|_{\mathrm{sob},s,\alpha} = \sqrt{\langle \cdot, \cdot \rangle_{\mathrm{sob},s,\alpha}}$.

For short we write $\mathcal{H}_{s,\alpha}^{\mathrm{sob}} := \mathcal{H}_{s,\alpha}^{\mathrm{sob}}([0,1)^s)$. Note that $\mathcal{H}_{s,\alpha}^{\mathrm{sob}}$ consists of functions with domain $[0,1)^s$ instead of $\mathbb{R}^s$.

**Remark 3.** The ANOVA space of smoothness $\alpha$ is also a reproducing kernel Hilbert space, with kernel function
$$K_{s,\alpha}^{\mathrm{sob}}(\boldsymbol{x}, \boldsymbol{y}) = \prod_{j=1}^{s} K_{\alpha}^{\mathrm{sob}}(x_j, y_j)$$
for $\boldsymbol{x} = (x_1, x_2, \ldots, x_s) \in [0,1)^s$ and similarly for $\boldsymbol{y}$, where the one-dimensional kernel is given by
$$K_{\alpha}^{\mathrm{sob}}(x, y) = \sum_{r=0}^{\alpha} \frac{B_r(x) B_r(y)}{(r!)^2} + (-1)^{\alpha+1} \frac{B_{2\alpha}(|x - y|)}{(2\alpha)!}$$
for $x, y \in [0,1)$, where $B_r$ denotes the Bernoulli polynomial of degree $r$.
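For $\alpha = 1$ the one-dimensional kernel above reduces to the familiar unanchored Sobolev kernel built from the first three Bernoulli polynomials $B_0(x) = 1$, $B_1(x) = x - \tfrac12$, $B_2(x) = x^2 - x + \tfrac16$. A sketch restricted to this case (function names are ours):

```python
def bernoulli_poly(r, x):
    """Bernoulli polynomials B_0, B_1, B_2 -- all that alpha = 1 requires."""
    return [1.0, x - 0.5, x * x - x + 1.0 / 6.0][r]

def kernel_sob_alpha1(x, y):
    """K_1^sob(x, y) = sum_{r=0}^{1} B_r(x)B_r(y)/(r!)^2 + B_2(|x-y|)/2!,
    since (-1)^{alpha+1} = +1 for alpha = 1."""
    return (bernoulli_poly(0, x) * bernoulli_poly(0, y)
            + bernoulli_poly(1, x) * bernoulli_poly(1, y)
            + bernoulli_poly(2, abs(x - y)) / 2.0)
```

The $s$-dimensional kernel is then simply the product $\prod_{j=1}^{s} K_1^{\mathrm{sob}}(x_j, y_j)$.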

The worst-case absolute integration error of an algorithm $A_{N,s}$ as in (8) is
$$e(A_{N,s}, \mathcal{H}_{s,\alpha}^{\mathrm{sob}}) = \sup_{\substack{g \in \mathcal{H}_{s,\alpha}^{\mathrm{sob}} \\ \|g\|_{\mathrm{sob},s,\alpha} \le 1}} \left| \int_{[0,1]^s} g(\boldsymbol{x})\,\mathrm{d}\boldsymbol{x} - A_{N,s}(g) \right|.$$
Now we relate the integration problem in the Hermite space $\mathcal{H}_{s,\alpha}$ to the integration problem in $\mathcal{H}_{s,\alpha}^{\mathrm{sob}}$.

_{s,α}Let*Q**N,s* be a QMC-rule for integration in the ANOVA space H^{sob}* _{s,α}* which is based on
a point set {z

_{1}

*,*

**z**_{2}

*, . . . ,*

**z***} in [0,1)*

_{N}*, i.e.,*

^{s}*Q** _{N,s}*(g) = 1

*N*

*N*

X

*n=1*

*g(z** _{i}*) for

*g*∈ H

^{sob}

_{s,α}*.*

For any $\boldsymbol{b} = (b, \ldots, b) \in (0,\infty)^s$ we denote by $\mathcal{B}_b$ the mapping from $[0,1]^s$ to $[-\boldsymbol{b}, \boldsymbol{b}]$ given by
$$\mathcal{B}_b(\boldsymbol{z}) = 2b\boldsymbol{z} - \boldsymbol{b}. \qquad (14)$$
Note that the mapping $\mathcal{B}_b$ is just a scaling and translation of the $s$-dimensional unit cube which is fully determined by the parameter $b$. The volume of the $s$-dimensional interval $[-\boldsymbol{b}, \boldsymbol{b}]$ is then $(2b)^s$.

For integration in the Hermite space $\mathcal{H}_{s,\alpha}$ we consider integration rules of the following form: let $\{\boldsymbol{z}_1, \boldsymbol{z}_2, \ldots, \boldsymbol{z}_N\} \subseteq [0,1)^s$ be the point set used in $Q_{N,s}$. Then we use the integration rule
$$A_{N,s}(f) = \frac{(2b)^s}{N} \sum_{i=1}^{N} f(\mathcal{B}_b(\boldsymbol{z}_i))\, \varphi_s(\mathcal{B}_b(\boldsymbol{z}_i)) \quad \text{for } f \in \mathcal{H}_{s,\alpha}, \qquad (15)$$
with $b = 2\sqrt{\alpha \log N}$.
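Rule (15) is straightforward to implement; the sketch below does so for $s = 1$ and, purely for illustration, feeds it a midpoint grid in place of the higher order digital net that the paper actually uses (all names are ours):

```python
import math

def phi1(x):
    """Standard normal density in dimension one."""
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def rule_15(f, points, alpha):
    """Sketch of rule (15) for s = 1: map points z_i in [0,1) to [-b, b]
    with b = 2*sqrt(alpha*log N) via B_b(z) = 2*b*z - b, weight by the
    Gaussian density, and rescale by (2b)^s / N.  `points` stands in for
    the QMC point set {z_1, ..., z_N}."""
    N = len(points)
    b = 2.0 * math.sqrt(alpha * math.log(N))
    total = sum(f(2.0 * b * z - b) * phi1(2.0 * b * z - b) for z in points)
    return (2.0 * b / N) * total  # (2b)^s / N factor with s = 1

# Illustration: a plain midpoint grid as a stand-in point set.
N = 4096
grid = [(i + 0.5) / N for i in range(N)]
```

With this grid the rule reproduces, e.g., $\int_{\mathbb{R}} \varphi(x)\,\mathrm{d}x = 1$ and $\int_{\mathbb{R}} x^2 \varphi(x)\,\mathrm{d}x = 1$ to high accuracy, the residual error coming from the truncation to $[-b, b]$ and the grid spacing.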

**Theorem 2.** *Let $\alpha \in \mathbb{N}$ and let $A_{N,s}$ be the quadrature rule defined in* (15)*. Then for the worst-case error of $A_{N,s}$ in the Hermite space $\mathcal{H}_{s,\alpha}$ we have*
$$e(A_{N,s}, \mathcal{H}_{s,\alpha}) \lesssim_{s,\alpha} (\log N)^{s \frac{2\alpha+1}{4}}\, e(Q_{N,s}, \mathcal{H}_{s,\alpha}^{\mathrm{sob}}) + \frac{1}{N^{\alpha}}.$$

For the proof of Theorem 2 we need some tools that will be provided in the next subsection. The proof will then be given in Subsection 3.2.2.

In Subsection 3.3 we provide a construction of point sets with low worst-case error $e(Q_{N,s}, \mathcal{H}_{s,\alpha}^{\mathrm{sob}})$.

**3.2.1** **Auxiliary results**

**Lemma 2.** *Let $f \in \mathcal{H}_{s,\alpha}$ with $\alpha \in \mathbb{N}$. Then $|f(\boldsymbol{x}) \sqrt{\varphi_s(\boldsymbol{x})}| \lesssim_{s,\alpha} \|f\|_{s,\alpha}$ for all $\boldsymbol{x} \in \mathbb{R}^s$.*

*Proof.* For any $f \in \mathcal{H}_{s,\alpha}$ we know that $f(\boldsymbol{x}) = \sum_{\boldsymbol{k} \in \mathbb{N}_0^s} \hat{f}(\boldsymbol{k}) H_{\boldsymbol{k}}(\boldsymbol{x})$ for all $\boldsymbol{x} \in \mathbb{R}^s$. Using the Cauchy-Schwarz inequality and Lemma 1,
$$\begin{aligned}
|f(\boldsymbol{x}) \sqrt{\varphi_s(\boldsymbol{x})}| &\le \sum_{\boldsymbol{k} \in \mathbb{N}_0^s} |\hat{f}(\boldsymbol{k})|\, |H_{\boldsymbol{k}}(\boldsymbol{x}) \sqrt{\varphi_s(\boldsymbol{x})}|\, r_{s,\alpha}(\boldsymbol{k})^{-1/2}\, r_{s,\alpha}(\boldsymbol{k})^{1/2} \\
&\le \left( \sum_{\boldsymbol{k} \in \mathbb{N}_0^s} \frac{1}{r_{s,\alpha}(\boldsymbol{k})} |\hat{f}(\boldsymbol{k})|^2 \right)^{\!1/2} \left( \sum_{\boldsymbol{k} \in \mathbb{N}_0^s} |H_{\boldsymbol{k}}(\boldsymbol{x}) \sqrt{\varphi_s(\boldsymbol{x})}|^2\, r_{s,\alpha}(\boldsymbol{k}) \right)^{\!1/2} \\
&\le \|f\|_{s,\alpha}\, \pi^{s/2} \left( 1 + \sum_{k=1}^{\infty} \frac{1}{k^{1/6}}\, r_{\alpha}(k) \right)^{\!s/2}.
\end{aligned}$$

We have
$$1 + \sum_{k=1}^{\infty} \frac{1}{k^{1/6}}\, r_{\alpha}(k) \le 1 + \sum_{k=1}^{\alpha} \frac{1}{k^{1/6}} \frac{1}{k!} + \sum_{k=\alpha+1}^{\infty} \frac{1}{k^{1/6}} \frac{(k-\alpha)!}{k!} \le 1 + \mathrm{e} - 1 + \sum_{k=\alpha+1}^{\infty} \frac{1}{k^{7/6}} \le \mathrm{e} + \int_{\alpha}^{\infty} \frac{\mathrm{d}t}{t^{7/6}} = \mathrm{e} + \frac{6}{\alpha^{1/6}},$$
and hence the desired result follows.

**Lemma 3.** *Let $f \in \mathcal{H}_{s,\alpha}$. For any $\boldsymbol{\tau} \in \{0, \ldots, \alpha\}^s$ we have*
$$\partial_{\boldsymbol{x}}^{\boldsymbol{\tau}} (f \cdot \varphi_s)(\boldsymbol{x}) = \varphi_s(\boldsymbol{x}) \sum_{\boldsymbol{j} \le \boldsymbol{\tau}} (-1)^{|\boldsymbol{\tau}-\boldsymbol{j}|} \binom{\boldsymbol{\tau}}{\boldsymbol{\tau}-\boldsymbol{j}} \sqrt{(\boldsymbol{\tau}-\boldsymbol{j})!}\; H_{\boldsymbol{\tau}-\boldsymbol{j}}(\boldsymbol{x})\, \partial_{\boldsymbol{x}}^{\boldsymbol{j}} f(\boldsymbol{x}). \qquad (16)$$

*Proof.* We show this by induction on $|\boldsymbol{\tau}|$. It is obvious that (16) holds for $\boldsymbol{\tau} = \boldsymbol{0}$. Now we denote by $\boldsymbol{e}_i$ the multi-index whose $i$-th entry is 1 with the remaining entries set to 0. Then for any $i = 1, \ldots, s$ we have for $\boldsymbol{\tau} = \boldsymbol{e}_i$ that
$$\partial_{\boldsymbol{x}}^{\boldsymbol{\tau}} (f \cdot \varphi_s)(\boldsymbol{x}) = \partial_{x_i} (f(\boldsymbol{x}) \varphi_s(\boldsymbol{x})) = \partial_{x_i} f(\boldsymbol{x})\, \varphi_s(\boldsymbol{x}) - x_i f(\boldsymbol{x})\, \varphi_s(\boldsymbol{x}) = \left( H_{\boldsymbol{0}}(\boldsymbol{x})\, \partial_{\boldsymbol{x}}^{\boldsymbol{e}_i} f(\boldsymbol{x}) - H_{\boldsymbol{e}_i}(\boldsymbol{x}) f(\boldsymbol{x}) \right) \varphi_s(\boldsymbol{x}).$$

Next we assume that (16) holds for some $\boldsymbol{\tau}$. Then for any $i = 1, \ldots, s$ we get that
$$\begin{aligned}
\partial_{\boldsymbol{x}}^{\boldsymbol{\tau}+\boldsymbol{e}_i}(f(\boldsymbol{x}) \varphi_s(\boldsymbol{x})) &= \partial_{x_i} \sum_{\boldsymbol{j} \le \boldsymbol{\tau}} (-1)^{|\boldsymbol{\tau}-\boldsymbol{j}|} \binom{\boldsymbol{\tau}}{\boldsymbol{\tau}-\boldsymbol{j}} \sqrt{(\boldsymbol{\tau}-\boldsymbol{j})!}\; H_{\boldsymbol{\tau}-\boldsymbol{j}}(\boldsymbol{x})\, \partial_{\boldsymbol{x}}^{\boldsymbol{j}} f(\boldsymbol{x})\, \varphi_s(\boldsymbol{x}) \\
&= \sum_{\boldsymbol{j} \le \boldsymbol{\tau}} (-1)^{|\boldsymbol{\tau}-\boldsymbol{j}|} \binom{\boldsymbol{\tau}}{\boldsymbol{\tau}-\boldsymbol{j}} \sqrt{(\boldsymbol{\tau}-\boldsymbol{j})!}\; \partial_{x_i}\!\left( H_{\boldsymbol{\tau}-\boldsymbol{j}}(\boldsymbol{x})\, \partial_{\boldsymbol{x}}^{\boldsymbol{j}} f(\boldsymbol{x})\, \varphi_s(\boldsymbol{x}) \right) \\
&= \sum_{\boldsymbol{j} \le \boldsymbol{\tau}} (-1)^{|\boldsymbol{\tau}-\boldsymbol{j}|} \binom{\boldsymbol{\tau}}{\boldsymbol{\tau}-\boldsymbol{j}} \sqrt{(\boldsymbol{\tau}-\boldsymbol{j})!}\; \left( \partial_{\boldsymbol{x}}^{\boldsymbol{e}_i} H_{\boldsymbol{\tau}-\boldsymbol{j}}(\boldsymbol{x}) - x_i H_{\boldsymbol{\tau}-\boldsymbol{j}}(\boldsymbol{x}) \right) \partial_{\boldsymbol{x}}^{\boldsymbol{j}} f(\boldsymbol{x})\, \varphi_s(\boldsymbol{x}) \\
&\quad + \sum_{\boldsymbol{j} \le \boldsymbol{\tau}} (-1)^{|\boldsymbol{\tau}-\boldsymbol{j}|} \binom{\boldsymbol{\tau}}{\boldsymbol{\tau}-\boldsymbol{j}} \sqrt{(\boldsymbol{\tau}-\boldsymbol{j})!}\; H_{\boldsymbol{\tau}-\boldsymbol{j}}(\boldsymbol{x})\, \partial_{\boldsymbol{x}}^{\boldsymbol{j}+\boldsymbol{e}_i} f(\boldsymbol{x})\, \varphi_s(\boldsymbol{x}) \\
&= \sum_{\boldsymbol{j} \le \boldsymbol{\tau}} (-1)^{|\boldsymbol{\tau}-\boldsymbol{j}+\boldsymbol{e}_i|} \binom{\boldsymbol{\tau}}{\boldsymbol{\tau}-\boldsymbol{j}} \sqrt{(\boldsymbol{\tau}-\boldsymbol{j}+\boldsymbol{e}_i)!}\; H_{\boldsymbol{\tau}-\boldsymbol{j}+\boldsymbol{e}_i}(\boldsymbol{x})\, \partial_{\boldsymbol{x}}^{\boldsymbol{j}} f(\boldsymbol{x})\, \varphi_s(\boldsymbol{x}) \\
&\quad + \sum_{\boldsymbol{0} < \boldsymbol{j} \le \boldsymbol{\tau}+\boldsymbol{e}_i} (-1)^{|\boldsymbol{\tau}-\boldsymbol{j}+\boldsymbol{e}_i|} \binom{\boldsymbol{\tau}}{\boldsymbol{\tau}-\boldsymbol{j}+\boldsymbol{e}_i} \sqrt{(\boldsymbol{\tau}-\boldsymbol{j}+\boldsymbol{e}_i)!}\; H_{\boldsymbol{\tau}-\boldsymbol{j}+\boldsymbol{e}_i}(\boldsymbol{x})\, \partial_{\boldsymbol{x}}^{\boldsymbol{j}} f(\boldsymbol{x})\, \varphi_s(\boldsymbol{x}).
\end{aligned}$$
If we use that for all $\boldsymbol{j} \le \boldsymbol{\tau}$ and $i = 1, \ldots, s$,
$$\binom{\boldsymbol{\tau}}{\boldsymbol{\tau}-\boldsymbol{j}} + \binom{\boldsymbol{\tau}}{\boldsymbol{\tau}-\boldsymbol{j}+\boldsymbol{e}_i} = \binom{\boldsymbol{\tau}+\boldsymbol{e}_i}{\boldsymbol{\tau}+\boldsymbol{e}_i-\boldsymbol{j}},$$
we end up with
$$\partial_{\boldsymbol{x}}^{\boldsymbol{\tau}+\boldsymbol{e}_i}(f \cdot \varphi_s)(\boldsymbol{x}) = \varphi_s(\boldsymbol{x}) \sum_{\boldsymbol{j} \le \boldsymbol{\tau}+\boldsymbol{e}_i} (-1)^{|\boldsymbol{\tau}-\boldsymbol{j}+\boldsymbol{e}_i|} \binom{\boldsymbol{\tau}+\boldsymbol{e}_i}{\boldsymbol{\tau}+\boldsymbol{e}_i-\boldsymbol{j}} \sqrt{(\boldsymbol{\tau}+\boldsymbol{e}_i-\boldsymbol{j})!}\; H_{\boldsymbol{\tau}+\boldsymbol{e}_i-\boldsymbol{j}}(\boldsymbol{x})\, \partial_{\boldsymbol{x}}^{\boldsymbol{j}} f(\boldsymbol{x}).$$

**3.2.2** **Proof of Theorem** **2**

We will now prove the upper bound given in Theorem 2. To this end let $f \in \mathcal{H}_{s,\alpha}$. Then the absolute integration error can be estimated by using the triangle inequality, i.e.,
$$\begin{aligned}
|\mathrm{err}(f)| &= \left| \int_{\mathbb{R}^s} f(\boldsymbol{x}) \varphi_s(\boldsymbol{x})\,\mathrm{d}\boldsymbol{x} - \frac{(2b)^s}{N} \sum_{i=1}^{N} f(\boldsymbol{x}_i) \varphi_s(\boldsymbol{x}_i) \right| \\
&\le \left| \int_{\mathbb{R}^s \setminus [-\boldsymbol{b},\boldsymbol{b}]} f(\boldsymbol{x}) \varphi_s(\boldsymbol{x})\,\mathrm{d}\boldsymbol{x} \right| + \left| \int_{[-\boldsymbol{b},\boldsymbol{b}]} f(\boldsymbol{x}) \varphi_s(\boldsymbol{x})\,\mathrm{d}\boldsymbol{x} - \frac{(2b)^s}{N} \sum_{i=1}^{N} f(\boldsymbol{x}_i) \varphi_s(\boldsymbol{x}_i) \right| \\
&= \mathrm{err}_1(f) + \mathrm{err}_2(f), \qquad (17)
\end{aligned}$$

where
$$\mathrm{err}_1(f) := \left| \int_{\mathbb{R}^s \setminus [-\boldsymbol{b},\boldsymbol{b}]} f(\boldsymbol{x}) \varphi_s(\boldsymbol{x})\,\mathrm{d}\boldsymbol{x} \right|$$
describes the error of approximating the integral outside of $[-\boldsymbol{b},\boldsymbol{b}]$ by zero, and
$$\mathrm{err}_2(f) := \left| \int_{[-\boldsymbol{b},\boldsymbol{b}]} f(\boldsymbol{x}) \varphi_s(\boldsymbol{x})\,\mathrm{d}\boldsymbol{x} - \frac{(2b)^s}{N} \sum_{i=1}^{N} f(\boldsymbol{x}_i) \varphi_s(\boldsymbol{x}_i) \right|$$
is the integration error which results from applying the QMC rule to the function $\boldsymbol{x} \mapsto f(\boldsymbol{x}) \varphi_s(\boldsymbol{x})$ restricted to the interval $[-\boldsymbol{b},\boldsymbol{b}]$.

**Estimate of** $\mathrm{err}_1(f)$: With Lemma 2 we get
$$\begin{aligned}
\mathrm{err}_1(f) &\le \int_{\mathbb{R}^s \setminus [-\boldsymbol{b},\boldsymbol{b}]} |f(\boldsymbol{x}) \varphi_s(\boldsymbol{x})|\,\mathrm{d}\boldsymbol{x} = \int_{\mathbb{R}^s \setminus [-\boldsymbol{b},\boldsymbol{b}]} |f(\boldsymbol{x}) \sqrt{\varphi_s(\boldsymbol{x})}|\, \sqrt{\varphi_s(\boldsymbol{x})}\,\mathrm{d}\boldsymbol{x} \\
&\lesssim_{s,\alpha} \|f\|_{s,\alpha} \int_{\mathbb{R}^s \setminus [-\boldsymbol{b},\boldsymbol{b}]} \frac{\exp(-\boldsymbol{x} \cdot \boldsymbol{x}/4)}{(2\pi)^{s/4}}\,\mathrm{d}\boldsymbol{x} \lesssim_{s,\alpha} \|f\|_{s,\alpha} \int_{[0,\infty)^s \setminus [0,\boldsymbol{b}]} \frac{\exp(-\boldsymbol{x} \cdot \boldsymbol{x}/4)}{\pi^{s/2}}\,\mathrm{d}\boldsymbol{x}.
\end{aligned}$$
Furthermore we have that
$$\begin{aligned}
\int_{[0,\infty)^s \setminus [0,\boldsymbol{b}]} \frac{\exp(-\boldsymbol{x} \cdot \boldsymbol{x}/4)}{\pi^{s/2}}\,\mathrm{d}\boldsymbol{x} &= \int_{[0,\infty)^s} \frac{\exp(-\boldsymbol{x} \cdot \boldsymbol{x}/4)}{\pi^{s/2}}\,\mathrm{d}\boldsymbol{x} - \int_{[0,\boldsymbol{b}]} \frac{\exp(-\boldsymbol{x} \cdot \boldsymbol{x}/4)}{\pi^{s/2}}\,\mathrm{d}\boldsymbol{x} \\
&= \left( \frac{1}{\sqrt{\pi}} \int_0^{\infty} \mathrm{e}^{-x^2/4}\,\mathrm{d}x \right)^{\!s} - \left( \frac{1}{\sqrt{\pi}} \int_0^{b} \mathrm{e}^{-x^2/4}\,\mathrm{d}x \right)^{\!s} \\
&= 1 - \left( \frac{1}{\sqrt{\pi}} \int_0^{b} \mathrm{e}^{-x^2/4}\,\mathrm{d}x \right)^{\!s} \le 1 - \left( 1 - \mathrm{e}^{-b^2/4} \right)^{s} \le s\, \mathrm{e}^{-b^2/4} = \frac{s}{N^{\alpha}},
\end{aligned}$$
where we used that $b = 2\sqrt{\alpha \log N}$. This shows that
$$\mathrm{err}_1(f) \lesssim_{s,\alpha} \|f\|_{s,\alpha}\, \frac{1}{N^{\alpha}}. \qquad (18)$$

**Estimate of** $\mathrm{err}_2(f)$: To estimate $\mathrm{err}_2(f)$ we will derive an upper bound which includes the worst-case error of integration in the ANOVA space $\mathcal{H}_{s,\alpha}^{\mathrm{sob}}$. To this end we first transform the problem from $[-\boldsymbol{b},\boldsymbol{b}]$ to $[0,1]^s$, i.e.,
$$\begin{aligned}
\mathrm{err}_2(f) &= \left| \int_{[-\boldsymbol{b},\boldsymbol{b}]} f(\boldsymbol{x}) \varphi_s(\boldsymbol{x})\,\mathrm{d}\boldsymbol{x} - \frac{(2b)^s}{N} \sum_{i=1}^{N} f(\boldsymbol{x}_i) \varphi_s(\boldsymbol{x}_i) \right| \\
&= (2b)^s \left| \int_{[0,1]^s} f(\mathcal{B}_b(\boldsymbol{z})) \varphi_s(\mathcal{B}_b(\boldsymbol{z}))\,\mathrm{d}\boldsymbol{z} - \frac{1}{N} \sum_{i=1}^{N} f(\mathcal{B}_b(\boldsymbol{z}_i)) \varphi_s(\mathcal{B}_b(\boldsymbol{z}_i)) \right|. \qquad (19)
\end{aligned}$$

Now we need the following lemma:

**Lemma 4.** *Let $f \in \mathcal{H}_{s,\alpha}$ and $b > 0$. Then the function $g : [0,1)^s \to \mathbb{R}$ given by $g = (f \cdot \varphi_s) \circ \mathcal{B}_b$, with $\mathcal{B}_b$ as in* (14)*, belongs to $\mathcal{H}_{s,\alpha}^{\mathrm{sob}}$ and furthermore,*
$$\|g\|_{\mathrm{sob},s,\alpha} \lesssim_{s,\alpha} b^{s(\alpha - 1/2)}\, \|f\|_{s,\alpha}. \qquad (20)$$

*Proof.* Let $f \in \mathcal{H}_{s,\alpha}$. Using the Cauchy-Schwarz inequality we get
$$\begin{aligned}
\|g\|_{\mathrm{sob},s,\alpha}^2 &= \sum_{u \subseteq \{1,\ldots,s\}} \sum_{\boldsymbol{\tau}_u \in \{0,\ldots,\alpha-1\}^{|u|}} \int_{[0,1]^{s-|u|}} \left( \int_{[0,1]^{|u|}} \partial_{\boldsymbol{z}}^{(\boldsymbol{\tau}_u, \boldsymbol{\alpha}_{-u})} g(\boldsymbol{z})\,\mathrm{d}\boldsymbol{z}_u \right)^{\!2} \mathrm{d}\boldsymbol{z}_{-u} \\
&\le \sum_{u \subseteq \{1,\ldots,s\}} \sum_{\boldsymbol{\tau}_u \in \{0,\ldots,\alpha-1\}^{|u|}} \int_{[0,1]^s} \left( \partial_{\boldsymbol{z}}^{(\boldsymbol{\tau}_u, \boldsymbol{\alpha}_{-u})} g(\boldsymbol{z}) \right)^2 \mathrm{d}\boldsymbol{z} = \sum_{\boldsymbol{\tau} \in \{0,\ldots,\alpha\}^s} \int_{[0,1]^s} \left( \partial_{\boldsymbol{z}}^{\boldsymbol{\tau}} g(\boldsymbol{z}) \right)^2 \mathrm{d}\boldsymbol{z}.
\end{aligned}$$
Since $g = (f \cdot \varphi_s) \circ \mathcal{B}_b$, we obtain
$$\|g\|_{\mathrm{sob},s,\alpha}^2 \le \sum_{\boldsymbol{\tau} \in \{0,\ldots,\alpha\}^s} \int_{[0,1]^s} \left( (2b)^{|\boldsymbol{\tau}|}\, \partial_{\boldsymbol{x}}^{\boldsymbol{\tau}} (f \cdot \varphi_s)(\mathcal{B}_b(\boldsymbol{z})) \right)^2 \mathrm{d}\boldsymbol{z} = \frac{1}{(2b)^s} \sum_{\boldsymbol{\tau} \in \{0,\ldots,\alpha\}^s} (2b)^{2|\boldsymbol{\tau}|} \int_{[-\boldsymbol{b},\boldsymbol{b}]} \left( \partial_{\boldsymbol{x}}^{\boldsymbol{\tau}} (f \cdot \varphi_s)(\boldsymbol{x}) \right)^2 \mathrm{d}\boldsymbol{x},$$