
www.oeaw.ac.at

Integration and approximation in cosine spaces of smooth functions

C. Irrgeher, P. Kritzer, F. Pillichshammer

RICAM-Report 2015-42


Integration and approximation in cosine spaces of smooth functions

Christian Irrgeher, Peter Kritzer, Friedrich Pillichshammer

Abstract

We study multivariate integration and approximation for functions belonging to a weighted reproducing kernel Hilbert space based on half-period cosine functions in the worst-case setting. The weights in the norm of the function space depend on two sequences of real numbers and decay exponentially. As a consequence the functions are infinitely often differentiable, and therefore it is natural to expect exponential convergence of the worst-case error. We give conditions on the weight sequences under which we have exponential convergence for the integration as well as the approximation problem. Furthermore, we investigate the dependence of the errors on the dimension by considering various notions of tractability. We prove sufficient and necessary conditions to achieve these tractability notions.

Keywords: numerical integration, function approximation, cosine space, worst-case error, exponential convergence, tractability

2010 MSC: 41A63, 41A25, 65C05, 65D30, 65Y20

1 Introduction

In this paper we study two instances of multivariate linear problems. We are interested in approximating linear operators $S_s : H_s \to G_s$, where $H_s$ is a certain Hilbert space of $s$-variate functions defined on $[0,1]^s$ and where $G_s$ is a normed space, namely:

• Numerical integration of functions $f \in H_s$: in this case we have $G_s = \mathbb{R}$ and
$$S_s(f) = \mathrm{INT}_s(f) = \int_{[0,1]^s} f(\boldsymbol{x}) \,\mathrm{d}\boldsymbol{x}, \quad \text{or}$$

• $L_2$-approximation of functions $f \in H_s$: in this case we have $G_s = L_2([0,1]^s)$ and $S_s(f) = \mathrm{APP}_s(f) = f$.

C. Irrgeher is supported by the Austrian Science Fund (FWF): Project F5509-N26, which is a part of the Special Research Program "Quasi-Monte Carlo Methods: Theory and Applications".

P. Kritzer is supported by the Austrian Science Fund (FWF): Project F5506-N26, which is a part of the Special Research Program "Quasi-Monte Carlo Methods: Theory and Applications".

F. Pillichshammer is partially supported by the Austrian Science Fund (FWF): Project F5509-N26, which is a part of the Special Research Program "Quasi-Monte Carlo Methods: Theory and Applications".


Without loss of generality (see, e.g., [15] or [12, Section 4]) we approximate $S_s$ by a linear algorithm $A_{n,s}$ using $n$ information evaluations which are given by linear functionals from the class $\Lambda \in \{\Lambda^{\mathrm{all}}, \Lambda^{\mathrm{std}}\}$. More precisely, we approximate $S_s$ by algorithms of the form
$$A_{n,s}(f) = \sum_{j=1}^{n} \alpha_j L_j(f) \quad \text{for all } f \in H_s,$$
where $L_j \in \Lambda$ and $\alpha_j \in G_s$ for all $j = 1,2,\ldots,n$. For $\Lambda = \Lambda^{\mathrm{all}}$ the $L_j$ may be arbitrary continuous linear functionals on $H_s$, whereas for $\Lambda = \Lambda^{\mathrm{std}}$ we have $L_j(f) = f(\boldsymbol{x}_j)$ for all $f \in H_s$ and for some $\boldsymbol{x}_j \in [0,1]^s$. Obviously, for multivariate integration only the class $\Lambda^{\mathrm{std}}$ makes sense. Furthermore, we remark that in this paper we consider only function spaces for which $\Lambda^{\mathrm{std}} \subset \Lambda^{\mathrm{all}}$.

We measure the error of an algorithm $A_{n,s}$ in terms of the worst-case error, which is defined as
$$e(A_{n,s}, S_s) := \sup_{\substack{f \in H_s \\ \|f\|_{H_s} \le 1}} \|S_s(f) - A_{n,s}(f)\|_{G_s},$$
where $\|\cdot\|_{H_s}$ and $\|\cdot\|_{G_s}$ denote the norms in $H_s$ and $G_s$, respectively. The $n$th minimal (worst-case) error is given by
$$e(n, S_s) := \inf_{A_{n,s}} e(A_{n,s}, S_s),$$
where the infimum is taken over all admissible algorithms $A_{n,s}$. When we want to emphasize that the $n$th minimal error is taken with respect to algorithms using information from the class $\Lambda \in \{\Lambda^{\mathrm{all}}, \Lambda^{\mathrm{std}}\}$, we write $e(n, S_s; \Lambda)$. For $n = 0$ we consider algorithms that do not use information evaluations, and therefore we use $A_{0,s} \equiv 0$. The error of $A_{0,s}$ is called the initial (worst-case) error and is given by
$$e(0, S_s) := \sup_{\substack{f \in H_s \\ \|f\|_{H_s} \le 1}} \|S_s(f)\|_{G_s} = \|S_s\|.$$

Since we will study a class of weighted reproducing kernel Hilbert spaces with exponentially decaying weights, which will be introduced in Section 1.2, we are concerned with spaces $H_s$ of smooth functions. We remark that reproducing kernel Hilbert spaces of a similar flavor were previously considered in [2, 3, 5, 6, 7, 8, 9]. In this case it is natural to expect that, by using suitable algorithms, we should be able to obtain errors that converge to zero very quickly as $n$ increases, namely exponentially fast. By exponential convergence (EXP) for the worst-case error we mean that there exist a number $q \in (0,1)$ and functions $p, C, M : \mathbb{N} \to (0,\infty)$ such that
$$e(n, S_s) \le C(s)\, q^{(n/M(s))^{p(s)}} \quad \text{for all } s, n \in \mathbb{N}. \qquad (1)$$
If the function $p$ in (1) can be taken as a constant function, i.e., $p(s) = p > 0$ for all $s \in \mathbb{N}$, we say that we achieve uniform exponential convergence (UEXP) for $e(n, S_s)$. Furthermore, we denote by $p^*(s)$ and $p^*$ the largest possible rates $p(s)$ and $p$ such that EXP and UEXP hold, respectively.

When studying algorithms $A_{n,s}$, we do not only want to control how their errors depend on $n$, but also how they depend on the dimension $s$. This is of particular importance for high-dimensional problems. To this end we define, for $\varepsilon \in (0,1)$ and $s \in \mathbb{N}$, the information complexity by
$$n(\varepsilon, S_s) := \min\{n : e(n, S_s) \le \varepsilon\, e(0, S_s)\}$$
as the minimal number of information evaluations needed to reduce the initial error by a factor of $\varepsilon$. Exponential convergence implies that asymptotically, with respect to $\varepsilon$ tending to zero, we need $\mathcal{O}(\log^{1/p(s)} \varepsilon^{-1})$ information evaluations to compute an $\varepsilon$-approximation.
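To make the definition concrete, here is a small Python sketch (not from the paper): it computes $n(\varepsilon, S_s)$ for a hypothetical model error sequence $e(n) = q^{n^p}$ with $q = 1/2$ and $p = 1/2$; the function name and the model are our own illustration.

```python
def info_complexity(err, e0, eps, n_max=10**6):
    """n(eps) = min{n : err(n) <= eps * e0}: the information complexity
    of a problem with error sequence err(n) and initial error e0."""
    for n in range(n_max + 1):
        if err(n) <= eps * e0:
            return n
    raise ValueError("n_max too small")

# Hypothetical model with exponential convergence: e(n) = q^(n^p), q = 1/2, p = 1/2.
err = lambda n: 0.5 ** (n ** 0.5)
```

For this model, $n(\varepsilon) = \lceil (\log_2 \varepsilon^{-1})^2 \rceil$, i.e., it grows only polylogarithmically in $\varepsilon^{-1}$, in line with the $\mathcal{O}(\log^{1/p(s)} \varepsilon^{-1})$ count above.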

However, it is not clear how long we have to wait to see this nice asymptotic behavior, especially for large $s$. This, of course, depends on how $C(s)$, $M(s)$ and $p(s)$ depend on $s$, and this is the subject of tractability. Thus, we intend to study how the information complexity depends on $\log \varepsilon^{-1}$ and $s$ by considering the following tractability notions, which were already considered in [2, 3, 5, 6, 8]. The nomenclature was introduced in [9].

We say that we have Exponential Convergence-Weak Tractability (EC-WT) if
$$\lim_{s + \log \varepsilon^{-1} \to \infty} \frac{\log n(\varepsilon, S_s)}{s + \log \varepsilon^{-1}} = 0$$
with the convention $\log 0 = 0$, i.e., we rule out the cases for which $n(\varepsilon, S_s)$ depends exponentially on $s$ and $\log \varepsilon^{-1}$. If there exist numbers $c, \tau, \sigma > 0$ such that
$$n(\varepsilon, S_s) \le c\, s^{\sigma} (1 + \log \varepsilon^{-1})^{\tau} \quad \text{for all } s \in \mathbb{N},\ \varepsilon \in (0,1), \qquad (2)$$
we say that Exponential Convergence-Polynomial Tractability (EC-PT) holds. This means that the information complexity depends at most polynomially on $s$ and $\log \varepsilon^{-1}$. If the upper bound is independent of the dimension $s$, i.e., (2) holds with $\sigma = 0$, then we say that we have Exponential Convergence-Strong Polynomial Tractability (EC-SPT). Furthermore, the infimum of all $\tau$ for which EC-SPT holds is called the exponent of EC-SPT and is denoted by $\tau^*$.

For further general remarks on exponential convergence and these tractability notions we refer to [2, 3, 5, 6, 8, 9]. Moreover, we remark that in many papers tractability is studied for problems where we do not have exponential, but usually polynomial, error convergence. For this kind of problem, tractability is concerned with the question of how the information complexity depends on $s$ and $\varepsilon^{-1}$, rather than $\log \varepsilon^{-1}$ (for a detailed survey of such results, we refer to [12, 13, 14]).

1.1 Relations between multivariate integration and approximation

It is well known that multivariate approximation using information from $\Lambda^{\mathrm{std}}$ is not easier than multivariate integration, see, e.g., [11]. More precisely, for any algorithm $A_{n,s}(f) = \sum_{k=1}^{n} \alpha_k f(\boldsymbol{x}_k)$ for multivariate approximation using the nodes $\boldsymbol{x}_1, \ldots, \boldsymbol{x}_n \in [0,1)^s$ and $\alpha_k \in L_2([0,1]^s)$, define $\beta_k := \int_{[0,1]^s} \alpha_k(\boldsymbol{x}) \,\mathrm{d}\boldsymbol{x}$ and the algorithm
$$A_{n,s}^{\mathrm{int}}(f) = \sum_{k=1}^{n} \beta_k f(\boldsymbol{x}_k)$$
for multivariate integration. Then it can easily be checked that
$$e(A_{n,s}^{\mathrm{int}}, \mathrm{INT}_s) \le e(A_{n,s}, \mathrm{APP}_s).$$


Since this holds for all algorithms $A_{n,s}$, we conclude that $e(n, \mathrm{INT}_s) \le e(n, \mathrm{APP}_s; \Lambda^{\mathrm{std}})$. If the initial errors of integration and approximation are equal, these observations imply that for $\varepsilon \in (0,1)$ and $s \in \mathbb{N}$ we have
$$n(\varepsilon, \mathrm{INT}_s) \le n(\varepsilon, \mathrm{APP}_s; \Lambda^{\mathrm{std}}). \qquad (3)$$

1.2 The half-period cosine spaces with infinite smoothness

We now introduce the function spaces for which we will consider multivariate integration and approximation. First, we choose two weight sequences of positive real numbers, $\boldsymbol{a} = \{a_j\}$ and $\boldsymbol{b} = \{b_j\}$, such that
$$0 < a_* := a_1 \le a_2 \le \cdots \quad \text{and} \quad b_* := \inf_j b_j > 0.$$
Moreover, we fix a parameter $\omega \in (0,1)$ and define for any $\boldsymbol{k} = (k_1, \ldots, k_s) \in \mathbb{N}_0^s$
$$\omega_{\boldsymbol{k}} := \omega^{\sum_{j=1}^{s} a_j k_j^{b_j}} = \prod_{j=1}^{s} \omega^{a_j k_j^{b_j}}.$$
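For concreteness, the weights $\omega_{\boldsymbol{k}}$ are easy to evaluate directly; the following Python sketch (our own illustration, with arbitrarily chosen sequences in the checks) computes them.

```python
def omega_weight(k, a, b, omega):
    """omega_k = omega^(sum_j a_j * k_j^(b_j)) for k in N_0^s and omega in (0,1)."""
    return omega ** sum(aj * kj ** bj for kj, aj, bj in zip(k, a, b))
```

Since $\omega \in (0,1)$ and the exponent grows at least polynomially in each $k_j$, the weights decay exponentially fast in every coordinate, which is what drives the exponential error convergence studied below.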

The half-period cosine space with infinite smoothness is a reproducing kernel Hilbert space $H(K^{\cos}_{s,\boldsymbol{a},\boldsymbol{b},\omega})$ of (half-period) cosine series with reproducing kernel
$$K^{\cos}_{s,\boldsymbol{a},\boldsymbol{b},\omega}(\boldsymbol{x},\boldsymbol{y}) := \sum_{\boldsymbol{k} \in \mathbb{N}_0^s} \omega_{\boldsymbol{k}}\, 2^{|\boldsymbol{k}|_*} \prod_{j=1}^{s} \cos(\pi k_j x_j) \cos(\pi k_j y_j) \quad \text{for } \boldsymbol{x},\boldsymbol{y} \in [0,1]^s, \qquad (4)$$
where we define $|\boldsymbol{k}|_* := |\{j \in [s] : k_j \neq 0\}|$ to be the number of nonzero components in $\boldsymbol{k} = (k_1, \ldots, k_s) \in \mathbb{N}_0^s$, and $[s] := \{1, \ldots, s\}$. The inner product in $H(K^{\cos}_{s,\boldsymbol{a},\boldsymbol{b},\omega})$ is given by
$$\langle f, g \rangle_{K^{\cos}_{s,\boldsymbol{a},\boldsymbol{b}}} = \sum_{\boldsymbol{k} \in \mathbb{N}_0^s} \omega_{\boldsymbol{k}}^{-1}\, \tilde{f}(\boldsymbol{k})\, \tilde{g}(\boldsymbol{k}),$$

where the $\boldsymbol{k}$th cosine coefficient of a function $f : [0,1]^s \to \mathbb{R}$ is given by
$$\tilde{f}(\boldsymbol{k}) := \int_{[0,1]^s} f(\boldsymbol{x})\, 2^{|\boldsymbol{k}|_*/2} \prod_{j=1}^{s} \cos(\pi k_j x_j) \,\mathrm{d}\boldsymbol{x}.$$

The corresponding norm is defined by $\|f\|_{K^{\cos}_{s,\boldsymbol{a},\boldsymbol{b}}} = \sqrt{\langle f, f \rangle_{K^{\cos}_{s,\boldsymbol{a},\boldsymbol{b}}}}$; in particular,
$$\|f\|^2_{K^{\cos}_{s,\boldsymbol{a},\boldsymbol{b}}} = \sum_{\boldsymbol{k} \in \mathbb{N}_0^s} \frac{|\tilde{f}(\boldsymbol{k})|^2}{\omega_{\boldsymbol{k}}} = \sum_{\boldsymbol{h} \in \mathbb{Z}^s} 2^{-|\boldsymbol{h}|_*}\, \frac{|\tilde{f}(|\boldsymbol{h}|)|^2}{\omega_{|\boldsymbol{h}|}}, \qquad (5)$$

where $|\boldsymbol{h}| = (|h_1|, \ldots, |h_s|)$. Furthermore, we can write $f \in H(K^{\cos}_{s,\boldsymbol{a},\boldsymbol{b},\omega})$ as
$$f(\boldsymbol{x}) = \sum_{\boldsymbol{k} \in \mathbb{N}_0^s} \tilde{f}(\boldsymbol{k})\, 2^{|\boldsymbol{k}|_*/2} \prod_{j=1}^{s} \cos(\pi k_j x_j),$$

and the sequence $\{e_{\boldsymbol{k}}\}_{\boldsymbol{k} \in \mathbb{N}_0^s}$ with
$$e_{\boldsymbol{k}}(\boldsymbol{x}) = \omega_{\boldsymbol{k}}^{1/2}\, 2^{|\boldsymbol{k}|_*/2} \prod_{j=1}^{s} \cos(\pi k_j x_j)$$
is an orthonormal basis of $H(K^{\cos}_{s,\boldsymbol{a},\boldsymbol{b},\omega})$.

From (5) we see that the functions in $H(K^{\cos}_{s,\boldsymbol{a},\boldsymbol{b},\omega})$ are characterized by the decay rate of their cosine coefficients, which is regulated by the weights $\omega_{\boldsymbol{k}}$. Since $\{\omega_{\boldsymbol{k}}\}$ decreases exponentially as $\boldsymbol{k}$ grows, the cosine coefficients of the functions in $H(K^{\cos}_{s,\boldsymbol{a},\boldsymbol{b},\omega})$ also decrease exponentially fast, which influences the smoothness of the functions. Indeed, it can be shown that $f \in H(K^{\cos}_{s,\boldsymbol{a},\boldsymbol{b},\omega})$ belongs to the Gevrey class with index $1/b_*$, and so it follows that for the case $b_* \ge 1$ the functions of $H(K^{\cos}_{s,\boldsymbol{a},\boldsymbol{b},\omega})$ are analytic. Note that the half-period cosine space with infinite smoothness fits into the general setting which is discussed in [6]. Furthermore, we refer to the recent paper [4] in which the integration problem is discussed in a half-period cosine space with weights of polynomial decay.

Finally, we remark that we can assume $a_* \ge 1$ without loss of generality, because we can always modify $\omega$ such that $a_* \ge 1$.

2 Integration in half-period cosine spaces

Let $\mathrm{INT}_s : H(K^{\cos}_{s,\boldsymbol{a},\boldsymbol{b},\omega}) \to \mathbb{R}$ with $\mathrm{INT}_s(f) = \int_{[0,1]^s} f(\boldsymbol{x}) \,\mathrm{d}\boldsymbol{x}$. In order to approximate $\mathrm{INT}_s$ with respect to the absolute value $|\cdot|$ we use linear algorithms $A_{n,s}$, which are algorithms based on $n$ function evaluations, of the form
$$A_{n,s}(f) = \sum_{k=1}^{n} \alpha_k f(\boldsymbol{x}_k) \quad \text{for } f \in H(K^{\cos}_{s,\boldsymbol{a},\boldsymbol{b},\omega}),$$
where each $\alpha_k \in \mathbb{R}$ and $\boldsymbol{x}_k \in [0,1]^s$ for $k = 1,2,\ldots,n$.

The worst-case error of an algorithm $A_{n,s}$ is defined as
$$e(A_{n,s}, \mathrm{INT}_s) := \sup_{\substack{f \in H(K^{\cos}_{s,\boldsymbol{a},\boldsymbol{b},\omega}) \\ \|f\|_{K^{\cos}_{s,\boldsymbol{a},\boldsymbol{b}}} \le 1}} |\mathrm{INT}_s(f) - A_{n,s}(f)|.$$
The $n$th minimal worst-case error is denoted by
$$e(n, \mathrm{INT}_s) = \inf_{A_{n,s}} e(A_{n,s}, \mathrm{INT}_s),$$
where the infimum is taken over all linear algorithms $A_{n,s}$ of the above form. With [1, Proposition 2.11] it is easy to check that the initial error is $e(0, \mathrm{INT}_s) = 1$.

2.1 Upper and lower error bounds

An $n$-point midpoint rule is a linear integration rule of the form
$$T_n(f) = \frac{1}{n} \sum_{j=0}^{n-1} f\!\left(\frac{2j+1}{2n}\right).$$

The following lemma on midpoint rules applied to cosine functions has a simple proof which we omit for the sake of brevity.


Lemma 1. We have
$$\frac{1}{n} \sum_{j=0}^{n-1} \cos\!\left(\pi l\, \frac{2j+1}{2n}\right) = \begin{cases} (-1)^t & \text{if } l = 2tn \text{ for } t \in \mathbb{Z}, \\ 0 & \text{otherwise.} \end{cases}$$
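Lemma 1 is easy to check numerically; the sketch below (our own, not from the paper) averages the cosine values over the midpoint nodes. Note that for $l = 2tn$ every single term equals $\cos(\pi t(2j+1)) = (-1)^t$, which explains the value on the first branch.

```python
import math

def midpoint_cos_mean(l, n):
    """(1/n) * sum_{j=0}^{n-1} cos(pi * l * (2j+1) / (2n)), as in Lemma 1."""
    return sum(math.cos(math.pi * l * (2 * j + 1) / (2 * n)) for j in range(n)) / n
```

For $l$ not of the form $2tn$ the terms cancel exactly, so the mean vanishes up to rounding error.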

For integration in the multivariate case, we use the Cartesian product of one-dimensional midpoint rules. Let $n_1, \ldots, n_s \in \mathbb{N}$ and let $n = n_1 n_2 \cdots n_s$. For $j = 1,2,\ldots,s$ let
$$T^{(j)}_{n_j}(f) = \frac{1}{n_j} \sum_{i=0}^{n_j-1} f\!\left(\frac{2i+1}{2n_j}\right)$$
be one-dimensional $n_j$-point midpoint rules. Then we apply the $s$-dimensional Cartesian product rule $T_{n,s} = T_{n_1} \otimes \cdots \otimes T_{n_s}$, i.e.,
$$T_{n,s}(f) = \frac{1}{n} \sum_{i_1=0}^{n_1-1} \cdots \sum_{i_s=0}^{n_s-1} f\!\left(\frac{2i_1+1}{2n_1}, \ldots, \frac{2i_s+1}{2n_s}\right) \quad \text{for } f \in H(K^{\cos}_{s,\boldsymbol{a},\boldsymbol{b},\omega}). \qquad (6)$$

Proposition 1. Let $T_{n,s}$ be the $s$-dimensional Cartesian product of $n_j$-point midpoint rules given by (6) and let $n = n_1 n_2 \cdots n_s$. Then we have
$$e^2(T_{n,s}, \mathrm{INT}_s) \le -1 + \prod_{j=1}^{s} \left(1 + \omega^{a_j (2n_j)^{b_j}}\, C(\boldsymbol{a}, \boldsymbol{b})\right)$$
with $C(\boldsymbol{a}, \boldsymbol{b}) := 2\omega^{-a_*} \sum_{l=1}^{\infty} \omega^{a_* l^{b_*}} < \infty$.

Proof. With [1, Proposition 2.11] and (4) we get for the worst-case error of $T_{n,s}$ in $H(K^{\cos}_{s,\boldsymbol{a},\boldsymbol{b},\omega})$,
$$e^2(T_{n,s}, \mathrm{INT}_s) = \sum_{\boldsymbol{l} \in \mathbb{N}_0^s \setminus \{\boldsymbol{0}\}} \omega_{\boldsymbol{l}}\, 2^{|\boldsymbol{l}|_*} \left( \frac{1}{n_1} \sum_{j_1=0}^{n_1-1} \cdots \frac{1}{n_s} \sum_{j_s=0}^{n_s-1} \cos\!\left(\pi l_1 \frac{2j_1+1}{2n_1}\right) \cdots \cos\!\left(\pi l_s \frac{2j_s+1}{2n_s}\right) \right)^{\!2}$$
$$= -1 + \prod_{i=1}^{s} \left( 1 + 2 \sum_{l=1}^{\infty} \omega^{a_i l^{b_i}} \left( \frac{1}{n_i} \sum_{j=0}^{n_i-1} \cos\!\left(\pi l\, \frac{2j+1}{2n_i}\right) \right)^{\!2} \right).$$
Now it follows from Lemma 1 that
$$e^2(T_{n,s}, \mathrm{INT}_s) \le -1 + \prod_{j=1}^{s} \left( 1 + 2 \sum_{l=1}^{\infty} \omega^{a_j (2ln_j)^{b_j}} \right) \le -1 + \prod_{j=1}^{s} \left( 1 + 2\, \omega^{a_j (2n_j)^{b_j}}\, \omega^{-a_*} \sum_{l=1}^{\infty} \omega^{a_* l^{b_*}} \right).$$
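To see the bound of Proposition 1 in action, the following Python sketch (with arbitrarily chosen parameters, the infinite sum defining $C(\boldsymbol{a},\boldsymbol{b})$ truncated, and $a_* = a_1$ under the assumed ordering of $\boldsymbol{a}$) evaluates the right-hand side; increasing the $n_j$ makes the bound collapse towards zero.

```python
def C_ab(a_star, b_star, omega, terms=200):
    """Truncation of C(a,b) = 2 * omega^(-a_*) * sum_{l>=1} omega^(a_* * l^(b_*))."""
    return 2 * omega ** (-a_star) * sum(omega ** (a_star * l ** b_star)
                                        for l in range(1, terms + 1))

def prop1_bound_sq(n_list, a, b, omega):
    """Right-hand side of Proposition 1:
    -1 + prod_j (1 + omega^(a_j * (2 n_j)^(b_j)) * C(a,b)),
    with a_* = a[0] (sequences assumed non-decreasing) and b_* = min(b)."""
    C = C_ab(a[0], min(b), omega)
    prod = 1.0
    for nj, aj, bj in zip(n_list, a, b):
        prod *= 1.0 + omega ** (aj * (2 * nj) ** bj) * C
    return prod - 1.0
```

The factor $\omega^{a_j(2n_j)^{b_j}}$ decays super-exponentially in $n_j$, so a few extra points per coordinate shrink the bound drastically.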

The following proposition states a lower bound on the $n$th minimal worst-case error which will be useful to derive the necessary conditions in Theorem 1.

Proposition 2. Let $t_1, \ldots, t_s \in \mathbb{N}$. Then the $n$th minimal worst-case error satisfies
$$e(n, \mathrm{INT}_s) \ge \omega^{\sum_{j=1}^{s} a_j (2t_j)^{b_j}} \prod_{j=1}^{s} (5(1+t_j))^{-1} \quad \text{for any } 1 \le n < \prod_{j=1}^{s} (1+t_j).$$

Proof. The lower bound can be shown in the same way as [8, Lemma 1] or [5, Theorem 2].


2.2 The main results for integration

The following theorem gives necessary and sufficient conditions on the weight sequences a and b for EXP, UEXP, and the notions of EC-WT, EC-PT, and EC-SPT.

Theorem 1. Consider integration defined over $H(K^{\cos}_{s,\boldsymbol{a},\boldsymbol{b},\omega})$ with weight sequences $\boldsymbol{a}$ and $\boldsymbol{b}$.

1. EXP holds for all $\boldsymbol{a}$ and $\boldsymbol{b}$ considered, and $p^*(s) = \frac{1}{B(s)}$ with $B(s) := \sum_{j=1}^{s} \frac{1}{b_j}$.

2. The following assertions are equivalent:

   (i) the sequence $\{b_j^{-1}\}_{j \ge 1}$ is summable, i.e., $B := \sum_{j=1}^{\infty} \frac{1}{b_j} < \infty$;
   (ii) we have UEXP;
   (iii) we have EC-PT;
   (iv) we have EC-SPT.

   If one of the assertions holds, then $p^* = 1/B$ and the exponent $\tau^*$ of EC-SPT is $B$.

3. EC-WT implies that $\lim_{j\to\infty} a_j 2^{b_j} = \infty$.

4. A sufficient condition for EC-WT is that $\lim_{j\to\infty} a_j = \infty$.

Proof. Proof of Point 1: We first show that we always have EXP. For $\varepsilon \in (0,1)$, define
$$m = \max_{j=1,2,\ldots,s} \left( \frac{1}{a_j}\, \frac{\log\!\left( C(\boldsymbol{a},\boldsymbol{b})\, \frac{s}{\log(1+\varepsilon^2)} \right)}{\log \omega^{-1}} \right)^{\!B(s)}$$
with $C(\boldsymbol{a},\boldsymbol{b}) = 2\omega^{-a_*} \sum_{k=1}^{\infty} \omega^{a_* k^{b_*}}$. Now let $n_1, n_2, \ldots, n_s$ be given by $n_j := \lfloor m^{1/(B(s) b_j)} \rfloor$ for $j = 1,2,\ldots,s$ and set $n = n_1 n_2 \cdots n_s$. Since $\lfloor x \rfloor \ge x/2$ for all $x \ge 1$, we have
$$a_j (2n_j)^{b_j} \ge a_j\, m^{1/B(s)} \quad \text{for every } j = 1,2,\ldots,s.$$
Then we obtain with Proposition 1,
$$e^2(T_{n,s}, \mathrm{INT}_s) \le -1 + \prod_{j=1}^{s} \left( 1 + \omega^{a_j m^{1/B(s)}}\, C(\boldsymbol{a},\boldsymbol{b}) \right).$$
From the definition of $m$ we have, for all $j = 1,2,\ldots,s$,
$$\omega^{a_j m^{1/B(s)}}\, C(\boldsymbol{a},\boldsymbol{b}) \le \frac{\log(1+\varepsilon^2)}{s}.$$
This proves
$$e(T_{n,s}, \mathrm{INT}_s) \le \left( -1 + \left( 1 + \frac{\log(1+\varepsilon^2)}{s} \right)^{\!s} \right)^{\!1/2} \le \left( -1 + \exp(\log(1+\varepsilon^2)) \right)^{1/2} = \varepsilon. \qquad (7)$$


Furthermore, note that
$$n = \prod_{j=1}^{s} n_j = \prod_{j=1}^{s} \left\lfloor m^{1/(B(s) b_j)} \right\rfloor \le m^{\frac{1}{B(s)} \sum_{j=1}^{s} 1/b_j} = m = \mathcal{O}\!\left( \log^{B(s)}\!\big(1 + \varepsilon^{-1}\big) \right), \qquad (8)$$
with the factor in the $\mathcal{O}$ notation independent of $\varepsilon^{-1}$ but dependent on $s$. From (7) and (8) it follows directly that we always have EXP with $p(s) \ge 1/B(s)$.

It remains to show that we indeed have $p^*(s) = 1/B(s)$ for the largest possible rate of EXP. To this end, assume that EXP holds. Let $\boldsymbol{t} = (t_1, \ldots, t_s) \in \mathbb{N}^s$ and choose $n = -1 + \prod_{j=1}^{s} (1+t_j)$. Then Proposition 2 implies
$$C(s)\, q^{(n/M(s))^{p(s)}} \ge \omega^{\sum_{j=1}^{s} a_j (2t_j)^{b_j}} \prod_{j=1}^{s} (5(1+t_j))^{-1},$$
which implies
$$\frac{\sum_{j=1}^{s} a_j (2t_j)^{b_j}}{\prod_{j=1}^{s} (1+t_j)^{p(s)}} + \frac{\log C(s) + \sum_{j=1}^{s} \log(5(1+t_j))}{\prod_{j=1}^{s} (1+t_j)^{p(s)}\, \log \omega^{-1}} \ge \left( 1 - \frac{1}{\prod_{j=1}^{s} (1+t_j)} \right)^{\!p(s)} \frac{\log q^{-1}}{\log \omega^{-1}}\, M(s)^{-p(s)}. \qquad (9)$$
For fixed $s$, when $\|\boldsymbol{t}\|_{s,\infty} := \max_{1 \le j \le s} t_j \to \infty$, the second term on the left-hand side of (9) goes to zero, and it follows that
$$\liminf_{\|\boldsymbol{t}\|_{s,\infty} \to \infty} \frac{\sum_{j=1}^{s} a_j (2t_j)^{b_j}}{\prod_{j=1}^{s} (1+t_j)^{p(s)}} \ge \frac{\log q^{-1}}{\log \omega^{-1}}\, M(s)^{-p(s)} > 0.$$

For a positive number $t$ take now
$$t_j := \left\lceil t^{1/b_j} \right\rceil \quad \text{for all } j = 1,2,\ldots,s.$$
Clearly, $\lim_{t\to\infty} \lceil t^{1/b_j} \rceil / t^{1/b_j} = 1$. Then we have
$$\frac{\sum_{j=1}^{s} a_j (2t_j)^{b_j}}{\prod_{j=1}^{s} (1+t_j)^{p(s)}} = t^{1 - p(s) \sum_{j=1}^{s} b_j^{-1}}\, \frac{\sum_{j=1}^{s} a_j \big( 2 \lceil t^{1/b_j} \rceil / t^{1/b_j} \big)^{b_j}}{\prod_{j=1}^{s} \big( \lceil t^{1/b_j} \rceil / t^{1/b_j} + t^{-1/b_j} \big)^{p(s)}}. \qquad (10)$$
Now if $t \to \infty$, then
$$\frac{\sum_{j=1}^{s} a_j \big( 2 \lceil t^{1/b_j} \rceil / t^{1/b_j} \big)^{b_j}}{\prod_{j=1}^{s} \big( \lceil t^{1/b_j} \rceil / t^{1/b_j} + t^{-1/b_j} \big)^{p(s)}} \longrightarrow \sum_{j=1}^{s} a_j 2^{b_j}.$$
However, the expression (10) must remain bounded away from zero as $t$ tends to infinity, so we must have $p(s) \sum_{j=1}^{s} b_j^{-1} \le 1$ or, equivalently, $p(s) \le 1/B(s)$. Thus, altogether, $p^*(s) = 1/B(s)$.

Proof of Point 2: Obviously, EC-SPT implies EC-PT. In [8] it is shown that EC-PT implies UEXP. Now assume that UEXP holds; then we can show in exactly the same way as above that
$$\liminf_{\|\boldsymbol{t}\|_{s,\infty} \to \infty} \frac{\sum_{j=1}^{s} a_j (2t_j)^{b_j}}{\prod_{j=1}^{s} (1+t_j)^{p}} \ge \frac{\log q^{-1}}{\log \omega^{-1}}\, M(s)^{-p} > 0.$$
From this, it is shown analogously to the proof of the first point that $p \sum_{j=1}^{s} b_j^{-1} \le 1$. Since this holds for all $s$, we conclude, for $s$ tending to infinity, that $p \sum_{j=1}^{\infty} 1/b_j = pB \le 1$. Furthermore, if we assume that $B < \infty$, and if we replace $B(s)$ by $B$ in the proof of Point 1, we see that UEXP holds with $p \ge 1/B$. Thus we have $p^* = 1/B$.

It remains to show that $B < \infty$ implies EC-SPT. So assume that $B = \sum_{j=1}^{\infty} 1/b_j < \infty$ and let $n_1, \ldots, n_s$ be given by
$$n_j = \left\lceil \left( \frac{\log\!\left( C(\boldsymbol{a},\boldsymbol{b})\, \frac{\pi^2}{6}\, \frac{j^2}{\log(1+\varepsilon^2)} \right)}{a_j 2^{b_j} \log \omega^{-1}} \right)^{\!1/b_j} \right\rceil$$
with $C(\boldsymbol{a},\boldsymbol{b}) = 2\omega^{-a_*} \sum_{k=1}^{\infty} \omega^{a_* k^{b_*}}$. Note that $n_j$ is defined such that
$$\omega^{a_j (2n_j)^{b_j}}\, C(\boldsymbol{a},\boldsymbol{b}) \le \frac{6}{\pi^2}\, \frac{\log(1+\varepsilon^2)}{j^2}.$$
Therefore

$$e^2(T_{n,s}, \mathrm{INT}_s) \le -1 + \prod_{j=1}^{s} \left( 1 + \frac{6}{\pi^2}\, \frac{\log(1+\varepsilon^2)}{j^2} \right) \le -1 + \exp\!\left( \frac{6}{\pi^2} \log(1+\varepsilon^2) \sum_{j=1}^{s} j^{-2} \right) \le -1 + \exp\!\left( \log(1+\varepsilon^2) \right) = \varepsilon^2.$$

We now estimate $n_j$ and then $n = \prod_{j=1}^{s} n_j$. Clearly, $n_j \ge 1$ for all $j \in \mathbb{N}$. We prove that $n_j = 1$ for large $j$. Indeed, $n_j = 1$ if
$$a_j 2^{b_j} \log \omega^{-1} \ge \log\!\left( C(\boldsymbol{a},\boldsymbol{b})\, \frac{\pi^2}{6}\, \frac{j^2}{\log(1+\varepsilon^2)} \right). \qquad (11)$$

Let $\delta > 0$. From $\sum_j \frac{1}{b_j} < \infty$ it follows with the Cauchy condensation test that also $\sum_j \frac{2^j}{b_{2^j}} < \infty$, and hence $\lim_j 2^j / b_{2^j} = 0$. Hence we find that $b_{2^j}^{-1} \le \frac{\delta}{2^{j+1}}$ for $j$ large enough. For large enough $k$ with $2^j \le k \le 2^{j+1}$ we then obtain
$$\frac{1}{b_k} \le \frac{1}{b_{2^j}} \le \frac{\delta}{2^{j+1}} \le \frac{\delta}{k}$$
or, equivalently, $2^{b_k} \ge 2^{k/\delta}$. Hence there exists a positive $\beta_1$ such that $a_k 2^{b_k} \ge \beta_1 2^{k/\delta}$ for all $k \in \mathbb{N}$.

Then inequality (11) holds for all $j \ge \bar{j}$, where $\bar{j}$ is the smallest positive integer for which
$$\bar{j} \ge \frac{\delta}{\log 2} \log\!\left( \frac{1}{\beta_1 \log \omega^{-1}} \log\!\left( C(\boldsymbol{a},\boldsymbol{b})\, \frac{\pi^2}{6}\, \frac{\bar{j}^{\,2}}{\log(1+\varepsilon^2)} \right) \right).$$
Clearly,
$$\bar{j} = \frac{\delta}{\log 2} \log\log \varepsilon^{-1} + \mathcal{O}(1) \quad \text{as } \varepsilon \to 0.$$


Without loss of generality we can restrict ourselves to $\varepsilon \le \mathrm{e}^{-\mathrm{e}}$, where $\mathrm{e} = \exp(1)$, so that $\log\log \varepsilon^{-1} \ge 1$. Then there exists a number $C_0 \ge 1$, independent of $\varepsilon$ and $s$, such that
$$n_j = 1 \quad \text{for all } j > \left\lfloor C_0 + \frac{\delta}{\log 2} \log\log \varepsilon^{-1} \right\rfloor.$$

We now estimate $n_j$ for $j \le \left\lfloor C_0 + \frac{\delta}{\log 2} \log\log \varepsilon^{-1} \right\rfloor$. Note that
$$\log\!\left( C(\boldsymbol{a},\boldsymbol{b})\, \frac{\pi^2}{6}\, \frac{j^2}{\log(1+\varepsilon^2)} \right) = \log\!\left( C(\boldsymbol{a},\boldsymbol{b})\, \frac{\pi^2}{6 \log(1+\varepsilon^2)} \right) + \log(j^2).$$
Then $a_j 2^{b_j} \ge \beta_1 2^{j/\delta}$ also implies that
$$K(\boldsymbol{a},\boldsymbol{b}) := \sup_{j \in \mathbb{N}} \frac{\log(j^2)}{a_j 2^{b_j}} < \infty.$$

Furthermore, there exists a number $C_1 \ge 1$, independent of $\varepsilon$ and $s$, such that
$$\log\!\left( C(\boldsymbol{a},\boldsymbol{b})\, \frac{\pi^2}{6 \log(1+\varepsilon^2)} \right) \le C_1 + 2 \log \frac{1}{\varepsilon} \quad \text{for all } \varepsilon \in (0,1).$$
This yields
$$n_j \le 1 + \left( \frac{K(\boldsymbol{a},\boldsymbol{b}) + C_1 + 2 \log \varepsilon^{-1}}{\log \omega^{-1}} \right)^{\!1/b_j} \quad \text{for all } j \le \left\lfloor C_0 + \frac{\delta}{\log 2} \log\log \frac{1}{\varepsilon} \right\rfloor.$$

Let
$$k = \min\!\left( s,\ \left\lfloor C_0 + \frac{\delta}{\log 2} \log\log \frac{1}{\varepsilon} \right\rfloor \right).$$

Then for $C = \max\!\big( K(\boldsymbol{a},\boldsymbol{b}) + C_1,\ \log \omega^{-1} - 2\mathrm{e} \big)$ we have
$$\max\!\left( 1,\ \frac{K(\boldsymbol{a},\boldsymbol{b}) + C_1 + 2\log \varepsilon^{-1}}{\log \omega^{-1}} \right) \le \frac{C + 2\log \varepsilon^{-1}}{\log \omega^{-1}}$$
and
$$n = \prod_{j=1}^{s} n_j = \prod_{j=1}^{k} n_j \le \prod_{j=1}^{k} \left( 1 + \left( \frac{C + 2\log \varepsilon^{-1}}{\log \omega^{-1}} \right)^{\!1/b_j} \right) = \left( \frac{C + 2\log \varepsilon^{-1}}{\log \omega^{-1}} \right)^{\!\sum_{j=1}^{k} 1/b_j} \prod_{j=1}^{k} \left( 1 + \left( \frac{\log \omega^{-1}}{C + 2\log \varepsilon^{-1}} \right)^{\!1/b_j} \right) \le \left( \frac{C + 2\log \varepsilon^{-1}}{\log \omega^{-1}} \right)^{\!B} 2^k.$$
Note that
$$2^k \le 2^{C_0} \exp\!\left( \delta \log\log \frac{1}{\varepsilon} \right) = 2^{C_0} \log^{\delta} \frac{1}{\varepsilon}.$$


Therefore for any positive $\delta$ there is a positive number $C_{\delta}$, independent of $\varepsilon^{-1}$ and $s$, such that
$$n(\varepsilon, \mathrm{INT}_s) \le n \le C_{\delta} \log^{B+\delta}\!\left( 1 + \frac{1}{\varepsilon} \right) \quad \text{for all } \varepsilon \in (0,1),\ s \in \mathbb{N}.$$
This means that we have EC-SPT with $\tau^*$ at most $B$.

Proof of Point 3: Assume that EC-WT holds and that $(a_j 2^{b_j})_{j \ge 1}$ is bounded, say $a_j 2^{b_j} \le A < \infty$ for all $j \in \mathbb{N}$. From setting $t_1 = t_2 = \cdots = 1$ in Proposition 2 it follows that for all $n < 2^s$ we have
$$e(n, \mathrm{INT}_s) \ge 10^{-s}\, \omega^{\sum_{j=1}^{s} a_j 2^{b_j}} \ge 10^{-s}\, \omega^{As} = \eta^s,$$
where $\eta := \omega^A / 10 \in (0,1)$. Hence, for $\varepsilon = \eta^s / 2$ we have $e(n, \mathrm{INT}_s) > \varepsilon$ for all $n < 2^s$. This implies that $n(\varepsilon, \mathrm{INT}_s) \ge 2^s$ and
$$\frac{\log n(\varepsilon, \mathrm{INT}_s)}{s + \log \varepsilon^{-1}} \ge \frac{s \log 2}{s + \log 2 + s \log \eta^{-1}} \longrightarrow \frac{\log 2}{1 + \log \eta^{-1}} > 0 \quad \text{as } s \to \infty,$$
but this contradicts EC-WT. Therefore it must hold that $\lim_{j\to\infty} a_j 2^{b_j} = \infty$.

Proof of Point 4: By Point 3 of Theorem 2, the condition $\lim_{j\to\infty} a_j = \infty$ implies EC-WT for the approximation problem with $\Lambda^{\mathrm{std}}$, which, by (3), also implies EC-WT for the integration problem.

3 L2-approximation in half-period cosine spaces

Let $\mathrm{APP}_s : H(K^{\cos}_{s,\boldsymbol{a},\boldsymbol{b},\omega}) \to L_2([0,1]^s)$ with $\mathrm{APP}_s(f) = f$. In order to approximate $\mathrm{APP}_s$ with respect to the norm in $L_2([0,1]^s)$ we use linear algorithms $A_{n,s}$, which are algorithms based on $n$ information evaluations, of the form
$$A_{n,s}(f) = \sum_{k=1}^{n} \alpha_k L_k(f) \quad \text{for } f \in H(K^{\cos}_{s,\boldsymbol{a},\boldsymbol{b},\omega}),$$
where each $\alpha_k$ is a function from $L_2([0,1]^s)$ and each $L_k$ is a continuous linear functional defined on $H(K^{\cos}_{s,\boldsymbol{a},\boldsymbol{b},\omega})$ from a permissible class $\Lambda$ of information.

The worst-case error of an algorithm $A_{n,s}$ is defined as
$$e(A_{n,s}, \mathrm{APP}_s) := \sup_{\substack{f \in H(K^{\cos}_{s,\boldsymbol{a},\boldsymbol{b},\omega}) \\ \|f\|_{K^{\cos}_{s,\boldsymbol{a},\boldsymbol{b}}} \le 1}} \|f - A_{n,s}(f)\|_{L_2([0,1]^s)}.$$
The $n$th minimal worst-case error for the information class $\Lambda \in \{\Lambda^{\mathrm{std}}, \Lambda^{\mathrm{all}}\}$ is denoted by
$$e(n, \mathrm{APP}_s; \Lambda) = \inf_{A_{n,s}} e(A_{n,s}, \mathrm{APP}_s),$$
where the infimum is taken over all linear algorithms $A_{n,s}$ using information from $\Lambda$. For $n = 0$ we simply approximate $f$ by zero, and the initial error is $e(0, \mathrm{APP}_s; \Lambda) = 1$.


3.1 Some auxiliary results

We proceed in a similar way to [2] and [7]. For $M > 1$ we define the set $\mathcal{A}(s, M) = \{\boldsymbol{h} \in \mathbb{N}_0^s : \omega_{\boldsymbol{h}}^{-1} < M\}$, and we will study approximating $f \in H(K^{\cos}_{s,\boldsymbol{a},\boldsymbol{b},\omega})$ by algorithms of the form
$$A_{n,s,M}(f)(\boldsymbol{x}) = \sum_{\boldsymbol{h} \in \mathcal{A}(s,M)} \left( \frac{1}{n} \sum_{k=1}^{n} f(\boldsymbol{x}_k)\, 2^{|\boldsymbol{h}|_*/2} \prod_{j=1}^{s} \cos(\pi h_j x_{k,j}) \right) 2^{|\boldsymbol{h}|_*/2} \prod_{j=1}^{s} \cos(\pi h_j x_j), \qquad (12)$$
where $\boldsymbol{x} = (x_1, \ldots, x_s) \in [0,1]^s$. The choice of $M$ and of $\boldsymbol{x}_k = (x_{k,1}, \ldots, x_{k,s}) \in [0,1)^s$ will be given later.

We first show upper bounds on the worst-case error of $A_{n,s,M}$. The following analysis is similar to that in [7, 10]. Using Parseval's identity we obtain
$$\|f - A_{n,s,M}(f)\|^2_{L_2([0,1]^s)} = \sum_{\boldsymbol{h} \notin \mathcal{A}(s,M)} |\tilde{f}(\boldsymbol{h})|^2 + \sum_{\boldsymbol{h} \in \mathcal{A}(s,M)} \left| \tilde{f}(\boldsymbol{h}) - \frac{1}{n} \sum_{k=1}^{n} f(\boldsymbol{x}_k)\, 2^{|\boldsymbol{h}|_*/2} \prod_{j=1}^{s} \cos(\pi h_j x_{k,j}) \right|^2. \qquad (13)$$
The first term can be estimated by
$$\sum_{\boldsymbol{h} \notin \mathcal{A}(s,M)} |\tilde{f}(\boldsymbol{h})|^2 = \sum_{\boldsymbol{h} \notin \mathcal{A}(s,M)} \frac{|\tilde{f}(\boldsymbol{h})|^2}{\omega_{\boldsymbol{h}}}\, \omega_{\boldsymbol{h}} \le \frac{1}{M}\, \|f\|^2_{K^{\cos}_{s,\boldsymbol{a},\boldsymbol{b}}}. \qquad (14)$$

We now consider the second term in (13). Let $\boldsymbol{h} \in \mathbb{N}_0^s$. For $f \in H(K^{\cos}_{s,\boldsymbol{a},\boldsymbol{b},\omega})$ we define $f_{\boldsymbol{h}}(\boldsymbol{x}) := f(\boldsymbol{x})\, 2^{|\boldsymbol{h}|_*/2} \prod_{j=1}^{s} \cos(\pi h_j x_j)$. So we get
$$\left| \tilde{f}(\boldsymbol{h}) - \frac{1}{n} \sum_{k=1}^{n} f(\boldsymbol{x}_k)\, 2^{|\boldsymbol{h}|_*/2} \prod_{j=1}^{s} \cos(\pi h_j x_{k,j}) \right|^2 = \left| \int_{[0,1]^s} f_{\boldsymbol{h}}(\boldsymbol{x}) \,\mathrm{d}\boldsymbol{x} - \frac{1}{n} \sum_{k=1}^{n} f_{\boldsymbol{h}}(\boldsymbol{x}_k) \right|^2.$$

As for the integration problem, we choose the points $\boldsymbol{x}_1, \ldots, \boldsymbol{x}_n$ used in the algorithm $A_{n,s,M}$ according to a centered regular grid $G_{n,s}$ with different mesh sizes $n_1, \ldots, n_s \in \mathbb{N}$ for successive variables, i.e.,
$$G_{n,s} = \left\{ \left( \frac{2k_1+1}{2n_1}, \ldots, \frac{2k_s+1}{2n_s} \right) : k_j = 0,1,\ldots,n_j-1 \text{ for all } j = 1,2,\ldots,s \right\},$$
where $n = \prod_{j=1}^{s} n_j$ is the cardinality of $G_{n,s}$. By $G_{n,s}^{\perp}$ we denote the set
$$G_{n,s}^{\perp} = \{\boldsymbol{h} \in \mathbb{N}_0^s : h_j \equiv 0 \ (\mathrm{mod}\ 2n_j) \text{ for all } j = 1,2,\ldots,s\}.$$

Since $f_{\boldsymbol{h}}$ can be represented by a pointwise convergent cosine series, we have that
$$\left| \int_{[0,1]^s} f_{\boldsymbol{h}}(\boldsymbol{x}) \,\mathrm{d}\boldsymbol{x} - \frac{1}{n} \sum_{k=1}^{n} f_{\boldsymbol{h}}(\boldsymbol{x}_k) \right| = \left| \sum_{\boldsymbol{l} \in \mathbb{N}_0^s \setminus \{\boldsymbol{0}\}} \tilde{f_{\boldsymbol{h}}}(\boldsymbol{l})\, 2^{|\boldsymbol{l}|_*/2} \prod_{j=1}^{s} \frac{1}{n_j} \sum_{k_j=0}^{n_j-1} \cos\!\left( \pi l_j\, \frac{2k_j+1}{2n_j} \right) \right| \le \sum_{\boldsymbol{l} \in G_{n,s}^{\perp} \setminus \{\boldsymbol{0}\}} |\tilde{f_{\boldsymbol{h}}}(\boldsymbol{l})|\, 2^{|\boldsymbol{l}|_*/2}.$$
