Fixed domain asymptotics and consistent estimation for Gaussian random field models in spatial statistics and computer experiments



FIXED DOMAIN ASYMPTOTICS AND CONSISTENT ESTIMATION FOR GAUSSIAN RANDOM FIELD MODELS IN SPATIAL STATISTICS AND COMPUTER EXPERIMENTS

WANG DAQING
(B.Sc., University of Science and Technology of China)

A THESIS SUBMITTED FOR THE DEGREE OF DOCTOR OF PHILOSOPHY
DEPARTMENT OF STATISTICS AND APPLIED PROBABILITY
NATIONAL UNIVERSITY OF SINGAPORE
2010

ACKNOWLEDGEMENTS

I am very grateful to have Professor Loh Wei Liem as my supervisor. He is truly a great mentor, not only in statistics but also in daily life. I would like to thank him for his guidance, encouragement, time, and endless patience. Next, I would like to thank my senior Li Mengxin for discussions on various topics in research. I also thank all my friends who helped make my life as a graduate student easier. I wish to express my gratitude to the university and the department for supporting me through the NUS Graduate Research Scholarship. Finally, I thank my family for their love and support.

CONTENTS

Acknowledgements
Summary
List of Tables
List of Figures
List of Notations
Chapter 1  Introduction
  1.1  Matérn Class
  1.2  Powered Exponential Class
Chapter 2  Isotropic Covariance Function
  2.1  Introduction
  2.2  Main Results
  2.3  Some Probability Inequalities
  2.4  Spectral Analysis
  2.5  Tapered Covariance Functions
  2.6  Proofs
  2.7  Simulations
    2.7.1  Precision of Theorem 2.3 approximations for finite n
    2.7.2  Precision of Theorems 2.1 and 2.2 approximations for finite n
Chapter 3  Multiplicative Covariance Function
  3.1  Introduction
  3.2  Quadratic Variation
  3.3  Spectral Analysis
    3.3.1  Spectral Density Function
    3.3.2  Mean of the Periodogram
    3.3.3  Covariance and Variance of the Periodogram
    3.3.4  Consistent Estimation
  3.4  Multiplicative Matérn Class
  3.5  Simulations
Chapter 4  Conclusion
Bibliography

SUMMARY

Let X : ℝ^d → ℝ be a mean-zero Gaussian random field with covariance function

    Cov(X(x), X(y)) = σ² K_θ(x − y),   ∀ x, y ∈ ℝ^d,

where σ² and θ are unknown parameters. This thesis is concerned with the estimation of σ² and θ from observations {X(x₁), X(x₂), …, X(xₙ)}, where x₁, …, xₙ are distinct points in a fixed domain [0, T]^d for some constant 0 < T < ∞.

Our work has two parts. The first part (Chapter 2) deals with isotropic covariance functions. Maximum likelihood is a preferred method for estimating the covariance parameters. However, when the sample size n is large, computing the likelihood is a burden. Covariance tapering is an effective technique that approximates the covariance function with a taper (usually a compactly supported correlation function) so that the computation can be reduced. Chapter 2 studies the fixed domain asymptotic behavior of the tapered MLE for the microergodic parameter of the isotropic Matérn class covariance function when the taper support is allowed to shrink as n → ∞.
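The computational point behind covariance tapering can be made concrete in a few lines of code. The sketch below is illustrative only and is not the thesis's setup: it uses a one-dimensional grid, a ν = 1/2 Matérn covariance, and a simple compactly supported taper of our own choosing, and shows that the tapered covariance matrix has a large fraction of exactly-zero entries, which is what sparse-matrix routines exploit.

```python
import numpy as np

def matern_half_cov(dist, sigma2=1.0, alpha=0.8):
    # Matern covariance with smoothness nu = 1/2: sigma^2 * exp(-alpha * dist)
    return sigma2 * np.exp(-alpha * dist)

def taper(dist, gamma=0.3):
    # Compactly supported correlation (1 - dist/gamma)_+^2;
    # it vanishes identically once dist >= gamma
    return np.clip(1.0 - dist / gamma, 0.0, None) ** 2

x = np.linspace(0.0, 1.0, 201)            # n = 201 points in the fixed domain [0, 1]
dist = np.abs(x[:, None] - x[None, :])

K = matern_half_cov(dist)                 # dense n x n covariance matrix
K_tap = K * taper(dist)                   # tapered covariance: elementwise (Schur) product

# Every entry with dist >= 0.3 is exactly zero, so sparse linear algebra applies
frac_zero = float(np.mean(K_tap == 0.0))
print(f"fraction of exactly-zero entries: {frac_zero:.2f}")
```

The taper range here plays the role of γₙ in the thesis: shrinking it as n grows increases sparsity and reduces the cost of each likelihood evaluation, at the price of some statistical efficiency.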
In particular, when d ≤ 3, conditions are established under which the tapered MLE is strongly consistent and asymptotically normal. The second part (Chapter 3) establishes consistent estimators of the covariance and scale parameters of a Gaussian random field with multiplicative covariance function. When d = 1, in some cases it is impossible to estimate them consistently and simultaneously under fixed domain asymptotics. However, when d > 1, consistent estimators of functions of the covariance and scale parameters can be constructed using quadratic variation and spectral analysis. These in turn provide consistent estimators of the covariance and scale parameters themselves.

LIST OF TABLES

Table 2.1  Percentiles (standard errors in parentheses), Means and Standard Deviations (SD) of √n(σ̂²_{1,n} α₁^{2ν} − σ²α^{2ν})/(√2 σ²α^{2ν}), and Biases and Mean Square Errors (MSE) of the estimator σ̂²_{1,n} α₁^{2ν}, for σ² = 1, α = 0.8, α₁ = 1.3, ν = 1/4.

Table 2.2  Same quantities as in Table 2.1, for σ² = 1, α = 0.8, α₁ = 1.3, ν = 1/2.

Table 2.3  Same quantities as in Table 2.1, for σ² = 1, α = 0.8, α₁ = 2, ν = 1/4.

Table 2.4  Same quantities as in Table 2.1, for σ² = 1, α = 0.8, α₁ = 2, ν = 1/2.
Table 2.5  Same quantities as in Table 2.1, for d = 1, σ² = 1, α = 5, α₁ = 7.5, ν = 1/4 and taper ϕ₁,₁(x/γₙ).

Table 2.6  Same quantities as in Table 2.1, for d = 1, σ² = 1, α = 5, α₁ = 7.5, ν = 1/2 and taper ϕ₁,₁(x/γₙ).

Table 2.7  Same quantities as in Table 2.1, for d = 2, σ² = 1, α = 5, α₁ = 7.5 and taper ϕ₂,₁(x/γₙ).

Table 3.1  Means of the estimators (standard errors in parentheses) with σ² = 1, θ₁ = 1, θ₂ = 1.5, γ = 0.5.

Table 3.2  Means of σ̂²θ̂₁θ̂₂ (standard errors in parentheses) with σ² = 1, θ₁ = 1, θ₂ = 1.5, γ = 0.5, λ = 2/3, n = 100.

Table 3.3  Means of σ̂²θ̂₁θ̂₂ (standard errors in parentheses) with σ² = 1, θ₁ = 1, θ₂ = 1.5, γ = 0.5, λ = 2/3, n = 200.

Table 3.4  Means of σ̂²θ̂₁θ̂₂ (standard errors in parentheses) with σ² = 1, θ₁ = 1, θ₂ = 1.5, γ = 1.2, λ = 2/3, n = 200.

LIST OF FIGURES

Figure 2.1  Histograms of √n(σ̂²_{1,n} α₁^{2ν} − σ²α^{2ν}) without taper for different d and n, with σ² = 1, α = 0.8, α₁ = 1.3, ν = 1/4.

Figure 2.2  Histograms of √n(σ̂²_{1,n} α₁^{2ν} − σ²α^{2ν}) without taper for different d and n, with σ² = 1, α = 0.8, α₁ = 1.3, ν = 1/2.

Figure 2.3  Histograms of √n(σ̂²_{1,n} α₁^{2ν} − σ²α^{2ν}) without taper for different d and n, with σ² = 1, α = 0.8, α₁ = 2, ν = 1/4.
Figure 2.4  Histograms of √n(σ̂²_{1,n} α₁^{2ν} − σ²α^{2ν}) without taper for different d and n, with σ² = 1, α = 0.8, α₁ = 2, ν = 1/2.

Figure 2.5  Histograms of √n(σ̂²_{1,n} α₁^{2ν} − σ²α^{2ν}) for different n and tapering range γₙ = Cn^{−0.03}, with d = 1, σ² = 1, α = 5, α₁ = 7.5, ν = 1/4.

Figure 2.6  Histograms of √n(σ̂²_{1,n} α₁^{2ν} − σ²α^{2ν}) for different n and tapering range γₙ = Cn^{−0.03}, with d = 1, σ² = 1, α = 5, α₁ = 7.5, ν = 1/2.

Figure 2.7  Histograms of √n(σ̂²_{1,n} α₁^{2ν} − σ²α^{2ν}) for different n and tapering range γₙ = Cn^{−0.02}, with d = 2, σ² = 1, α = 5, α₁ = 7.5, ν = 1/4.

Figure 2.8  Histograms of √n(σ̂²_{1,n} α₁^{2ν} − σ²α^{2ν}) for different n and tapering range γₙ = Cn^{−0.02}, with d = 2, σ² = 1, α = 5, α₁ = 7.5, ν = 1/8.

Figure 2.9  Histograms of √n(σ̂²_{1,n} α₁^{2ν} − σ²α^{2ν}) for different tapers, with d = 1, σ² = 1, α = 0.8, α₁ = 2, ν = 1/2.

3.3 Spectral Analysis

Proof. Note that if γ ∈ (0, 1) and max{1/d, 2γ − 1} < λ < 1, then dλ > 1 and 2γ − 2 − λ < −1. Thus, by Proposition 3.4,

    ∑_{m=1}^∞ Var{ m^{dγ} f̂_λ(2πm^{−1} J_m) } < ∞.

Therefore, using the Markov inequality and the Borel-Cantelli lemma, we conclude from Proposition 3.5 that

    m^{dγ} f̂_λ(2πm^{−1} J_m) → g(u; γ) σ² ∏_{j=1}^d θ_j   almost surely, as m → ∞.

This proves Theorem 3.2.

As we mentioned in Section 3.1, for d = 1 and γ ∈ (1/2, 2), σ² and θ cannot be consistently estimated simultaneously under fixed domain asymptotics. But Theorem 3.1 shows that for d = 1 and γ ∈ (0, 2), the estimator of σ²θ obtained from the quadratic variation of a lattice sample is consistent. Hence, for a Gaussian random field X(t), t ∈ [0, 1]^d, with mean zero and multiplicative powered exponential class covariance function, if we fix t̃₂, …, t̃_d, then X(t₁, t̃₂, …, t̃_d) is a one-dimensional Gaussian process, and σ²θ₁ can be consistently estimated. Likewise, consistent estimators of σ²θᵢ, i = 2, …, d, can also be constructed.
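To make the axis-wise quadratic variation construction concrete, here is a sketch for a single grid line. It is an illustration under assumptions of our own choosing: we simulate exactly via a Cholesky factor rather than the circulant-embedding method of Wood and Chan (1994) used in the thesis, and the parameter values are arbitrary. The normalization of the estimator matches the definitions in Section 3.5.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma2, theta, gamma, n = 1.0, 1.0, 0.5, 1000

# Covariance of X(i/n), i = 0..n-1, for the one-dimensional
# powered exponential model sigma^2 * exp(-theta |s - t|^gamma)
t = np.arange(n) / n
C = sigma2 * np.exp(-theta * np.abs(t[:, None] - t[None, :]) ** gamma)
L = np.linalg.cholesky(C + 1e-10 * np.eye(n))   # tiny jitter for numerical safety

def qv_estimator(x, gamma):
    # Sum of squared increments, normalized by 2 (m-1)^(1-gamma);
    # it converges to sigma^2 * theta under fixed domain asymptotics
    m = len(x)
    return np.sum(np.diff(x) ** 2) / (2.0 * (m - 1) ** (1.0 - gamma))

samples = L @ rng.standard_normal((n, 20))      # 20 independent grid lines
est = np.mean([qv_estimator(samples[:, k], gamma) for k in range(20)])
print(f"mean estimate of sigma^2 * theta: {est:.3f}")  # target value is 1
```

Averaging over several independent lines is only to smooth the picture; each single line already yields a consistent estimator as the grid is refined.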
Furthermore, Theorem 3.2 gives a consistent estimator of σ² ∏_{i=1}^d θᵢ for γ ∈ (0, 1). This implies that σ² and θ₁, …, θ_d can be consistently estimated for d ≥ 2 and γ ∈ (0, 1).

3.4 Multiplicative Matérn Class

In this section, let X(t), t ∈ [0, 1]^d, be a Gaussian random field with mean zero and multiplicative Matérn class covariance function (3.9). Then the corresponding spectral density f has the closed form

    f(w) = σ² { Γ(ν + 1/2) / (√π Γ(ν)) }^d ∏_{i=1}^d θᵢ^{2ν} / (θᵢ² + ωᵢ²)^{ν+1/2},   ∀ w = (ω₁, …, ω_d)ᵀ ∈ ℝ^d.

So, similarly to the proofs of Lemmas 3.5 and 3.6, we can check that for the multiplicative Matérn class the corresponding f̄_δ satisfies the same conditions as in Lemmas 3.5 and 3.6, with 2ν in place of γ. This implies that Propositions 3.1, 3.2, 3.3 and 3.4 also hold for ν ∈ (0, 1/2). Furthermore, for fixed w = (ω₁, …, ω_d)ᵀ ∈ (−π, π]^d ∖ {0}, as m → ∞,

    m^{2dν} f̄_δ(w) → { Γ(ν + 1/2) / (√π Γ(ν)) }^d { ∑_{j=1}^d sin²(ω_j/2) } ∏_{i=1}^d { ∑_{L_i∈ℤ} |ω_i + 2πL_i|^{−(2ν+1)} } σ² ∏_{i=1}^d θᵢ^{2ν}.

Therefore, we have:

Theorem 3.3. Suppose J_m = (J₁ᵐ, …, J_dᵐ)ᵀ ∈ T_{c*,m} and lim_{m→∞} 2πJ_jᵐ/m = u_j ∈ (−π, π) for j = 1, …, d, where T_{c*,m} is as in Proposition 3.2. Then, for ν ∈ (0, 1/2) and max{1/d, 4ν − 1} < λ < 1,

    m^{2dν} f̂_λ(2πm^{−1} J_m) → { Γ(ν + 1/2) / (√π Γ(ν)) }^d { ∑_{j=1}^d sin²(u_j/2) } ∏_{i=1}^d { ∑_{L_i∈ℤ} |u_i + 2πL_i|^{−(2ν+1)} } σ² ∏_{i=1}^d θᵢ^{2ν}   almost surely, as m → ∞.

Theorem 3.3 provides a consistent estimator of σ² ∏_{j=1}^d θⱼ^{2ν}. In addition, consistent estimators of σ²θⱼ^{2ν}, j = 1, …, d, can be constructed by Zhang's result [see Zhang (2004)]. Thus, for a Gaussian random field X(t), t ∈ [0, 1]^d, with mean zero and multiplicative Matérn class covariance function (3.9), σ² and θ₁, …, θ_d can be consistently estimated for d ≥ 2 and ν ∈ (0, 1/2).

3.5 Simulations

In this section, we use simulations to complement our theoretical results.
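As a quick numerical aside before the simulations, the closed-form spectral density of the multiplicative Matérn class given in Section 3.4 is straightforward to evaluate and sanity-check. The sketch below (the function name and parameter values are our own, illustrative choices) verifies for d = 1 that f integrates to C(0) = σ², since each coordinate factor is a Student-t type probability density.

```python
import numpy as np
from math import gamma as G, pi, sqrt

def matern_product_density(w, sigma2, thetas, nu):
    """Spectral density of the multiplicative Matern covariance:
    sigma^2 {Gamma(nu+1/2)/(sqrt(pi) Gamma(nu))}^d
            prod_i theta_i^(2 nu) / (theta_i^2 + w_i^2)^(nu+1/2).
    `w` is a single (d,) point or an (m, d) array of points."""
    w = np.atleast_2d(np.asarray(w, dtype=float))
    thetas = np.asarray(thetas, dtype=float)
    c = G(nu + 0.5) / (sqrt(pi) * G(nu))
    factors = thetas ** (2 * nu) / (thetas ** 2 + w ** 2) ** (nu + 0.5)
    return sigma2 * c ** len(thetas) * factors.prod(axis=1)

# Sanity check for d = 1: the density should integrate to C(0) = sigma^2
omega = np.linspace(-200.0, 200.0, 400001)
vals = matern_product_density(omega[:, None], 2.0, [1.5], 0.45)
total = float(vals.sum() * (omega[1] - omega[0]))   # simple Riemann sum
print(f"integral of f over [-200, 200]: {total:.3f} (sigma^2 = 2)")
```

The small discrepancy from σ² comes from truncating the heavy |ω|^{−(2ν+1)} tail at ±200; the slower the tail decays (small ν), the more mass is missed, which is exactly the tail behavior the periodogram analysis in Section 3.3 relies on.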
In Sections 3.2 and 3.3 we constructed the corresponding consistent estimators of the unknown parameters using quadratic variation and spectral analysis, respectively. The questions are how efficient these estimators are and how much is lost by using these methods.

We simulated 500 independent realizations of a Gaussian random field X(t) with mean zero and multiplicative powered exponential covariance function (3.1) for each set of parameters (σ², θ₁, θ₂, γ, sample size). The method we used is described in Wood and Chan (1994). We simulated the samples on a grid in [0, 1]²,

    {(i₁/n, i₂/n) : i₁, i₂ = 0, 1, 2, …, n − 1},

where n is an integer. Using the results of Theorem 3.1, we defined the estimators as follows:

    σ̂²θ̂₁  = [1/(2(n−1)^{1−γ})] ∑_{i=1}^{n−1} [X(i/n, 0) − X((i−1)/n, 0)]²,
    σ̂²θ̂₁* = [1/(2n(n−1)^{1−γ})] ∑_{j=0}^{n−1} ∑_{i=1}^{n−1} [X(i/n, j/n) − X((i−1)/n, j/n)]²,
    σ̂²θ̂₂  = [1/(2(n−1)^{1−γ})] ∑_{j=1}^{n−1} [X(0, j/n) − X(0, (j−1)/n)]²,
    σ̂²θ̂₂* = [1/(2n(n−1)^{1−γ})] ∑_{i=0}^{n−1} ∑_{j=1}^{n−1} [X(i/n, j/n) − X(i/n, (j−1)/n)]².

In addition, using Theorem 3.2 we defined

    σ̂²θ̂₁θ̂₂ = [m^{dγ}/g(u; γ)] f̂_λ(2πm^{−1} J_m),

where f̂_λ and g(⋅; γ) are as in (3.6) and (3.8). Here m = n − 2.

Table 3.1  Means of the estimators (standard errors in parentheses) with σ² = 1, θ₁ = 1, θ₂ = 1.5, γ = 0.5.

              n = 50            n = 100           n = 200
  σ̂²θ̂₁      0.9158 (0.1977)   0.9506 (0.1522)   0.9611 (0.1056)
  σ̂²θ̂₁*     0.9255 (0.1071)   0.9497 (0.0742)   0.9609 (0.0517)
  σ̂²θ̂₂      1.3421 (0.3033)   1.3944 (0.2161)   1.4179 (0.1531)
  σ̂²θ̂₂*     1.3450 (0.1896)   1.3804 (0.1324)   1.4139 (0.0923)

We first considered σ² = 1, θ₁ = 1, θ₂ = 1.5, γ = 0.5. Table 3.1 displays means and standard errors of the estimators of σ²θ₁ and σ²θ₂ for different n. As an estimator of σ²θ₁, σ̂²θ̂₁* is more stable than σ̂²θ̂₁. This is not surprising: σ̂²θ̂₁* uses all the data, while σ̂²θ̂₁ uses just one row of data. Tables 3.2 and 3.3 display means and standard errors of σ̂²θ̂₁θ̂₂ at J_m = (J₁ᵐ, J₂ᵐ) ∈ {⌊m/8⌋, ⌊m/6⌋, ⌊m/3⌋}², with λ = 2/3 and m = 98, 198, respectively.
So the corresponding u = (u₁, u₂) ∈ {π/4, π/3, 2π/3}². As shown in Tables 3.2 and 3.3, the errors are somewhat large at points where at least one component of J_m is near an axis. This is because σ̂²θ̂₁θ̂₂ is constructed from the periodogram of a process, and the periodogram is a biased estimator of the spectral density at frequency 2πm^{−1}J_m when J_m is near an axis, and a nearly unbiased estimator when J_m is far from both axes. Also, as the sample size increases, the errors decrease, as expected.

Table 3.2  Means of σ̂²θ̂₁θ̂₂ (standard errors in parentheses) with σ² = 1, θ₁ = 1, θ₂ = 1.5, γ = 0.5, λ = 2/3, n = 100.

  J₂ᵐ \ J₁ᵐ   12                16                32
  12          3.6587 (0.3147)   3.0500 (0.2977)   3.4861 (0.3477)
  16          2.8409 (0.2527)   1.7247 (0.0889)   1.6187 (0.0839)
  32          3.0842 (0.2821)   1.5764 (0.0801)   1.2923 (0.0619)

Table 3.3  Means of σ̂²θ̂₁θ̂₂ (standard errors in parentheses) with σ² = 1, θ₁ = 1, θ₂ = 1.5, γ = 0.5, λ = 2/3, n = 200.

  J₂ᵐ \ J₁ᵐ   24                33                66
  24          1.8913 (0.0719)   1.8368 (0.0663)   1.9353 (0.0693)
  33          1.8005 (0.0636)   1.5737 (0.0494)   1.5317 (0.0485)
  66          1.8666 (0.0680)   1.5043 (0.0453)   1.3860 (0.0410)

In Theorem 3.2, we restricted the smoothness parameter γ to be less than 1. Our next example suggests that this condition is necessary. We fixed σ² = 1, θ₁ = 1, θ₂ = 1.5 and γ = 1.2. For n = 200 we obtained σ̂²θ̂₁ = 1.0220, σ̂²θ̂₁* = 1.0256, σ̂²θ̂₂ = 1.4885 and σ̂²θ̂₂* = 1.4875. But Table 3.4 shows that the errors of the estimator σ̂²θ̂₁θ̂₂ are large. This indicates that Theorem 3.2 may not hold for γ > 1.

Table 3.4  Means of σ̂²θ̂₁θ̂₂ (standard errors in parentheses) with σ² = 1, θ₁ = 1, θ₂ = 1.5, γ = 1.2, λ = 2/3, n = 200.

  J₂ᵐ \ J₁ᵐ   24                33                66
  24          5.2063 (0.4093)   3.8944 (0.3520)   2.6459 (0.2760)
  33          3.6791 (0.2760)   2.0066 (0.1408)   1.1647 (0.1121)
  66          2.5465 (0.2386)   1.1339 (0.0949)   0.4616 (0.0321)

CHAPTER 4
Conclusion

This study established consistent parameter estimation for Gaussian random field models in computer experiments under fixed domain asymptotics.
In Chapter 2, we first investigated the asymptotic properties of a Gaussian random field with isotropic Matérn class covariance function. Maximum likelihood is a preferred method for estimating the covariance parameters. But when the sample size n is large, it is a challenge to evaluate the likelihood of observations that have long memory. So, in order to reduce the computation, covariance tapering approximates the covariance function with a taper (usually a compactly supported correlation function). We studied the fixed domain asymptotic behavior of the tapered MLE for the microergodic parameter when the taper support is allowed to shrink as n → ∞. Our results show that if the dimension d ≤ 3, the tapered MLE is strongly consistent and also asymptotically normal under mild conditions that are easy to check in practice.

In Chapter 3, we investigated the asymptotic properties of a Gaussian random field with powered exponential class covariance function under fixed domain asymptotics. We pointed out that for d = 1 and γ ∈ (1/2, 2], σ² and θ cannot be consistently estimated simultaneously under fixed domain asymptotics. This is because in this case the two Gaussian measures induced by (σ², θ) and (σ̃², θ̃) are equivalent if σ²θ = σ̃²θ̃. But we found that for d = 1 and γ ∈ (0, 3/2), the estimator of σ²θ obtained from the quadratic variation of a lattice sample is strongly consistent. We further found that the estimator of σ² ∏_{i=1}^d θᵢ obtained from the smoothed periodogram is strongly consistent for γ ∈ (0, 1). Hence, for a Gaussian random field X(t), t ∈ [0, 1]^d, with mean zero and powered exponential class covariance function and d ≥ 2, γ ∈ (0, 1), we can construct (d + 1) functionally independent consistent estimators of σ²θ₁, …, σ²θ_d and σ² ∏_{i=1}^d θᵢ. This means that σ² and θ₁, …, θ_d can be consistently estimated for d ≥ 2 and γ ∈ (0, 1). Thus, the structure in high dimensions is quite different from that in one dimension for this model.
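The back-solving step implicit in the last claim is elementary: from consistent estimates of σ²θ₁, …, σ²θ_d and σ²∏ᵢθᵢ with d ≥ 2, one recovers σ² via ∏ᵢ(σ²θᵢ) = σ^{2(d−1)} · (σ²∏ᵢθᵢ), and then each θᵢ by division. A sketch with hypothetical numerical estimates (the values are invented for illustration):

```python
import numpy as np

def recover_parameters(s2theta, s2theta_prod):
    # Given estimates of sigma^2*theta_1, ..., sigma^2*theta_d and of
    # sigma^2*prod_i theta_i (requires d >= 2), solve for sigma^2 and theta_i:
    # prod_i (sigma^2 theta_i) = sigma^(2(d-1)) * (sigma^2 prod_i theta_i)
    s2theta = np.asarray(s2theta, dtype=float)
    d = len(s2theta)
    sigma2 = (np.prod(s2theta) / s2theta_prod) ** (1.0 / (d - 1))
    thetas = s2theta / sigma2
    return sigma2, thetas

# Hypothetical estimates for d = 2 with true sigma^2 = 1, theta = (1, 1.5)
sigma2, thetas = recover_parameters([0.96, 1.42], 1.47)
print(sigma2, thetas)
```

Consistency of the inputs carries over to the outputs because the map above is continuous wherever the inputs are positive.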
Compared to the MLE, our estimators of the same quantities are strongly consistent and easy to compute. However, we are mainly concerned with the consistency of the parameter estimators, and we may expect some loss of efficiency relative to the MLE. Furthermore, the methods we use have some limitations. First, our study assumes that the smoothness parameter is known; for real data we do not know how smooth the sample path is, so the smoothness parameter also needs to be estimated. Second, the estimators obtained from the periodogram are only available when the smoothness parameter is small enough. This is because the order of the tail of the spectral density in each direction depends on the smoothness parameter, and our approximation needs this order to be less than two.

Considering the research in this area, some open problems remain for future work. First, for the isotropic Matérn class in the case d = 4, whether the scale and variance parameters can be separated under fixed domain asymptotics is still unknown. Second, for the multiplicative covariance function, it may be possible to show that the estimator obtained from the periodogram remains available without the restriction on the smoothness parameter. Our conjecture is that there are two ways. One way is to determine the error term, since the smoothed periodogram may not be an unbiased estimator of the spectral density when the smoothness parameter is large; that is, the sample path is so smooth that the dominant term of the smoothed periodogram is not just the spectral density. Another way is to establish some kind of approximation using new techniques that avoids the restriction on the smoothness parameter, since our approximation is not sharp enough. Another interesting direction for future work is the efficiency of the estimators. As we know, the MLE is asymptotically efficient under mild conditions.
So we need to develop a criterion to determine how efficient our estimators are compared to the MLE.

BIBLIOGRAPHY

[1] Adler, R. J. and Pyke, R. (1993). Uniform quadratic variation for Gaussian processes. Stoch. Proc. Appl. 48 191-209.
[2] Anderes, E. (2010). On the consistent separation of scale and variance for Gaussian random fields. Ann. Statist. 38 870-893.
[3] Anderson, T. W. (1971). The Statistical Analysis of Time Series. Wiley, New York.
[4] Anderson, T. W. (2003). An Introduction to Multivariate Statistical Analysis, 3rd edition. Wiley, New York.
[5] Andrews, G. E., Askey, R. and Roy, R. (1999). Special Functions. Cambridge University Press, Cambridge.
[6] Baxter, G. (1956). A strong limit theorem for Gaussian processes. Proc. Amer. Math. Soc. 522-527.
[7] Bennett, G. (1962). Probability inequalities for the sum of independent random variables. J. Amer. Statist. Assoc. 57 33-45.
[8] Bergström, H. (1952). On some expansions of stable distributions. Ark. Mat. 375-378.
[9] Blumenthal, R. M. and Getoor, R. K. (1960). Some theorems on stable processes. Trans. Amer. Math. Soc. 95 263-273.
[10] Brillinger, D. R. (1981). Time Series: Data Analysis and Theory. Holden-Day, San Francisco.
[11] Constantine, A. G. and Hall, P. (1994). Characterizing surface smoothness via estimation of effective fractal dimension. J. R. Statist. Soc. B 56 97-113.
[12] Cressie, N. (1993). Statistics for Spatial Data. Wiley, New York.
[13] Du, J., Zhang, H. and Mandrekar, V. S. (2009). Fixed-domain asymptotic properties of tapered maximum likelihood estimators. Ann. Statist. 37 3330-3361.
[14] Feller, W. (1971). An Introduction to Probability Theory and its Applications, Vol. 2. Wiley, New York.
[15] Furrer, R., Genton, M. G. and Nychka, D. (2006). Covariance tapering for interpolation of large spatial datasets. J. Comput. Graph. Statist. 15 502-523.
[16] Gawronski, W. (1984). On the bell-shape of stable densities. Ann. Probab. 12 230-242.
[17] Gneiting, T. (2002).
Compactly supported correlation functions. J. Multivar. Anal. 83 493-508.
[18] Gradshteyn, I. S. and Ryzhik, I. M. (2007). Table of Integrals, Series, and Products, 7th edition. Academic Press, Oxford.
[19] Grafakos, L. (2004). Classical and Modern Fourier Analysis. Prentice Hall, Upper Saddle River.
[20] Guyon, X. (1982). Parameter estimation for a stationary process on a d-dimensional lattice. Biometrika 69 95-105.
[21] Guyon, X. and Leon, J. (1989). Convergence en loi des H-variations d'un processus gaussien stationnaire sur R. Ann. Inst. Henri Poincaré Probab. Statist. 25 265-282.
[22] Horn, R. and Johnson, C. (1991). Topics in Matrix Analysis. Cambridge University Press, Cambridge.
[23] Ibragimov, I. A. and Rozanov, Y. A. (1978). Gaussian Random Processes. Springer, New York.
[24] Istas, J. and Lang, G. (1997). Quadratic variations and estimation of the local Hölder index of a Gaussian process. Ann. Inst. Henri Poincaré Probab. Statist. 33 407-436.
[25] Kaufman, C., Schervish, M. and Nychka, D. (2008). Covariance tapering for likelihood based estimation in large spatial datasets. J. Amer. Statist. Assoc. 103 1545-1555.
[26] Klein, R. and Giné, E. (1975). On quadratic variation of processes with Gaussian increments. Ann. Probab. 716-721.
[27] Lévy, P. (1940). Le mouvement Brownien plan. Amer. J. Math. 62 487-550.
[28] Lim, C. Y. and Stein, M. L. (2008). Properties of spatial cross-periodograms using fixed-domain asymptotics. J. Multivar. Anal. 99 1962-1984.
[29] Loh, W. L. (2005). Fixed-domain asymptotics for a subclass of Matérn-type Gaussian random fields. Ann. Statist. 33 2344-2394.
[30] Loh, W. L. and Lam, T. K. (2000). Estimating structured correlation matrices in smooth Gaussian random field models. Ann. Statist. 28 880-904.
[31] Mardia, K. V. and Marshall, R. J. (1984). Maximum likelihood estimation of models for residual covariance in spatial regression. Biometrika 71 135-146.
[32] Le, N. D. and Zidek, J. V.
(2006). Statistical Analysis of Environmental Space-Time Processes. Springer, New York.
[33] Pissanetsky, S. (1984). Sparse Matrix Technology. Academic Press, London.
[34] Sacks, J., Welch, W. J., Mitchell, T. J. and Wynn, H. P. (1989). Design and analysis of computer experiments (with discussion). Statist. Sci. 409-435.
[35] Skorohod, A. V. (1961). Asymptotic formulas for stable distribution laws. Selected Transl. Math. Statist. Prob. 157-161.
[36] Sneddon, I. (1951). Fourier Transforms. McGraw-Hill, New York.
[37] Stein, E. M. and Weiss, G. (1971). Introduction to Fourier Analysis on Euclidean Spaces. Princeton University Press, Princeton.
[38] Stein, M. L. (1988). Asymptotically efficient prediction of a random field with a misspecified covariance function. Ann. Statist. 16 55-63.
[39] Stein, M. L. (1990). Bounds on the efficiency of linear predictions using an incorrect covariance function. Ann. Statist. 18 1116-1138.
[40] Stein, M. L. (1993). Spline smoothing with an estimated order parameter. Ann. Statist. 21 1522-1544.
[41] Stein, M. L. (1995). Fixed-domain asymptotics for spatial periodograms. J. Amer. Statist. Assoc. 90 1277-1288.
[42] Stein, M. L. (1999). Interpolation of Spatial Data: Some Theory for Kriging. Springer, New York.
[43] Stein, M. L. (2004). Equivalence of Gaussian measures for some nonstationary random fields. J. Statist. Plann. Inference 123 1-11.
[44] Stein, M. L., Chi, Z. and Welty, L. J. (2004). Approximating likelihoods for large spatial data sets. J. Roy. Statist. Soc. Ser. B 66 275-296.
[45] van der Vaart, A. (1996). Maximum likelihood estimation under a spatial sampling scheme. Ann. Statist. 24 2049-2057.
[46] Wendland, H. (1995). Piecewise polynomial, positive definite and compactly supported radial functions of minimal degree. Adv. Comput. Math. 389-396.
[47] Wendland, H. (1998). Error estimates for interpolation by compactly supported radial basis functions of minimal degree. J. Approx. Theory 93 258-272.
[48] Wood, A. T. A. and Chan, G. (1994). Simulation of stationary Gaussian processes in [0, 1]^d. J. Comput. Graph. Statist. 409-432.
[49] Wu, Z. M. (1995). Compactly supported positive definite radial functions. Adv. Comput. Math. 283-292.
[50] Yadrenko, M. I. (1983). Spectral Theory of Random Fields. Optimization Software, New York.
[51] Ying, Z. (1991). Asymptotic properties of a maximum likelihood estimator with data from a Gaussian process. J. Multivar. Anal. 36 280-296.
[52] Ying, Z. (1993). Maximum likelihood estimation of parameters under a spatial sampling scheme. Ann. Statist. 21 1567-1590.
[53] Zhang, H. (2004). Inconsistent estimation and asymptotically equal interpolations in model-based geostatistics. J. Amer. Statist. Assoc. 99 250-261.
[54] Zolotarev, V. M. (1986). One-Dimensional Stable Distributions. American Mathematical Society, Providence.
[55] Zurbenko, I. G. (1986). The Spectral Analysis of Time Series. North-Holland, New York.

[...] ... in spatial statistics: increasing domain asymptotics and fixed domain asymptotics, also called infill asymptotics (Cressie, 1993). In increasing domain asymptotics, the distance between neighboring observations is bounded away from zero, so that the observation region grows as the number of observations increases. In fixed domain asymptotics, the number of observations increases in a given fixed and bounded domain ... regularity conditions, under increasing domain asymptotics (Mardia and Marshall, 1984). We are interested in processes on a given fixed region of ℝ^d and focus on fixed domain asymptotics. In the next two sections, we describe some asymptotic properties under fixed domain asymptotics for two classes of Gaussian processes with the following covariance functions: the Matérn class and the powered exponential class ...
... parameters and K_ν is the modified Bessel function of order ν [see Andrews et al. (1999), p. 223]. The larger ν is, the smoother X is. In particular, X will be m times mean square differentiable if and only if ν > m. Results for parameter estimation under fixed domain asymptotics are difficult to derive in general, and little work has been done in this area. Stein (1999) strongly recommended using Gaussian random field ... of the smoothness parameter for a class of periodic Gaussian processes in one dimension. Constantine and Hall (1994) studied the estimation of the smoothness parameter in one dimension under a sort of mixture of increasing domain and fixed domain asymptotics. However, the estimation of the smoothness parameter is not the subject of this study.

CHAPTER 2
Isotropic Covariance Function

2.1 Introduction

Let X : ℝ ... that the process is infinitely differentiable in the mean square sense. For γ = 1 and d = 1, the process, known as the Ornstein-Uhlenbeck process, has the Markovian property that for t > s, X(t) − e^{−θ|t−s|} X(s) is independent of X(u), u ≤ s. Another interesting fact about X(t) is that if σ²θ = σ̃²θ̃, the induced Gaussian measures with (σ², θ) and (σ̃², θ̃) are equivalent [cf. Ibragimov and Rozanov (1978)] ... allowed to shrink as n → ∞. The conditions in Kaufman et al. (2008) are difficult to check in practice. In Chapter 2, conditions will be established under which the tapered MLE is strongly consistent and asymptotically normal for 1 ≤ d ≤ 3. The second objective of this thesis is to investigate the asymptotic properties of a Gaussian random field with multiplicative covariance function under fixed domain asymptotics ...
... For functions a(x) and b(x), a(x) ≍ b(x) means that there exist constants 0 < C₁ < C₂ < ∞ such that C₁|b(x)| ≤ |a(x)| ≤ C₂|b(x)| for all possible x. →^p denotes convergence in probability, →^{a.s.} convergence almost surely, and →^d convergence in distribution.

CHAPTER 1
Introduction

Computer modeling is having a profound effect on scientific research. In deterministic computer experiments, unlike physical experiments, no random error ... of models. He investigated the performance of maximum likelihood estimators for the parameters of a periodic version of the Matérn model, with the hope that the large sample results for this periodic model would be similar to those for non-periodic Matérn-type Gaussian random fields under fixed domain asymptotics. Zhang (2004) proved some important results about the isotropic Matérn class. He first pointed ... quadratic variation will give a consistent estimator of this product for d = 1 using the observations on a grid. For d > 1, spectral analysis may provide another consistent estimator of the product of the variance and all the scale parameters. These consistent estimators then provide consistent estimators of the variance and all the scale parameters for d > 1. This thesis mainly concerns the consistent estimators of unknown ... realization of a spatial process. In this regard, Sacks, Welch, Mitchell and Wynn (1989) modeled Y(t) as a realization of a Gaussian spatial process (random field) that includes a regression model, called the Kriging model,

    Y(t) = X(t) + ∑_{j=1}^p β_j f_j(t),

where the f_j(t) are known functions, the β_j are unknown coefficients, and X(t) is assumed to be a Gaussian random field with mean zero and covariance Cov(X(t), ...

Posted: 11/09/2015, 10:01



