A Course in Mathematical Statistics, Part 5

Since, as ε → 0, F_X[z(c ± ε)] → F_X(zc), we have

  lim sup_{n→∞} P(X_n/Y_n ≤ z) ≤ F_X(zc), z ∈ ℝ.  (6)

Next,

  [X_n ≤ z(c ± ε)] = {[X_n ≤ z(c ± ε)] ∩ (|Y_n − c| ≥ ε)} ∪ {[X_n ≤ z(c ± ε)] ∩ (|Y_n − c| < ε)}
                  ⊆ (|Y_n − c| ≥ ε) ∪ {[X_n ≤ z(c ± ε)] ∩ (|Y_n − c| < ε)}.

By choosing ε < c, we have that |Y_n − c| < ε is equivalent to 0 < c − ε < Y_n < c + ε, and hence

  [X_n ≤ z(c − ε)] ∩ (|Y_n − c| < ε) ⊆ (X_n/Y_n ≤ z), if z ≥ 0,

and

  [X_n ≤ z(c + ε)] ∩ (|Y_n − c| < ε) ⊆ (X_n/Y_n ≤ z), if z < 0.

That is, for every z ∈ ℝ,

  [X_n ≤ z(c ± ε)] ∩ (|Y_n − c| < ε) ⊆ (X_n/Y_n ≤ z),

and hence

  [X_n ≤ z(c ± ε)] ⊆ (|Y_n − c| ≥ ε) ∪ (X_n/Y_n ≤ z), z ∈ ℝ.

Thus

  P[X_n ≤ z(c ± ε)] ≤ P(|Y_n − c| ≥ ε) + P(X_n/Y_n ≤ z).

Letting n → ∞ and taking into consideration the fact that P(|Y_n − c| ≥ ε) → 0 and P[X_n ≤ z(c ± ε)] → F_X[z(c ± ε)], we obtain

  F_X[z(c ± ε)] ≤ lim inf_{n→∞} P(X_n/Y_n ≤ z), z ∈ ℝ.

Since, as ε → 0, F_X[z(c ± ε)] → F_X(zc), we have

  F_X(zc) ≤ lim inf_{n→∞} P(X_n/Y_n ≤ z), z ∈ ℝ.  (7)

Relations (6) and (7) imply that lim_{n→∞} P(X_n/Y_n ≤ z) exists and is equal to

  F_X(zc) = P(X ≤ zc) = P(X/c ≤ z) = F_{X/c}(z).

Thus

  P(X_n/Y_n ≤ z) → F_{X/c}(z), z ∈ ℝ, as n → ∞,

as was to be seen. ▲

REMARK 12  Theorem 8 is known as Slutsky's theorem.

Now, if X_j, j = 1, . . . , n, are i.i.d. r.v.'s, we have seen that the sample variance

  S_n² = (1/n) Σ_{j=1}^n (X_j − X̄_n)² = (1/n) Σ_{j=1}^n X_j² − X̄_n².

Next, the r.v.'s X_j², j = 1, . . . , n, are i.i.d., since the X's are, and

  E(X_j²) = σ²(X_j) + [E(X_j)]² = σ² + μ², if E(X_j) = μ, σ²(X_j) = σ²

(which are assumed to exist). Therefore the SLLN and WLLN give the result that

  (1/n) Σ_{j=1}^n X_j² → σ² + μ² as n → ∞, a.s. and also in probability.

On the other hand, X̄_n → μ as n → ∞, a.s. and also in probability, and hence X̄_n² → μ² a.s. and also in probability (by Theorems 7(i) and 7′(i)). Thus

  (1/n) Σ_{j=1}^n X_j² − X̄_n² → σ² + μ² − μ² = σ² a.s.
and also in probability (by the same theorems just referred to). So we have proved the following theorem.

THEOREM 9  Let X_j, j = 1, . . . , n, be i.i.d. r.v.'s with E(X_j) = μ, σ²(X_j) = σ², j = 1, . . . , n. Then S_n² → σ² as n → ∞, a.s. and also in probability.

REMARK 13  Of course, S_n² →_P σ² implies [n/(n − 1)] S_n² →_P σ² as n → ∞, since n/(n − 1) → 1 as n → ∞.

COROLLARY TO THEOREM 8  If X_1, . . . , X_n are i.i.d. r.v.'s with mean μ and (positive) variance σ², then

  √(n − 1) (X̄_n − μ)/S_n →_d N(0, 1), and also √n (X̄_n − μ)/S_n →_d N(0, 1), as n → ∞.

PROOF  In fact,

  √n (X̄_n − μ)/σ →_d N(0, 1) as n → ∞,

by Theorem 3, and

  √[n/(n − 1)] S_n/σ →_P 1 as n → ∞,

by Remark 13. Hence the quotient of these r.v.'s, which is √(n − 1)(X̄_n − μ)/S_n, converges in distribution to N(0, 1) as n → ∞, by Theorem 8 (Slutsky's theorem). ▲

The following result is based on theorems established above, and it is of significant importance.

THEOREM 10  For n = 1, 2, . . . , let X_n and X be r.v.'s, let g: ℝ → ℝ be differentiable, and let its derivative g′(x) be continuous at a point d. Finally, let c_n be constants such that 0 ≠ c_n → ∞, and let c_n(X_n − d) →_d X as n → ∞. Then c_n[g(X_n) − g(d)] →_d g′(d)X as n → ∞.

PROOF  In this proof, all limits are taken as n → ∞. By assumption, c_n(X_n − d) →_d X and c_n⁻¹ → 0. Then, by Theorem 8(ii), X_n − d →_d 0, or equivalently, X_n − d →_P 0, and hence, by Theorem 7′(i),

  |X_n − d| →_P 0.  (8)

Next, expand g(X_n) around d according to Taylor's formula in order to obtain

  g(X_n) = g(d) + (X_n − d) g′(X_n*),

where X_n* is an r.v. lying between d and X_n. Hence

  c_n[g(X_n) − g(d)] = c_n(X_n − d) g′(X_n*).  (9)

However, |X_n* − d| ≤ |X_n − d| →_P 0 by (8), so that X_n* →_P d, and therefore, by Theorem 7′(i) again,

  g(X_n*) →_P g(d).  (10)

By assumption, convergence (10) and Theorem 8(ii), we have c_n(X_n − d) g′(X_n*) →_d g′(d)X. This result and relation (9) complete the proof of the theorem. ▲

COROLLARY  Let the r.v.'s X_1, . . . , X_n be i.i.d.
with mean μ ∈ ℝ and variance σ² ∈ (0, ∞), and let g: ℝ → ℝ be differentiable with derivative continuous at μ. Then, as n → ∞,

  √n [g(X̄_n) − g(μ)] →_d N(0, [g′(μ)]² σ²).

PROOF  By the CLT, √n (X̄_n − μ) →_d X ~ N(0, σ²), so that Theorem 10 applies and gives

  √n [g(X̄_n) − g(μ)] →_d g′(μ)X ~ N(0, [g′(μ)]² σ²). ▲

APPLICATION  If the r.v.'s X_j, j = 1, . . . , n, in the corollary are distributed as B(1, p), then, as n → ∞,

  √n [X̄_n(1 − X̄_n) − pq] →_d N(0, (1 − 2p)² pq).

Here μ = p, σ² = pq, and g(x) = x(1 − x), so that g′(x) = 1 − 2x. The result follows.

Exercises

8.5.1  Use Theorem 8(ii) in order to show that if the CLT holds, then so does the WLLN.

8.5.2  Refer to the proof of Theorem 7′(i) and show that on the set A_1 ∩ A_2(n), we actually have −2M < X < 2M.

8.5.3  Carry out the proof of Theorem 7′(ii). (Use the usual Euclidean distance in ℝ^k.)

8.6* Pólya's Lemma and Alternative Proof of the WLLN

The following lemma is an analytical result of interest in its own right. It was used in the corollary to Theorem 3 to conclude uniform convergence.

LEMMA 1 (Pólya)  Let F and {F_n} be d.f.'s such that F_n(x) → F(x) as n → ∞, x ∈ ℝ, and let F be continuous. Then the convergence is uniform in x ∈ ℝ. That is, for every ε > 0 there exists N(ε) > 0 such that n ≥ N(ε) implies that |F_n(x) − F(x)| < ε for every x ∈ ℝ.

PROOF  Since F(x) → 0 as x → −∞, and F(x) → 1 as x → ∞, there exists an interval [α, β] such that

  F(α) < ε/2, F(β) > 1 − ε/2.  (11)

The continuity of F implies its uniform continuity in [α, β]. Then there is a finite partition α = x_1 < x_2 < · · · < x_r = β of [α, β] such that

  F(x_{j+1}) − F(x_j) < ε/2, j = 1, . . . , r − 1.  (12)

Next, F_n(x_j) → F(x_j) as n → ∞ implies that there exists N_j(ε) > 0 such that for all n ≥ N_j(ε),

  |F_n(x_j) − F(x_j)| < ε/2, j = 1, . . . , r.
By taking

  n ≥ N(ε) = max{N_1(ε), . . . , N_r(ε)},

we then have that

  |F_n(x_j) − F(x_j)| < ε/2, j = 1, . . . , r.  (13)

Let x_0 = −∞, x_{r+1} = ∞. Then, by the fact that F(−∞) = 0 and F(∞) = 1, relation (11) implies that

  F(x_1) − F(x_0) < ε/2, F(x_{r+1}) − F(x_r) < ε/2.  (14)

Thus, by means of (12) and (14), we have that

  F(x_{j+1}) − F(x_j) < ε/2, j = 0, 1, . . . , r.  (15)

Also (13) trivially holds for j = 0 and j = r + 1; that is, we have

  |F_n(x_j) − F(x_j)| < ε/2, j = 0, 1, . . . , r + 1.  (16)

Next, let x be any real number. Then x_j ≤ x < x_{j+1} for some j = 0, 1, . . . , r. By (15) and (16) and for n ≥ N(ε), we have the following string of inequalities:

  F(x) − ε ≤ F(x_{j+1}) − ε < F(x_j) − ε/2 < F_n(x_j) ≤ F_n(x) ≤ F_n(x_{j+1}) < F(x_{j+1}) + ε/2 < F(x_j) + ε ≤ F(x) + ε.

Hence F(x) − ε < F_n(x) < F(x) + ε, and therefore |F_n(x) − F(x)| < ε. Thus for n ≥ N(ε), we have

  |F_n(x) − F(x)| < ε for every x ∈ ℝ.  (17)

Relation (17) concludes the proof of the lemma. ▲

Below, a proof of the WLLN (Theorem 5) is presented without using ch.f.'s. The basic idea is that of suitably truncating the r.v.'s involved, and is due to Khintchine; it was also used by Markov.

ALTERNATIVE PROOF OF THEOREM 5  We proceed as follows: For any δ > 0, we define

  Y_j = Y_j(n) = X_j if |X_j| ≤ δn, and Y_j = 0 if |X_j| > δn,

and

  Z_j = Z_j(n) = 0 if |X_j| ≤ δn, and Z_j = X_j if |X_j| > δn, j = 1, . . . , n.

Then, clearly, X_j = Y_j + Z_j, j = 1, . . . , n. Let us restrict ourselves to the continuous case, and let f be the (common) p.d.f. of the X's. Then

  σ²(Y_j) = σ²(Y_1) = E(Y_1²) − [E(Y_1)]² ≤ E(Y_1²) = E[X_1² I_{(|X_1| ≤ δn)}(X_1)]
    = ∫_{−∞}^{∞} x² I_{(|x| ≤ δn)}(x) f(x) dx = ∫_{−δn}^{δn} x² f(x) dx
    ≤ δn ∫_{−δn}^{δn} |x| f(x) dx ≤ δn ∫_{−∞}^{∞} |x| f(x) dx = δn E|X_1|;

that is,

  σ²(Y_j) ≤ δn E|X_1|.  (18)

Next,

  E(Y_j) = E(Y_1) = E[X_1 I_{(|X_1| ≤ δn)}(X_1)] = ∫_{−∞}^{∞} x I_{(|x| ≤ δn)}(x) f(x) dx.
Now,

  |x I_{(|x| ≤ δn)}(x) f(x)| ≤ |x| f(x),  x I_{(|x| ≤ δn)}(x) f(x) → x f(x) as n → ∞,

and ∫_{−∞}^{∞} |x| f(x) dx < ∞. Therefore

  ∫_{−∞}^{∞} x I_{(|x| ≤ δn)}(x) f(x) dx → ∫_{−∞}^{∞} x f(x) dx = μ as n → ∞,

by Lemma C of Chapter 6; that is,

  E(Y_j) → μ as n → ∞.  (19)

Next, by Tchebichev's inequality,

  P[|(1/n) Σ_{j=1}^n Y_j − E(Y_1)| ≥ ε] = P[|Σ_{j=1}^n (Y_j − E Y_j)| ≥ nε]
    ≤ (1/(n²ε²)) Σ_{j=1}^n σ²(Y_j) ≤ (n · δn E|X_1|)/(n²ε²) = δ E|X_1|/ε²

by (18); that is,

  P[|(1/n) Σ_{j=1}^n Y_j − E(Y_1)| ≥ ε] ≤ δ E|X_1|/ε².  (20)

Thus,

  P[|(1/n) Σ_{j=1}^n Y_j − μ| ≥ 2ε]
    = P[|(1/n) Σ_{j=1}^n Y_j − E(Y_1) + E(Y_1) − μ| ≥ 2ε]
    ≤ P[|(1/n) Σ_{j=1}^n Y_j − E(Y_1)| ≥ ε] + P[|E(Y_1) − μ| ≥ ε]
    ≤ δ E|X_1|/ε²

for n sufficiently large, by (19) and (20); that is,

  P[|(1/n) Σ_{j=1}^n Y_j − μ| ≥ 2ε] ≤ δ E|X_1|/ε²  (21)

for n large enough. Next,

  P(Z_j ≠ 0) = P(Z_1 ≠ 0) = P(|X_1| > δn) = ∫_{(|x| > δn)} f(x) dx
    = ∫_{(|x| > δn)} (|x|/|x|) f(x) dx ≤ (1/(δn)) ∫_{(|x| > δn)} |x| f(x) dx < δ²/(δn) = δ/n,

since ∫_{(|x| > δn)} |x| f(x) dx < δ² for n sufficiently large. So P(Z_j ≠ 0) ≤ δ/n, and hence

  P[Σ_{j=1}^n Z_j ≠ 0] ≤ n P(Z_1 ≠ 0) ≤ δ  (22)

for n sufficiently large. Thus,

  P[|(1/n) Σ_{j=1}^n X_j − μ| ≥ 4ε]
    = P[|(1/n) Σ_{j=1}^n Y_j + (1/n) Σ_{j=1}^n Z_j − μ| ≥ 4ε]
    ≤ P[|(1/n) Σ_{j=1}^n Y_j − μ| ≥ 2ε] + P[|(1/n) Σ_{j=1}^n Z_j| ≥ 2ε]
    ≤ P[|(1/n) Σ_{j=1}^n Y_j − μ| ≥ 2ε] + P[Σ_{j=1}^n Z_j ≠ 0]
    ≤ δ E|X_1|/ε² + δ

for n sufficiently large, by (21) and (22). Replacing δ by ε³, for example, we get

  P[|(1/n) Σ_{j=1}^n X_j − μ| ≥ 4ε] ≤ ε E|X_1| + ε³

for n sufficiently large. Since this is true for every ε > 0, the result follows. ▲
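The force of this truncation proof is that it needs only E|X_1| < ∞, not a finite variance, so the WLLN covers heavy-tailed laws where Tchebichev's inequality alone is useless. A small simulation can sketch this; the Pareto example, seed, and sample sizes below are my own illustrative choices, not from the text.

```python
import random

random.seed(2)

# Pareto on [1, oo) with tail index alpha = 1.5: the mean alpha/(alpha - 1) = 3
# is finite, but the variance is infinite, so only the Khintchine WLLN applies.
alpha = 1.5
true_mean = alpha / (alpha - 1.0)   # = 3

def pareto_draw():
    # inverse-transform sampling from F(x) = 1 - x**(-alpha), x >= 1
    u = 1.0 - random.random()       # u lies in (0, 1]
    return u ** (-1.0 / alpha)

for n in (10**3, 10**4, 10**5):
    xbar = sum(pareto_draw() for _ in range(n)) / n
    print(n, round(xbar, 3))        # sample mean drifts toward 3 as n grows
```

Because the variance is infinite, the fluctuations of X̄_n die out at the slow rate n^{1/α − 1} rather than n^{−1/2}, which is visible in how reluctantly the printed values settle.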
This section is concluded with a result relating convergence in probability and a.s. convergence. More precisely, in Remark 3, it was stated that X_n →_P X as n → ∞ does not necessarily imply that X_n → X a.s. However, the following is always true.

THEOREM 11  If X_n →_P X as n → ∞, then there is a subsequence {n_k} of {n} (that is, n_k ↑ ∞ as k → ∞) such that X_{n_k} → X a.s.

PROOF  Omitted.

As an application of Theorem 11, refer to Example 2 and consider the subsequence of r.v.'s {X_{2^{k−1}}}, where

  X_{2^{k−1}} = I_{(0, 1/2^{k−1}]}.

Then for ε > 0 and large enough k, so that 1/2^{k−1} < ε, we have

  P(|X_{2^{k−1}}| > ε) = P(X_{2^{k−1}} = 1) = 1/2^{k−1} < ε.

Hence the subsequence {X_{2^{k−1}}} of {X_n} converges to 0 in probability.

Exercises

8.6.1  Use Theorem 11 in order to prove Theorem 7′(i).

8.6.2  Do likewise in order to establish part (ii) of Theorem 7′.

Chapter 9  Transformations of Random Variables and Random Vectors

9.1 The Univariate Case

The problem we are concerned with in this section, in its simplest form, is the following: Let X be an r.v. and let h be a (measurable) function on ℝ into ℝ, so that Y = h(X) is an r.v. Given the distribution of X, we want to determine the distribution of Y. Let P_X, P_Y be the distributions of X and Y, respectively. That is, P_X(B) = P(X ∈ B), P_Y(B) = P(Y ∈ B), B a (Borel) subset of ℝ. Now (Y ∈ B) = [h(X) ∈ B] = (X ∈ A), where A = h⁻¹(B) = {x ∈ ℝ; h(x) ∈ B}. Therefore P_Y(B) = P(Y ∈ B) = P(X ∈ A) = P_X(A). Thus we have the following theorem.

THEOREM 1  Let X be an r.v. and let h: ℝ → ℝ be a (measurable) function, so that Y = h(X) is an r.v. Then the distribution P_Y of the r.v. Y is determined by the distribution P_X of the r.v. X as follows: for any (Borel) subset B of ℝ, P_Y(B) = P_X(A), where A = h⁻¹(B).

9.1.1 Application 1: Transformations of Discrete Random Variables

Let X be a discrete r.v. taking the values x_j, j = 1, 2, . . .
, and let Y = h(X). Then Y is also a discrete r.v. taking the values y_j, j = 1, 2, . . . . We wish to determine f_Y(y_j) = P(Y = y_j), j = 1, 2, . . . . By taking B = {y_j}, we have

  A = {x_i; h(x_i) = y_j},

and hence

  f_Y(y_j) = P(Y = y_j) = P_Y({y_j}) = P_X(A) = Σ_{x_i ∈ A} f_X(x_i).

[...] T, and set A = h⁻¹(B). Then A is an interval in S and

  P(Y ∈ B) = P[h(X) ∈ B] = P(X ∈ A) = ∫_A f_X(x) dx.

Under the assumptions made, the theory of changing the variable in the integral on the right-hand side above applies (see, for example, T. M. Apostol, Mathematical Analysis, Addison-Wesley, 1957, pp. 216 and 270–271) and [...]

Then show that:

  F_{r_1, r_2; α} → (1/r_1) χ²_{r_1; α} as r_2 → ∞.

[...]

9.3 Linear Transformations of Random Vectors

In this section we will restrict ourselves to a special and important class of transformations, the linear transformations. We first introduce some needed notation and terminology.

9.3.1 Preliminaries

A transformation h: ℝ^k → ℝ^k which transforms the variables x_1, [...] Then, as is known from linear algebra (see also Appendix 1), Δ* = 1/Δ. If, furthermore, the linear transformation above is such that the column vectors (c_{1j}, c_{2j}, . . . , c_{kj})′, j = 1, . . . , k, are orthogonal, that is,

  Σ_{i=1}^k c_{ij} c_{ij′} = 0 for j ≠ j′, and Σ_{i=1}^k c_{ij}² = 1, j = 1, . . . , k,  (3)

then the linear transformation is called orthogonal. [...]

  f_Y(y) = [1/(√(2π) |a| σ)] exp{−[y − (aμ + b)]²/(2a²σ²)},

which is the p.d.f. of a normally distributed r.v. with mean aμ + b and variance a²σ². Thus, if X is N(μ, σ²), then aX + b is N(aμ + b, a²σ²). Now it may happen that the transformation h satisfies all the requirements of Theorem 2 except that it is not one-to-one from S onto T. Instead, the following might happen: There is a (finite) partition of S, [...]
−1 + √(y + 4), and

  P_Y(B) = P(Y = y) = P_X(A) = e^{−λ} λ^{−1+√(y+4)} / (−1 + √(y + 4))!.

For example, for y = 12, we have P(Y = 12) = e^{−λ} λ³/3!. [...]

It is a fact, proved in advanced probability courses, that the distribution P_X of an r.v. X is uniquely determined by its d.f. F_X. The same is true for r. vectors. (A first indication that such a result is feasible is provided by Lemma 3 in Chapter 7.) Thus, in determining [...]

in the case of orthogonality, we also have

  Σ_{j=1}^k X_j² = Σ_{j=1}^k Y_j².

The following theorem is an application of Theorem 4 to the normal case.

THEOREM 5  Let the r.v.'s X_i be N(μ_i, σ²), i = 1, . . . , k, and independent. Consider the orthogonal transformation

  Y_i = Σ_{j=1}^k c_{ij} X_j, i = 1, . . . , k.

Then the r.v.'s Y_1, . . . , Y_k are also [...]

the variables y_1, . . . , y_k in the following manner:

  y_i = Σ_{j=1}^k c_{ij} x_j, c_{ij} real constants, i, j = 1, 2, . . . , k,  (1)

is called a linear transformation. Let C be the k × k matrix whose elements are c_{ij}; that is, C = (c_{ij}), and let Δ = |C| be the determinant of C. If Δ ≠ 0, we can uniquely solve for the x's in (1) and get

  x_i = Σ_{j=1}^k d_{ij} y_j, d_{ij} real constants, i, j = 1, . . . , k.  (2)

Let D = (d_{ij}) and [...]

distributed as the r.v. Y. Use the ch.f. approach to determine the p.d.f. of −Σ_{j=1}^n Y_j. [...]

9.1.6  If the r.v. X is distributed as U(−π/2, π/2), show that the r.v. Y = tan X is distributed as Cauchy. Also find the distribution of the r.v. Z = sin X.

9.1.7  If the r.v. X has the Gamma distribution with parameters α, β, and Y = 2X/β, show that Y ~ χ²_{2α}, provided 2α is an integer.
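Exercise 9.1.7 is easy to sanity-check by simulation: if X ~ Gamma(α, β), then Y = 2X/β should match a chi-square with r = 2α degrees of freedom, whose mean is r and variance 2r. The parameter values, seed, and moment check below are my own illustrative choices.

```python
import random
import statistics

random.seed(3)

# Claim to check: X ~ Gamma(alpha, beta)  =>  Y = 2X/beta ~ chi-square, 2*alpha d.f.
alpha, beta = 3.0, 2.0              # hypothetical values; 2*alpha = 6 is an integer
N = 50_000

# random.gammavariate(alpha, beta) uses beta as the scale parameter
ys = [2.0 * random.gammavariate(alpha, beta) / beta for _ in range(N)]

r = 2 * alpha
print(statistics.fmean(ys))         # should be near r = 6
print(statistics.pvariance(ys))     # should be near 2r = 12
```

Matching the first two moments is of course only a necessary condition; the exercise itself asks for the exact distributional identity via the change-of-variable technique.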
orthogonality relations (3) are equivalent to orthogonality of the row vectors (c_{i1}, . . . , c_{ik})′, i = 1, . . . , k. That is,

  Σ_{j=1}^k c_{ij} c_{i′j} = 0 for i ≠ i′, and Σ_{j=1}^k c_{ij}² = 1, i = 1, . . . , k.  (4)

It is known from linear algebra that |Δ| = 1 for an orthogonal transformation. Also, in the case of an orthogonal transformation, we have d_{ij} = c_{ji}, i, j = 1, . . . , k, so that [...]

determine the distribution of the r.v. Y defined above;

iii) If X is interpreted as a specified measurement taken on each item of a product made by a certain manufacturing process, and c_j, j = 1, 2, 3, are the profit (in dollars) realized by selling one item under the condition that X ∈ B_j, j = 1, 2, 3, respectively, find the expected profit from the sale of one item.

[...]

9.1.3  Let X, Y be r.v.'s representing the [...]
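The properties quoted above (|Δ| = 1, D = C′, and preservation of the sum of squares) are quick to verify numerically for a concrete orthogonal matrix. Building C from a QR decomposition is my own illustrative device, not the text's construction.

```python
import numpy as np

rng = np.random.default_rng(0)
k = 4

# A random orthogonal matrix C (so that C C' = I) via QR decomposition.
C, _ = np.linalg.qr(rng.normal(size=(k, k)))

x = rng.normal(loc=1.0, scale=2.0, size=k)     # one realization of (X_1, ..., X_k)'
y = C @ x                                      # Y_i = sum_j c_ij X_j, as in (1)

print(np.allclose(C @ C.T, np.eye(k)))         # rows orthonormal, relations (4): True
print(np.isclose(abs(np.linalg.det(C)), 1.0))  # |Delta| = 1: True
print(np.isclose(np.sum(x**2), np.sum(y**2)))  # sum of squares preserved: True
```

The last line is exactly the identity Σ X_j² = Σ Y_j² quoted in the fragment above, which underlies Theorem 5's use of orthogonal transformations in the normal case.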

Posted: 23/07/2014, 16:21
