A Course in Mathematical Statistics, Part 3
4 Distribution Functions, Probability Densities, and Their Relationship

4.1.3 Refer to Exercise 3.3.13, in Chapter 3, and determine the d.f.'s corresponding to the p.d.f.'s given there.

4.1.4 Refer to Exercise 3.3.14, in Chapter 3, and determine the d.f.'s corresponding to the p.d.f.'s given there.

4.1.5 Let X be an r.v. with d.f. F. Determine the d.f.'s of the following r.v.'s: −X, X², aX + b, XI_[a,b)(X) when:
i) X is continuous and F is strictly increasing;
ii) X is discrete.

4.1.6 Refer to the proof of Theorem 1(iv) and show that we may assume that x_n ↓ −∞ (x_n ↑ ∞) instead of x_n → −∞ (x_n → ∞).

4.1.7 Let f and F be the p.d.f. and the d.f., respectively, of an r.v. X. Then show that F is continuous, and dF(x)/dx = f(x) at the continuity points x of f.

4.1.8
i) Show that the following function F is a d.f. (Logistic distribution) and derive the corresponding p.d.f., f:

    F(x) = 1 / (1 + e^(−αx+β)),  x ∈ ℝ, α > 0, β ∈ ℝ;

ii) Show that f(x) = αF(x)[1 − F(x)].

4.1.9 Refer to Exercise 3.3.17 in Chapter 3 and determine the d.f. F corresponding to the p.d.f. f given there. Write out the expressions of F and f for n = 2 and n = 3.

4.1.10 If X is an r.v. distributed as N(3, 0.25), use Table 3 in Appendix III in order to compute the following probabilities:
i) P(X < −1);
ii) P(X > 2.5);
iii) P(−0.5 < X < 1.3).

4.1.11 The distribution of IQ's of the people in a given group is well approximated by the Normal distribution with μ = 105 and σ = 20. What proportion of the individuals in the group in question has an IQ:
i) At least 150?
ii) At most 80?
iii) Between 95 and 125?

4.1.12 A certain manufacturing process produces light bulbs whose life length (in hours) is an r.v. X distributed as N(2,000, 200²). A light bulb is supposed to be defective if its lifetime is less than 1,800. If 25 light bulbs are tested, what is the probability that at most 15 of them are defective? (Use the required independence.)
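Exercises 4.1.10 through 4.1.12 reduce to evaluating the standard normal d.f. Φ. The following sketch (not part of the original text) checks Exercise 4.1.10 numerically, using the error function in place of Table 3, whose entries are rounded; here X ~ N(3, 0.25), so σ = 0.5:

```python
from math import erf, sqrt

def Phi(z):
    # Standard normal d.f. expressed via the error function.
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

mu, sigma = 3.0, 0.5          # N(3, 0.25): variance 0.25, so sigma = 0.5

p_i   = Phi((-1.0 - mu) / sigma)                            # P(X < -1)
p_ii  = 1.0 - Phi((2.5 - mu) / sigma)                       # P(X > 2.5)
p_iii = Phi((1.3 - mu) / sigma) - Phi((-0.5 - mu) / sigma)  # P(-0.5 < X < 1.3)

print(p_i, p_ii, p_iii)
```

For instance, P(X > 2.5) = 1 − Φ(−1) = Φ(1) ≈ 0.8413, in agreement with the tables; the same standardization handles the IQ proportions of Exercise 4.1.11.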
4.1.13 A manufacturing process produces ½-inch ball bearings, which are assumed to be satisfactory if their diameter lies in the interval 0.5 ± 0.0006 and defective otherwise. A day's production is examined, and it is found that the distribution of the actual diameters of the ball bearings is approximately normal with mean μ = 0.5007 inch and σ = 0.0005 inch. Compute the proportion of defective ball bearings.

4.1.14 If X is an r.v. distributed as N(μ, σ²), find the value of c (in terms of μ and σ) for which P(X < c) = 2 − 9P(X > c).

4.1.15 Refer to the Weibull p.d.f., f, given in Exercise 3.3.19 in Chapter 3 and do the following:
i) Calculate the corresponding d.f. F and the reliability function R(x) = 1 − F(x);
ii) Also, calculate the failure (or hazard) rate H(x) = f(x)/R(x), and draw its graph for α = 1 and β = ½, 1, 2;
iii) For s and t > 0, calculate the probability P(X > s + t | X > t), where X is an r.v. having the Weibull distribution;
iv) What do the quantities F(x), R(x), H(x) and the probability in part (iii) become in the special case of the Negative Exponential distribution?

4.2 The d.f. of a Random Vector and Its Properties—Marginal and Conditional d.f.'s and p.d.f.'s

For the case of a two-dimensional r. vector, a result analogous to Theorem 1 can be established. So consider the case that k = 2. We then have X = (X₁, X₂)′, and the d.f. F (or F_X, or F_{X₁,X₂}) of X, or the joint distribution function of X₁, X₂, is F(x₁, x₂) = P(X₁ ≤ x₁, X₂ ≤ x₂). Then the following theorem holds true.

THEOREM 4 With the above notation we have:
i) 0 ≤ F(x₁, x₂) ≤ 1, x₁, x₂ ∈ ℝ.
ii) The variation of F over rectangles with sides parallel to the axes, given in Fig. 4.2, is ≥ 0.
iii) F is continuous from the right with respect to each of the coordinates x₁, x₂, or both of them jointly.
iv) If both x₁, x₂ → ∞, then F(x₁, x₂) → 1, and if at least one of x₁, x₂ → −∞, then F(x₁, x₂) → 0. We express this by writing F(∞, ∞) = 1, F(−∞, x₂) = F(x₁, −∞) = F(−∞, −∞) = 0, where −∞ < x₁, x₂ < ∞.

[Figure 4.2: a rectangle with corners (x₁, y₁), (x₂, y₁), (x₁, y₂), (x₂, y₂). The variation V of F over the rectangle is V = F(x₁, y₁) + F(x₂, y₂) − F(x₁, y₂) − F(x₂, y₁).]

PROOF
i) Obvious.
ii) V = P(x₁ < X₁ ≤ x₂, y₁ < X₂ ≤ y₂) and is hence, clearly, ≥ 0.
iii) Same as in Theorem 3. (If x = (x₁, x₂)′ and z_n = (x₁ₙ, x₂ₙ)′, then z_n ↓ x means x₁ₙ ↓ x₁, x₂ₙ ↓ x₂.)
iv) If x₁, x₂ ↑ ∞, then (−∞, x₁] × (−∞, x₂] ↑ ℝ², so that F(x₁, x₂) → P(S) = 1. If at least one of x₁, x₂ goes (↓) to −∞, then (−∞, x₁] × (−∞, x₂] ↓ ∅, hence F(x₁, x₂) → P(∅) = 0. ▪

REMARK 3 The function F(x₁, ∞) = F₁(x₁) is the d.f. of the random variable X₁. In fact,

    F(x₁, ∞) = lim_{x₂↑∞} P(X₁ ≤ x₁, X₂ ≤ x₂) = P(X₁ ≤ x₁, −∞ < X₂ < ∞) = P(X₁ ≤ x₁) = F₁(x₁).

Similarly F(∞, x₂) = F₂(x₂) is the d.f. of the random variable X₂. F₁, F₂ are called marginal d.f.'s.

REMARK 4 It should be pointed out here that results like those discussed in parts (i)–(iv) of Remark 1 still hold true here (appropriately interpreted). In particular, part (iv) says that F(x₁, x₂) has second order partial derivatives and

    ∂²F(x₁, x₂)/∂x₁∂x₂ = f(x₁, x₂)

at continuity points of f. For k > 2, we have a theorem strictly analogous to Theorems 3 and 6 and also remarks such as Remark 1(i)–(iv) following Theorem 3.
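Theorem 4(ii) and Remark 4 can be illustrated numerically. The joint d.f. below is an assumed example, not from the text: for two independent Exponential(1) r.v.'s, F(x, y) = (1 − e^(−x))(1 − e^(−y)) with f(x, y) = e^(−x−y), the variation V over a rectangle equals the probability of the corresponding box, and a mixed second difference quotient of F recovers f:

```python
from math import exp

# Assumed joint d.f. of two independent Exponential(1) r.v.'s.
def F(x, y):
    return (1.0 - exp(-x)) * (1.0 - exp(-y))

def f(x, y):
    return exp(-x - y)

# Variation of F over the rectangle (x1, x2] x (y1, y2], as in Fig. 4.2.
def V(x1, x2, y1, y2):
    return F(x1, y1) + F(x2, y2) - F(x1, y2) - F(x2, y1)

x1, x2, y1, y2 = 0.2, 1.0, 0.5, 2.0
# P(x1 < X <= x2, y1 < Y <= y2) computed directly from independence:
box = (exp(-x1) - exp(-x2)) * (exp(-y1) - exp(-y2))
print(V(x1, x2, y1, y2), box)

# Remark 4: the mixed second difference quotient of F approximates f.
h = 1e-4
mixed = (F(1 + h, 2 + h) - F(1, 2 + h) - F(1 + h, 2) + F(1, 2)) / h**2
print(mixed, f(1, 2))
```

The agreement of `V` with `box` is exact here because V is precisely the probability assigned to the half-open rectangle.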
In particular, the analog of (iv) says that F(x₁, . . . , x_k) has kth order partial derivatives and

    ∂^k F(x₁, . . . , x_k)/∂x₁∂x₂ ⋯ ∂x_k = f(x₁, . . . , x_k)

at continuity points of f, where F, or F_X, or F_{X₁,...,X_k}, is the d.f. of X, or the joint distribution function of X₁, . . . , X_k. As in the two-dimensional case,

    F(∞, . . . , ∞, x_j, ∞, . . . , ∞) = F_j(x_j)

is the d.f. of the random variable X_j, and if m of the x_j's are replaced by ∞ (1 < m < k), then the resulting function is the joint d.f. of the random variables corresponding to the remaining (k − m) X_j's. All these d.f.'s are called marginal distribution functions.

In Statement 2, we have seen that if X = (X₁, . . . , X_k)′ is an r. vector, then X_j, j = 1, 2, . . . , k are r.v.'s and vice versa. Then the p.d.f. of X, f(x) = f(x₁, . . . , x_k), is also called the joint p.d.f. of the r.v.'s X₁, . . . , X_k. Consider first the case k = 2; that is, X = (X₁, X₂)′, f(x) = f(x₁, x₂), and set

    f₁(x₁) = Σ_{x₂} f(x₁, x₂)  or  ∫_{−∞}^{∞} f(x₁, x₂) dx₂,
    f₂(x₂) = Σ_{x₁} f(x₁, x₂)  or  ∫_{−∞}^{∞} f(x₁, x₂) dx₁,

according as the r.v.'s are discrete or continuous. Then f₁, f₂ are p.d.f.'s. In fact, f₁(x₁) ≥ 0 and

    Σ_{x₁} f₁(x₁) = Σ_{x₁} Σ_{x₂} f(x₁, x₂) = 1,  or  ∫_{−∞}^{∞} f₁(x₁) dx₁ = ∫_{−∞}^{∞} ∫_{−∞}^{∞} f(x₁, x₂) dx₂ dx₁ = 1.

Similarly we get the result for f₂. Furthermore, f₁ is the p.d.f. of X₁, and f₂ is the p.d.f. of X₂. In fact,

    P(X₁ ∈ B) = Σ_{x₁∈B} Σ_{x₂} f(x₁, x₂) = Σ_{x₁∈B} f₁(x₁),  or
    P(X₁ ∈ B) = ∫_B ∫_{−∞}^{∞} f(x₁, x₂) dx₂ dx₁ = ∫_B f₁(x₁) dx₁.

Similarly f₂ is the p.d.f. of the r.v. X₂. We call f₁, f₂ the marginal p.d.f.'s. Now suppose f₁(x₁) > 0. Then define f(x₂|x₁) as follows:

    f(x₂|x₁) = f(x₁, x₂)/f₁(x₁).
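To make the marginalization concrete, here is a small numerical sketch; the joint table below is invented for illustration and is not from the text. Summing the joint p.d.f. over x₂ gives f₁, summing over x₁ gives f₂, and each marginal sums to 1:

```python
# Hypothetical joint p.d.f. of (X1, X2) on {0,1} x {0,1,2}, given as a table.
joint = {
    (0, 0): 0.10, (0, 1): 0.20, (0, 2): 0.10,
    (1, 0): 0.25, (1, 1): 0.15, (1, 2): 0.20,
}

def marginal(joint, axis):
    """Sum the joint p.d.f. over the coordinate that is being discarded."""
    m = {}
    for (x1, x2), p in joint.items():
        key = x1 if axis == 0 else x2
        m[key] = m.get(key, 0.0) + p
    return m

f1 = marginal(joint, 0)   # f1(x1) = sum over x2 of f(x1, x2)
f2 = marginal(joint, 1)   # f2(x2) = sum over x1 of f(x1, x2)
print(f1, f2)
```

Here f₁(0) = 0.10 + 0.20 + 0.10 = 0.40, exactly the row sum of the table, and both marginals total 1 because the joint p.d.f. does.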
This is considered as a function of x₂, x₁ being an arbitrary, but fixed, value of X₁ (with f₁(x₁) > 0). Then f(·|x₁) is a p.d.f. In fact, f(x₂|x₁) ≥ 0 and

    Σ_{x₂} f(x₂|x₁) = (1/f₁(x₁)) Σ_{x₂} f(x₁, x₂) = f₁(x₁)/f₁(x₁) = 1,  or
    ∫_{−∞}^{∞} f(x₂|x₁) dx₂ = (1/f₁(x₁)) ∫_{−∞}^{∞} f(x₁, x₂) dx₂ = f₁(x₁)/f₁(x₁) = 1.

In a similar fashion, if f₂(x₂) > 0, we define f(x₁|x₂) by:

    f(x₁|x₂) = f(x₁, x₂)/f₂(x₂)

and show that f(·|x₂) is a p.d.f. Furthermore, if X₁, X₂ are both discrete, then f(x₂|x₁) has the following interpretation:

    f(x₂|x₁) = f(x₁, x₂)/f₁(x₁) = P(X₁ = x₁, X₂ = x₂)/P(X₁ = x₁) = P(X₂ = x₂|X₁ = x₁).

Hence P(X₂ ∈ B|X₁ = x₁) = Σ_{x₂∈B} f(x₂|x₁). For this reason, we call f(·|x₁) the conditional p.d.f. of X₂, given that X₁ = x₁ (provided f₁(x₁) > 0). For a similar reason, we call f(·|x₂) the conditional p.d.f. of X₁, given that X₂ = x₂ (provided f₂(x₂) > 0).

For the case that the p.d.f.'s f and f₂ are of the continuous type, the conditional p.d.f. f(x₁|x₂) may be given an interpretation similar to the one given above. By assuming (without loss of generality) that h₁, h₂ > 0, one has

    (1/h₁) P(x₁ < X₁ ≤ x₁ + h₁ | x₂ < X₂ ≤ x₂ + h₂)
      = [P(x₁ < X₁ ≤ x₁ + h₁, x₂ < X₂ ≤ x₂ + h₂)/(h₁h₂)] / [(1/h₂) P(x₂ < X₂ ≤ x₂ + h₂)]
      = {[F(x₁ + h₁, x₂ + h₂) − F(x₁, x₂ + h₂) − F(x₁ + h₁, x₂) + F(x₁, x₂)]/(h₁h₂)} / {[F₂(x₂ + h₂) − F₂(x₂)]/h₂},

where F is the joint d.f. of X₁, X₂ and F₂ is the d.f. of X₂. By letting h₁, h₂ → 0 and assuming that (x₁, x₂)′ and x₂ are continuity points of f and f₂, respectively, the last expression on the right-hand side above tends to f(x₁, x₂)/f₂(x₂), which was denoted by f(x₁|x₂).
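This limiting interpretation can be checked numerically in a case where everything is available in closed form. The joint p.d.f. e^(−x₁−x₂) on (0, ∞)² below is an assumed example, not from the text; for it, X₁ and X₂ are independent and f(x₁|x₂) = e^(−x₁):

```python
from math import exp

# Assumed joint density f(x1, x2) = e^{-x1 - x2} on (0, inf)^2.
def P_box(x1, x2, h1, h2):
    # P(x1 < X1 <= x1 + h1, x2 < X2 <= x2 + h2), in closed form.
    return (exp(-x1) - exp(-(x1 + h1))) * (exp(-x2) - exp(-(x2 + h2)))

def P_strip(x2, h2):
    # P(x2 < X2 <= x2 + h2) under the marginal f2(x2) = e^{-x2}.
    return exp(-x2) - exp(-(x2 + h2))

x1, x2, h = 0.5, 1.0, 1e-4
# (1/h1) * P(x1 < X1 <= x1 + h1 | x2 < X2 <= x2 + h2) for small h1 = h2 = h:
approx = P_box(x1, x2, h, h) / P_strip(x2, h) / h
exact = exp(-x1)              # f(x1 | x2) in this example
print(approx, exact)
```

As h shrinks, `approx` converges to f(x₁|x₂), exactly as the limiting argument above asserts.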
Thus for small h₁, h₂, h₁f(x₁|x₂) is approximately equal to P(x₁ < X₁ ≤ x₁ + h₁ | x₂ < X₂ ≤ x₂ + h₂), so that h₁f(x₁|x₂) is approximately the conditional probability that X₁ lies in a small neighborhood (of length h₁) of x₁, given that X₂ lies in a small neighborhood of x₂. A similar interpretation may be given to f(x₂|x₁). We can also define the conditional d.f. of X₂, given X₁ = x₁, by means of

    F(x₂|x₁) = Σ_{x₂′ ≤ x₂} f(x₂′|x₁)  or  ∫_{−∞}^{x₂} f(x₂′|x₁) dx₂′,

according as the r.v.'s are discrete or continuous, and similarly for F(x₁|x₂).

The concepts introduced thus far generalize in a straightforward way for k > 2. Thus if X = (X₁, . . . , X_k)′ with p.d.f. f(x₁, . . . , x_k), then we have called f(x₁, . . . , x_k) the joint p.d.f. of the r.v.'s X₁, X₂, . . . , X_k. If we sum (integrate) over t of the variables x₁, . . . , x_k, keeping the remaining s fixed (t + s = k), the resulting function is the joint p.d.f. of the r.v.'s corresponding to the remaining s variables; that is,

    f_{i₁,...,i_s}(x_{i₁}, . . . , x_{i_s}) = Σ_{x_{j₁},...,x_{j_t}} f(x₁, . . . , x_k)  or  ∫_{−∞}^{∞} ⋯ ∫_{−∞}^{∞} f(x₁, . . . , x_k) dx_{j₁} ⋯ dx_{j_t}.

There are

    (k choose 1) + (k choose 2) + ⋯ + (k choose k−1) = 2^k − 2

such p.d.f.'s, which are also called marginal p.d.f.'s. Also if x_{i₁}, . . . , x_{i_s} are such that f_{i₁,...,i_s}(x_{i₁}, . . . , x_{i_s}) > 0, then the function (of x_{j₁}, . . . , x_{j_t}) defined by

    f(x_{j₁}, . . . , x_{j_t} | x_{i₁}, . . . , x_{i_s}) = f(x₁, . . . , x_k) / f_{i₁,...,i_s}(x_{i₁}, . . . , x_{i_s})

is a p.d.f. called the joint conditional p.d.f. of the r.v.'s X_{j₁}, . . . , X_{j_t}, given X_{i₁} = x_{i₁}, . . . , X_{i_s} = x_{i_s}, or just given X_{i₁}, . . . , X_{i_s}. Again there are 2^k − 2 joint conditional p.d.f.'s involving all k r.v.'s X₁, . . . , X_k. Conditional distribution functions are defined in a way similar to the one for k = 2.
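The count 2^k − 2 comes from choosing which nonempty, proper subset of the k coordinates to keep; it can be confirmed directly:

```python
from math import comb

# Number of marginal p.d.f.'s of (X1, ..., Xk): one for each nonempty,
# proper subset of coordinates, i.e. C(k,1) + ... + C(k, k-1) = 2^k - 2.
def n_marginals(k):
    return sum(comb(k, m) for m in range(1, k))

for k in (2, 3, 5, 10):
    assert n_marginals(k) == 2**k - 2
print([n_marginals(k) for k in (2, 3, 5)])   # [2, 6, 30]
```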
Thus

    F(x_{j₁}, . . . , x_{j_t} | x_{i₁}, . . . , x_{i_s}) = Σ_{x_{j₁}′ ≤ x_{j₁}, ..., x_{j_t}′ ≤ x_{j_t}} f(x_{j₁}′, . . . , x_{j_t}′ | x_{i₁}, . . . , x_{i_s})
      or  ∫_{−∞}^{x_{j₁}} ⋯ ∫_{−∞}^{x_{j_t}} f(x_{j₁}′, . . . , x_{j_t}′ | x_{i₁}, . . . , x_{i_s}) dx_{j₁}′ ⋯ dx_{j_t}′.

We now present two examples of marginal and conditional p.d.f.'s, one taken from a discrete distribution and the other taken from a continuous distribution.

EXAMPLE 1 Let the r.v.'s X₁, . . . , X_k have the Multinomial distribution with parameters n and p₁, . . . , p_k. Also, let s and t be integers such that 1 ≤ s, t < k and s + t = k. Then in the notation employed above, we have:

i)  f_{i₁,...,i_s}(x_{i₁}, . . . , x_{i_s}) = [n! / (x_{i₁}! ⋯ x_{i_s}! (n − r)!)] p_{i₁}^{x_{i₁}} ⋯ p_{i_s}^{x_{i_s}} q^{n−r},
    where q = 1 − (p_{i₁} + ⋯ + p_{i_s}) and r = x_{i₁} + ⋯ + x_{i_s};

that is, the r.v.'s X_{i₁}, . . . , X_{i_s} and Y = n − (X_{i₁} + ⋯ + X_{i_s}) have the Multinomial distribution with parameters n and p_{i₁}, . . . , p_{i_s}, q.

ii) f(x_{j₁}, . . . , x_{j_t} | x_{i₁}, . . . , x_{i_s}) = [(n − r)! / (x_{j₁}! ⋯ x_{j_t}!)] (p_{j₁}/q)^{x_{j₁}} ⋯ (p_{j_t}/q)^{x_{j_t}},
    where r = x_{i₁} + ⋯ + x_{i_s};

that is, the (joint) conditional distribution of X_{j₁}, . . . , X_{j_t}, given X_{i₁}, . . . , X_{i_s}, is Multinomial with parameters n − r and p_{j₁}/q, . . . , p_{j_t}/q.

DISCUSSION
i) Clearly,

    (X_{i₁} = x_{i₁}, . . . , X_{i_s} = x_{i_s}) ⊆ (X_{i₁} + ⋯ + X_{i_s} = r) = (n − Y = r) = (Y = n − r),

so that

    (X_{i₁} = x_{i₁}, . . . , X_{i_s} = x_{i_s}) = (X_{i₁} = x_{i₁}, . . . , X_{i_s} = x_{i_s}, Y = n − r).

Denoting by O the outcome which is the grouping of all outcomes distinct from those designated by i₁, . . . , i_s, we have that the probability of O is q, and the number of its occurrences is Y. Thus, the r.v.'s X_{i₁}, . . . , X_{i_s} and Y are distributed as asserted.

ii) We have

    f(x_{j₁}, . . . , x_{j_t} | x_{i₁}, . . . , x_{i_s})
      = f(x₁, . . . , x_k) / f_{i₁,...,i_s}(x_{i₁}, . . . , x_{i_s})
      = { [n!/(x₁! ⋯ x_k!)] p₁^{x₁} ⋯ p_k^{x_k} } / { [n!/(x_{i₁}! ⋯ x_{i_s}! (n − r)!)] p_{i₁}^{x_{i₁}} ⋯ p_{i_s}^{x_{i_s}} q^{n−r} }
      = [(n − r)!/(x_{j₁}! ⋯ x_{j_t}!)] (p_{j₁}/q)^{x_{j₁}} ⋯ (p_{j_t}/q)^{x_{j_t}},

since x_{j₁} + ⋯ + x_{j_t} = n − (x_{i₁} + ⋯ + x_{i_s}) = n − r, as was to be seen.

EXAMPLE 2 Let the r.v.'s X₁ and X₂ have the Bivariate Normal distribution, and recall that their (joint) p.d.f. is given by:

    f(x₁, x₂) = [1 / (2πσ₁σ₂√(1 − ρ²))] exp{ −[1/(2(1 − ρ²))] [ ((x₁ − μ₁)/σ₁)² − 2ρ((x₁ − μ₁)/σ₁)((x₂ − μ₂)/σ₂) + ((x₂ − μ₂)/σ₂)² ] }.

We saw that the marginal p.d.f.'s f₁, f₂ are N(μ₁, σ₁²), N(μ₂, σ₂²), respectively; that is, X₁, X₂ are also normally distributed. Furthermore, in the process of proving that f(x₁, x₂) is a p.d.f., we rewrote it as follows:

    f(x₁, x₂) = [1/(√(2π)σ₁)] exp[ −(x₁ − μ₁)²/(2σ₁²) ] · [1/(√(2π)σ₂√(1 − ρ²))] exp[ −(x₂ − b)²/(2σ₂²(1 − ρ²)) ],

where b = μ₂ + ρ(σ₂/σ₁)(x₁ − μ₁). Hence

    f(x₂|x₁) = f(x₁, x₂)/f₁(x₁) = [1/(√(2π)σ₂√(1 − ρ²))] exp[ −(x₂ − b)²/(2σ₂²(1 − ρ²)) ],

which is the p.d.f. of an N(b, σ₂²(1 − ρ²)) r.v. Similarly f(x₁|x₂) is seen to be the p.d.f. of an N(b′, σ₁²(1 − ρ²)) r.v., where b′ = μ₁ + ρ(σ₁/σ₂)(x₂ − μ₂).

Exercises

4.2.1 Refer to Exercise 3.2.17 in Chapter 3 and:
i) Find the marginal p.d.f.'s of the r.v.'s X_j, j = 1, . . . , 6;
ii) Calculate the probability that X₁ ≥ 5.

4.2.2 Refer to Exercise 3.2.18 in Chapter 3 and determine:
i) The marginal p.d.f. of each one of X₁, X₂, X₃;
ii) The conditional p.d.f.
of X₁, X₂, given X₃; X₁, X₃, given X₂; X₂, X₃, given X₁;
iii) The conditional p.d.f. of X₁, given X₂, X₃; X₂, given X₃, X₁; X₃, given X₁, X₂.
If n = 20, provide expressions for the following probabilities:
iv) P(3X₁ + X₂ ≤ 5);
v) P(X₁ < X₂ < X₃);
vi) P(X₁ + X₂ = 10 | X₃ = 5);
vii) P(3 ≤ X₁ ≤ 10 | X₂ = X₃);
viii) P(X₁ < 3X₂ | X₁ > X₃).

4.2.3 Let X, Y be r.v.'s jointly distributed with p.d.f. f given by f(x, y) = 2/c² if 0 ≤ x ≤ y, 0 ≤ y ≤ c, and 0 otherwise.
i) Determine the constant c;
ii) Find the marginal p.d.f.'s of X and Y;
iii) Find the conditional p.d.f. of X, given Y, and the conditional p.d.f. of Y, given X;
iv) Calculate the probability that X ≤ 1.

4.2.4 Let the r.v.'s X, Y be jointly distributed with p.d.f. f given by f(x, y) = e^(−x−y) I_{(0,∞)×(0,∞)}(x, y). Compute the following probabilities:
i) P(X ≤ x);
ii) P(Y ≤ y);
iii) P(X < Y);
iv) P(X + Y ≤ 3).

4.2.5 If the joint p.d.f. f of the r.v.'s X_j, j = 1, 2, 3, is given by

    f(x₁, x₂, x₃) = c³ e^(−c(x₁+x₂+x₃)) I_A(x₁, x₂, x₃),  where A = (0, ∞) × (0, ∞) × (0, ∞),

i) Determine the constant c;
ii) Find the marginal p.d.f. of each one of the r.v.'s X_j, j = 1, 2, 3;
iii) Find the conditional (joint) p.d.f. of X₁, X₂, given X₃, and the conditional p.d.f. of X₁, given X₂, X₃;
iv) Find the conditional d.f.'s corresponding to the conditional p.d.f.'s in (iii).

4.2.6 Consider the function given below:

    f(x|y) = (yˣ e^(−y)) / x!  for x = 0, 1, . . . , y ≥ 0, and 0 otherwise.

i) Show that for each fixed y, f(·|y) is a p.d.f., the conditional p.d.f. of an r.v. X, given that another r.v. Y equals y;
ii) If the marginal p.d.f. of Y is Negative Exponential with parameter λ = 1, what is the joint p.d.f. of X, Y?
iii) Show that the marginal p.d.f. of X is given by f(x) = (½)^(x+1) I_A(x), where A = {0, 1, 2, . . . }.
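Example 1(i) above (the same grouping property that underlies the multinomial marginals asked for in Exercise 4.2.2) can be verified numerically for a small case; this sketch, with invented parameters, sums the joint Multinomial p.d.f. over the discarded coordinates and compares the result with the grouped Multinomial formula:

```python
from math import factorial

def multinomial_pmf(xs, n, ps):
    # Multinomial p.d.f.: n!/(x1!...xk!) * p1^x1 ... pk^xk, 0 if sums differ.
    if sum(xs) != n:
        return 0.0
    coef = factorial(n)
    for x in xs:
        coef //= factorial(x)
    p = float(coef)
    for x, pi in zip(xs, ps):
        p *= pi**x
    return p

# k = 3 cells; keep only X1 and group the rest into one cell with prob. q.
n, ps = 4, (0.2, 0.3, 0.5)
q = ps[1] + ps[2]
for x1 in range(n + 1):
    marg = sum(multinomial_pmf((x1, x2, n - x1 - x2), n, ps)
               for x2 in range(n - x1 + 1))
    grouped = multinomial_pmf((x1, n - x1), n, (ps[0], q))
    assert abs(marg - grouped) < 1e-12
print("marginal matches grouped multinomial")
```

The grouped case here is Binomial(n, p₁), since only one coordinate is retained; the same check works for larger k at the cost of more nested sums.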
4.2.7 Let Y be an r.v. distributed as P(λ) and suppose that the conditional distribution of the r.v. X, given Y = n, is B(n, p). Determine the p.d.f. of X and the conditional p.d.f. of Y, given X = x.

4.2.8 Consider the function f defined as follows:

    f(x₁, x₂) = (1/2π) exp[ −(x₁² + x₂²)/2 ] + (x₁³x₂³/4π) exp[ −(x₁² + x₂²)/2 ] I_{[−1,1]×[−1,1]}(x₁, x₂),

and show that:
i) f is a non-Normal Bivariate p.d.f.;
ii) Both marginal p.d.f.'s

    f₁(x₁) = ∫_{−∞}^{∞} f(x₁, x₂) dx₂  and  f₂(x₂) = ∫_{−∞}^{∞} f(x₁, x₂) dx₁

are Normal p.d.f.'s.

4.3 Quantiles and Modes of a Distribution

Let X be an r.v. with d.f. F and consider a number p such that 0 < p < 1. A pth quantile of the r.v. X, or of its d.f. F, is a number denoted by x_p and having the following property: P(X ≤ x_p) ≥ p and P(X ≥ x_p) ≥ 1 − p. For p = 0.25 we get a quartile of X, or its d.f., and for p = 0.5 we get a median of X, or its d.f. For illustrative purposes, consider the following simple examples.

EXAMPLE 3 Let X be an r.v. distributed as U(0, 1) and let p = 0.10, 0.20, 0.30, 0.40, 0.50, 0.60, 0.70, 0.80 and 0.90. Determine the respective x_{0.10}, x_{0.20}, x_{0.30}, x_{0.40}, x_{0.50}, x_{0.60}, x_{0.70}, x_{0.80}, and x_{0.90}. Since for 0 ≤ x ≤ 1, F(x) = x, we get: x_{0.10} = 0.10, x_{0.20} = 0.20, x_{0.30} = 0.30, x_{0.40} = 0.40, x_{0.50} = 0.50, x_{0.60} = 0.60, x_{0.70} = 0.70, x_{0.80} = 0.80, and x_{0.90} = 0.90.

EXAMPLE 4 Let X be an r.v. distributed as N(0, 1) and let p = 0.10, 0.20, 0.30, 0.40, 0.50, 0.60, 0.70, 0.80 and 0.90. Determine the respective x_{0.10}, x_{0.20}, x_{0.30}, x_{0.40}, x_{0.50}, x_{0.60}, x_{0.70}, x_{0.80}, and x_{0.90}.

[...]
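The quantiles in Examples 3 and 4 can be computed numerically. For U(0, 1), F(x) = x on [0, 1], so x_p = p; for N(0, 1) one may invert Φ by bisection (a sketch using the error function, rather than the rounded tables):

```python
from math import erf, sqrt

def Phi(z):
    # Standard normal d.f.
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def normal_quantile(p, lo=-10.0, hi=10.0):
    # Bisection: Phi is continuous and strictly increasing, so F(x_p) = p
    # has a unique root, and the bracketing interval halves each step.
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if Phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(round(normal_quantile(0.90), 3))   # ≈ 1.282
```

Because the normal d.f. is strictly increasing, the pth quantile is unique here; for d.f.'s with flat stretches or jumps, the defining inequalities P(X ≤ x_p) ≥ p and P(X ≥ x_p) ≥ 1 − p may be satisfied by an interval of values or force x_p onto a jump point.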
4.4* Justification of Statements 1 and 2

… containing x and contained in G. Without loss of generality, such intervals may be taken to be centered at x. It follows from this definition that an open interval is an open set, the entire real line ℝ is an open set, and so is the empty set (in a vacuous manner).

LEMMA 1 Every open set in ℝ is measurable.

PROOF Let G be an open set in ℝ, and for each x ∈ G, consider an open interval centered at x and contained in G. Clearly, the union over x, as x varies in G, of such intervals is equal to G. The same is true if we consider only those intervals corresponding to all rationals x in G. These intervals are countably many and each one of them is measurable; then so is their union. ▪

DEFINITION 2 A set G in ℝ^m, …

[...]

REMARK 6 In somewhat advanced mathematics courses, one sometimes encounters the so-called Cauchy Principal Value Integral. This coincides with the improper Riemann integral when the latter exists, and it often exists even if the Riemann integral does not. It is an improper integral in which the limits are taken symmetrically. As an example, for σ = 1, μ = 0, we have, in terms of the principal value integral, …

[...]

… and is called the variance of X. Its positive square root σ_X or σ(X) or just σ is called the standard deviation (s.d.) of X. As in the case of μ_X, σ_X² has a physical interpretation also. Its definition corresponds to that of the second moment, or moment of inertia. One recalls that a large moment of inertia means the mass of the body is spread widely about its center of gravity. Likewise a large variance …

[...]

… charged if the company is to expect to come ahead by $M for administrative expenses and profit?

Exercises

5.1.11 A roulette wheel has 38 slots of which 18 are red, 18 black, and 2 green.
i) Suppose a gambler is placing a bet of $M on red. What is the gambler's expected gain or loss and what is the standard deviation?
ii) If the same bet of $M is placed on green, and …

[...]

… m ≥ 1, is called open if for every x in G there exists an open cube in ℝ^m containing x and contained in G; by the term open "cube" we mean the Cartesian product of m open intervals of equal length. Without loss of generality, such cubes may be taken to be centered at x.

LEMMA 2 Every open set in ℝ^m is measurable.

PROOF It is analogous to that of Lemma 1. Indeed, let G be an open set in ℝ^m, and for each x ∈ G, consider an open cube centered at x and contained in G. The union over x, as x varies in G, of such cubes clearly is equal to G. The same is true if we restrict ourselves to x's in G whose m coordinates are rationals. Then the resulting cubes are countably many, and therefore their union is measurable, since so is each cube. ▪

DEFINITION 3 Recall that a function g: S ⊆ ℝ → ℝ is said to be continuous at …

[...]

… x_{0.40} = −0.253, x_{0.50} = 0, x_{0.60} = 0.253, x_{0.70} = 0.524, x_{0.80} = 0.842, and x_{0.90} = 1.282. Knowledge of quantiles x_p for several values of p provides an indication as to how the unit probability mass is distributed over the real line. In Fig. 4.3 various cases are demonstrated for determining graphically the pth quantile of a d.f.

Let X be an r.v. with a p.d.f. f. Then a mode of f, if it exists, is any number which maximizes …

[...]

… Exercise 3.3.7 in Chapter 3 and suppose that each TV tube costs $7 and that it sells for $11. Suppose further that the manufacturer sells an item on money-back guarantee terms if the lifetime of the tube is less than c.
i) Express his expected gain (or loss) in terms of c and λ;
ii) For what value of c will he break even?

5.2.10 Refer to Exercise 4.1.12 in Chapter 4 and suppose that each bulb costs 30 cents …
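For part (i) of the roulette exercise above, the gain is a two-point r.v. (+M with probability 18/38, −M with probability 20/38), so its mean and standard deviation follow directly from the definitions; a quick sketch:

```python
from math import sqrt

def red_bet(M):
    # Gain on a $M red bet: +M w.p. 18/38, -M w.p. 20/38.
    p_win, p_lose = 18.0 / 38.0, 20.0 / 38.0
    mean = M * p_win - M * p_lose          # E X = -M/19
    ex2 = M**2 * p_win + M**2 * p_lose     # E X^2 = M^2 (|gain| is always M)
    sd = sqrt(ex2 - mean**2)               # s.d. = M * sqrt(1 - (1/19)^2)
    return mean, sd

mean, sd = red_bet(1.0)
print(mean, sd)
```

The expected loss is M/19, about 5.3 cents per dollar bet, while the standard deviation is nearly the full stake M.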
Chapter 5 Moments of Random Variables—Some Moment and Probability Inequalities

5.1 Moments of Random Variables

DEFINITION 1 …

    E[g(X)] = Σ_{x₁,...,x_k} g(x₁, . . . , x_k) f_X(x₁, . . . , x_k)  or  ∫_{−∞}^{∞} ⋯ ∫_{−∞}^{∞} g(x₁, . . . , x_k) f_X(x₁, . . . , x_k) dx₁ ⋯ dx_k,

and call it the mathematical expectation or mean value or …

[...]

… gravity, and its physical interpretation as the point of balance of the distributed mass, the interpretation of μ_X as the mean or expected value of the random variable is the natural one, provided the probability distribution of X is interpreted as the unit mass distribution. In Definition 1, suppose X is a continuous r.v. Then E[g(X)] = ∫_{−∞}^{∞} g(x)f(x) dx. On the other hand, from the last expression above, …
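A numerical sketch of this definition for a continuous r.v. (the p.d.f. e^(−x) on (0, ∞) is an assumed example, not from the text): E[g(X)] is approximated by a trapezoidal sum of g(x)f(x), giving E X = 1, E X² = 2, and hence variance 1 for the Exponential(1) distribution:

```python
from math import exp

def expectation(g, upper=40.0, steps=200_000):
    # Trapezoidal approximation of the integral of g(x) e^{-x} over [0, upper];
    # the tail beyond `upper` is negligible for polynomially growing g.
    h = upper / steps
    total = 0.0
    for i in range(steps + 1):
        x = i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * g(x) * exp(-x)
    return total * h

mean = expectation(lambda x: x)          # E X   = 1
second = expectation(lambda x: x * x)    # E X^2 = 2
var = second - mean**2                   # variance = E X^2 - (E X)^2 = 1
print(mean, second, var)
```

The same routine with g(x) = x² minus the squared mean reproduces the variance formula used in the passage on the moment of inertia above.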