Market Risk Analysis, Volume II: Practical Financial Econometrics, by Carol Alexander (ISBN 0470998016)


Market Risk Analysis
Volume II
Practical Financial Econometrics

Carol Alexander

Published in 2008 by John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex PO19 8SQ, England
Telephone: +44 1243 779777
Email (for orders and customer service enquiries): cs-books@wiley.co.uk
Visit our Home Page on www.wiley.com

Copyright © 2008 Carol Alexander. All Rights Reserved.

No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning or otherwise, except under the terms of the Copyright, Designs and Patents Act 1988 or under the terms of a licence issued by the Copyright Licensing Agency Ltd, 90 Tottenham Court Road, London W1T 4LP, UK, without the permission in writing of the Publisher. Requests to the Publisher should be addressed to the Permissions Department, John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex PO19 8SQ, England, or emailed to permreq@wiley.co.uk, or faxed to (+44) 1243 770620.

Designations used by companies to distinguish their products are often claimed as trademarks. All brand names and product names used in this book are trade names, service marks, trademarks or registered trademarks of their respective owners. The Publisher is not associated with any product or vendor mentioned in this book. This publication is designed to provide accurate and authoritative information in regard to the subject matter covered. It is sold on the understanding that the Publisher is not engaged in rendering professional services. If professional advice or other expert assistance is required, the services of a competent professional should be sought.

Carol Alexander has asserted her right under the Copyright, Designs and Patents Act 1988, to be identified as the author of this work.

Other Wiley Editorial Offices
John Wiley & Sons Inc., 111 River Street, Hoboken, NJ 07030, USA
Jossey-Bass, 989 Market Street, San Francisco, CA 94103-1741, USA
Wiley-VCH Verlag GmbH, Boschstr. 12, D-69469 Weinheim, Germany
John Wiley & Sons Australia Ltd, 42 McDougall Street, Milton, Queensland 4064, Australia
John Wiley & Sons (Asia) Pte Ltd, Clementi Loop #02-01, Jin Xing Distripark, Singapore 129809
John Wiley & Sons Canada Ltd, 6045 Freemont Blvd, Mississauga, Ontario, Canada L5R 4J3

Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic books.

British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library.

ISBN 978-0-470-99801-4 (HB)

Typeset in 10/12pt Times by Integra Software Services Pvt Ltd, Pondicherry, India
Printed and bound in Great Britain by Antony Rowe Ltd, Chippenham, Wiltshire
This book is printed on acid-free paper responsibly manufactured from sustainable forestry in which at least two trees are planted for each one used for paper production.

To Rick van der Ploeg

Contents
List of Figures
List of Tables
List of Examples
Foreword
Preface to Volume II

II.1 Factor Models
  II.1.1 Introduction
  II.1.2 Single Factor Models
    II.1.2.1 Single Index Model
    II.1.2.2 Estimating Portfolio Characteristics using OLS
    II.1.2.3 Estimating Portfolio Risk using EWMA
    II.1.2.4 Relationship between Beta, Correlation and Relative Volatility
    II.1.2.5 Risk Decomposition in a Single Factor Model
  II.1.3 Multi-Factor Models
    II.1.3.1 Multi-factor Models of Asset or Portfolio Returns
    II.1.3.2 Style Attribution Analysis
    II.1.3.3 General Formulation of Multi-factor Model
    II.1.3.4 Multi-factor Models of International Portfolios
  II.1.4 Case Study: Estimation of Fundamental Factor Models
    II.1.4.1 Estimating Systematic Risk for a Portfolio of US Stocks
    II.1.4.2 Multicollinearity: A Problem with Fundamental Factor Models
    II.1.4.3 Estimating Fundamental Factor Models by Orthogonal Regression
  II.1.5 Analysis of Barra Model
    II.1.5.1 Risk Indices, Descriptors and Fundamental Betas
    II.1.5.2 Model Specification and Risk Decomposition
  II.1.6 Tracking Error and Active Risk
    II.1.6.1 Ex Post versus Ex Ante Measurement of Risk and Return
    II.1.6.2 Definition of Active Returns
    II.1.6.3 Definition of Active Weights
    II.1.6.4 Ex Post Tracking Error
    II.1.6.5 Ex Post Mean-Adjusted Tracking Error
    II.1.6.6 Ex Ante Tracking Error
    II.1.6.7 Ex Ante Mean-Adjusted Tracking Error
    II.1.6.8 Clarification of the Definition of Active Risk
  II.1.7 Summary and Conclusions

II.2 Principal Component Analysis
  II.2.1 Introduction
  II.2.2 Review of Principal Component Analysis
    II.2.2.1 Definition of Principal Components
    II.2.2.2 Principal Component Representation
    II.2.2.3 Frequently Asked Questions
  II.2.3 Case Study: PCA of UK Government Yield Curves
    II.2.3.1 Properties of UK Interest Rates
    II.2.3.2 Volatility and Correlation of UK Spot Rates
    II.2.3.3 PCA on UK Spot Rates Correlation Matrix
    II.2.3.4 Principal Component Representation
    II.2.3.5 PCA on UK Short Spot Rates Covariance Matrix
  II.2.4 Term Structure Factor Models
    II.2.4.1 Interest Rate Sensitive Portfolios
    II.2.4.2 Factor Models for Currency Forward Positions
    II.2.4.3 Factor Models for Commodity Futures Portfolios
    II.2.4.4 Application to Portfolio Immunization
    II.2.4.5 Application to Asset–Liability Management
    II.2.4.6 Application to Portfolio Risk Measurement
    II.2.4.7 Multiple Curve Factor Models
  II.2.5 Equity PCA Factor Models
    II.2.5.1 Model Structure
    II.2.5.2 Specific Risks and Dimension Reduction
    II.2.5.3 Case Study: PCA Factor Model for DJIA Portfolios
  II.2.6 Summary and Conclusions

II.3 Classical Models of Volatility and Correlation
  II.3.1 Introduction
  II.3.2 Variance and Volatility
    II.3.2.1 Volatility and the Square-Root-of-Time Rule
    II.3.2.2 Constant Volatility Assumption
    II.3.2.3 Volatility when Returns are Autocorrelated
    II.3.2.4 Remarks about Volatility
  II.3.3 Covariance and Correlation
    II.3.3.1 Definition of Covariance and Correlation
    II.3.3.2 Correlation Pitfalls
    II.3.3.3 Covariance Matrices
    II.3.3.4 Scaling Covariance Matrices
  II.3.4 Equally Weighted Averages
    II.3.4.1 Unconditional Variance and Volatility
    II.3.4.2 Unconditional Covariance and Correlation
    II.3.4.3 Forecasting with Equally Weighted Averages

II.6 Introduction to Copulas
Estimate the Sharpe ratio using the formula

$$\text{SR} = \frac{\bar{R} - R_f}{s}, \tag{II.6.96}$$

where R̄ is the annualized sample mean return, s is the annualized sample standard deviation of returns and R_f is the risk free rate. Use Excel Solver to find the portfolio weights that maximize the Sharpe ratio.

The result, when the correlation is −0.795, is that 91.4% of capital should be invested in the FTSE and 8.6% of capital invested in the Vftse.[44] The allocation to volatility is positive even though the mean–variance characteristics of the FTSE are far more favourable than the Vftse, as shown in Table II.6.4. Over the sample the FTSE index had an average annualized mean return of 10.52% with an average volatility of 10.52%, whereas the Vftse had an average annualized mean return of −4.11% with an average volatility of 83.10%. So why should we include the Vftse in the portfolio at all? The reason is that even though returns on volatility are often negative, they have a very high negative correlation with equity. Thus adding volatility to the portfolio considerably reduces the portfolio volatility and the risk adjusted performance improves.

Finally, repeating the above for different values of the correlation produces the data used to construct Figure II.6.20. This illustrates how the optimal portfolio weight on the FTSE index and the optimal Sharpe ratio (SR) change as the correlation ranges between −0.95 and 0.05.[45] The more negative the correlation, the greater the potential gain from diversification into the Vftse. Thus the more weight is placed on the Vftse index and the higher the Sharpe ratio.

[Figure II.6.20 Optimal weight on FTSE and Sharpe ratio vs. FTSE–Vftse returns correlation]

[44] At the time of writing this would necessitate over-the-counter trades, as no exchange traded fund on the Vftse exists and Vftse futures behave quite differently from the Vftse index – see Section III.5.5 for further explanation.
[45] We do not consider positive correlation, as this is empirically highly unlikely.
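For readers who prefer to script the optimization rather than use Excel Solver, the following Python sketch repeats the same kind of Sharpe ratio maximization for a two-asset portfolio of an equity index and a volatility index. Only the correlation of −0.795 is taken from the text; the means, volatilities, risk-free rate and variable names are placeholder assumptions, not the sample statistics of the case study.

```python
# A minimal sketch (not the book's Excel workbook) of the Sharpe ratio
# maximisation in (II.6.96) for a two-asset portfolio of an equity index
# and a volatility index.  The annualised moments below are placeholders.
import numpy as np

mu = np.array([0.08, -0.04])        # hypothetical annualised mean returns (equity, volatility)
vol = np.array([0.11, 0.80])        # hypothetical annualised volatilities
rho = -0.795                        # equity-volatility returns correlation (from the text)
rf = 0.04                           # hypothetical risk-free rate

# 2x2 annualised covariance matrix implied by the volatilities and correlation
cov = np.array([[vol[0] ** 2,          rho * vol[0] * vol[1]],
                [rho * vol[0] * vol[1], vol[1] ** 2]])

def sharpe(w_equity: float) -> float:
    """Sharpe ratio (II.6.96) of a portfolio with weight w_equity in equity
    and 1 - w_equity in the volatility index."""
    w = np.array([w_equity, 1.0 - w_equity])
    port_mean = w @ mu
    port_vol = np.sqrt(w @ cov @ w)
    return (port_mean - rf) / port_vol

# Coarse long-only grid search over the equity weight, in place of Excel Solver
grid = np.linspace(0.0, 1.0, 10001)
scores = np.array([sharpe(w) for w in grid])
best = grid[scores.argmax()]
print(f"optimal equity weight = {best:.3f}, Sharpe ratio = {scores.max():.3f}")
```

With these placeholder inputs the grid search still puts a small positive weight on the volatility index, even though its stand-alone mean–variance characteristics are poor, which is the diversification effect described above.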
II.6.9 SUMMARY AND CONCLUSIONS

Since the seminal works of Embrechts et al. (2002, 2003) a large academic literature on copulas has been directed towards problems in financial risk management. All the major academic players in the field agree that copulas are essential for accurate modelling of financial risks. But, as is typical, the industry has been cautious to incorporate these new ideas into market risk models. This may be partly due to the level of difficulty usually associated with using copulas. Indeed, most academic papers require a pretty high level of abstract statistical knowledge. The aim of this chapter is to bring copulas to the attention of a wider audience, to quantitative finance academics, postgraduate finance students and most of all to the practitioners who really need copulas for accurate models of the behaviour of asset returns. Of course a considerable amount of theory is necessary, but I have tried to adopt a pedagogical approach, and so have provided numerous examples in Excel. My hope is that practitioners and students with a reasonable knowledge of statistics will gain confidence in using copulas and, through their application, progress in their theoretical understanding.

A good starting point for this chapter is actually Section II.3.3, with the summary of the 'pitfalls of correlation' described by Embrechts et al. (2002). The poor properties of linear correlation as a measure of association provide tremendous motivation for the reader to learn about copulas. The present chapter begins by introducing a general measure of association called concordance, which is a more general concept of dependence than linear correlation. Empirical examples illustrate how to calculate two concordance metrics: Spearman's rho and Kendall's tau. These are introduced here because they play a useful role in calibrating copulas. Then we follow with the formal definition of a copula distribution, its associated copula density and the fundamental theorem of Sklar (1959). This theorem allows us to 'build' joint distributions by first specifying the marginals and then specifying the copula. The presentation here focuses on the conditional copula distribution, showing how to derive it and providing several empirical examples. The conditional copula density is (usually) required for simulating random variables and for copula quantile regression analysis, which is discussed in detail in the next chapter.

The copulas that are implemented in the Excel spreadsheets are bivariate versions of the normal copula (which is also called the Gaussian copula), Student t, normal mixture, Clayton and Gumbel copulas. The first two are implicit copulas because they are derived from a known bivariate distribution. The last two are Archimedean copulas, which are constructed from a generator function. Any convex, monotonic decreasing function can be used to generate an Archimedean copula. Hence, there are a vast number of copulas that could be applied to a fixed pair of marginal distributions to generate a different joint distribution each time! The big question is: which is the 'best' copula? This is the subject of a considerable amount of ongoing research.

After explaining how to calibrate copulas and assess their goodness of fit to sample data we move on to the main risk management applications. The first application is Monte Carlo simulation, where we simulate returns that have a joint distribution characterized by any marginal distributions and any copula. Monte Carlo simulations are very widely used in risk management, from pricing and hedging options to portfolio risk assessment. Simulation is computationally burdensome, yet practitioners commonly view it as worthwhile because it is based on a realistic model of returns behaviour. In particular, we do not need to assume normality in Monte Carlo simulations. Structured Monte Carlo simulation may still be based on a correlation matrix if we assume the returns have an elliptical distribution, and if the joint distribution is a Student t then the returns will display symmetric tail dependence. However, most realistic models of returns behaviour have asymmetric tail dependence. For instance, the dependence between stock returns is greater during stressful periods, when many extreme negative returns are observed. We have provided empirical examples that show how the Clayton and Gumbel copulas capture asymmetric tail dependence. We chose these copulas because they are particularly simple one-parameter Archimedean copulas, but there are numerous other copulas with asymmetric tail dependence.

An immediate application of Monte Carlo simulation is of course to estimate portfolio value at risk. Instead of assuming that risk factor or asset returns have elliptical joint distributions, the use of copulas in simulations allows one to estimate portfolio VaR under virtually any assumptions about the marginal returns distributions and about the symmetric or asymmetric tail dependence in asset returns.
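As a concrete illustration of the simulation approach just described, the following Python sketch draws dependent returns from a Clayton copula (by conditional inversion) with Student t marginals and reads a 1% VaR off the simulated portfolio distribution. It is not the book's Excel implementation; the Clayton parameter, the marginal parameters and the equal portfolio weights are illustrative assumptions.

```python
# A minimal sketch of copula-based Monte Carlo VaR.  The Clayton parameter,
# the Student t marginals and the equal portfolio weights are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n, delta = 100_000, 1.5            # number of simulations, Clayton parameter

# 1) Simulate (u, v) from a bivariate Clayton copula by conditional inversion:
#    v = ((w**(-delta/(1+delta)) - 1) * u**(-delta) + 1)**(-1/delta)
u = rng.uniform(size=n)
w = rng.uniform(size=n)
v = ((w ** (-delta / (1.0 + delta)) - 1.0) * u ** (-delta) + 1.0) ** (-1.0 / delta)

# 2) Transform the uniforms through the chosen marginals (here Student t,
#    rescaled to a 1% daily standard deviation) to obtain dependent returns
nu = 5
scale = 0.01 / np.sqrt(nu / (nu - 2))          # so each marginal has sd = 1%
r1 = stats.t.ppf(u, df=nu) * scale
r2 = stats.t.ppf(v, df=nu) * scale

# 3) Portfolio return and 1% VaR read straight off the simulated distribution
port = 0.5 * r1 + 0.5 * r2
var_99 = -np.quantile(port, 0.01)
print(f"simulated 1-day 99% VaR = {var_99:.4%} of portfolio value")
```

Because the Clayton copula has lower tail dependence, joint large losses occur more often than under a normal copula with the same rank correlation, and the simulated VaR is correspondingly larger.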
Aggregation of distributions is based on a convolution integral whereby we derive the distribution of a sum of random variables from the marginal distributions of the variables and a copula. An immediate application of convolution on the joint distribution specified by a copula is risk aggregation. By successively deriving returns distributions of larger and larger portfolios and applying a risk metric (such as VaR) to each distribution, market risk analysts may provide senior managers with aggregate risk assessments of the various activities in a firm, and of the firm as a whole.

Many commercial portfolio optimization packages base allocations to risky assets on an empirical returns joint distribution, using historical data on all the assets in the investor's universe. The 'best' allocation is the one that produces a portfolio returns distribution that has the best performance metric, e.g. the highest Sharpe ratio. There are advantages in using an empirical returns joint distribution, because then we are not limited to the multivariate normality assumption of standard mean–variance analysis. Using an empirical distribution, all the characteristics of the joint distribution of returns on risky assets can influence the optimal allocation, not just the asset volatilities and correlations. However, a problem arises when no parametric form of joint distribution is fitted to the historical data, because the optimization can produce very unstable allocations over time. We have shown how copulas provide a very flexible tool for modelling this joint distribution. We do not need to assume that asset returns are multivariate normal, or even elliptical, to derive optimal allocations. Parametric portfolio optimization can take account of asymmetric tail dependence, for instance, if we use the simple Clayton copula.

During the last decade financial statisticians have developed copula theory in the directions that are useful for financial applications. Recognizing the fact that credit loss distributions are highly non-normal, it has now become a market standard to use copulas in credit risk analysis, for instance to price and hedge collateralized debt obligations. But copulas also have a wide variety of applications to market risk analysis, perhaps even more than they do in credit risk. Several of these applications have been described in this chapter. Yet the industry has been slow to change its established practice for market risk, where risk and performance metrics are still usually based on the assumption that asset returns have multivariate normal distributions.

II.7 Advanced Econometric Models

II.7.1 INTRODUCTION

A regression model is a tool that is rather like a pair of spectacles. Like spectacles, regression models allow you to see more clearly. Characteristics of the data that cannot be seen from simple graphs or by calculating basic sample statistics can be seen when we apply a regression model. Spectacles come in all shapes and sizes, and some are specifically designed to be worn for certain purposes. Likewise regression models come in many varieties and some models should only be applied to certain types of data. A standard multiple linear regression estimated using ordinary least squares (OLS) is like an ordinary pair of spectacles. It is fine when the data are in the
right form and you do not want to see too much. But for special types of data we need to use a different type of model; for instance, when data are discrete we may use a probit or logit model. Also, like spectacles, some regression models are more powerful than others. For instance, non-linear regression, quantile regression, copula quantile regression or Markov switching regression models allow one to see far more than is possible using a simple linear regression.

We should always plot data before estimating a regression model. This is a golden rule that should never be overlooked. Forgetting to plot the data before prescribing and fitting the regression model is like an optician forgetting to do an eye test before prescribing the lenses and fitting the frames. A visual inspection of the data allows us to see details about the individual data and about the relationships between variables that will help us formulate the model, and to choose appropriate parameter estimation methods. For instance, we may notice a structural break or jump in the data and a simple tool for dealing with this is to include a dummy. A basic dummy variable takes the value 0 except during the unusual period, where it takes the value 1. Adding such a dummy to the regression is like having two constant terms. It gives the model the freedom to shift up during the unusual period and therefore it improves the fit. You may also, for any explanatory variable, add another explanatory variable equal to the product of the dummy and the variable. In other words, include all the values of the explanatory variable X and include another variable which is zero everywhere except during the unusual period when it takes the X values. This has the effect of allowing the slope coefficients to be different during the unusual period and will improve the fit further still.

Like the prior plotting of data, running an OLS linear regression is another elementary principle that we should adhere to. This is the first stage of building any regression model, except for probit and logit models where linear regression cannot be applied. Running an OLS regression is like putting on your ordinary spectacles. It allows you to gain some idea about the relationship between the variables. Then we may decide to use a more powerful model that allows us to see the relationship more clearly, but only if a relationship is already obvious from OLS.

The optimization of a standard linear regression by OLS is straightforward. We only need a very simple sort of engine to drive the model, like the engine of an old Citroën 2CV car. In fact, we do not even need to use a numerical method to estimate the coefficients because analytic solutions exist, i.e. the OLS formulae. But the optimization of more advanced regression models is not simple. Most use a form of maximum likelihood for parameter estimation and in some models, for instance in Markov switching models, the optimization engine for maximum likelihood estimation is extremely complex. A 2CV engine will no longer do the job. We should only use a more advanced model if OLS has already indicated that there is a relationship there to model. Otherwise we are in danger of detecting spurious relationships that are merely an artefact of running a Ferrari rather than a 2CV engine on the model. To use yet another simile, when baking a cake there is no point in putting beautiful decorations on the icing unless you have ensured that the basic cake underneath is good …
but enough! Let me move on to outline the ingredients of this chapter.

The next section provides a detailed introduction to quantile regression. Linear quantile regression is a natural extension of OLS regression where the optimization objective of minimizing the residual sum of squares is replaced by an asymmetric objective. Thus we estimate the regression lines that, rather than passing through the mean of the sample, divide the sample into two unequal parts. For instance, in the 0.1 quantile regression 10% of the data lie above the regression line. OLS regression only provides a prediction of the conditional mean, but finding several quantile regression lines gives a more complete picture of the joint distribution of the data. With linear quantile regression we can obtain predictions of all the conditional quantiles of the conditional joint distribution. Non-linear quantile regression is harder, since it is based on a copula. A good understanding of Chapter II.6 on copulas is essential for understanding the subsections on copula quantile regression. Once this is understood the rest is plain sailing, and in Section II.7.3 we have provided several detailed Excel spreadsheets that implement all the standard copula quantile regressions in two separate case studies.

Section II.7.4 covers some advanced regression models, including discrete choice models, which qualify as regression models only because they can be expressed in this form. But they cannot be estimated as a linear regression, because the dependent variable is a latent variable, i.e. an unobservable variable. The input data that are relevant to the dependent variable are just a series of flags, or zeros and ones. The actual dependent variable is a non-linear transformation of an unobservable probability, such as the probability of default. This may sound complicated, but these models are actually very simple to implement. We provide an Excel spreadsheet for estimating probit, logit and Weibull models in the context of credit default and hence compare the default probabilities that are estimated using different functional forms.

Section II.7.5 introduces Markov switching models. These models provide a very powerful pair of spectacles since they allow the data generation process for returns to switch as the market changes between regimes. They are incredibly useful for modelling financial data and may be applied to capture regime-specific behaviour in all financial markets. Equity, commodity and credit markets tend to have two very distinct regimes, one with high volatility that rules during a crisis or turbulent market and the other with a lower volatility that rules during typical market circumstances. Foreign exchange markets have less regime-specific behaviour and interest rates tend to have three regimes, one when interest rates are declining and the yield curve slopes downwards, one stable regime with a flat curve, and a third when interest rates are increasing and the yield curve slopes upwards. Since Markov switching models are rather complex, this section focuses on presenting an easy-to-read description of the model structure. But the engine that is used to optimize these models cannot be presented in Excel. Instead my PhD student Andreas Kaeck has allowed his EViews code to be made available on the CD. Many thanks, Andreas!
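The estimation engine itself is in the EViews code on the CD, but the regime-dependent behaviour described above is easy to illustrate by simulation. The Python sketch below generates returns from a hypothetical two-state Markov switching process, a calm regime and a turbulent regime; the drifts, volatilities and transition probabilities are made-up values, not estimates from any market.

```python
# A minimal sketch of a two-regime Markov switching return process: returns are
# drawn from a low-volatility "calm" regime or a high-volatility "turbulent"
# regime, with the regime following a first-order Markov chain.  All parameter
# values are hypothetical; this is a simulation, not an estimation routine.
import numpy as np

rng = np.random.default_rng(0)

# Regime-dependent daily drift and volatility (calm, turbulent), from annualised values
mu = np.array([0.08, -0.10]) / 252
sigma = np.array([0.10, 0.35]) / np.sqrt(252)

# Transition matrix: P[i, j] = probability of moving from regime i to regime j
P = np.array([[0.99, 0.01],
              [0.05, 0.95]])

n_days = 2500
states = np.empty(n_days, dtype=int)
states[0] = 0
for t in range(1, n_days):
    states[t] = rng.choice(2, p=P[states[t - 1]])

# Draw each day's return from the normal distribution of the prevailing regime
returns = rng.normal(mu[states], sigma[states])

for s, name in enumerate(["calm", "turbulent"]):
    mask = states == s
    ann_vol = returns[mask].std(ddof=1) * np.sqrt(252)
    print(f"{name}: {mask.mean():.1%} of days, annualised volatility {ann_vol:.1%}")
```

The persistence of the regimes (diagonal transition probabilities close to one) is what produces the volatility clustering that makes these models attractive for equity, commodity and credit markets.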
Section II.7.6 surveys the vast academic literature on the use of ultra high frequency data in regression analysis. After describing some useful sources of tic by tic data and how to deal with the errors that are often found in these data sets, we survey the autoregressive conditional duration models that attempt to capture the time between trades using an autoregressive framework that is similar to that of a GARCH volatility process. Much of the recent econometric research on the use of ultra high frequency data concerns the prediction of realized volatility. This is because the volume of trading on variance swaps has increased very rapidly over the last few years, and the ability to forecast realized volatility is important for pricing these instruments.[1] We do not survey the literature on point forecasts of high frequency returns since neural networks, genetic algorithms and chaotic dynamics, rather than econometric models, are the statistical tools that are usually implemented in this case.[2]

Section II.7.7 summarizes and concludes.

II.7.2 QUANTILE REGRESSION

Standard regression provides a prediction of the mean and variance of the dependent variable, Y, conditional on some given value of an associated independent variable X. Recall that when simple linear regression was introduced in Chapter I.4 we assumed that X and Y had a bivariate normal distribution. In that case we can infer everything about the conditional distribution of the dependent variable from the standard linear regression. That is, knowing only the conditional mean and variance, we know the whole conditional distribution. But more generally, when X and Y have an arbitrary joint distribution, the conditional mean and variance do not provide all the information we need to describe the conditional distribution of the dependent variable. The goal of quantile regression is to compute a family of regression curves, each corresponding to a different quantile of the conditional distribution of the dependent variable. This way we can build up a much more complete picture of the conditional distribution of Y given X. The aims of this section are:[3]

• to explain the concept of quantile regression, introduced by Koenker and Bassett (1978);
• following Bouyé and Salmon (2002), to describe the crucial role that conditional copula distributions play in non-linear quantile regression analysis; and
• to provide simple examples in Excel that focus on the useful risk management applications of quantile regression.

Two case studies are provided to illustrate the main concepts.

[1] See Section III.4.7 for further details on variance swaps.
[2] However, see Alexander (2001a: Chapter 13) for further details.
[3] Readers who wish to delve into this subject in more detail, though not with reference to copulas, are referred to the excellent textbook by Koenker (2005).

II.7.2.1 Review of Standard Regression

For convenience we first summarize some basic facts about simple linear regression from Chapter I.4. Using the notation defined in Section I.4.2, the simple linear regression model may be written

$$Y = \alpha + \beta X + \varepsilon, \tag{II.7.1}$$

where the parameters α and β are constants, Y is the dependent variable, X is the independent variable and ε is an independent and identically distributed (i.i.d.)
error term that is also independent of X. Since ε is an error we expect it to be zero – otherwise it would represent a systematic bias. So we assume that E(ε) = 0 and indeed, since ε is assumed to be independent of X, all conditional expectations of ε are also assumed to be zero. This means that taking conditional expectations of (II.7.1) gives

$$E(Y \mid X) = \alpha + \beta X. \tag{II.7.2}$$

In standard regression we assume that X and Y have a bivariate normal distribution. Then the conditional expectation of Y given some value for X is

$$E(Y \mid X) = E(Y) + \varrho \sqrt{\frac{V(Y)}{V(X)}}\,\bigl(X - E(X)\bigr), \tag{II.7.3}$$

where ϱ is the correlation between X and Y.[4] It is easy to show that the conditional distribution F(Y|X) is normal when X and Y are bivariate normal and also that V(Y|X) = V(Y)(1 − ϱ²). Hence the simple linear regression model specifies the entire conditional distribution in this case. Equating (II.7.2) and (II.7.3) gives

$$\beta = \varrho \sqrt{\frac{V(Y)}{V(X)}} = \frac{\operatorname{Cov}(X,Y)}{V(X)} \quad\text{and}\quad \alpha = E(Y) - \beta E(X). \tag{II.7.4}$$

Replacing (II.7.4) with sample estimates of the means and standard deviations and correlation of X and Y, based on some sample of size T, yields the familiar ordinary least squares estimators for the coefficients, i.e.

$$\hat{\beta} = \frac{s_{XY}}{s_X^2} \quad\text{and}\quad \hat{\alpha} = \bar{Y} - \hat{\beta}\bar{X}, \tag{II.7.5}$$

where X̄ and Ȳ denote the sample means, s²_X is the sample variance of X and s_XY is the sample covariance. Finally, in Section I.4.2 we showed that the OLS estimators α̂ and β̂ are the solutions to the optimization problem

$$\min_{\alpha,\beta}\; \sum_{t=1}^{T} \bigl(Y_t - (\alpha + \beta X_t)\bigr)^2. \tag{II.7.6}$$

In other words, we obtain the OLS estimators by minimizing the residual sum of squares.

[4] See Section I.3.4.6 for further details.

II.7.2.2 What is Quantile Regression?

In the simple linear regression model reviewed above we derived the conditional expectation and conditional variance of Y and, assuming the variables were bivariate normal, we completely specified the conditional distribution of the dependent variable. But if X and Y do not have a bivariate normal distribution then we need more than the conditional expectation and conditional variance to describe the conditional distribution F(Y|X). Indeed, the most convenient way to describe the conditional distribution of the dependent variable is using its quantiles. As a prelude to introducing the quantile regression equation, we now derive an expression for the conditional quantiles of Y given X, based on an arbitrary joint distribution. For the moment we still assume that X and Y are related by the simple linear model (II.7.1), although quantile regression has a straightforward extension to non-linear relationships between X and Y, as we shall see in Section II.7.2.5 below.

In quantile regression we still assume that the error is i.i.d. But now we must introduce a specific error distribution function, denoted F_ε. Now consider the conditional quantiles of the simple linear regression model (II.7.1). Whilst the expectation of ε is still assumed to be zero because it is an error, its quantiles are not zero in general. Hence, when we take quantiles instead of expectations of the simple linear model (II.7.1), the error term no longer disappears. Let q ∈ (0, 1) and denote the q quantile of the error by F_ε⁻¹(q). Also denote the conditional q quantile of the dependent variable, which is found from the inverse of F(Y|X), by F⁻¹(q|X). Now, taking conditional q quantiles of (II.7.1) yields

$$F^{-1}(q \mid X) = \alpha + \beta X + F_\varepsilon^{-1}(q). \tag{II.7.7}$$

This is the simple linear quantile regression model.

In simple linear quantile regression we still aim to estimate a regression line through a scatter plot. In other words, we shall
estimate the parameters α and β based on a paired sample on X and Y. But the difference between quantile regression and standard regression is that with standard regression coefficient estimates the regression line passes through the average or 'centre of gravity' of the points, whereas a quantile regression line will pass through a quantile of the points. For instance, when q is small, say q = 0.1, then the majority of points would lie below the q quantile regression line. In Figure II.7.1 the black line is the median regression line, the dashed grey line is the 0.1 quantile line and the solid grey line is the 0.9 quantile line. Note that the quantile regression lines are not parallel. This fact is verified empirically in the case studies later in this chapter.

[Figure II.7.1 Quantile regression lines]

II.7.2.3 Parameter Estimation in Quantile Regression

We now explain how to estimate the coefficients α and β in the simple linear quantile regression model, given a sample on X and Y. Again we shall draw analogies with standard regression, where using OLS estimators for the coefficients yields an estimate α̂ + β̂X of the conditional mean of Y. We show how to find the q quantile regression coefficient estimates, which we shall denote α̂_q and β̂_q, and hence obtain an estimate α̂_q + β̂_q X of the conditional q quantile of Y. By letting q vary throughout its range from 0 to 1 we can obtain all the information we want about the conditional distribution of Y.

In standard regression we find the OLS estimates as a solution to an optimization problem. That is, we minimize the sum of the squared residuals as in (II.7.6) above. In quantile regression we also find the q quantile regression coefficient estimates as a solution to an optimization problem. In fact, we find α̂_q and β̂_q as the solution to

$$\min_{\alpha,\beta}\; \sum_{t=1}^{T} \bigl(q - 1_{\{Y_t \le \alpha + \beta X_t\}}\bigr)\bigl(Y_t - (\alpha + \beta X_t)\bigr), \tag{II.7.8}$$

where

$$1_{\{Y_t \le \alpha + \beta X_t\}} = \begin{cases} 1, & \text{if } Y_t \le \alpha + \beta X_t, \\ 0, & \text{otherwise.} \end{cases}$$

To understand why this is the case, recall that in standard regression we express the 'loss' associated with a large residual by the square of the residual. It does not matter whether the residual is positive or negative. In quantile regression we express the 'loss' associated with a large residual by the function (q − 1_{Y_t ≤ α+βX_t}), which is shown in Figure II.7.2. Along the horizontal axis we show the residual, and the OLS loss function (the square of the residual) is depicted by the dotted quadratic curve. The loss function for the q quantile regression objective is depicted by the bold grey lines.

[Figure II.7.2 Loss function for q quantile regression objective]

In quantile regression we choose α̂_q and β̂_q to minimize expected loss, just as in OLS we choose α̂ and β̂ to minimize expected loss. The only difference between standard and quantile regression is the form of the loss function. The solution (α̂, β̂) to minimizing the OLS loss function satisfies

$$\hat{\alpha} + \hat{\beta} X = \hat{E}(Y \mid X), \tag{II.7.9}$$

where Ê(Y|X) is the sample estimate of the conditional mean. Similarly, the solution (α̂_q, β̂_q) to minimizing the quantile loss function shown in Figure III.4.13 satisfies[5]

$$\hat{F}^{-1}(q \mid X) = \hat{\alpha}_q + \hat{\beta}_q X + F_\varepsilon^{-1}(q), \tag{II.7.10}$$

where F̂⁻¹(q|X) is the sample estimate of the conditional q quantile.

Unlike OLS regression, where simple formulae can be used to find values of α̂ and β̂ given a sample on X and Y, there are generally no analytic formulae for the solutions α̂_q and β̂_q to (II.7.8). Therefore, we need to use a numerical algorithm. In the case study of Section II.7.2.6 below we shall use Excel Solver. However, Koenker and Hallock (2001) emphasize that specialized numerical algorithms are necessary to obtain reliable results. We remark that free software for many regression models, including linear quantile regression and inference on these models, is available from Bierens (2007).[6]

[5] See Koenker (2005: Section 1.3) for the proof.
[6] See http://econ.la.psu.edu/∼hbierens/EASYREG.HTM for free software for many regression models, including linear quantile regression.
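To make the numerical step concrete, the following Python sketch minimizes the quantile loss (II.7.8) directly with a general-purpose optimizer, which plays the same role as Excel Solver in the case study. The data are simulated purely for illustration, and, in line with the Koenker and Hallock (2001) caveat above, a dedicated quantile regression routine would be preferred for serious work.

```python
# A minimal numerical sketch of the q quantile regression objective (II.7.8):
# minimise the asymmetric quantile loss over (alpha, beta) with a
# general-purpose optimiser.  The data are simulated for illustration only.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
T = 1000
X = rng.normal(0.0, 0.02, size=T)
Y = 0.001 + 1.2 * X + rng.normal(0.0, 0.01, size=T)   # illustrative linear model

def quantile_loss(params, q):
    """Sum over t of (q - 1{Y_t <= a + b X_t}) * (Y_t - (a + b X_t)), as in (II.7.8)."""
    a, b = params
    resid = Y - (a + b * X)
    indicator = (resid <= 0).astype(float)
    return np.sum((q - indicator) * resid)

for q in (0.1, 0.5, 0.9):
    # OLS estimates make a sensible starting point for the search
    b0 = np.cov(X, Y)[0, 1] / np.var(X, ddof=1)
    a0 = Y.mean() - b0 * X.mean()
    res = minimize(quantile_loss, x0=[a0, b0], args=(q,), method="Nelder-Mead")
    print(f"q = {q}: alpha_hat = {res.x[0]:.5f}, beta_hat = {res.x[1]:.4f}")
```

Because the loss is piecewise linear rather than smooth, a derivative-free method such as Nelder–Mead is used here; this mirrors what Excel Solver does in the spreadsheet, rather than the specialized linear programming algorithms used in production quantile regression software.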
II.7.2.4 Inference on Linear Quantile Regressions

Inference in linear quantile regression is based on a remarkable 'model free' result that the confidence intervals for quantiles are independent of the distribution. In fact, the distribution of a quantile estimator is based on the fact that the number of observations in a random sample (from any population) that are less than the q quantile has a binomial distribution. Confidence intervals for a quantile estimator are derived in Section II.8.4.1, and a numerical example is given there. The binomial distribution for the quantile estimator is simple enough to extend to the linear quantile regression framework. In fact confidence intervals and standard errors of linear quantile regression estimators are now being included in some econometrics packages, including the EasyReg package referred to above. Koenker (2005) provides a useful chapter on inference in linear quantile regression, but the theory of inference in non-linear quantile regressions has yet to be fully developed.

II.7.2.5 Using Copulas for Non-linear Quantile Regression

Following Bouyé and Salmon (2002), a tractable approach to non-linear quantile regression is to replace the linear model in (II.7.8) by the q quantile curve of a copula. This is an extremely useful tool, because returns on financial assets very often have highly non-linear relationships. Recall from Section II.6.5 that every copula has a q quantile curve which may sometimes be expressed as an explicit function. For instance, when the marginals are both standard normal the normal (i.e. Gaussian) copula quantile curves are given by

$$Y = \varrho X + \sqrt{1 - \varrho^2}\,\Phi^{-1}(q). \tag{II.7.11}$$

Now suppose we know the marginal distributions F(X) and G(Y) of X and Y have been specified and their parameters have already been estimated using maximum likelihood. We then specify some functional form for a bivariate copula, and this will depend on certain parameters θ. For instance, the normal bivariate copula has one parameter, the correlation ϱ, the Clayton copula has one parameter, δ, and the bivariate Student t copula has two parameters, the degrees of freedom ν and the correlation ϱ. The normal copula quantile curves may be written

$$Y = G^{-1}\Bigl(\Phi\bigl(\varrho\,\Phi^{-1}(F(X)) + \sqrt{1 - \varrho^2}\,\Phi^{-1}(q)\bigr)\Bigr). \tag{II.7.12}$$

Similarly, from (II.6.69) we derive the Student t copula quantile curves as

$$Y = G^{-1}\Bigl(t_\nu\Bigl(\varrho\, t_\nu^{-1}(F(X)) + \sqrt{(\nu+1)^{-1}\bigl(\nu + t_\nu^{-1}(F(X))^2\bigr)\bigl(1 - \varrho^2\bigr)}\; t_{\nu+1}^{-1}(q)\Bigr)\Bigr), \tag{II.7.13}$$

and from (II.6.75) the Clayton copula quantile curves take the form

$$Y = G^{-1}\Bigl(\bigl(1 + F(X)^{-\delta}\bigl(q^{-\delta/(1+\delta)} - 1\bigr)\bigr)^{-1/\delta}\Bigr). \tag{II.7.14}$$

There is no closed form for the Gumbel copula quantile curves, but there are many other types of copula in addition to the normal, t and Clayton copulas for which the q quantile curve can be expressed as an explicit function: Y = Q(q; X, θ). We aim to estimate a different set of copula parameters θ̂_q for each quantile regression.
Using the quantile function in place of the linear function, we perform a special type of non-linear quantile regression that Bouyé and Salmon (2002) call copula quantile regression. To be more precise, given a sample {(X_t, Y_t)}, t = 1, …, T, we define the q quantile regression curve as the curve Y_t = Q(q; X_t, θ̂_q), where the parameters θ̂_q are found by solving the optimization problem

$$\min_{\theta}\; \sum_{t=1}^{T} \bigl(q - 1_{\{Y_t \le Q(q;\, X_t,\, \theta)\}}\bigr)\bigl(Y_t - Q(q;\, X_t,\, \theta)\bigr).$$
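The following Python sketch illustrates this copula quantile regression objective for the Clayton quantile curve (II.7.14), taking standard normal marginals for simplicity. The data are simulated from a Clayton copula so that the calibrated parameter can be sanity-checked against the value used in the simulation; it is an illustration of the optimization, not the case-study spreadsheet, and the parameter names and bounds are my own choices.

```python
# A minimal sketch of copula quantile regression: the Clayton q quantile curve
# (II.7.14) with standard normal marginals is fitted by minimising the same
# quantile loss as in (II.7.8).  Data are simulated for illustration only.
import numpy as np
from scipy import stats
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(7)
T, delta_true = 2000, 2.0

# Simulate (u, v) from a Clayton copula by conditional inversion, then map to
# standard normal marginals
u = rng.uniform(size=T)
w = rng.uniform(size=T)
v = ((w ** (-delta_true / (1 + delta_true)) - 1) * u ** (-delta_true) + 1) ** (-1 / delta_true)
X, Y = stats.norm.ppf(u), stats.norm.ppf(v)

def clayton_quantile_curve(q, x, delta):
    """Q(q; x, delta): Clayton q quantile curve (II.7.14) with N(0,1) marginals."""
    fx = stats.norm.cdf(x)
    inner = (1.0 + fx ** (-delta) * (q ** (-delta / (1.0 + delta)) - 1.0)) ** (-1.0 / delta)
    return stats.norm.ppf(inner)

def copula_quantile_loss(delta, q):
    """Quantile loss of the curve: the copula quantile regression objective."""
    resid = Y - clayton_quantile_curve(q, X, delta)
    indicator = (resid <= 0).astype(float)
    return np.sum((q - indicator) * resid)

for q in (0.1, 0.5, 0.9):
    res = minimize_scalar(copula_quantile_loss, bounds=(0.01, 20.0),
                          args=(q,), method="bounded")
    print(f"q = {q}: calibrated Clayton delta = {res.x:.2f}")
```

Calibrating a separate δ for each q, as in the text, lets the fitted quantile curves bend differently in the lower and upper tails, which is exactly the asymmetric dependence that linear quantile regression cannot capture.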
