Development of a fringe projection method for static and dynamic measurement

DEVELOPMENT OF A FRINGE PROJECTION METHOD FOR STATIC AND DYNAMIC MEASUREMENT

WU TAO (B.Eng. (Hons.))

A THESIS SUBMITTED FOR THE DEGREE OF MASTER OF ENGINEERING
DEPARTMENT OF MECHANICAL ENGINEERING
NATIONAL UNIVERSITY OF SINGAPORE
2003

ACKNOWLEDGEMENTS

I would like to express my sincere and deepest appreciation to my supervisors, Assoc. Prof. Tay Cho Jui (Department of Mechanical Engineering) and Assist. Prof. Quan Chenggen (Department of Mechanical Engineering), for their invaluable advice and guidance throughout this project. I would also like to express my gratitude to Mr. Chiam Tow Jong, Mr. Fu Yu, Ms. Sherrie Han, and Mr. Abdul Malik from the Experimental Mechanics Laboratory. I would also like to thank my family; their financial and spiritual support has enabled me to come to Singapore and study at an advanced academic level.

TABLE OF CONTENTS

ACKNOWLEDGEMENTS
TABLE OF CONTENTS
SUMMARY
NOMENCLATURE
LIST OF FIGURES

CHAPTER 1  INTRODUCTION
1.1 Introduction
1.2 Problem
1.3 Objectives of the project

CHAPTER 2  LITERATURE REVIEW
2.1 Application of optical techniques in shape and deformation measurement
2.2 Application of optical techniques in dynamic measurement
2.3 Development of fringe projection method
2.4 Enhancement of dynamic range of optical system
2.5 3-D displacement measurement by optical techniques
2.6 Digital image correlation (DIC) technique

CHAPTER 3  THEORY
3.1 Formation of fringe patterns
3.1.1 Formation of fringe patterns by interferometry method
3.1.2 Formation of fringe patterns by a Liquid Crystal Display (LCD) projector
3.2 Height and phase relationship
3.3 Determination of phase value
3.4 Phase unwrapping
3.5 Enhancement of dynamic range of fringe projection method
3.6 Phase shift calibration
3.7 Integrated fringe projection and DIC method

CHAPTER 4  APPLICATION OF FRINGE PROJECTION METHOD FOR STATIC MEASUREMENT
4.1 Experimental work
4.2 Results and discussion
CHAPTER 5  APPLICATION OF FRINGE PROJECTION METHOD FOR DYNAMIC MEASUREMENT
5.1 Measurement of dynamic response of a small component
5.1.1 Experimental work
5.1.2 Results and discussion
5.2 Enhancement of dynamic range of the fringe projection method
5.2.1 Experimental work
5.2.2 Results and discussion

CHAPTER 6  INTEGRATED FRINGE PROJECTION AND DIC METHOD
6.1 Experimental work
6.2 Results and discussion

CHAPTER 7  CONCLUSIONS AND RECOMMENDATIONS
7.1 Conclusions
7.2 Recommendations

REFERENCES

APPENDIX A  DERIVATION OF TIME-AVERAGE FRINGE PROJECTION
APPENDIX B  PROCEDURE OF FFT PROCESSING
B.1 Fast Fourier Transform
B.2 Bandpass Filter for the Fourier Transform Method
B.3 Inverse FFT
B.4 Unwrapping
APPENDIX C  PHASE MAPS AT DIFFERENT INTERVALS
APPENDIX D  LIST OF PUBLICATIONS

SUMMARY

This thesis is divided into three parts. The first part establishes the theory for a fringe projection method and its application to shape measurement under both static and dynamic loading conditions. Experimental verification for both static shape measurement and dynamic analysis is carried out on a micro membrane, a coin and the diaphragm of a speaker. The experimental results show excellent agreement with theoretical values.

Since the exposure period of a camera is reduced at high frame rates, dynamic measurement with a short exposure is intrinsically starved of light. This insufficiency in light intensity often causes underexposure and leads to poor image quality. To overcome this problem and enhance the dynamic range of the system, a practical and simple method involving a white light source (WLS) is proposed and demonstrated. The theory is presented, and an increase in the measurement range of up to a factor of 6 was achieved.
Since the fringe projection method is based mainly on the height and out-of-plane displacement of objects, in-plane displacement is observed to have a significantly adverse effect on the results. Therefore, measurement of 3-D displacement is needed. A novel method combining fringe projection and digital image correlation (DIC) in a single optical system is developed to measure displacement in three dimensions simultaneously, using only one camera. In this technique, linear sinusoidal fringes are first projected onto an object by a fringe projector, and images of the object's surface with the fringe pattern are captured by a CCD camera. With the aid of the Fourier transform, the carrier (fringe pattern) in the images is eliminated while only the background intensity variation is preserved. DIC is then used to obtain the in-plane displacement from the background images after carrier elimination. Meanwhile, the original images are processed by the fast Fourier transform (FFT) technique to deliver information about the shape of the object. Based on the in-plane displacement vector obtained by DIC, the shapes of the object at different stages are compared in a reference coordinate system to obtain the out-of-plane displacement. Experimental results for the 3-D displacement field of a small component are obtained to confirm the validity of the method.
NOMENCLATURE

A, C, C*       Fourier spectra
a              Background intensity
aW             White light background intensity
b              Variation of the fringe pattern
c*             Complex conjugate
D              Distance from the light source to the screen
d              Distance between two virtual point sources
f              Spatial frequency in the x-direction
g              Distance between LCD and CCD
h              Height of the surface
IF             Resultant intensity of fringe pattern
IF'            Output intensity of fringe pattern
IO             Optimal intensity distribution
IP             Input intensity of fringe pattern
k              Coefficient relating Z to φ
L              Length from point A to the edge of the optical wedge
l              Distance between reference plane and CCD
n0             Refractive index of the optical wedge
P              Position of the investigated point
P0             Fringe pitch
Q              Centre of the imaging optics
t              Thickness of an optical wedge
u0             Carrier frequency
W              3-D displacement vector
Z(x, y, t)     Instantaneous displacement of the vibrating surface
Z(x, y)        Vibration amplitude at point (x, y)
α              Angle of incidence of the light
β              Refraction angle
θ              Initial phase angle of vibration
ω              Angular velocity of vibration
γ              Projection angle
λ              Wavelength of a He-Ne laser
φ0(x, y)       Initial phase value
φ(x, y)        Phase variable
φn             Phase-shifting interval
δ              Phase value
δ(t)           Instantaneous displacement
µ, υ           Contrast coefficients
∆x, ∆y, ∆z     Displacements in the x-, y- and z-directions

LIST OF FIGURES

Figure 3.1 Schematic diagram of two-point-light-source interferometry with an optical fiber and an optical wedge.
Figure 3.2 Optical geometry for fringe analysis.
Figure 3.3 Fourier fringe analysis. (a) Spectrum of the interferogram. (b) Processed spectrum after filtering.
Figure 3.4 (a) Wrapped phase with 2π jumps. (b) Unwrapped phase value.
Figure 3.5 Intensity of fringe pattern enhanced by a white light source.
Figure 3.6 General scheme of the proposed integrated method.
Figure 3.7 Logical model of CRA.
Figure 3.8 Schematic diagram of planar deformation process.
Figure 4.1 Experimental setup for shape measurement of small components.
Figure 4.2 Microphone chip.
Figure 4.3 Cross-section view of the microphone.
Figure 4.4 Image of the fringe pattern on the test surface.
Figure 4.5 Relationship between the phase value and height of the test surface.
Figure 4.6 3-D plot of the surface of the microphone.
Figure 4.7 Cross-section of the micro membrane at x = 125 µm.
Figure 5.1 Experimental setup of fringe projection method for dynamic measurement.
Figure 5.2 (a) Test specimen. (b) A close-up view of the surface with fringe projection.
Figure 5.3 Sinusoidal fringe pattern projected on a small section of the test specimen.
Figure 5.4 Images at 0.002 s time intervals.
Figure 5.5 Calibration of the fringe projection system for the measurement of vibration in the z direction.
Figure 5.6 Unwrapped phase maps at 0.002 s time intervals.
Figure 5.7 (a) Vibration amplitude before phase recovery. (b) Vibration amplitude after phase recovery.
Figure 5.8 Vibration plots of different regions on the object.
Figure 5.9 Comparison of micro values and experimental vibration amplitude.
Figure 5.10 Experimental setup with the white light source.
Figure 5.11 A small section on the test coin surface.
Figure 5.12 Comparison of images of part of the coin recorded at 3000 fps. (a) Image recorded with background enhancement. (b) Image recorded without background enhancement. (c) Image processed with an optimal contrast.
Figure 5.13 Comparison of the fringe pattern distribution along cross-section YY.
Figure 5.14 (a) 3-D profile of test object before background enhancement. (b) 3-D profile of test object after background enhancement.
Figure 5.15 (a) Image of the speaker recorded without background enhancement at a frame rate of 3000 fps. (b) Image of the speaker recorded with white light background enhancement at a frame rate of 3000 fps.
Figure 5.16 (a) 3-D profile of speaker diaphragm with background enhancement. (b) 3-D profile of the speaker without background enhancement.
Figure 5.17 Vibration amplitude of speaker diaphragm.
Figure 5.18 Comparison of vibration plots at 3000 fps frame rate.
Figure 6.1 A close-up view of a speaker diaphragm.
Figure 6.2 (a) Image of the test surface with a fringe pattern of 50-pixel pitch. (b) Recovered intensity distribution of the background.
Figure 6.3 (a) 3-D representation of the spectrum before filtering. (b) 3-D representation of the spectrum after filtering.
Figure 6.4 (a) Image of the test surface with a fringe pattern of 15-pixel pitch. (b) Recovered intensity distribution of the background.
Figure 6.5 (a) Image of the test surface with a fringe pattern of 1-pixel pitch. (b)-(e) Recovered intensity distribution of the background using 70 × 70, 80 × 80, 100 × 100 and 120 × 120 pixel filtering windows respectively.
Figure 6.6 (a) A close-up view of the test surface with fringe pattern before displacement (225 µm, 300 µm and 140 µm in the x-, y- and z-directions respectively). (b) A close-up view of the test surface with fringe pattern after displacement (225 µm, 300 µm and 140 µm in the x-, y- and z-directions respectively).
Figure 6.7 Phase map of the test surface.
Figure 6.8 Images of the background of the test surface after CRA. (a) Before displacement. (b) After displacement.
Figure 6.9 Calibration of the measurement system. (a) Calibration and error analysis of displacement along the X direction.
(b) Calibration and error analysis of displacement along the Y direction. (c) Calibration and error analysis of out-of-plane displacement.
Figure 6.10 3-D displacement vector of the test surface.
Figure B.1 (a) Real part of the Fourier spectrum. (b) Imaginary part of the Fourier spectrum. (c) Amplitude spectrum.
Figure B.2 Dialog box for the bandpass filter for the Fourier transform method.
Figure B.3 (a) Filtered real part of the Fourier spectrum. (b) Filtered imaginary part of the Fourier spectrum. (c) Amplitude spectrum of the result.
Figure B.4 (a) Back-transformed real part of the Fourier spectrum. (b) Back-transformed imaginary part of the Fourier spectrum. (c) Mod 2π phase.
Figure B.5 Unwrapped phase map.
Figure C.1 Phase maps at different time intervals.

Chapter 1 Introduction

1.1 Introduction

Optical techniques can be classified into static or dynamic methods with respect to the loading conditions. Shape and deformation measurement are examples of the static approach, while measurement under vibrational excitation belongs to the dynamic mode. Projection of fringes for the measurement of surface shape is a non-contact optical method that has been widely recognized for the contour measurement of various diffuse objects; the technique is referred to as fringe projection. The method uses parallel or divergent fringes projected onto the object surface, either by a conventional imaging system or by coherent light interference, with the projection and recording directions being different. The resulting phase distribution of the measured fringe pattern contains information on the surface height variation of the object. Analysis of the fringe patterns is normally carried out either by the phase-shifting technique or by the fast Fourier transform (FFT) method.
Both produce wrapped phase maps, in which 2π phase jumps caused by the nature of the arctangent function must be removed by a process known as phase unwrapping to recover the surface heights. Phase unwrapping is normally carried out by comparing the phase at neighbouring pixels and adding or subtracting multiples of 2π. In application, the fringe projection method has proven to be a promising tool for deformation and curvature measurement.

1.2 Problem

However, most research in fringe projection has been based mainly on static measurement with the phase-shifting technique. In such applications, static loading is commonly applied to the test specimen to achieve the desired results. Over the years, although descriptions of the technique were often presented, no formal treatment of dynamic fringe projection had been given. Dynamic measurement by fringe projection, which can be coupled with either static or dynamic loading of the object, enables the fringe pattern to be monitored live as it is produced and as it changes with time under the action of a varying load. This attribute makes dynamic fringe projection particularly useful for monitoring both time-dependent and transient events. If true 3-D displacement analysis is to be performed, the system must monitor the object with more than one camera. This requires multiple projection-detection systems, preferably working on the basis of the calibration principle, or with reference to a pre-calibrated measurement volume. These methods can effectively monitor a 3-D displacement field, but most often two or more cameras are used to record 3-D information about the object. Multiple-camera systems have some key limitations, including 1) ill-suitability for dynamic measurement, 2) mismatch in the triangulation of corresponding points and 3) a calibration process that is laborious and time-consuming. Therefore, single-camera systems are greatly desired.
Conventionally, the fringe projection method is used mostly for out-of-plane displacement measurement.

1.3 Objectives of the Project

To formulate the fringe projection method for measuring components under vibrational loading conditions and 3-D translation, the objectives of this project are as follows: 1) to demonstrate the application of the fringe projection method in both static and dynamic measurement; 2) to enhance the dynamic range of the measurement system; and 3) to develop a novel fringe projection method integrated with the digital image correlation (DIC) technique which enables the determination of shape and 3-D displacement using only one camera.

In this first chapter of the thesis, the objectives of the project are defined. The historical development of optical techniques is presented in Chapter 2. Chapters 3 to 6 cover the main part of the project, including the theoretical derivation and the experimental techniques, followed by a detailed discussion of each topic. Chapter 7 gives the conclusions and lays down some recommendations for future investigation. A list of publications arising from this research is given in Appendix D.

Chapter 2 Literature Review

2.1 Application of Optical Techniques in Shape and Deformation Measurement

Optical metrology has developed rapidly since the 1960s, and surface measurement has since been regarded as one of its main components. In the early days, laser scanning machines were used as surface detection tools; however, because of the time-consuming nature of point-by-point measurement, obtaining a full surface map could take a long time. The main advantages of optical metrology, such as full-field measurement, were therefore exploited. Some techniques, such as the shadow moiré method, are still used in surface measurement. The shadow moiré method [1-3] involves positioning a grating close to an object; the grating's shadow on the object is observed through the grating.
The method is useful for measuring the 3-D shape of a relatively small object; however, the size of the object to be measured is restricted by the grating size. The sensitivity of the method ranges from the order of microns to that of millimetres, depending on the frequency and the amount of relative rotation of the grating. The holographic method involves the generation of a contour fringe pattern by two reconstructed images of a double-exposure hologram. Thalmann and Dandliker [4] reported holographic contouring using electronic phase measurement, based on two illumination sources and the use of a microcomputer for data reduction. ESPI (electronic speckle pattern interferometry) based on laser diodes and single-mode fiber optics has also been developed for measuring surface contours [5]. However, the poor quality of the contouring images remains the main limitation of this technique for surface measurement. Shearography [6, 7] has also been used in surface shape measurement. Unlike holography, shearography does not require special vibration isolation since a separate reference beam is not needed; hence, it is a practical tool that can be used in an industrial environment. Optical grating methods have been applied to the measurement of 3-D shape [8-10]. In this approach, five separate defocused images of a Ronchi grating are projected onto an object; the deformed grating images are captured by a CCD camera and evaluated by the phase-shifting technique. The method is used for relatively large objects. By the end of the 1980s, computer vision with refractive moiré and projected fringe methods had been developed for surface measurement. As classical approaches using mechanical probes remain inherently slow and ill-suited to the measurement of curved surfaces, 3-D sensing by non-contact optical methods has been studied extensively for these applications.
In industrial metrology, a non-contacting, non-destructive automated surface shape measurement technique is a desirable tool for vibration analysis, quality control and contour mapping.

2.2 Application of Optical Techniques in Dynamic Measurement

The discussion so far has focused mainly on static shape and curvature measurement of test specimens. For dynamic work, a non-contact and non-destructive vibration measurement technique is equally desirable for contour mapping, quality control and vibration analysis. Optical techniques for vibration measurement are well established and can be traced back to the early days of optical methods. The development of laser Doppler vibrometers (LDV) [11, 12] for engineering testing was stimulated by devices capable of easily detecting sub-nm amplitudes over a frequency range from static to MHz. However, laser vibrometers are generally intended to make measurements at a single point on the surface of a test object. Several solutions for whole-field measurement of vibration with optical techniques have been successfully proposed. Hung [13] applied the shearography method to vibration measurement by digitizing speckle images of a deforming object using a high-speed digital image acquisition system. Moore [14] presented an electronic speckle pattern interferometry (ESPI) system that enables non-harmonic vibrations to be measured with microsecond temporal resolution. Kokidko [15] developed the shadow moiré method to measure the deformation of a plastic panel using high-speed photography. Nemes [16] presented a system based on grating projection and the fast Fourier transform (FFT) technique to measure the transient surface shape in a polymer membrane inflation test. The FFT technique with the carrier fringe method [17, 18] has also been widely employed in dynamic measurement, as it requires only one image for phase determination.
Other methods [19, 20] based on a high-speed camera and the FFT technique have also been reported. Tiziani [21-24] developed pulsed digital holography for the measurement of deformations and vibrations of various objects. Using the time-averaged method, holography allows measurement of the shape of structures subjected to vibration excitation. Chambard [25] extended the method to include pulsed-TV holography for vibration analysis applications. Real-time pulsed ESPI [26] (electronic speckle pattern interferometry), based on a high-precision scheme that synchronizes and fixes an object point during rotation, has been used to study out-of-plane vibrations in a noisy environment. Aslan [27] developed a real-time laser interferometry system for the measurement of displacement in hostile environments.

2.3 Development of Fringe Projection Method

The fringe projection method is well suited to three-dimensional optical topometry [28-36]. It is a useful complement to other methods such as confocal microscopy [37-40] and white-light interferometry [41-43]. Pixel-related devices offer a much wider range of possibilities, since virtually any desired intensity distribution can be generated. Triangulation and fringe projection are the most appropriate and most frequently employed techniques for macroscopic shape analysis. For fringe projection, a grating, e.g. with a sinusoidal intensity distribution, is imaged onto the surface to be measured, and the fringe deformation is used for height calculation. Typically only a few video frames need to be recorded to obtain a full-field 3-D measurement, so the image-processing-based measurement principle enables very fast measurement. Phase values are determined by calculating the Fourier transform, filtering in the spatial frequency domain, and calculating the inverse Fourier transform.
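As a concrete illustration, these three steps can be sketched as follows. This is a minimal single-sideband sketch assuming NumPy; the function name and the `carrier_col`/`half_width` filter parameters are illustrative choices made by inspecting the spectrum, not values prescribed by any particular system.

```python
import numpy as np

def fft_fringe_phase(image, carrier_col, half_width):
    """Wrapped phase of a fringe image by the Fourier transform method:
    forward FFT, bandpass filtering around the carrier peak, inverse FFT,
    then the argument (arctangent) of the resulting complex field."""
    # Forward 2-D FFT, shifted so frequency 0 sits at the centre column.
    spectrum = np.fft.fftshift(np.fft.fft2(image), axes=1)
    # Bandpass filter: keep only a band of columns around the carrier,
    # isolating one sideband and suppressing the dc term and its twin.
    mask = np.zeros(spectrum.shape)
    mask[:, carrier_col - half_width:carrier_col + half_width + 1] = 1.0
    # Inverse FFT of the isolated sideband yields a complex field whose
    # argument is the wrapped phase (carrier still included).
    field = np.fft.ifft2(np.fft.ifftshift(spectrum * mask, axes=1))
    return np.angle(field)
```

Subtracting the known carrier term and wrapping the result back into (−π, π] then leaves only the phase caused by the surface height.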
Compared with moiré topography, the fast Fourier transform method can accomplish a fully automatic distinction between a depression and an elevation of the object shape. It requires no fringe-order assignment, fringe-centre determination or interpolation between fringes, because it gives the height distribution at every pixel over the entire field. Following Takeda [44], the FFT method has been studied extensively [45]. A two-dimensional Fourier transform is applied to provide a better separation of the height information from noise when speckle-like structures and discontinuities exist in the fringe pattern.

2.4 Enhancement of Dynamic Range of Optical System

An important problem remaining in dynamic imaging systems is the underexposure of a CCD sensor in high-speed applications. At a high frame rate, the exposure period of a CCD camera is decreased and hence less light is absorbed by the photosensors in the CCD [46], resulting in insufficient information being recorded. This problem becomes more serious when the system is used to measure micro-components with a long-distance microscope (LDM), which has a limited aperture. Hence image acquisition becomes an optimization problem of adapting the dynamic range of the scene to the dynamic range of the camera. To modulate the intensity in dark and bright areas, Tiziani [47] developed a method using a three-chip color (RGB) camera. The three color channels are recorded simultaneously, and the combined output of the RGB channels allows the use of full spatial resolution for each color channel, in contrast to a one-chip color CCD camera. To overcome the problem of low intensity, Pedrini [48] presented a method which employs an image intensifier coupled to a CCD sensor. The image intensifier, together with an electronic shutter, allows recording of dim test surfaces with a short exposure period.
To improve the shuttering characteristics, Ito [49] suggested a method which consists of a proximity-focused image intensifier with a micro-channel plate and an external transparent electrode. The method can effectively bring the intensity on the specimen within the dynamic range of the camera. However, the apparatus is somewhat costly and the data processing is complex.

2.5 3-D Displacement Measurement by Optical Techniques

In many areas of physics and engineering, measurement of the three-dimensional displacement field of an object undergoing translation is of great interest. Within optical metrology, several techniques that measure all three components of a deformation simultaneously already exist. Formerly, the most widely used technique in this regard was speckle photography. This technique essentially consists of recording the incoherent superposition of two or more speckle patterns generated before and after the motion of the object, and then analyzing the recorded specklegram by Fourier transform to reveal the object deformation. Some works in this field dealt only with in-plane displacement or only with out-of-plane displacement; in practice, however, the two kinds of motion are often coupled. It is therefore desirable to have a single technique providing a measure of the total object motion with both in-plane and out-of-plane components. Several speckle-based techniques for 3-D displacement measurement have been developed: one used both a He-Ne laser and a polychromatic dye laser; others obtained results by analyzing the null-speckle displacement ring. These requirements introduce some inconvenience in system architecture or inaccuracy in measurement. Recently, a technique using a photorefractive speckle correlator has been proposed.
In this technique, a double- or multi-exposure speckle interferogram is recorded with the use of a photorefractive crystal, and the interferogram is then placed as an input in a Fourier transform system to obtain correlation spots; the position of a spot indicates the object motion. Still, this method has some drawbacks. First, the correlation operation increases the complexity and time of measurement. Second, for general 3-D displacement and tilt, the correlation spot and the dc term are not in the same plane, so the observation plane has to be moved longitudinally to the focus position of the correlation term. Since the longitudinal distribution of the correlation spot changes gradually, it is not always easy to locate its sharpest position exactly, which gives rise to another error source. Two- or three-beam holography is suitable when the deformation components are small and of equal magnitude. Chiang [50, 51] has developed at least two different techniques: one based on moiré interferometry, and another called holospeckle interferometry, in which the in-plane components of the deformation are analyzed by speckle photography and the out-of-plane component by holographic interferometry. Non-interferometric methods use stereovision to obtain the true deformation field by capturing the apparent motion of a reference pattern from two cameras in space. Henao [52] developed a technique in which a diffraction grating was used to form the stereovision. During the last decade, several researchers [53-55] have presented systems based on digital correlation algorithms combined with a stereo pair of CCD cameras; such systems can handle discontinuities in the deformation field. Pawlowski [56] applied a spatio-temporal approach, in which temporal analysis of the intensity variation at a given pixel provides information about out-of-plane displacement.
In-plane motion of the object is determined by a photogrammetry-based marker-tracking method.

2.6 Digital Image Correlation (DIC) Technique

Digital image correlation (DIC) is a non-contact optical method for displacement and strain measurement, introduced by Sutton in the 1980s [57]. It has since been well developed and applied in many industrial fields [58-66] as a robust measurement method. The technique is based on the gray-level correlation between two digital images in the undeformed and deformed states. The natural or artificial surface patterns in the images are the carrier that records the surface displacement information of an object. By applying a correlation algorithm to the gray levels of the pixels in the two images, the displacement fields can be obtained.

Chapter 3 Theory

3.1 Formation of Fringe Patterns

There are different approaches for generating fringe patterns, such as interferometry, triangulation and spatial light modulation by a liquid crystal modulator.

3.1.1 Formation of Fringe Patterns by Interferometry Method

An arrangement that incorporates an optical wedge enables a fringe pattern with a fine pitch to be obtained. This technique has the advantage of requiring a simple experimental setup and optical arrangement. Because the laser interference occurs in a perfect common path, the proposed fringe projector is compact and provides a stable, highly visible fringe pattern, which is suitable for micro-component measurement. As shown in Fig. 3.1(a), the fiber end S of an optical fiber acts as a point light source and emits a spherical wavefront. The wavefront is split into two portions, corresponding to EFGH and E1F1G1H1 respectively. Interference fringes in the superimposed area E1F1GH of the two portions are formed from the coherent light of the two point light sources S1 and S2, which are equivalent to a pair of pinholes in Young's interferometer configuration, as shown in Fig. 3.1(b).
Young's fringes with a sinusoidal light intensity distribution thus emerge on the observation screen [67, 68]. The fringe pitch P0 is given by

P0 = (D/d)λ    (3.1)

where d is the distance between the two virtual point sources, D is the distance from the light source to the screen, and λ is the wavelength of a He-Ne laser. According to Fig. 3.1(a), the wedge thickness t at point A is given by

t = L tan θ    (3.2)

where θ is the wedge angle and L is the length from point A to the edge of the optical wedge. If the thickness t of the wedge is much smaller than D and L, beam AO is nearly parallel to beam CO1; hence the separation CD between AO and CO1 equals the separation d of the two point light sources S1 and S2. From geometry, the distance d is given by

d = 2t tan β sin α    (3.3)

where β is the refraction angle at point A, as shown in Fig. 3.1(a), and α is the angle of incidence of the light. By Snell's law,

sin α = n0 sin β    (3.4)

where n0 is the refractive index of the optical wedge. Combining Eqs. (3.2)-(3.4), the separation d of the two point light sources can be written as

d = 2L sin α tan[arcsin(sin α / n0)] tan θ    (3.5)

If the light source S is shifted in the Z direction only, Eq. (3.5) simplifies to

d = K tan θ    (3.6)

where the constant K = 2L sin α tan[arcsin(sin α / n0)]. Hence the pitch P0 of the fringe pattern is given by

P0 = Dλ / (K tan θ)    (3.7)

Using optical wedges with different wedge angles θ, fringe patterns with different pitches can be obtained.

3.1.2 Formation of Fringe Patterns by a Liquid Crystal Display (LCD) Projector

For the projection of fringes, a high-resolution spatial light modulator (SLM) is appropriate. Today, a large number of SLMs are readily available, based on twisted-nematic liquid crystal displays, digital micromirror devices and reflective LCDs. Images can be written into the LCD by supplying the driving electronics from a computer.
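A minimal sketch of how such a sinusoidal pattern could be generated in software before being written to the display (assuming NumPy; the function name, resolution and pitch values are illustrative and not those of the actual driver used in this project):

```python
import numpy as np

def sinusoidal_fringe_pattern(width, height, pitch_px, phase_shift=0.0):
    """8-bit sinusoidal fringe pattern with vertical fringes.

    pitch_px    fringe pitch in pixels
    phase_shift extra phase in radians, for phase-shifting algorithms
    """
    x = np.arange(width)
    # Intensity varies sinusoidally across x and is constant along y.
    row = 0.5 + 0.5 * np.cos(2.0 * np.pi * x / pitch_px + phase_shift)
    return np.round(255.0 * np.tile(row, (height, 1))).astype(np.uint8)
```

Four patterns generated with phase_shift = 0, π/2, π and 3π/2 would provide the frames needed for a four-step phase-shifting sequence.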
Brightness and contrast can be set manually on the LCD driver board. The LCD provides grayscale capability, making it possible to use both gray-code and phase-shifting algorithms with sinusoidal fringes. The advantages of this method are: (a) extreme versatility in use; (b) the matrix display builds up arbitrarily configurable patterns for projection; (c) an internal memory can store up to 32 images and 32 lines; (d) the fringe pitch can be as small as 1 pixel × 1 pixel for measurement of small objects, which has been of great help to this project.

3.2 Height and Phase Relationship

Figure 3.2 shows the optical geometry of the projection and imaging system. Points P and E are the centers of the exit pupils of the projection and imaging lenses respectively. Every point on the reference plane is characterized by a unique phase value with respect to a reference point such as B, which is stored in the computer memory as a system characteristic. The detector array is used to measure the phase at the sampling points. For example, the phase at a point on the reference plane and the phase at the corresponding point on the object surface are measured.
The phase mapping algorithm then searches for the corresponding point on the reference plane and, from similar triangles, the phase-height relationship can be written as

h(x, y) = L(AC/d) / (1 + AC/d)    (3.8)

However, if the distance between the sensor and the reference plane is large compared with the pitch of the projected fringes, under normal viewing conditions the phase-profile relationship is given by

h(x, y) = (l/g) FG = [l/(2πgf)] φ_CD = k φ_CD    (3.9)

where l is the distance between the sensor and the reference plane, g the distance between the sensor and the projector, f the spatial frequency of the projected fringes on the reference plane, and φ_CD a phase angle which contains the surface height information. The coefficient k = l/(2πgf) is a constant for a given measurement system, related to the configuration of the optical setup. The height of the object can then be calculated once the value of k is known from a calibration process.

3.3 Determination of Phase Value

There are two popular techniques for determining the phase value. The first is the phase-shifting method [69-72]. Basically, it imposes known phase shifts on one of the light beams by means of a phase shifter, so that the relative phase difference of the two interfering waves is changed artificially. The phase value δ(x, y) of a point in the interference field can then be calculated from several images recorded with a phase-shift interval φ:

I_n(x, y) = a(x, y) + b(x, y) cos[δ(x, y) + φ_n]    (3.10)

where φ_n = (n − 1)φ, n = 1, 2, ..., m, m ≥ 3. For the four-step phase-shifting method, the phase value of each point in the image is obtained from

δ(x, y) = arctan{[I_4(x, y) − I_2(x, y)] / [I_1(x, y) − I_3(x, y)]}    (3.11)

In general,

δ(x, y) = −arctan{ [Σ_{n=1}^{m} I_n(x, y) sin φ_n] / [Σ_{n=1}^{m} I_n(x, y) cos φ_n] }    (3.12)

The second technique is the Fourier transform method [73-78], in which the fringe map is Fourier transformed.
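Before turning to the Fourier-transform technique, the four-step relation of Eq. (3.11) can be checked numerically with synthetic phase-shifted intensity maps (an illustrative sketch, not code from the thesis); the two-argument arctangent recovers the wrapped phase directly:

```python
import numpy as np

# Synthetic test phase, wrapped into (-pi, pi], on a small grid
x = np.linspace(-1.0, 1.0, 64)
delta = np.angle(np.exp(1j * np.outer(x, x) * 4.0))  # arbitrary smooth phase field

a, b = 120.0, 100.0                       # background and modulation terms
shifts = [0, np.pi / 2, np.pi, 3 * np.pi / 2]  # phi_n = (n-1)*pi/2, n = 1..4
I = [a + b * np.cos(delta + p) for p in shifts]

# Eq. (3.11): delta = arctan[(I4 - I2)/(I1 - I3)]; arctan2 keeps the sign quadrant
recovered = np.arctan2(I[3] - I[1], I[0] - I[2])

# Maximum wrapped difference between recovered and true phase (float precision)
print(np.max(np.abs(np.angle(np.exp(1j * (recovered - delta))))))
```

Note that I4 − I2 = 2b sin δ and I1 − I3 = 2b cos δ, so the two-argument arctangent returns δ over the full (−π, π] interval rather than only (−π/2, π/2].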
The Fourier spectrum is band-pass filtered and shifted so that it is centered at zero frequency; an inverse Fourier transform then yields a phase map modulo 2π. The method has the advantage of using only one interference fringe pattern for processing, and the background intensity variation and speckle noise are reduced. The input fringe pattern can be described by

f(x, y) = a(x, y) + b(x, y) cos[2πu_0 x + φ_0(x, y) + φ(x, y)]    (3.13)

where a(x, y) and b(x, y) are the background and modulation terms respectively, u_0 is the spatial carrier frequency, φ_0(x, y) is an initial phase, and φ(x, y) is the phase variable which contains the desired information. For simplicity of analysis, the initial phase φ_0(x, y) is assumed to be zero. For the purpose of Fourier fringe analysis, the input fringe pattern can be written in the form

f(x, y) = a(x, y) + c(x, y) exp[i(2πu_0 x)] + c*(x, y) exp[−i(2πu_0 x)]    (3.14)

where c(x, y) = (1/2) b(x, y) exp[iφ(x, y)] and '*' denotes the complex conjugate. The Fourier transform of the recorded intensity distribution f(x, y) with respect to x is given by

F(u, y) = A(u, y) + C(u − u_0, y) + C*(u + u_0, y)    (3.15)

In most cases, a(x, y), b(x, y) and φ(x, y) vary slowly compared with the carrier frequency u_0; hence the spectra are separated from each other by u_0. The central lobe and one of the two spectral side lobes are filtered out; the remaining side lobe is weighted by a Hanning window and translated by u_0 towards the origin to obtain C(u, y). This is shown in Figure 3.3.
Taking the inverse Fourier transform of C(u, y) with respect to x yields c(x, y); the phase distribution may then be calculated pointwise using

φ(x, y) = arctan{Im[c(x, y)] / Re[c(x, y)]}    (3.16)

where Im[c(x, y)] and Re[c(x, y)] denote the imaginary and real parts of c(x, y) respectively.

The displacement of the object can be evaluated by analyzing the phase distribution of neighboring points. For a vibrating object, the displacement of a given point at time t is related to its instantaneous height h(x, y, t) by

Z(x, y, t) = h(x, y, t) − h(x, y, 0)    (3.17)

where h(x, y, 0) denotes the height of the reference state at t = 0. Using a function generator, the displacement Z(x, y, t) (in µm) is given by

Z(x, y, t) = Z(x, y) sin(θ + ωt)    (3.18)

where Z(x, y) is the amplitude of the vibration, θ the initial phase angle, and ω the angular frequency. These values may be obtained from the settings of the function generator.

3.4 Phase Unwrapping

In both the phase-shifting and Fourier transform techniques, the phase is obtained by means of the inverse trigonometric arctangent function. By its nature, this function returns only principal values, i.e. values in [−π, π], generating a discontinuous phase map wrapped into the [−π, π] interval. This map must therefore be unwrapped onto the (−∞, ∞) interval before the phase values can be converted into continuous values of the physical variable of interest. Phase unwrapping [79-81] is essential in optical metrology by phase-stepping and spatial filtering techniques, as represented in Fig. 3.4. Determination of the absolute phase from its principal value can be approached in various ways, including pixel by pixel, block by block, and frame by frame.
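A minimal one-dimensional sketch of this wrapped-to-continuous conversion (illustrative code, not from the thesis): scan along a line and accumulate a ±2π offset whenever adjacent samples jump by more than π.

```python
import numpy as np

def unwrap_1d(wrapped):
    """Sequentially remove 2*pi jumps larger than pi between neighbouring samples."""
    out = np.array(wrapped, dtype=float)
    offset = 0.0
    for i in range(1, len(out)):
        d = wrapped[i] - wrapped[i - 1]
        if d > np.pi:        # downward 2*pi wrap occurred
            offset -= 2 * np.pi
        elif d < -np.pi:     # upward 2*pi wrap occurred
            offset += 2 * np.pi
        out[i] = wrapped[i] + offset
    return out

true_phase = np.linspace(0.0, 6 * np.pi, 200)   # smooth, continuously increasing phase
wrapped = np.angle(np.exp(1j * true_phase))     # principal values in (-pi, pi]
print(np.allclose(unwrap_1d(wrapped), true_phase))  # True
```

This simple sequential rule succeeds as long as the true phase changes by less than π between adjacent samples, which is the same condition quoted later for temporal phase recovery.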
Gierloff [82] proposed a phase unwrapping algorithm that divides the fringe field into regions of inconsistency and then relates these regions to one another. Green and Walker [83] presented an algorithm that uses knowledge of the frequency band limits of a wrapped phase map. A method based on the identification of discontinuity sources that mark the start or end of a 2π phase discontinuity was developed by Cusack and Huntley [84]. Some of these methods are rather complicated because they assume a very noisy wrapped image. However, if the phase map is obtained with a noise-suppressing phase mapping algorithm, or with the noise filtered out as described in the previous sections, phase unwrapping is a relatively simple and straightforward task.

A simple but robust phase unwrapping algorithm has been applied in this project. In summary, it seeks phase jumps greater than π and corrects them by addition or subtraction of a 2π offset until the difference between adjacent pixels is less than π. This operation is iterated, rightward and line by line, until every pixel in the data set has been unwrapped. The algorithm needs no pre-processing of the wrapped image to reduce noise, nor any effort to choose a special unwrapping path that avoids noisy points.

3.5 Enhancement of Dynamic Range of Fringe Projection Method

To solve the underexposure problem encountered in the previous study, it is necessary to match the dynamic range of the scene to the dynamic range of the camera. In normal applications, the sensor in a CCD camera records light intensity over an 8-bit range, with gray values from 0 to 255. When a sinusoidal fringe pattern is projected onto a diffuse test surface by an LCD projector, the light intensity on the test surface may fall below the threshold level of the photosensors [85], particularly for a high-speed camera with a short exposure period.
Hence an underexposure problem arises, as shown in Fig. 3.5, where the intensity distribution I_P′ recorded by the CCD sensor does not correlate with the projected fringe I_P. To overcome this problem, a white light source (WLS) is introduced to superimpose a white-light background intensity distribution a_W. When the resultant intensity distribution of the superposed fringe pattern I_F lies within the threshold range of the photosensors, the camera records an intensity distribution I_F′ that correlates with the input I_F.

The instantaneous displacement Z(x, y, t) of a given point on the object surface may be written as

Z(x, y, t) = Z(x, y) cos ωt    (3.19)

After introducing the reference plane at Z = 0, the phase from Eq. (3.9) can be written as

φ(x, y, t) = Z(x, y) cos ωt / k    (3.20)

Substituting Eq. (3.20) into Eq. (3.13),

I_P = a_P(x, y) + b_P(x, y) cos[2πu_0 x + φ_0(x, y) + Z(x, y) cos ωt / k]    (3.21)

By introducing a white-light background a_W(x, y), the fringe intensity distribution I_P is superimposed with the background, and the resulting light intensity distribution in the superposed image I_F can be written as

I_F = a_W(x, y) + a_P(x, y) + b_P(x, y) cos[2πx cos α / p + Z(x, y) cos ωt / k + φ_0(x, y)]    (3.22)

where the carrier is written in terms of the fringe pitch p and projection angle α. Since the overall intensity distribution I_F is within the threshold of the CCD sensors, as shown in Fig. 3.5, the instantaneous intensity of the superposed image I_F is recorded as I_F′ during a finite exposure period T:

I_F′ = a_W′(x, y)T + a_P′(x, y)T + b_P′(x, y) ∫_t^{t+T} cos[2πx cos α / p + Z(x, y) cos ωt / k + φ_0(x, y)] dt    (3.23)

where a_W′(x, y) is the output corresponding to a_W(x, y).
The dynamic range of the input intensity distribution is now adjusted to match the dynamic range of the high-speed camera, and I_F′ is further amplified to obtain a final image with intensity distribution I_0 of optimal contrast:

I_0 = µ a_W′(x, y)T + υ{a_P′(x, y)T + b_P′(x, y) ∫_t^{t+T} cos[2πx cos α / p + Z(x, y) cos ωt / k + φ_0(x, y)] dt}    (3.24)

where µ and υ are contrast coefficients determined from the outputs of the WLS and the fringe projector respectively. To obtain optimal contrast, the gray value of the background µ[a_W′(x, y) + a_P′(x, y)] and the modulation function υ b_P′(x, y) are both adjusted to a gray level of approximately 128, so that the fringe pattern intensity covers the full output range of 0 to 255. Each optimized fringe image based on Eq. (3.24) can then be further processed by the FFT image processing method. With further derivation, the method may have potential for time-average measurement (see Appendix A).

3.6 Phase Shift Calibration

Calibration of the system is carried out by shifting the test object through a known distance δZ along the z-axis and determining the corresponding phase change δV on the specimen. The two sets of images are then processed. Several points are chosen on the unwrapped phase map of the first image and their phase values are noted; the same points are located on the phase map of the second image, and the difference between the two readings gives the phase change corresponding to the known height shift. Hence the relationship between height and phase difference is found, and the object height relative to the base plane can be calculated by multiplying the phase values by the corresponding factor.
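In practice this calibration reduces to a straight-line fit between the imposed z-shifts and the measured phase changes; the slope is the phase-to-height conversion factor. A sketch with hypothetical data (the numbers below are illustrative, not the experimental values):

```python
import numpy as np

# Hypothetical calibration data: mean phase changes read from the unwrapped
# maps (delta_V) against the imposed z-shifts (delta_Z, micrometres).
phase_change = np.array([0.02, 0.04, 0.06, 0.08, 0.10])  # delta_V
z_shift = 2.5 * phase_change + 0.001                     # delta_Z, synthetic linear data

# Least-squares fit delta_Z = k * delta_V + c; k is the conversion factor
k, c = np.polyfit(phase_change, z_shift, 1)

# Any later unwrapped phase map is converted to height by multiplying by k
print(round(k, 3))  # 2.5
```

Averaging the phase readings over several points, as described above, simply improves the conditioning of this fit against measurement noise.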
3.7 Integrated Fringe Projection and DIC Method

Since DIC and fringe projection provide in-plane and out-of-plane displacement measurements respectively, a combination of the two techniques provides 3-D displacement measurement of a planar object. In digital image correlation, the image intensity acts as the information carrier; surface illumination should therefore be uniform so that the gray values on the surface do not change greatly during deformation. In fringe projection, however, the fringe intensity is highly non-uniform. One way to overcome this problem is to filter out the fringes by Fourier transform: by removing a small region around the fringe frequency in the frequency domain and applying an inverse Fourier transform, the background intensity is restored.

When the object undergoes 3-D deformation, the deformed and reference profiles generated by FFT are shifted by a distance equal to the in-plane deformation. Hence, to obtain the out-of-plane displacement accurately, an interpolation process should be applied to the images being processed to obtain the final profiles.

An approach tailored to the particular requirements of DIC was developed. From among a range of possible alternatives, a simple algorithm was chosen in which the only data operations are a Fourier transform followed by a filter convolution; the recovery procedure then consists solely of an inverse Fourier transform. Figure 3.7 shows the logical model of the carrier removal algorithm (CRA). First, the original images are transformed into Fourier spectra; second, a low-pass filter isolates the Fourier coefficients from zero (DC) up to the cutoff frequency; third, an inverse FFT of the resulting spectrum is computed and the intensity distribution of the background is obtained. In the spatial domain, as defined in Eq.
(3.10), a(x, y) describes the background (object surface) variation, while b(x, y) represents the variation of the fringes. In the frequency domain, A, the transform of a(x, y), is preserved by the CRA, while C and C*, which represent the transform of the fringe term, are eliminated by the low-pass filter. The Fourier transform of a(x, y) is thus isolated and, by inverse Fourier transform, the term a(x, y) itself is obtained. It then only remains to apply the DIC technique to the resulting image to obtain the in-plane displacement vector D(∆x, ∆y).

Figure 3.8 illustrates schematically the in-plane deformation process of an object. In order to obtain the in-plane displacements u_m and v_m of a point M in the reference image, a subset of pixels S around M is chosen and matched with a corresponding subset S1 in the deformed image. If subset S is sufficiently small, the coordinates of points in S1 can be approximated by a first-order Taylor expansion as follows:

x_n1 = x_m + u_m + (1 + ∂u/∂x|_M) ∆x + (∂u/∂y|_M) ∆y    (3.25)

y_n1 = y_m + v_m + (∂v/∂x|_M) ∆x + (1 + ∂v/∂y|_M) ∆y    (3.26)

where the coordinates are as shown in Fig. 3.8. Let f(x, y) and f_d(x, y) be the gray value distributions of the undeformed and deformed images respectively. For a subset S, a correlation coefficient C is defined as

C = Σ_{N∈S} [f(x_n, y_n) − f_d(x_n1, y_n1)]² / Σ_{N∈S} [f(x_n, y_n)]²    (3.27)

where (x_n, y_n) is a point in subset S in the reference image and (x_n1, y_n1) the corresponding point in subset S1 (defined by Eqs. (3.25) and (3.26)) in the deformed image. Clearly, if u_m and v_m are the true displacements and ∂u/∂x|_M, ∂v/∂x|_M, ∂u/∂y|_M, ∂v/∂y|_M the true displacement derivatives at M, the correlation coefficient C is zero. Hence minimization of C provides the best estimates of these parameters.
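As a concrete illustration, Eq. (3.27) can be minimized by exhaustive search over integer-pixel candidate displacements; the sketch below assumes pure translation (the deformation gradients are ignored) and uses synthetic images rather than experimental data:

```python
import numpy as np

rng = np.random.default_rng(0)
ref = rng.random((64, 64))                 # undeformed image f(x, y)
true_u, true_v = 3, -2                     # imposed rigid translation (pixels)
deformed = np.roll(np.roll(ref, true_u, axis=1), true_v, axis=0)  # f_d(x, y)

# Subset S around point M at (x0, y0)
y0, x0, half = 32, 32, 8
S = ref[y0 - half:y0 + half, x0 - half:x0 + half]

best, best_uv = np.inf, (0, 0)
for u in range(-5, 6):
    for v in range(-5, 6):
        # Candidate subset S1 in the deformed image, shifted by (u, v)
        S1 = deformed[y0 + v - half:y0 + v + half, x0 + u - half:x0 + u + half]
        C = np.sum((S - S1) ** 2) / np.sum(S ** 2)   # Eq. (3.27)
        if C < best:
            best, best_uv = C, (u, v)

print(best_uv)  # (3, -2)
```

At the true displacement the subsets coincide and C vanishes, which is exactly the minimization property stated above; real implementations replace this brute-force search with the gradient-based iterations discussed next in the text.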
Minimization of the correlation coefficient C is a non-linear optimization process; Newton-Raphson and Levenberg-Marquardt iteration methods are usually used in its implementation. To achieve sub-pixel accuracy, interpolation schemes must be implemented to reconstruct a continuous gray value distribution in the deformed images. A higher-order interpolation scheme normally provides a more accurate result, at the cost of more computation time. The choice of scheme depends on the requirements; bi-cubic and bi-quintic spline interpolation schemes are widely used.

Fig. 3.1 Schematic diagram of two-point-light-source interferometry with an optical fiber and an optical wedge.

Figure 3.2 Optical geometry for fringe analysis.

Figure 3.3 Fourier fringe analysis. (a) Spectrum of the interferogram; (b) processed spectrum after filtering.

Figure 3.4 (a) Wrapped phase with 2π jumps; (b) unwrapped phase values.

Fig. 3.5 Intensity of fringe pattern enhanced by a white light source.

Fig. 3.6 General scheme of the proposed integrated method.
Fig. 3.7 Logical model of the CRA: Fourier transform of the original image into the frequency domain, low-pass filtering to preserve the background frequencies (all others set to zero), and inverse Fourier transform back to the spatial domain, giving the image after carrier removal.

Fig. 3.8 Schematic diagram of planar deformation process.

Chapter 4 Application of Fringe Projection Method for Static Measurement

This chapter describes the fringe projection method for static profile measurement. Linear sinusoidal fringes generated by an LCD projector are projected onto an object surface. The fringe pattern, distorted by the surface profile, is captured by a CCD camera and stored in a digital frame buffer. With the aid of the FFT technique, the method is applied to a micro-component.

4.1 Experimental Work

The setup of the system is shown schematically in Fig. 4.1. A micromembrane with an area of 400 µm × 400 µm is used as the test specimen. The micromembrane is part of a microphone bonded in an IC chip, as shown in Fig. 4.2. The membrane is fully clamped at its boundaries by a perforated backplate cap. As can be seen in Fig. 4.3, the wafer and backplate cap keep the membrane rigid in the x- and y-directions when the membrane is loaded in the z-direction. A CCD camera with a long working distance microscope (LDM) lens is used to capture the projected fringe patterns. The sensitivity of the system increases with the angle between the LCD projector and the camera; increasing the angle, however, creates shadow areas on the object and affects the image quality. Hence an appropriate angle, which depends on the required accuracy and the size of the object, should be chosen; in the experiment an angle of 24° is used. Images recorded by the CCD camera are stored in a digital frame storage card for further processing.
The LCD projector provides a resolution of 832 × 624 pixels, each of which can be set to a relative transparency between 0 and 255. The pitch of the fringe pattern can be as small as 1 pixel × 1 pixel, which makes the projector suitable for measurement at the micro scale.

4.2 Results and Discussion

Figure 4.4 shows the image of a fringe pattern projected on the unloaded micromembrane by the LCD projector. The image is processed by FFT, from which the phase value is determined. The phase value φ(x, y) obtained is wrapped modulo 2π; phase unwrapping is therefore carried out to convert the discontinuous phase distribution into a continuous one by adding or subtracting 2π where necessary. The procedure is carried out along each row and then repeated along the columns.

To relate the phase values to the profile of the test surface, calibration of the system is necessary. It is carried out by shifting the test object through a known distance δZ along the z-axis at the micro scale and determining the corresponding phase change δV on the specimen, as shown in Fig. 4.5. In this experiment, the calibration gives δZ = 222.65 δV. Figure 4.6 shows a 3-D plot of the membrane, and a cross-section of the micromembrane at x = 125 µm is shown in Fig. 4.7.

There are two issues concerning the resolution of this method: the (x, y) spatial resolution and the out-of-plane resolution. The (x, y) resolution depends on the spatial resolutions of the projector and CCD camera. In this investigation, the camera has a spatial resolution of 512 pixels × 512 pixels, which means that over 260,000 surface points can be monitored simultaneously. Should hardware with higher spatial resolution be used, the method would be able to measure more points simultaneously; for example, a resolution of 1000 pixels × 1000 pixels would facilitate measurement of 1 million points in one image.
The Z resolution, on the other hand, depends on the gray-level resolution of the recording hardware. In this project, the hardware used has an 8-bit resolution, i.e. 256 gray levels. The resolution would increase with higher gray-level hardware, although such hardware may introduce more electronic noise and thus set a limit on the achievable resolution. Note that the profile of the micromembrane in Fig. 4.6 does not look very smooth; this is due to the limited resolution of the conventional CCD sensor (768 pixels × 576 pixels). If a high-performance CCD camera with a higher resolution were used, the plot would be smoother.

The setup shown in Fig. 4.1 is based on reflection of a computer-generated fringe pattern from a specularly reflective surface. The fringe pattern is perturbed in accordance with the test surface, and the fringe phase carries information on the height of the test surface. Instead of deriving the mathematical relationship between the fringe phase and the surface height, this relationship is obtained by a calibration procedure. The fringe projection method thus makes it straightforward to measure the shape of the micro-component. With further development, the method can be implemented for dynamic measurement, which is discussed in the next chapter.

Fig. 4.1 Experimental setup for shape measurement of small components.

Figure 4.2 Microphone chip.

Fig. 4.3 Cross-section view of the microphone (400 µm membrane of 2 µm thickness, air gap, acoustic holes in the perforated backplate cap, Si substrate).

Fig. 4.4 Image of fringe pattern on test surface of a microphone without loading.
Fig. 4.5 Relationship between phase value and displacement shift of the test surface (linear fit y = 222.65x + 0.7046, R² = 0.9999).

Fig. 4.6 3-D plot of the surface of the microphone.

Fig. 4.7 Cross-section of the micromembrane at x = 125 µm.

Chapter 5 Application of Fringe Projection Method for Dynamic Measurement

5.1 Measurement of Dynamic Response of a Small Component

In this section, the fringe projection method is used to measure the dynamic response of a small vibrating component. The method enables point-by-point measurement over the test surface of a vibrating object, as does the LDV method. It utilizes a long working distance microscope, a high-speed CCD camera, an LCD fringe projector and the FFT technique. Linear fringe patterns are projected on a vibrating object; the fringe patterns are captured by a high-speed camera and stored in a digital frame buffer. With the aid of the FFT technique, the amplitude and frequency of vibrating rigid objects are obtained.

5.1.1 Experimental Work

Figure 5.1 shows the experimental setup. A small coin with a diffuse surface is used as the test object, onto which fringes are projected through an LDM lens by an LCD projector. A high-speed camera with an LDM lens is used to capture the projected fringe patterns. The coin is attached to a shaker with a sinusoidal output; the shaker is mounted on a test rig consisting of a micrometer-driven translation stage which enables out-of-plane translation. In the experiment an angle of 26° is chosen. Images, which can be displayed on a monitor, are recorded by the high-speed camera at a video frame rate of 500 frames/s.

5.1.2 Results and Discussion

Figure 5.2 shows part of the coin with a fringe pattern projected on the number "2".
A close-up view of the fringe pattern, with a projected fringe area of 1 mm × 0.5 mm, is shown in Fig. 5.3. The coin was subjected to a vibration frequency of 50 Hz and images (see Fig. 5.4) were recorded at intervals of 0.002 s. The images were subsequently processed by the FFT method and phase maps were obtained (see Appendix B). Calibration of the system is carried out by shifting the test object through a known distance δZ along the z-axis at the micro scale to determine the relationship between δZ and δV (the phase values of different points on the test surface, representing their heights), as shown in Fig. 5.5. In this experiment, the calibration gives δZ = 26.73 δV. Figure 5.6 shows the phase maps at different time intervals after fringe processing, where the gray values of the images represent the height of the test surface at different points.

It should be noted that a surface profile variation (such as the lettering on a coin) is not necessary for dynamic measurement; the vibration of a plane surface with no profile variation can also easily be measured by this method. For objects with surface profile variations, the surface profile can be obtained together with the vibration frequency and amplitude. Such profile variations may, however, introduce shadow effects. This problem was reduced by adjusting the experimental setup to an optimal projection angle that minimizes shadowing. Furthermore, since the object undergoes rigid-body vibration, a small number of points on the surface is representative of the whole surface; in the data analysis, therefore, a small area on the surface was deliberately chosen. The vibration plots of the test surface before and after temporal phase recovery are shown in Figs. 5.7a and 5.7b respectively.
For real-time measurement of dynamic events, temporal phase recovery can be carried out provided the sampling rate is high enough that the phase change between two consecutive recordings does not exceed ±π. Temporal phase recovery is performed by the addition or subtraction of a 2π phase value, determined by whether a neighboring point (along the time axis) is moving towards or away from a reference point on the Z-axis: for movement towards the reference point, a 2π phase angle is added, while for movement away from it, a 2π phase angle is subtracted.

The experimental values of the parameters Z(x, y), ω and θ in Eq. (3.18) are obtained from the phase maps as follows. A region (marked with a square in Fig. 5.6) on the phase map at a particular time (t = 0.002 s in this case) is chosen and its instantaneous displacements are obtained from the calibrated phase values. Eight different regions are investigated to minimize the errors, as shown in Fig. 5.8. The process is repeated for the second (t = 0.004 s) and third (t = 0.006 s) phase maps (see Appendix C). With these values known, Eq. (3.18) can be rewritten as

δ(t) = 57.63 sin(−33.284 + 314.68t)    (5.1)

To further validate the method, the measurement results are compared with the frequency output of the function generator and the vibration amplitude obtained by a photogrammetry-based marker tracking test. In this test, the object surface is shifted by the micrometer (resolution 0.5 µm) between the extreme positions of the vibration. The sequential stages of movement are saved frame by frame and subsequently compared with the images captured while the object is vibrating. By monitoring the positions of markers attached to the object, the vibration amplitude can be obtained. A comparison of the results is shown in Fig. 5.9. The present results show excellent agreement with those obtained using the micrometer, with a maximum discrepancy of less than 1%.
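The parameter extraction leading to Eq. (5.1) can be sketched as a small least-squares problem: with ω known from the 50 Hz excitation, Z(t) = A sin(θ + ωt) is linear in (A sin θ, A cos θ). The sketch below is an illustrative reconstruction, not the thesis code; the amplitude and initial phase are taken from Eq. (5.1), with the phase angle converted to radians:

```python
import numpy as np

omega = 2 * np.pi * 50.0               # known excitation frequency (rad/s)
t = np.array([0.002, 0.004, 0.006])    # the three sampling instants (s)

A_true = 57.63                         # amplitude from Eq. (5.1), micrometres
theta_true = np.deg2rad(-33.284)       # initial phase from Eq. (5.1), in radians
z = A_true * np.sin(theta_true + omega * t)  # displacements read off the phase maps

# Z(t) = A sin(theta + omega t) = p*cos(omega t) + q*sin(omega t),
# with p = A sin(theta), q = A cos(theta): a linear least-squares problem
M = np.column_stack([np.cos(omega * t), np.sin(omega * t)])
p, q = np.linalg.lstsq(M, z, rcond=None)[0]

A = np.hypot(p, q)                     # recovered amplitude
theta = np.arctan2(p, q)               # recovered initial phase (rad)
print(round(A, 2), round(theta, 3))    # 57.63 -0.581
```

Using more than the minimum two samples, as with the eight regions above, simply overdetermines the system and averages out measurement noise in the fit.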
It is also seen that the present results agree closely with the input sinusoidal function. The proposed method has the advantage of requiring a relatively simple experimental set-up and optical arrangement, and hence utilizes fewer optical components.

5.2 Enhancement of Dynamic Range of the System

The method described above is further applied to the vibrating object using a background enhancement technique to increase the temporal resolution of the system. With the proposed technique, the high-speed camera is able to operate at a frame rate of up to 3000 fps, an increase in the measurement range by a factor of up to 6 compared with the original frame-rate limit of 500 fps. In the use of a high-speed camera, a problem often encountered is poor fringe image quality, due mainly to insufficient exposure of the CCD sensors. This problem is especially serious when the system incorporates LDM lenses, which are used to measure small components. To solve it, a simple method was developed to enable good-quality images to be captured at a high frame rate. It employs an LCD projector, a high-speed CCD camera and a WLS. Measurements are conducted on a vibrating speaker diaphragm. The proposed method not only enables harmonic vibration to be studied, but also shows potential for the measurement of non-repeatable transient events.

5.2.1 Experimental Work

Figure 5.10 shows the experimental setup, which consists of two LDMs, a high-speed camera, an LCD fringe projector, a WLS (component W) and image processing hardware. Figure 5.11 shows the number "2" on the test surface of the coin. One LDM lens (magnification 6×) focuses the fringes, and another LDM lens is mounted on the high-speed camera to capture the image. The WLS used to modulate the background intensity is an Optem Model 29-60-74 Fibre Light Source, which has a maximum output of 150 W.
To ensure that the light intensity on the test surface is increased uniformly, the WLS is arranged along the same axis as the projector. The values of µ and υ in Eq. (3.24) are adjusted by changing the power of the WLS and the settings of the fringe projector according to the output of the sensor (i.e. the gray value distribution in I_0). Images of the test surface are recorded by the high-speed camera at a frame rate of 3000 frames per second (fps).

Tests were also conducted on a speaker diaphragm. The speaker, 5 cm in diameter, is mounted on a 3-D stage and excited by a sinusoidal voltage. The various stages of vibration of the speaker diaphragm are captured frame by frame and stored for further processing.

5.2.2 Results and Discussion

Figures 5.12a and 5.12b show the images of the test coin before and after the introduction of the white-light background respectively. It is seen in Fig. 5.12b that the image quality is improved significantly, and the recorded image intensity I_0 shows improved fringe quality (Fig. 5.12c). Figure 5.13 shows the gray value distribution along cross-section YY indicated in Fig. 5.12. The values of µ[a_W(x, y) + a_P(x, y)] and υ b_P(x, y) are both modulated to around 128 so that the fringe pattern spans a wide range of gray values, producing optimal contrast. Figure 5.12c is further processed by the FFT technique to obtain the profile of the vibrating surface; a 3-D plot of the surface profile is shown in Fig. 5.14a. Compared with the 3-D image obtained without background enhancement (Fig. 5.14b), the profile obtained using the proposed method is markedly better.

The fringe pattern on the speaker superimposed with the background was captured at a frame rate of 3000 fps, as shown in Fig. 5.15b, with the speaker excited at 50 Hz.
For comparison, the WLS was subsequently switched off and images of the fringe pattern without the background were recorded at the same frame rate, as shown in Fig. 5.15a. As can be seen, the image in Fig. 5.15a is underexposed and of poor quality owing to insufficient light intensity, whereas the image with the background (Fig. 5.15b) is improved significantly. The image in Fig. 5.15b is subsequently processed by the FFT technique to obtain unwrapped phase maps. A 3-D profile of the vibrating test surface is shown in Fig. 5.16b, compared with the profile (Fig. 5.16a) obtained from Fig. 5.15a. As can be seen, the result is improved by the proposed method. The method was further applied to dynamic measurement of the speaker vibrating at a frequency of 100 Hz. A sequence of images of the speaker was recorded at 3000 fps, giving about 30 frames per vibration period. Hence the profile of the diaphragm at a specific time can be obtained. Subsequently, the whole-field displacement of the test surface at a specific time and the vibration frequency and amplitude are determined. Figure 5.17 shows a plot of the vibration amplitude obtained by comparing two phase maps at the extreme positions of the vibration. The mode shape of the vibrating diaphragm is determined by considering a point on the phase map at a particular time (t = 3.3 ms in this case) and obtaining its instantaneous displacement from the calibrated phase values. The process is repeated for 30 points within the vibration period of the object. With these values known, the vibration can be plotted as in Fig. 5.18. It can be seen that the accuracy of the experimental results obtained with the proposed method is better than that of the original results without the white light background, indicated by the black dot symbols. Compared with the micro values, the proposed method shows excellent agreement.
The micro values were obtained from the function generator and the photogrammetry-based marker tracking test described in section 5.1.2. The results show a maximum discrepancy of less than 2%. It is noteworthy that without the white light background enhancement, the high-speed camera could only capture images at frame rates of up to 500 fps. With the enhanced white light background, however, images of good quality can be captured at frame rates of up to 3000 fps.

Fig. 5.1 Experimental setup of the fringe projection method for dynamic measurement (three-axis translation stage, high-speed CCD camera with LDM lens, shaker, object, LCD projector, function generator, frame grabber, monitor, computer and LCD controller).
Fig. 5.2 (a) Test specimen. (b) A close-up view of the surface with fringe projection (scale bars: 1 mm and 0.5 mm).
Fig. 5.3 Sinusoidal fringe pattern projected on a small section of the test specimen (scale bar: 0.05 mm).
Fig. 5.4 Images at 0.002 s time intervals: (a) 0 s, (b) 0.002 s, (c) 0.004 s, (d) 0.006 s.
Fig. 5.5 Calibration of the fringe projection system for the measurement of vibration in the z direction: displacement shift (µm) against phase shift; least-squares fit y = 267.25x + 0.0412, R² = 0.9999.
Fig. 5.6 Unwrapped phase maps at 0.002 s time intervals (t = 0.002 s, 0.004 s and 0.006 s).
Fig. 5.7 Vibration amplitude (phase value against sampling number) (a) before and (b) after phase recovery.
Fig. 5.8 Vibration plots of different regions on the object (amplitude (mm) against time (ms)).
Fig. 5.9 Comparison of micro values and experimental vibration amplitude (fringe projection, micrometer and function generator plots; amplitude (µm) against time (ms)).
Fig. 5.10 Experimental setup with the white light source (function generator, reference plane, 3-axis translator, object, white light source W, LDM lenses, LCD projector, high-speed CCD camera, computer, video signal processor and monitor).
Fig. 5.11 A small section on the test coin surface (scale bar: 1 mm).
Fig. 5.12 Comparison of the images of part of the coin recorded at a rate of 3000 fps: (a) image recorded with background enhancement, (b) image recorded without background enhancement, (c) image processed with an optimal contrast.
Fig. 5.13 Comparison of fringe pattern distributions along cross-section YY: gray level distribution with enhanced background (I′F), without enhanced background (I′P), and of the matched fringe image (I0); gray value against pixel.
Fig. 5.14 (a) 3-D profile of test object before background enhancement. (b) 3-D profile of test object after background enhancement.
Fig. 5.15 (a) Image of the speaker recorded without white light background enhancement at a frame rate of 3000 fps. (b) Image of the speaker recorded with background enhancement at a frame rate of 3000 fps.
Fig. 5.16 (a) 3-D profile of the speaker without background enhancement. (b) 3-D profile of the speaker diaphragm with background enhancement.
Fig. 5.17 Vibration amplitude of speaker diaphragm.
Fig. 5.18 Comparison of vibration plots at 3000 fps frame rate (with background enhancement, without background enhancement, and micro values; amplitude (mm) against time (ms)).
Chapter 6 Integrated Fringe Projection and DIC Method

Following the development of the dynamic fringe projection method for vibration measurement, an integrated fringe projection and DIC method is developed to measure 3-D displacement using only one camera. The method differs from conventional multi-camera systems for 3-D displacement measurement and represents a significant improvement over them.

6.1 Experimental Work

Part of a speaker (3.6 cm × 3.6 cm) with a diffuse surface is used as the test object, as shown in Fig. 6.1. The speaker was mounted on a test rig consisting of a micrometer-driven translation stage enabling rigid-body translations in the out-of-plane and in-plane directions. Distortions in the fringe pattern on the test surface are recorded on the image plane of a CCD camera with an LDM lens. The CCD camera is located perpendicular to the base on which the object is mounted, and the angle α between the optical axis of the fringe projector and the optical axis of the CCD camera is 32.8°. The object was subjected to a rigid-body motion defined by a 3-D displacement vector W(∆x(x, y), ∆y(x, y), ∆z(x, y)) using a 3-axis translation stage with a resolution of 0.5 µm. The in-plane displacement D(∆x(x, y), ∆y(x, y)) of the object is determined by a carrier removal operation and DIC according to Eq. (3.27). The value of D(∆x(x, y), ∆y(x, y)) is subsequently used to modulate the reference coordinates according to Eq. (3.27). The shape and out-of-plane movement D(∆z(x, y)) are then determined by fringe projection.

6.2 Results and Discussion

Figure 6.2(a) shows the image of the test surface with a projected fringe pattern, the pitch of which is 50 pixels.
The Fourier transform of the fringe pattern is taken and three peaks are obtained: the Fourier transform of the term a(x, y) is located at the center of the spectrum, while the Fourier transforms of c(x, y) and c*(x, y) are symmetric with respect to the center and located at a distance determined by the frequency of the fringe pattern. To obtain the contour of the test surface, the Fourier transform of c(x, y) is isolated by setting the frequencies outside its band to zero, and by inverse Fourier transforming this peak, the value of c(x, y) itself is obtained. It is then necessary to separate the imaginary and real parts and perform an arctan operation to obtain the phase value of the fringe pattern. To obtain the in-plane displacement of the object, on the other hand, the windowing operation is placed at the center of the spectrum and the Fourier transform of a(x, y) is isolated. By inverse Fourier transforming this peak, the value of a(x, y), which is the intensity distribution of the background in the image, is obtained. The size of the window is limited by the positions of the Fourier transforms of c(x, y) and c*(x, y) in the spectrum, which depend on the number of fringes in the x direction of the original image. Therefore, to retain more of the background data, denser fringe patterns are preferred, since they permit a bigger filtering window. A 3-D representation of the spectrum is illustrated in Fig. 6.3(a). A low-pass filter is applied to the spectrum to isolate the Fourier transform of a(x, y) while discarding the rest. Provided that the cutoff frequency (directly proportional to the size of the filtering window) is chosen well, the filter eliminates the fringe pattern completely while preserving enough background information.
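The two windowing operations described above — a band around the carrier peak for the phase, and a low-pass window at the centre for the background — can be sketched in one dimension. This is an illustrative reconstruction, not the thesis code; the fringe model I = a + b·cos(2πf0·x + φ) and the symbol names are assumed from the description:

```python
import numpy as np

# 1-D sketch of Fourier-transform fringe analysis.
N, f0 = 512, 32                          # samples and carrier frequency (cycles)
x = np.arange(N) / N
phi = 1.5 * np.sin(2.0 * np.pi * x)      # "object" phase to recover (assumed)
a = 100.0 + 20.0 * np.cos(2.0 * np.pi * x)   # slowly varying background
I = a + 50.0 * np.cos(2.0 * np.pi * f0 * x + phi)

spec = np.fft.fft(I)

# (1) Contour: isolate the c(x) peak at +f0, inverse transform, take arctan.
band = np.zeros(N, dtype=complex)
band[f0 - 16:f0 + 16] = spec[f0 - 16:f0 + 16]   # zero everything outside
c = np.fft.ifft(band)                    # c(x) ≈ (b/2) exp(i(2π f0 x + φ))
wrapped = np.arctan2(c.imag, c.real)     # arctan of imaginary / real parts
phase = np.unwrap(wrapped) - 2.0 * np.pi * f0 * x   # carrier removed → φ

# (2) Background: low-pass window at the spectrum centre isolates a(x).
low = np.zeros(N, dtype=complex)
low[:8] = spec[:8]                       # low positive frequencies
low[-7:] = spec[-7:]                     # matching negative frequencies
a_rec = np.fft.ifft(low).real            # recovered background intensity
```

In the 2-D case used in the experiment, the same two windows are placed around the carrier peak and at the centre of the 2-D spectrum, respectively.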
The resulting images must be of such quality that the background information is discernible and sharp, while the fringe pattern is removed thoroughly. The cutoff frequency therefore needs to be chosen carefully. A higher cutoff frequency, i.e. a bigger filtering window, preserves more frequency coefficients and improves image sharpness, but it runs the risk of incomplete removal of the fringe pattern: if parts of the Fourier transforms of c(x, y) and c*(x, y) are retained, residual fringes remain in the recovered background. In this experiment, a filtering window of 10 pixels × 10 pixels is applied to the spectrum to isolate a(x, y), as shown in Fig. 6.3(b). An inverse Fourier transform is then performed on Fig. 6.3(b) to obtain a(x, y) itself, as shown in Fig. 6.2(b). As can be seen in Fig. 6.2(b), a large amount of the data of a(x, y) is discarded and the image looks very blurred. This is due to the small size of the filtering window. Figure 6.4(b) shows the intensity of a(x, y) obtained by applying a 20 pixels × 20 pixels window to a fringe pattern with a pitch of 15 pixels (see Fig. 6.4a). Compared with Fig. 6.2(b), the image is slightly sharper. The procedure is repeated on a fringe pattern with a pitch of 1 pixel; the result is shown in Fig. 6.5. It can be seen that the quality of the images is improved significantly and a(x, y) is recovered with higher frequency content.

A smaller part (1.4 cm × 1.2 cm) of the speaker was investigated for 3-D displacement measurement. The images displayed in Figs. 6.6(a) and 6.6(b) show the test surface being translated in the x-, y- and z-directions by a 3-axis micrometer-driven translator with a resolution of 0.5 µm. The magnitudes of the displacements from the micrometer readings are 225 µm, 300 µm and 140 µm in the x-, y- and z-directions, respectively. A 3-D plot of the test surface obtained by the FFT technique is shown in Fig. 6.7. Figure 6.8(a) shows the image after the carrier removal algorithm (CRA) procedure.
It appears similar to an image without a projected fringe pattern (see Fig. 6.8(b)). The in-plane displacement is subsequently computed by applying a DIC algorithm to the image. Since the test surface has undergone a rigid-body translation, the displacement is theoretically uniform over the whole surface, and an average was therefore taken over the whole test surface. The displacements in the x- and y-directions are 6.2 pixels and 8.4 pixels, respectively. A two-dimensional (x and y directions) calibration procedure was carried out. Figures 6.9(a) and 6.9(b) show the calibration plots, in which pixels are converted into global coordinates. In this experiment, the relationships between pixels and length are 1 pixel = 36.0 µm and 1 pixel = 38.8 µm for the x- and y-directions, respectively. Based on the values of the in-plane displacement, the shape data of the deformed test surface were shifted and compared with the undeformed test surface in a 3-D coordinate system. The displacement of the object can be evaluated by analyzing the phase value distribution of neighbouring points. The out-of-plane displacement δZ(x, y) of each point (x, y) is related to its height h′(x, y) before translation and h(x, y) after translation by

δZ(x, y) = h′(x, y) − h(x, y)    (6.1)

The value of the phase variation δV for the rigid-body translation was 3.209. The setup was subsequently calibrated in the z-direction by shifting the test object through a known distance δZ and determining the corresponding phase value δV on the specimen. In this experiment, the calibrated value of δZ is given by δZ = 42.9δV, as shown in Fig. 6.9(c). Figure 6.10 shows the 3-D displacement vector with values of 222.0 µm, 297.4 µm and 137.3 µm in the x-, y- and z-directions.
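The calibration relations quoted above can be collected into a small conversion helper. The constants are the experimental values reported in the text; the function name itself is illustrative:

```python
# Conversion of DIC pixel shifts and fringe-phase variation into physical
# displacement, using the calibration constants reported in this experiment.
PX_TO_UM_X = 36.0       # 1 pixel = 36.0 µm (x direction)
PX_TO_UM_Y = 38.8       # 1 pixel = 38.8 µm (y direction)
UM_PER_PHASE_Z = 42.9   # δZ = 42.9 δV (µm per unit of phase variation)

def displacement_um(dx_px, dy_px, dphase):
    """Return (∆x, ∆y, ∆z) in micrometres."""
    return (dx_px * PX_TO_UM_X, dy_px * PX_TO_UM_Y, dphase * UM_PER_PHASE_Z)

# e.g. the measured phase variation δV = 3.209 gives ∆z ≈ 137.7 µm, which is
# consistent, to the rounding of the quoted constants, with the 137.3 µm
# reported in the text.
dz = displacement_um(0.0, 0.0, 3.209)[2]
```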
The measurement results are compared with the values obtained from the micrometer and show excellent agreement, with discrepancies of 1.28%, 1.01% and 1.93% in the x-, y- and z-directions, respectively.

Fig. 6.1 A close-up view of a speaker diaphragm.
Fig. 6.2 (a) Image of the test surface with a fringe pattern of 50-pixel pitch. (b) Recovered intensity distribution of the background.
Fig. 6.3 3-D representation of the spectrum (a) before filtering and (b) after filtering.
Fig. 6.4 (a) Image of the test surface with a fringe pattern of 15-pixel pitch. (b) Recovered intensity distribution of the background.
Fig. 6.5 (a) Image of the test surface with a fringe pattern of 1-pixel pitch. Recovered intensity distribution of the background using filtering windows of (b) 70 × 70 pixels, (c) 80 × 80 pixels, (d) 100 × 100 pixels and (e) 120 × 120 pixels.
Fig. 6.6 A close-up view of the test surface with fringe pattern (a) before and (b) after displacement (225 µm, 300 µm and 140 µm in the x-, y- and z-directions, respectively).
Fig. 6.7 Phase map of the test surface.
Fig. 6.8 Images of the background of the test surface after CRA: (a) before displacement, (b) after displacement.
Fig. 6.9 Calibration of the measurement system: (a) displacement along the X direction (least-squares fit y = 0.0278x, R² = 0.9997), (b) displacement along the Y direction (y = 0.0258x, R² = 0.9999) and (c) out-of-plane displacement along the Z direction (y = 0.0233x, R² = 0.9998), each plotted against micrometer translation (µm).
Fig. 6.10 3-D displacement vector of the test surface (∆x = 222.0 µm, ∆y = 297.4 µm, ∆z = 137.3 µm).

Chapter 7 Conclusions and Recommendations

7.1 Conclusions

This thesis has presented research work on:
1. development of a fringe projection system,
2. application of the method to dynamic measurement and enhancement of the dynamic range of the system, and
3. development of an integrated fringe projection and DIC method for 3-D displacement measurement.

A dynamic fringe projection method for the measurement of small components was presented. Based on the principle of fringe projection, a system comprising an LCD fringe projector, LDM lenses, a shaker, a high-speed camera and a controlling device was designed.
Quantitative analysis of each surface point of the specimen is realized with the wrapped phase maps extracted from a group of images captured by the high-speed camera. From the wrapped maps, the unwrapping method removes the discontinuities between adjacent points caused by the nature of the arctangent. The system is shown to perform well in various applications and environments. Experimental results for vibration frequency and amplitude were obtained and compare favourably with theoretical values. The method has also been successfully applied to the static measurement of micro-components. To overcome the underexposure problem in dynamic measurement, the background enhancement method was introduced and the dynamic range of the system was increased by a factor of 6. The vibration plot of a vibrating diaphragm was also successfully obtained. This method not only enables harmonic vibration to be studied, but also shows potential for the measurement of non-repeatable transient events.

In this project, the integration of the fringe projection and DIC techniques was also introduced. The method enables the 3-D displacement of a small component to be measured using only one CCD camera. In the integrated method, DIC is used to measure the in-plane displacement vector, while fringe projection is used to obtain the 3-D shape and out-of-plane displacement. The use of only one camera is a significant improvement over previous multi-camera systems, and the results obtained show excellent agreement with theoretical values. With the aid of the high-speed imaging system, the proposed method is ready for dynamic measurement in both the in-plane and out-of-plane directions. It is noted that the out-of-plane displacement may introduce variations in the magnification of the system; this can be avoided by using a telecentric lens to project the fringe pattern and capture the images.
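The unwrapping step recalled above — the arctangent wraps the phase into (−π, π], and unwrapping removes the resulting 2π jumps between adjacent samples — can be illustrated in one dimension; here `np.unwrap` stands in for the thesis algorithm:

```python
import numpy as np

# A smooth phase ramp larger than 2π, wrapped by the arctangent and then
# restored by unwrapping (1-D sketch; the thesis unwraps 2-D phase maps).
true_phase = np.linspace(0.0, 12.0, 300)
wrapped = np.arctan2(np.sin(true_phase), np.cos(true_phase))  # in (-π, π]
unwrapped = np.unwrap(wrapped)   # add ±2π wherever an adjacent jump exceeds π

# unwrapped now matches true_phase to within machine precision.
```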
In the measurement of full-field displacements in dynamic situations, the proposed method may encounter a few difficulties: 1) Compared with the use of transducers, the proposed method has the advantage of being non-contact and non-destructive; however, for the measurement of a fast event the temporal resolution of the system is limited by the frame rate of the high-speed camera. 2) The underexposure problem described in section 5.2 may occur if the high-speed camera is used at a high frame rate. 3) The surface of the object must be painted white before measurement can be carried out. 4) To measure non-harmonic dynamic displacement, the current system needs to be synchronized with the object owing to the short operating duration of the high-speed camera (approximately 4 seconds at 3000 fps). 5) In dynamic situations, a specially designed calibration procedure may also be required to relate the displacement to the phase values.

7.2 Recommendations

The power and stability of the LCD projector are among the most significant factors in the dynamic fringe projection system, and the intensity of a projected fringe pattern may be very low at a high frame rate. Hence the quality of an image may be greatly affected by insufficient light intensity. It was also observed that fringe patterns projected by the LCD projector vibrate at a low frequency, and this introduces errors in the experimental results. More powerful and stable projectors would therefore improve the test results. The measurement presented in this thesis is limited to 3-D rigid-body displacement; however, with further development the proposed method could be extended to non-rigid-body whole-field displacement measurement. In addition, since the method requires only one image for each displaced state, it also has the potential to be developed for dynamic and in-situ measurement. The image size recorded by a high-speed camera decreases at high frame rates (128 pixels × 128 pixels at 10000 frames per second).
The small size of the image introduces difficulty for full-field measurement difficult. Hence a more powerful highspeed camera is recommended. 97 References 1. R. Henao, A. Tagliaferri, and R. Torroba. A contouring approach using single grating digital shadow Moiré with a phase stepping technique, Optik, 110 (4), pp.199-201. 1999. 2. A. K. Maji, and M. A. Starnes. Shape measurement and control of deployable membrane structures, Experimental Mechanics, 40 (2), pp.154-159. 2000. 3. M. M. Ratnam, P. L. Chee, and A. N. Khor. Three-dimensional object classification using shadow moire and neutral network, Opt. Eng., 40 (9), pp.2036-2040. 2001. 4. R. Thalmann, and R. Dandliker. Holographic contouring using electronic phase measurement, Opt. Eng., 24 (6), pp.930-935. 1985. 5. C. P. Lopez, F. M. Santoyoa, R. R. Vera, and M. Funes-Gallanzi. Separation of vibration fringe data from rotating object fringes using pulsed ESPI, Opt. Lasers Eng., 38, pp.145-152. 2002. 6. C. T. Griffen, Y. Y. Hung, and F. Chen. Three dimensional shape measurement using digital shearography, Proc. Of the SPIE – The International Society for Opt. Eng., 2545, pp.214-220. 1995. 98 7. Y. Y. Hung. Applications of digital shearography for testing of composite structures, Composites Part B (Engineering), 30B (7), pp.765-773. 1999. 8. Y. M. He, C. J. Tay, and H. M. Shang. Deformation and profile measurement using the digital projection grating method, Opt. Lasers Eng., 30(5), pp.367-377. 1998. 9. Y. Morimoto, M. Fujigaki, and S. Yoneyama. Shape, stress and strain measurement using phase analysis of grating or fringe patterns, Proc. of the SPIE – The International Society for Optical Engineering, 4537, pp.47-52. 2001. 10. Y. Han, L. Ma, and S. P. He. Parameters calibration in topography measurement, Proc. Of the SPIE – The International Society for Optical Engineering, 4537, pp.297-300. 2002. 11. L. Scalise and N. Paone. Laser Doppler vibrometry based on self-mixing effect, Opt. Laser Eng., 38, pp.173-184. 
2002. 12. L. Scalise, G. Stavrakakis, A. Pouliezos. Fault detection for quality control of household appliances by non-invasive laser Doppler technique and likelihood classifier, Measurement, 25, pp.237–247. 2000. 13. Y. Y. Hung and J. D. Hovanesian. High speed shearography system for measuring dynamic deformation, Proc. of the SEM Spring Conference on Experimental Mechanics and Applied Mechanics, Houston, Texas, June 1-3, pp.49-50. 1998. 99 14. A. J. Moore, D. P. Hand, J. S. Barton, and J. D. C. Jones. Transient deformation measurement with electronic speckle pattern interferometry and a high-speed camera, Appl. Opt., 38, pp.1159-1162. 1999. 15. D. Kokidko, L. Gee, S. C. Chou, F. P. Chiang. Method for measuring transient out-of-plane deformation during impact, Int J Impact Eng., 19(2), pp.127-33. 1997. 16. Y. Li, A. Nemes, A. Derdouri. Optical 3-D dynamic measurement system and its application to polymer membrane inflation tests, Opt. Laser Eng., 33, pp.261-276. 2000. 17. D. R. Burton, M. J. Labor. Multichannel Fourier fringe analysis as an aid to automatic phase unwrapping, Appl. Opt,. 33, pp.2939-2947. 1994. 18. D. J. Bone, H. A. Bachor, and R. J. Sandeman. “Fringe-pattern analysis using 2-D Fourier transform, Appl. Opt., 25(10), pp.1653-1660. 1986. 19. C. Buckberry, M. Reeves, A. J. Moore, D. P. Hand, J. S. Barton, and J. D. C. Jones. The application of high-speed TV-holography to time-resolved vibration measurements, Opt. Laser Eng., 32, pp.387-394. 1999. 20. P. J. Rae, H. T. Goldrein, N. K. Bourne, W. G. Praud, W. G. Forde, L. C. Forde, and M. Liljekvist. Measurement of dynamic large-strain deformation maps using an automated fine grid technique, Opt. Laser Eng., 31, pp.113-122. 1999. 100 21. G. Pedrini, S. Schedin, and H.J. Tiziani. Pulsed digital holography combined with laser vibrometry for 3D measurements of vibrating objects, Opt. Laser Eng. 38, pp.117-129. 2002. 22. C. Perez-Lopez, F. Mendoza Santoyo, G. Pedrini, S. Schedin, and H. J. Tiziani. 
Pulsed digital holographic interferometry for dynamic measurement of rotating objects with an optical derotator, Appl. Opt. 40(28), pp.5106-5110. 2001. 23. S. Schedin, G. Pedrini, H. J. Tiziani, A. K. Aggarwal, and M. E. Gusev. Highly sensitive pulsed digital holography for built-in defect analysis with a laser excitation, Appl. Opt. 40(1), pp.100-103. 2001. 24. S. Schedin, G. Pedrini, H. J. Tiziani, and F. M. Santoyo. Simultaneous threedimensional dynamic deformation measurements with pulsed digital holography, Appl. Opt., 38(34), pp.7056-7062. 1999. 25. J. P. Chambard, V. Chalvidan, X. Carniel, and J. C. Pascal. Pulsed-TV holography recording for vibration analysis applications, Opt. Laser Eng., 38, pp.131-143. 2002. 26. C. P. Lopez, F. M. Santoyoa, R. R. Vera, and M. Funes-Gallanzi. Separation of vibration fringe data from rotating object fringes using pulsed ESPI, Opt. Laser Eng., 38, pp.145-152. 2002. 101 27. M. Aslan, and B. R. Tittmann. Laser interferometry for measurement of displacement in hostile environment and in real time, Proc. of the SPIE – The International Society for Optical Engineering, 3993, pp.68-77. 2000. 28. R. Windecker, M. Fleischer, K. Korner, and H. J. Tiziani. Testing micro devices with fringe priojection and white-light interferometry, Opt. Lasers Eng., 36(2), pp.141-154. 2001. 29. X. Y. He, X. Kang, C. Quan, C. J. Tay, S. H. Wang, and H. M. Shang. Optical methods for the measurement of MEMS materials and structures, Proc. Of the SPIE – The International Society for Optical Engineering, 4537, pp.63-68. 2002. 30. G. S. Spagnolo, and D. Ambrosini. Diffractive optical element based sensor for roughness measurement, Sensors and Actuators, A: Physical, 100(2-3), pp.180186. 2002. 31. C. Quan, C. J. Tay, X. Y. He, X. Kang, and H. M. Shang. Microscopic surface contouring by fringe projection method, Opt. Laser Tech., 34(7), pp.547-552. 2002. 32. C. Zhang, P. S. Huang, and F. P. Chiang. 
Microscopic phase-shifting profilometry based digital micromirror device technology, App. Opt., 41(28), pp.5896-5904. 2002. 102 33. K. Leonhardt, U. Droste, and H. J. Tiziani. Microshape and rough-surface analysis by fringe projection, Appl. Opt., 33, pp.7477-7488. 1994. 34. R. Windecker, M. Fleischer, and H. J. Tiziani. Three-dimensional topometry with stereo microscopes, Opt. Eng., 36, pp.3372-2277. 1997. 35. R. Windecker, S. Franz, and H. J. Tiziani. Optical roughness measurements with fringe projection, Appl. Opt., 38, pp.2837-2842. 1999. 36. R. Kowarschik, P. Kuhmstedt, J. Gerber, W. Schreiber, and G. Notni. Adaptive optical three-dimensional measurement with structured light, Opt. Eng., 39, pp.150-158. 2000. 37. F. Laguarta, R. Artigas, A. Pinto, and I. Al-Khatib. Micromeasurements on smooth surfaces with a new confocal optical profiler, Proc. of the SPIE – The International Society for Optical Engineering, 3520, pp.149-160. 1998. 38. P. Pavlicek, and J. Soubusta. Theoretical measurement uncertainty of white-light interferometry on rough surfaces, App. Opt., 42(10), pp.1809-1813. 2003. 39. T. Wilson, ed., Confocal Microscopy. Chaps. 1-3, London: Academic Press. 1990. 40. D. Steudle, M. Wegner, and H. J. Tiziani. Confocal principle for macro- and microscopic surface and defect analysis, Opt. Eng., 39, pp.32-39. 2000. 103 41. P. Hlubina. White-light spectral interferometry to measure intermodal dispersion in two-mode elliptical-core optical fibres, Opt. Commun., 218 (4-6), pp.283-289. 2003. 42. G. S. Kino and S. Chim. Mirau correlation microscope, Appl. Opt., 29, pp.37753783. 1990. 43. P. de Groot and L. Deck. Surface profiling by analysis of white-light interferograms in the spatial frequency domain, J. Mod. Opt., 42, pp.389-401. 1995. 44. M. Takeda, and K. Motoh. Fourier transform profilometry for the automatic measurement of 3-D object shapes, Appl. Opt., 22 (24), pp.3977-3982. 1983. 45. J. Li, X. Y. Su, and L. R. Gou. 
An improved Fourier transform profilometry for automatic measurement of 3-D object shapes, Opt. Eng., 29 (12), pp.1439-1444. 1990. 46. P. K. Rastogi, Digital speckle pattern interferometry and related techniques. John Wiley & Sons Ltd, 2001. 47. K. P. Proll, J. M. Nivet, C. Voland, and H. J. Tiziani. Enhancement of the dynamic range of the detected intensity in an optical measurement system by a three channel technique, Appl. Opt., 41, pp.130-135. 2002. 104 48. G. Pedrini, H. J. Tiziani, and I. Alexeenko. Digital-holographic interferometry with an image-intensifier system, Appl. Opt., 41 pp.648-653. 2002. 49. Y. Ito, Y. Katoh, M. Kagata, S. Tomioka, and T. Enoto. Analysis for improvement of simultaneity of shuttering in an ultra high-speed framing camera, IEEE Transactions on Magnetics., 36, pp.1774-1778. 2002. 50. Y. Wang and F. P. Chiang. New moire interferometry for measuring threedimensional displacements, Opt. Eng., 33, pp.2654-2658. 1994. 51. F. P. Chiang and D. W. Li. Random Speckle patterns for displacement and strain measurements: some recent advances, Opt. Eng., 24, pp.936-943. 1985. 52. R. Henao, F. Medina, H. J. Rabal, and M. Trivi. Three-dimensional speckle measurement with a diffraction grating, Appl. Opt., 32, pp.726-729. 1985. 53. P. F. Luo, Y. J. Chao, and M. A. Sutton. Application of stereo vision to threedimensional deformations anlyses in fracture experiments, Opt. Eng., 33, pp.981980. 1994. 54. J. D. Helm, S. R. McNeill, and M. A. Sutton. Improved three-dimensional image correlation for surface displacement measurements, Opt. Eng., 35, pp.1911-1920. 1996. 105 55. . Synnergren. Measurements of three-dimensional displacement fields and shape using electronic speckle photography, Opt. Eng., 36, pp.2302-2310. 1997. 56. M. E. Pawlowski, M. Kujawinska, and M. G. Wgiel. Shape and motion measurement of time-varying three-dimensional objects based on spatiotemporal fringe-pattern analysis, Opt. Eng., 41 (2), pp.450-459. 2002. 57. M. A. Sutton, W. J. 
Wolters, W. H. Peters, W. F. Ranson and S. R. McNeill. Determination of displacements using an improved digital correlation method, Image Vis. Comput. 1, pp.133-139. 1983. 58. M. A. Sutton, M. Cheng, W. H. Peters, Y. J. Chao, and S. R. McNeill. Application of an optimized digital correlation method to planar deformation analysis, Image Vis. Comput. 4, pp.143-150. 1986. 59. H. A. Bruck, S. R. McNeill, M. A. Sutton and W. H. Peters. Digital image correlation using Newton-Raphson method for partial differential correlation, Exp. Mech. 29, pp.261-267. 1989. 60. M. A. Sutton, S. R. McNeill, J. Jang, and M. Babai. Effects of subpixel image restoration on digital correlation error estimates, Opt. Eng. 27, pp.870-877. 1988. 61. G. Vendroux and W. G. Knauss. Submicron deformation field measurements: Part 2. Improved digital image correlation, Exp. Mech. 38, pp.86-92. 1998. 106 62. H. Lu and P. D. Cary. Deformation measurements by digital image correlation: implementation of a second-order displacement gradient, Exp. Mech. 40, pp.393400. 2000 63. P. Cheng, M. A. Sutton, H. W. Schreier and S. R. McNeill. Full-field speckle pattern image correlation with B-Spline deformation function, Exp. Mech. 42, pp.344-352. 2002. 64. T. C. Chu, W. F. Ranson, M. S. Sutton, and W. H. Peters. Applications of digital image correlation techniques to experimental mechanics, Exp. Mech. 25, pp.232244. 1985. 65. W. H. Peters, H Zheng-Hui, M. A. Sutton, and W. F. Ranson. Two-dimensional fluid velocity measurement by use of digital speckle correlation techniques, Exp. Mech. 24, pp.117-121. 1984. 66. Y. J. Chao and M. A. Sutton. Measurement of strains in paper tensile specimen using computer vision and digital image correlation, Part2: Tensile specimen test, Tappi J. 70, pp.153-156. 1998. 67. S. H. Wang, C. J. Tay, C. G. Quan, and H. M. Shang. An optical fiber fringe projector for micro-component, Optik, pp.419-422. 2000. 107 68. J. H. Yi, S. H. Kim, and Y. K. Kwak. 
A nanometric displacement measurement method using the detection of fringe peak movement, Meas. Sci. Technol., pp.1353-1358. 2000.
69. P. S. Huang, Q. J. Hu, and F. P. Chiang. Double three-step phase-shifting algorithm, Appl. Opt., 41, pp.4503-4509. 2002.
70. C. Quan, C. J. Tay, X. Y. He, X. Kang, and H. M. Shang. Microscopic surface contouring by fringe projection method, Opt. Laser Tech., 34 (7), pp.547-552. 2002.
71. C. P. Zhang, P. S. Huang, and F. P. Chiang. Microscopic phase-shifting profilometry based on digital micromirror device technology, Appl. Opt., 41 (28), pp.5896-5904. 2002.
72. C. Quan, C. J. Tay, X. Kang, X. Y. He, and H. M. Shang. Shape measurement by use of liquid-crystal display fringe projection with two-step phase shifting, Appl. Opt., 42 (13), pp.2329-2335. 2003.
73. D. R. Burton and M. J. Lalor. Multichannel Fourier fringe analysis as an aid to automatic phase unwrapping, Appl. Opt., 33, pp.2939-2947. 1994.
74. D. J. Bone, H. A. Bachor, and R. J. Sandeman. Fringe-pattern analysis using a 2-D Fourier transform, Appl. Opt., 25 (10), pp.1653-1660. 1986.
75. M. Takeda, H. Ina, and S. Kobayashi. Fourier-transform method of fringe-pattern analysis for computer-based topography and interferometry, J. Opt. Soc. Am., 72 (1), pp.156-160. 1982.
76. W. W. Macy. Two-dimensional fringe-pattern analysis, Appl. Opt., 22 (23), pp.3898-3901. 1983.
77. C. Quan, C. J. Tay, H. M. Shang, and P. J. Bryanston-Cross. Contour measurement by fibre optics fringe projection and Fourier transform analysis, Opt. Commun., 119, pp.479-483. 1995.
78. P. J. Bryanston-Cross, C. Quan, and T. R. Judge. Application of the FFT method for the quantitative extraction of information from high-resolution interferometric and photoelastic data, Opt. Laser Technol., 26, pp.147-155. 1994.
79. M. A. Herraez, D. R. Burton, M. J. Lalor, and M. A. Gdeisat. Fast two-dimensional phase-unwrapping algorithm based on sorting by reliability following a noncontinuous path, Appl.
Opt., 41 (35), pp.7437-7444. 2002.
80. X. Y. He, X. Kang, C. J. Tay, C. Quan, and H. M. Shang. Proposed algorithm for phase unwrapping, Appl. Opt., 41 (35), pp.7422-7428. 2002.
81. M. A. Herraez, M. A. Gdeisat, D. R. Burton, and M. J. Lalor. Robust, fast, and effective two-dimensional automatic phase unwrapping algorithm based on image decomposition, Appl. Opt., 41 (35), pp.7445-7455. 2002.
82. J. J. Gierloff. Phase unwrapping by regions, Current Developments in Optical Engineering, Proc. of the SPIE – The International Society for Optical Engineering, 818, pp.2-9. 1987.
83. R. J. Green and J. G. Walker. Phase unwrapping using a priori knowledge about the band limits of a function, Industrial Inspection, Proc. of the SPIE – The International Society for Optical Engineering, 1010, pp.36-43. 1988.
84. R. Cusack, J. M. Huntley, and H. T. Goldrein. Improved noise-immune phase unwrapping algorithm, Appl. Opt., 34 (5), pp.781-789. 1995.
85. A. J. Moore, J. D. C. Jones, and J. D. R. Valera. Dynamic measurements, In: P. K. Rastogi, editor. Digital speckle pattern interferometry and related techniques. John Wiley & Sons Ltd, 2001.
86. W. Osten, F. Elandaloussi, and M. Stache. An introduction to the Fringe Processor 2.0, Bremer Institut für Angewandte Strahltechnik, 1998.
APPENDIX A

Time-average Fringe Projection

Equation (3.23) can be rewritten as follows:

I′_F = a_P(x, y) + a_W(x, y) + b_P(x, y)·{ ∫_t^{t+T} cos[2πx·cosα/p + φ₀(x, y)]·cos[Z(x, y)·cosωt/k] dt − ∫_t^{t+T} sin[2πx·cosα/p + φ₀(x, y)]·sin[Z(x, y)·cosωt/k] dt }    (A.1)

Subsequently,

I′_F = a_P(x, y) + a_W(x, y) + b_P(x, y)·cos[2πx·cosα/p + φ₀(x, y)]·∫_t^{t+T} cos[Z(x, y)·cosωt/k] dt − b_P(x, y)·sin[2πx·cosα/p + φ₀(x, y)]·∫_t^{t+T} sin[Z(x, y)·cosωt/k] dt    (A.2)

The integration expression of the zero-order Bessel function of the first kind is

J₀(A) = [1/(T·Γ(1/2))]·∫₀^T cos(A·cos t) dt = [1/(1.7724·T)]·∫₀^T cos(A·cos t) dt    (A.3)

The first part of Eq. (A.2), a_P(x, y) + a_W(x, y), is a constant that depends on the spatial intensity variation of the background. The second part, b_P(x, y)·cos[2πx·cosα/p + φ₀(x, y)]·∫_t^{t+T} cos[Z(x, y)·cosωt/k] dt, can be transformed into a Bessel function as shown in Eq. (A.3). The third part, −b_P(x, y)·sin[2πx·cosα/p + φ₀(x, y)]·∫_t^{t+T} sin[Z(x, y)·cosωt/k] dt, cannot be expressed by a Bessel function. However, since sin[Z(x, y)·cosωt/k] is an odd function, its integral over a long period approaches zero or a constant. Thus the third part of Eq. (A.2) can be regarded as zero, and Eq. (A.2) can be expressed as the sum of a constant value and a Bessel function.

APPENDIX B

Procedure of FFT Processing

B.1 Fast Fourier Transform

The FFT can be applied only to gray-value images whose numbers of columns and rows are powers of 2 [86]. Otherwise, the images must first be resized to meet this requirement. The applied FFT yields three floating-point images:
• the real part of the Fourier spectrum (see Fig. B.1a),
• the imaginary part of the Fourier spectrum (see Fig. B.1b),
• the spectrum (see Fig. B.1c).
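The B.1 step can be reproduced with any FFT library. The sketch below is an illustration in NumPy (not the Fringe Processor software itself): it zero-pads a gray-value image to power-of-two dimensions and computes the three floating-point images.

```python
import numpy as np

def fft_images(img):
    """Pad a gray-value image with zeros to power-of-two dimensions,
    then return the real part, imaginary part and amplitude spectrum
    of its 2-D FFT (zero frequency shifted to the center)."""
    rows = 1 << (img.shape[0] - 1).bit_length()   # next power of two
    cols = 1 << (img.shape[1] - 1).bit_length()
    padded = np.zeros((rows, cols))
    padded[:img.shape[0], :img.shape[1]] = img
    spec = np.fft.fftshift(np.fft.fft2(padded))
    return spec.real, spec.imag, np.abs(spec)

# A 300 x 200 test image is padded to 512 x 256 before the FFT.
re, im, amp = fft_images(np.random.rand(300, 200))
```

Zero-padding is only one possible way to "resize" the image; interpolation to the nearest power-of-two size would serve equally well for this requirement.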
B.2 Bandpass Filter for the Fourier Transform Method

The aim of bandpass filtering is to obtain a smoothed phase by retaining only one part of the spectrum and translating it to the origin. The bandpass filter is applied to both the real and imaginary parts of the Fourier spectrum after the FFT. Six co-ordinates, which must be chosen carefully, determine the filter function; they should be found by analyzing the amplitude spectrum before calling the bandpass filter. A rectangular region of the spectrum is extracted, defined by a first point at its upper left corner and a last point at its lower right corner. The size of the region around the center of the spectrum to be deleted must also be given. After selecting the real and imaginary parts of the Fourier spectrum to be filtered, the values of the appropriate co-ordinates for the bandpass filter must be keyed in (see Fig. B.2):
• Start: x and y co-ordinates of the first pixel (top left) of the region to be filtered,
• End: x and y co-ordinates of the last pixel (bottom right) of the region to be filtered,
• Zero: x and y size of the region around the center of the spectrum to be eliminated.
After applying the bandpass filter to the Fourier spectrum, three floating-point images are given:
• the filtered real part of the Fourier spectrum (see Fig. B.3a),
• the filtered imaginary part of the Fourier spectrum (see Fig. B.3b),
• the amplitude spectrum of the result (see Fig. B.3c).

B.3 Inverse FFT

The inverse FFT is applied to the filtered real and imaginary parts of the Fourier spectrum and delivers three floating-point images:
• the back-transformed real part of the Fourier spectrum (see Fig. B.4a),
• the back-transformed imaginary part of the Fourier spectrum (see Fig. B.4b),
• the mod 2π phase (see Fig. B.4c).
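The B.1–B.3 pipeline can be sketched end-to-end in a few lines of NumPy. The example below is illustrative only: the synthetic fringe pattern, the sideband position, and the rectangular window coordinates are assumptions chosen for this demonstration, not values taken from the Fringe Processor software. It builds a carrier-fringe image, isolates the +1 spectral sideband with a rectangular bandpass window, translates it to the origin, and recovers the mod 2π phase from the inverse transform.

```python
import numpy as np

# Synthetic carrier-fringe pattern on a power-of-two grid (see B.1).
# The object phase is a smooth Gaussian bump with a 3 rad peak.
N = 256
x, y = np.meshgrid(np.arange(N), np.arange(N))
obj_phase = 3.0 * np.exp(-((x - N/2)**2 + (y - N/2)**2) / (2 * 40.0**2))
img = 128 + 100 * np.cos(2 * np.pi * 16 * x / N + obj_phase)

# B.1: FFT, with the zero frequency shifted to the center.
spec = np.fft.fftshift(np.fft.fft2(img))

# B.2: rectangular bandpass window around the +1 sideband (16 fringes
# across the field put it 16 bins to the right of the center). These
# window coordinates are illustrative choices, found by inspecting the
# amplitude spectrum np.abs(spec).
cy, cx = N // 2, N // 2 + 16
mask = np.zeros((N, N))
mask[cy - 8:cy + 9, cx - 8:cx + 9] = 1.0
filtered = spec * mask

# Translate the selected sideband to the origin to remove the carrier.
filtered = np.roll(filtered, -16, axis=1)

# B.3: inverse FFT; the argument of the complex result is the
# mod 2pi phase.
field = np.fft.ifft2(np.fft.ifftshift(filtered))
wrapped = np.angle(field)
```

Since the peak object phase here stays below π, `wrapped` already reproduces the bump without discontinuities; for larger deformations the unwrapping step of B.4 would follow.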
B.4 Unwrapping

Since all phase-measuring techniques deliver the phase only mod 2π because of the sinusoidal nature of the intensity distribution, the saw-tooth images must be unwrapped to reconstruct the continuous phase distribution:
• select the wrapped phase map,
• define the start point carefully to activate the unwrapping procedure,
• obtain the unwrapped phase map (see Fig. B.5).

Fig. B.1 (a) Real part of the Fourier spectrum, (b) imaginary part of the Fourier spectrum, (c) spectrum.
Fig. B.2 Dialog box for the bandpass filter for the Fourier transform method.
Fig. B.3 (a) Filtered real part of the Fourier spectrum, (b) filtered imaginary part of the Fourier spectrum, (c) amplitude spectrum of the result.
Fig. B.4 (a) Back-transformed real part of the Fourier spectrum, (b) back-transformed imaginary part of the Fourier spectrum, (c) mod 2π phase.
Fig. B.5 Unwrapped phase map.

APPENDIX C

Phase Maps of the Test Surface at Different Intervals

Fig. C.1 Phase maps at different time intervals: (a) 0.002 s, (b) 0.004 s, (c) 0.006 s, (d) 0.008 s, (e) 0.010 s, (f) 0.012 s, (g) 0.014 s, (h) 0.016 s, (i) 0.018 s, (j) 0.020 s, (k) 0.022 s, (l) 0.024 s, (m) 0.026 s, (n) 0.028 s, (o) 0.030 s, (p) 0.032 s, (q) 0.034 s, (r) 0.036 s, (s) 0.038 s, (t) 0.040 s.

APPENDIX D

List of Publications

1. C. J. Tay, C. Quan, H. M. Shang, T. Wu, and S. H. Wang. New method for measuring dynamic response of small components by fringe projection, Optical Engineering, 42 (6), pp.1715-1720. 2003.
2. T. Wu, C. J. Tay, C. Quan, H. M. Shang, and S. H. Wang. Vibration measurement of micro-components by fringe projection method, Proceedings of the SPIE, 5116, pp.912-923. 2003.
3. C. Quan, C. J. Tay, T. Wu, S. H. Wang, and H. M. Shang. Extending the dynamic range of a white-light video measurement system, Submitted to Optics Communications.
4. C. J. Tay, C. Quan, T. Wu, and Y. H.
Huang. An integrated method for 3-D shape and in-plane displacement measurement using fringe projection, Submitted to Optical Engineering.