by Robi Polikar
Although the time and frequency resolution problems are the result of a physical phenomenon (the Heisenberg uncertainty principle) and exist regardless of the transform used, it is possible to analyze any signal by using an alternative approach called multiresolution analysis (MRA). MRA, as implied by its name, analyzes the signal at different frequencies with different resolutions. (There is a somewhat adaptive flavor to this.) Every spectral component is not resolved equally, as was the case in the STFT. (A new concept is introduced here to address the problem.)
MRA is designed to give good time resolution and poor frequency resolution at high frequencies, and good frequency resolution and poor time resolution at low frequencies. This approach makes sense especially when the signal at hand has high frequency components for short durations and low frequency components for long durations. Fortunately, the signals that are encountered in practical applications are often of this type. (This is good news: at least intuitively we can accept this view, which also makes the points below easier to accept.) For example, the following shows a signal of this type. It has a relatively low frequency component throughout the entire signal and relatively high frequency components for a short duration somewhere around the middle.
The continuous wavelet transform was developed as an alternative approach to the short time Fourier transform to overcome the resolution problem. The wavelet analysis is done in a similar way to the STFT analysis, in the sense that the signal is multiplied with a function, the wavelet, similar to the window function in the STFT, and the transform is computed separately for different segments of the time-domain signal. However, there are two main differences between the STFT and the CWT:
1. The Fourier transforms of the windowed signals are not taken, and therefore a single peak will be seen corresponding to a sinusoid, i.e., negative frequencies are not computed.
2. The width of the window is changed as the transform is computed for every single spectral component, which is probably the most significant characteristic of the wavelet transform.
The continuous wavelet transform is defined as follows
Equation 3.1: $CWT_{x}^{\psi}(\tau,s) = \Psi_{x}^{\psi}(\tau,s) = \frac{1}{\sqrt{|s|}} \int x(t)\,\psi^{*}\!\left(\frac{t-\tau}{s}\right) dt$
As seen in the above equation, the transformed signal is a function of two variables, tau and s, the translation and scale parameters, respectively. psi(t) is the transforming function, and it is called the mother wavelet. The term mother wavelet gets its name due to two important properties of the wavelet analysis, as explained below:
The term wavelet means a small wave. The smallness refers to the condition that this (window) function is of finite length (compactly supported). The wave refers to the condition that this function is oscillatory. The term mother implies that the functions with different regions of support that are used in the transformation process are derived from one main function, the mother wavelet. In other words, the mother wavelet is a prototype for generating the other window functions.
The term translation is used in the same sense as it was used in the STFT; it is related to the location of the window, as the window is shifted through the signal. This term obviously corresponds to time information in the transform domain. However, we do not have a frequency parameter as we had before for the STFT. Instead, we have a scale parameter, which is defined as 1/frequency. The term frequency is reserved for the STFT. Scale is described in more detail in the next section.
The parameter scale in the wavelet analysis is similar to the scale used in maps. As in the case of maps, high scales correspond to a non-detailed global view (of the signal), and low scales correspond to a detailed view. Similarly, in terms of frequency, low frequencies (high scales) correspond to global information about a signal (that usually spans the entire signal), whereas high frequencies (low scales) correspond to detailed information about a hidden pattern in the signal (that usually lasts a relatively short time). Cosine signals corresponding to various scales are given as examples in the following figure.
Figure 3.2
Fortunately, in practical applications, low scales (high frequencies) do not last for the entire duration of the signal, unlike those shown in the figure; they usually appear from time to time as short bursts, or spikes. High scales (low frequencies) usually last for the entire duration of the signal. (This is indeed the case in practice.)
Scaling, as a mathematical operation, either dilates or compresses a signal. Larger scales correspond to dilated (or stretched out) signals, and small scales correspond to compressed signals. All of the signals given in the figure are derived from the same cosine signal, i.e., they are dilated or compressed versions of the same function. In the above figure, s=0.05 is the smallest scale, and s=1 is the largest scale.
In terms of mathematical functions, if f(t) is a given function, then f(st) corresponds to a contracted (compressed) version of f(t) if s > 1, and to an expanded (dilated) version of f(t) if s < 1.
However, in the definition of the wavelet transform, the scaling term is used in the denominator, and therefore the opposite of the above statements holds: scales s > 1 dilate the signal, whereas scales s < 1 compress the signal. This interpretation of scale will be used throughout this text. (These statements are worth understanding carefully.)
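This flipped behaviour is easy to verify numerically. Below is a small sketch (my own addition, using numpy; the 5 Hz cosine and the particular scale values are arbitrary choices, not taken from the tutorial) contrasting plain function scaling f(st) with the wavelet-style scaling f(t/s):

```python
import numpy as np

t = np.linspace(0, 1, 1000)                      # one second of "time"
f = lambda u: np.cos(2 * np.pi * 5 * u)          # a 5 Hz cosine as the test function

# Plain function scaling f(s*t): s > 1 compresses, s < 1 dilates.
compressed = f(2.0 * t)                          # oscillates twice as fast
dilated = f(0.5 * t)                             # oscillates half as fast

# Wavelet-style scaling f(t/s): the scale sits in the denominator,
# so the behaviour flips -- s > 1 dilates, s < 1 compresses.
wavelet_dilated = f(t / 2.0)                     # stretched out
wavelet_compressed = f(t / 0.5)                  # squeezed together

# Zero crossings as a crude proxy for "how fast it oscillates".
zc = lambda x: int(np.sum(np.diff(np.sign(x)) != 0))
print("f(t)  :", zc(f(t)), "zero crossings")
print("f(2t) :", zc(compressed), "(compressed)")
print("f(t/2):", zc(wavelet_dilated), "(dilated)")
```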
The interpretation of the above equation will be explained in this section. Let x(t) be the signal to be analyzed. The mother wavelet is chosen to serve as a prototype for all windows in the process. All the windows that are used are the dilated (or compressed) and shifted versions of the mother wavelet. There are a number of functions that are used for this purpose. The Morlet wavelet and the Mexican hat function are two candidates, and they are used for the wavelet analysis of the examples which are presented later in this chapter.
Once the mother wavelet is chosen, the computation starts with s=1, and the continuous wavelet transform is computed for all values of s, smaller and larger than ``1''. However, depending on the signal, a complete transform is usually not necessary. For all practical purposes, the signals are band-limited, and therefore computation of the transform for a limited interval of scales is usually adequate. In this study, some finite interval of values for s was used, as will be described later in this chapter.
For convenience, the procedure will be started from scale s=1 and will continue for increasing values of s, i.e., the analysis will start from high frequencies and proceed towards low frequencies. This first value of s will correspond to the most compressed wavelet. As the value of s is increased, the wavelet will dilate.
STEPS: The wavelet is placed at the beginning of the signal, at the point which corresponds to time=0. The wavelet function at scale ``1'' is multiplied by the signal and then integrated over all times. The result of the integration is then multiplied by the constant number 1/sqrt{s}. This multiplication is for energy normalization purposes, so that the transformed signal will have the same energy at every scale. The final result is the value of the transformation, i.e., the value of the continuous wavelet transform at time zero and scale s=1. In other words, it is the value that corresponds to the point tau=0, s=1 in the time-scale plane.
The wavelet at scale s=1 is then shifted towards the right by tau amount to the location t=tau, and the above equation is computed to get the transform value at t=tau, s=1 in the time-scale plane.
This procedure is repeated until the wavelet reaches the end of the signal. One row of points on the time-scale plane for the scale s=1 is now completed.
Then, s is increased by a small value. Note that this is a continuous transform, and therefore both tau and s must be incremented continuously. However, if this transform needs to be computed by a computer, then both parameters are increased by a sufficiently small step size. This corresponds to sampling the time-scale plane.
The above procedure is repeated for every value of s. Every computation for a given value of s fills the corresponding single row of the time-scale plane. When the process is completed for all desired values of s, the CWT of the signal has been calculated.
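The steps above translate almost line by line into code. The following is a minimal, deliberately slow sketch (my own addition; it assumes a Mexican-hat-shaped mother wavelet and a made-up test signal, and uses plain numerical integration rather than any optimized CWT routine):

```python
import numpy as np

def mexican_hat(t, sigma=1.0):
    """Second derivative of a Gaussian (up to sign), used here as the mother wavelet."""
    return (1.0 - (t / sigma) ** 2) * np.exp(-t ** 2 / (2.0 * sigma ** 2))

def cwt_direct(x, t, scales):
    """Direct (slow) CWT following the steps in the text:
    shift the wavelet by tau, multiply by the signal, integrate over time,
    and normalize by 1/sqrt(s). Returns a (len(scales), len(t)) array."""
    dt = t[1] - t[0]
    out = np.zeros((len(scales), len(t)))
    for i, s in enumerate(scales):
        for j, tau in enumerate(t):
            window = mexican_hat((t - tau) / s)            # dilated, shifted wavelet
            out[i, j] = np.trapz(x * window, dx=dt) / np.sqrt(s)
    return out

# Example: low frequency everywhere plus a short high-frequency burst in the middle.
t = np.linspace(0, 1, 500)
x = np.sin(2 * np.pi * 4 * t)
x[200:300] += np.sin(2 * np.pi * 40 * t[200:300])

scales = np.linspace(0.005, 0.2, 40)       # small scales <-> high frequencies
W = cwt_direct(x, t, scales)
print(W.shape)   # (40, 500): one row per scale, one column per translation
```

Each inner iteration is one "place, multiply, integrate, normalize" step; each completed inner loop is one row of the time-scale plane.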
The figures below illustrate the entire process step by step.
Figure 3.3
In Figure 3.3, the signal and the wavelet function are shown for four different values of tau. The signal is a truncated version of the signal shown in Figure 3.1. The scale value is 1, corresponding to the lowest scale, or highest frequency. Note how compact it is (the blue window). It should be as narrow as the highest frequency component that exists in the signal. Four distinct locations of the wavelet function are shown in the figure, at tau=2, tau=40, tau=90, and tau=140. At every location, it is multiplied by the signal. Obviously, the product is nonzero only where the signal falls in the region of support of the wavelet, and it is zero elsewhere. By shifting the wavelet in time, the signal is localized in time, and by changing the value of s, the signal is localized in scale (frequency). (My current intuitive understanding: because the integral acts as a kind of low-pass averaging, the wider this window is, the lower the frequencies that can pass through it, so the high-frequency components are effectively filtered out and only slowly varying waveforms survive. Corrections welcome.)
If the signal has a spectral component that corresponds to the current value of s (which is 1 in this case), the product of the wavelet with the signal at the location where this spectral component exists gives a relatively large value. If the spectral component that corresponds to the current value of s is not present in the signal, the product value will be relatively small, or zero. The signal in Figure 3.3 has spectral components comparable to the window's width at s=1 around t=100 ms.
The continuous wavelet transform of the signal in Figure 3.3 will yield large values for low scales around time 100 ms, and small values elsewhere. For high scales, on the other hand, the continuous wavelet transform will give large values for almost the entire duration of the signal, since low frequencies exist at all times.
Figure 3.4
Figure 3.5
Figures 3.4 and 3.5 illustrate the same process for the scales s=5 and s=20, respectively. Note how the window width changes with increasing scale (decreasing frequency). As the window width increases, the transform starts picking up the lower frequency components.
As a result, for every scale and for every time (interval), one point of the time-scale plane is computed. The computations at one scale construct the rows of the time-scale plane, and the computations at different scales construct the columns of the time-scale plane.
Now, let's take a look at an example and see what the wavelet transform really looks like. Consider the non-stationary signal in Figure 3.6. This is similar to the example given for the STFT, except at different frequencies. As stated on the figure, the signal is composed of four frequency components at 30 Hz, 20 Hz, 10 Hz and 5 Hz.
Figure 3.6
Figure 3.7 is the continuous wavelet transform (CWT) of this signal. Note that the axes are translation and scale, not time and frequency. However, translation is strictly related to time, since it indicates where the mother wavelet is located. The translation of the mother wavelet can be thought of as the time elapsed since t=0. The scale, however, is a whole different story. Remember that the scale parameter s in Equation 3.1 is actually the inverse of frequency. In other words, whatever we said about the frequency-resolution properties of the wavelet transform will appear inverted in the figures that show the WT of the time-domain signal.
Figure 3.7
Note in Figure 3.7 that smaller scales correspond to higher frequencies, i.e., frequency decreases as scale increases; therefore, the portion of the graph with scales around zero actually corresponds to the highest frequencies in the analysis, and the portion with high scales corresponds to the lowest frequencies. Remember that the signal had 30 Hz (highest frequency) components first, and these appear at the lowest scales at translations of 0 to 30. Then comes the 20 Hz component, the second highest frequency, and so on. The 5 Hz component appears at the end of the translation axis (as expected), and at higher scales (lower frequencies), again as expected.
Figure 3.8
Now, recall these resolution properties: unlike the STFT, which has a constant resolution at all times and frequencies, the WT has good time resolution and poor frequency resolution at high frequencies, and good frequency resolution and poor time resolution at low frequencies. Figure 3.8 shows the same WT as in Figure 3.7 from another angle to better illustrate the resolution properties: in Figure 3.8, lower scales (higher frequencies) have better scale resolution (narrower support in scale, meaning there is less ambiguity about the exact value of the scale), which corresponds to poorer frequency resolution. Similarly, higher scales have poorer scale resolution (wider support in scale, meaning there is more ambiguity about the exact value of the scale), which corresponds to better frequency resolution at lower frequencies.
The axes in Figures 3.7 and 3.8 are normalized and should be evaluated accordingly. Roughly speaking, the 100 points on the translation axis correspond to 1000 ms, and the 150 points on the scale axis correspond to a frequency band of 40 Hz (the numbers on the translation and scale axes do not correspond to seconds and Hz, respectively; they are just the number of samples in the computation).
In this section we will take a closer look at the resolution properties of the wavelet transform. Remember that the resolution problem was the main reason why we switched from the STFT to the WT.
The illustration in Figure 3.9 is commonly used to explain how time and frequency resolutions should be interpreted. Every box in Figure 3.9 corresponds to a value of the wavelet transform in the time-frequency plane. Note that the boxes have a certain non-zero area, which implies that the value of a particular point in the time-frequency plane cannot be known. All the points in the time-frequency plane that fall into a box are represented by one value of the WT.
Figure 3.9
Let's take a closer look at Figure 3.9: the first thing to notice is that although the widths and heights of the boxes change, the area is constant. That is, each box represents an equal portion of the time-frequency plane, but gives different proportions to time and frequency. Note that at low frequencies, the heights of the boxes are shorter (which corresponds to better frequency resolution, since there is less ambiguity regarding the exact value of the frequency), but their widths are longer (which corresponds to poorer time resolution, since there is more ambiguity regarding the exact value of the time). At higher frequencies, the widths of the boxes decrease, i.e., the time resolution gets better, and the heights of the boxes increase, i.e., the frequency resolution gets poorer.
Before concluding this section, it is worthwhile to mention how the partition looks in the case of the STFT. Recall that in the STFT the time and frequency resolutions are determined by the width of the analysis window, which is selected once for the entire analysis, i.e., both the time and frequency resolutions are constant. Therefore the time-frequency plane consists of squares in the STFT case.
Regardless of the dimensions of the boxes, the areas of all boxes, both in the STFT and the WT, are the same and are determined by Heisenberg's inequality. In summary, the area of a box is fixed for each window function (STFT) or mother wavelet (CWT), whereas different windows or mother wavelets can result in different areas. However, all areas are lower bounded by $1/(4\pi)$. That is, we cannot reduce the areas of the boxes as much as we want, due to Heisenberg's uncertainty principle. On the other hand, for a given mother wavelet the dimensions of the boxes can be changed while keeping the area the same. This is exactly what the wavelet transform does.
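To see the constant-area behaviour numerically, the sketch below (my own addition; it uses a Mexican-hat-shaped wavelet and measures "resolution" as the RMS spread of the energy in time and in frequency, which is only one of several possible conventions) computes the time spread, the frequency spread, and their product at a few scales:

```python
import numpy as np

def spreads(s, sigma=1.0, T=200.0, n=2 ** 14):
    """RMS time and frequency spread of a Mexican-hat wavelet dilated by scale s."""
    t = np.linspace(-T / 2, T / 2, n, endpoint=False)
    psi = (1.0 - (t / (s * sigma)) ** 2) * np.exp(-t ** 2 / (2 * (s * sigma) ** 2))

    e_t = np.abs(psi) ** 2                                   # energy density in time
    dt_spread = np.sqrt(np.trapz(t ** 2 * e_t, t) / np.trapz(e_t, t))

    f = np.fft.rfftfreq(n, d=t[1] - t[0])
    e_f = np.abs(np.fft.rfft(psi)) ** 2                      # energy density in frequency
    df_spread = np.sqrt(np.trapz(f ** 2 * e_f, f) / np.trapz(e_f, f))
    return dt_spread, df_spread

for s in (1.0, 2.0, 4.0, 8.0):
    dt_, df_ = spreads(s)
    print(f"s={s:4.1f}  time spread={dt_:7.3f}  freq spread={df_:7.4f}  product={dt_ * df_:.4f}")
```

The time spread grows roughly in proportion to the scale while the frequency spread shrinks in the same proportion, so their product stays roughly constant; this is the picture Figure 3.9 is drawing.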
This section describes the main idea of wavelet analysis theory, which can also be considered the underlying concept of most signal analysis techniques. The FT defined by Fourier uses basis functions to analyze and reconstruct a function. Every vector in a vector space can be written as a linear combination of the basis vectors of that vector space, i.e., by multiplying the vectors by some constant numbers and then taking the summation of the products. The analysis of the signal involves the estimation of these constant numbers (transform coefficients, or Fourier coefficients, wavelet coefficients, etc.). The synthesis, or reconstruction, corresponds to computing the linear combination equation.
All the definitions and theorems related to this subject can be found in Kaiser's book, A Friendly Guide to Wavelets, but an introductory-level knowledge of how basis functions work is necessary to understand the underlying principles of wavelet theory. Therefore, this information is presented in this section.
Note: Most of the equations include letters of the Greek alphabet. These letters are written out explicitly in the text with their names, such as tau, psi, phi, etc. For capital letters, the first letter of the name is capitalized, such as Tau, Psi, Phi, etc. Also, subscripts are shown by the underscore character _, and superscripts are shown by the ^ character. Note also that all letters or letter names written in bold type face represent vectors. Some important points are also written in bold face, but the meaning should be clear from the context.
A basis of a vector space V is a set of linearly independent vectors such that any vector v in V can be written as a linear combination of these basis vectors. There may be more than one basis for a vector space. However, all of them have the same number of vectors, and this number is known as the dimension of the vector space. For example, in two-dimensional space, the basis will have two vectors.
Equation 3.2: $\mathbf{v} = \sum_{k} \nu^{k}\,\mathbf{b}_{k}$
Equation 3.2 shows how any vector v can be written as a linear combination of the basis vectors b_k and the corresponding coefficients nu^k.
This concept, given in terms of vectors, can easily be generalized to functions, by replacing the basis vectors b_k with basis functions phi_k(t), and the vector v with a function f(t). Equation 3.2 then becomes
Equation 3.2a: $f(t) = \sum_{k} \mu_{k}\,\phi_{k}(t)$
The complex exponential (sines and cosines) functions are the basis functions for the FT. Furthermore, they are orthogonal functions, which provides some desirable properties for reconstruction.
Let f(t) and g(t) be two functions in L^2[a,b]. (L^2[a,b] denotes the set of square integrable functions on the interval [a,b].) The inner product of two functions is defined by Equation 3.3:
Equation 3.3: $\langle f, g \rangle = \int_{a}^{b} f(t)\,g^{*}(t)\,dt$
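As a quick numerical sanity check of Equation 3.3 (a sketch of my own; the two sine functions and the interval are arbitrary choices):

```python
import numpy as np

def inner_product(f, g, a, b, n=10000):
    """Numerical <f, g> = integral from a to b of f(t) * conj(g(t)) dt (Equation 3.3)."""
    t = np.linspace(a, b, n)
    return np.trapz(f(t) * np.conj(g(t)), t)

f = lambda t: np.sin(t)
g = lambda t: np.sin(2 * t)
print(inner_product(f, f, 0, 2 * np.pi))   # ~ pi: the squared norm of sin on [0, 2*pi]
print(inner_product(f, g, 0, 2 * np.pi))   # ~ 0:  sin(t) and sin(2t) are orthogonal here
```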
According to the above definition of the inner product, the CWT can be thought of as the inner product of the test signal with the basis functions psi_{tau,s}(t):
Equation 3.4: $CWT_{x}^{\psi}(\tau,s) = \Psi_{x}^{\psi}(\tau,s) = \int x(t)\,\psi_{\tau,s}^{*}(t)\,dt$
where,
Equation 3.5: $\psi_{\tau,s}(t) = \frac{1}{\sqrt{|s|}}\,\psi\!\left(\frac{t-\tau}{s}\right)$
This definition of the CWT shows that the wavelet analysis is a measure of similarity between the basis functions (wavelets) and the signal itself. Here the similarity is in the sense of similar frequency content. The calculated CWT coefficients refer to the closeness of the signal to the wavelet at the current scale.
This further clarifies the previous discussion on the correlation of the signal with the wavelet at a certain scale. If the signal has a major component of the frequency corresponding to the current scale, then the wavelet (the basis function) at the current scale will be similar or close to the signal at the particular location where this frequency component occurs. Therefore, the CWT coefficient computed at this point in the time-scale plane will be a relatively large number.
Two vectors v, w are said to be orthogonal if their inner product equals zero:
Equation 3.6: $\langle \mathbf{v}, \mathbf{w} \rangle = 0$
Similarly, two functions f and g are said to be orthogonal to each other if their inner product is zero:
Equation 3.7: $\langle f, g \rangle = \int_{a}^{b} f(t)\,g^{*}(t)\,dt = 0$
A set of vectors {v_1, v_2, ..., v_n} is said to be orthonormal if they are pairwise orthogonal to each other and all have length ``1''. This can be expressed as:
Equation 3.8: $\langle \mathbf{v}_{k}, \mathbf{v}_{l} \rangle = \delta_{kl}, \qquad k, l = 1, 2, \ldots, n$
Similarly, a set of functions {phi_k(t)}, k=1,2,3,..., is said to be orthonormal if
Equation 3.9: $\langle \phi_{k}, \phi_{l} \rangle = 0, \qquad k \neq l$
and
Equation 3.10: $\int \left|\phi_{k}(t)\right|^{2}\,dt = 1$
or equivalently
Equation 3.11: $\langle \phi_{k}, \phi_{l} \rangle = \int \phi_{k}(t)\,\phi_{l}^{*}(t)\,dt = \delta_{kl}$
where, delta_{kl} is the Kronecker delta function, defined as:
Equation 3.12: $\delta_{kl} = \begin{cases} 1, & k = l \\ 0, & k \neq l \end{cases}$
As stated above, there may be more than one set of basis functions (or vectors). Among them, the orthonormal basis functions (or vectors) are of particular importance because of the nice properties they provide in finding the analysis coefficients. Orthonormal bases allow computation of these coefficients in a very simple and straightforward way using the orthonormality property.
For orthonormal bases, the coefficients mu_k can be calculated as
Equation 3.13: $\mu_{k} = \langle f, \phi_{k} \rangle = \int f(t)\,\phi_{k}^{*}(t)\,dt$
and the function f(t) can then be reconstructed by Equation 3.2a by substituting the mu_k coefficients. This yields
Equation 3.14: $f(t) = \sum_{k} \mu_{k}\,\phi_{k}(t) = \sum_{k} \langle f, \phi_{k} \rangle\,\phi_{k}(t)$ (this is why we can use orthonormal bases to represent a signal)
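A small numerical illustration of Equations 3.13 and 3.14 (my own sketch; the orthonormal set sqrt(2)*sin(k*pi*t) on [0,1] and the test function are arbitrary choices, not from the tutorial):

```python
import numpy as np

t = np.linspace(0, 1, 4096)

# An orthonormal set on [0, 1]: sqrt(2) * sin(k * pi * t), k = 1..N.
N = 20
basis = [np.sqrt(2) * np.sin(k * np.pi * t) for k in range(1, N + 1)]

f = t * (1 - t)                                    # the function to analyze

# Analysis (Equation 3.13): mu_k = <f, phi_k>
mu = [np.trapz(f * phi, t) for phi in basis]

# Synthesis (Equation 3.14): f ~ sum_k mu_k * phi_k
f_rec = sum(m * phi for m, phi in zip(mu, basis))

print("max reconstruction error:", np.max(np.abs(f - f_rec)))   # small, and shrinks as N grows
```

The coefficients come from plain inner products, exactly as Equation 3.13 states, and summing them back reproduces the function to within numerical error.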
Orthonormal bases may not be available for every type of application, in which case a generalized version, biorthogonal bases, can be used. The term ``biorthogonal'' refers to two different bases which are orthogonal to each other, but each of which does not form an orthogonal set on its own.
In some applications, however, biorthogonal bases also may not be available, in which case frames (a new concept) can be used. Frames constitute an important part of wavelet theory, and interested readers are referred to Kaiser's book mentioned earlier.
Following the same order as in Chapter 2 for the STFT, some examples of the continuous wavelet transform are presented next. The figures given in the examples were generated by a program written to compute the CWT.
Before we close this section, I would like to include two mother wavelets commonly used in wavelet analysis. The Mexican Hat wavelet is defined as the second derivative of the Gaussian function:
Equation 3.15: $\psi(t) = \dfrac{d^{2}}{dt^{2}}\!\left(\dfrac{1}{\sqrt{2\pi}\,\sigma}\,e^{-\frac{t^{2}}{2\sigma^{2}}}\right)$
which is
Equation 3.16: $\psi(t) = \dfrac{1}{\sqrt{2\pi}\,\sigma^{3}}\left(\dfrac{t^{2}}{\sigma^{2}} - 1\right)e^{-\frac{t^{2}}{2\sigma^{2}}}$
The Morlet wavelet is defined as
Equation 3.16a: $\psi(t) = e^{iat}\,e^{-\frac{t^{2}}{2\sigma}}$
where a is a modulation parameter, and sigma is the scaling parameter that affects the width of the window.
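In code, and following the formulas as quoted here (a sketch; the normalization constants and the sign convention of the Mexican hat vary between references, so treat these as illustrative rather than canonical):

```python
import numpy as np

def mexican_hat(t, sigma=1.0):
    """Second derivative of the Gaussian (Equations 3.15-3.16); some references
    flip the sign so that the 'hat' points upward."""
    return (1.0 / (np.sqrt(2.0 * np.pi) * sigma ** 3)) * \
           (t ** 2 / sigma ** 2 - 1.0) * np.exp(-t ** 2 / (2.0 * sigma ** 2))

def morlet(t, a=5.0, sigma=1.0):
    """Complex exponential under a Gaussian window (Equation 3.16a)."""
    return np.exp(1j * a * t) * np.exp(-t ** 2 / (2.0 * sigma))

t = np.linspace(-8, 8, 2001)
print(np.trapz(mexican_hat(t), t))          # ~ 0: zero mean, as a wavelet should be
print(abs(np.trapz(morlet(t), t)))          # small for large enough a (approximately zero mean)
```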
All of the examples given below correspond to real-life non-stationary signals. These signals are drawn from a database of signals that includes event related potentials of normal people and of patients with Alzheimer's disease. Since these are not test signals like simple sinusoids, they are not as easy to interpret. They are shown here only to give an idea of how real-life CWTs look.
The following signal, shown in Figure 3.11, belongs to a normal person.
Figure 3.11
and the following is its CWT. The numbers on the axes are of no importance to us; they simply show that the CWT was computed at 350 translation and 60 scale locations on the translation-scale plane. The important point to note here is the fact that the computation is not a true continuous WT, as is apparent from the computation at a finite number of locations. This is only a discretized version of the CWT, which is explained later on this page. Note, however, that this is NOT the discrete wavelet transform (DWT), which is the topic of Part IV of this tutorial.
Figure 3.12
and Figure 3.13 plots the same transform from a different angle for better visualization.
Figure 3.13
Figure 3.14 plots an event related potential of a patient diagnosed with Alzheimer's disease,
Figure 3.14
and Figure 3.15 illustrates its CWT:
Figure 3.15
and here is another view from a different angle
Figure 3.16
The continuous wavelet transform is a reversible transform, provided that Equation 3.18 is satisfied. Fortunately, this is a very non-restrictive requirement. The continuous wavelet transform is reversible even though the basis functions are, in general, not orthonormal. The reconstruction is possible by using the following reconstruction formula:
Equation 3.17 (Inverse Wavelet Transform): $x(t) = \dfrac{1}{c_{\psi}^{2}} \int_{s}\int_{\tau} \Psi_{x}^{\psi}(\tau,s)\,\dfrac{1}{s^{2}}\,\psi\!\left(\dfrac{t-\tau}{s}\right) d\tau\, ds$
where C_psi is a constant that depends on the wavelet used. The success of the reconstruction depends on this constant, called the admissibility constant, satisfying the following admissibility condition:
Equation 3.18 (Admissibility Condition): $c_{\psi} = \left\{ 2\pi \int_{-\infty}^{\infty} \dfrac{\left|\hat{\psi}(\xi)\right|^{2}}{|\xi|}\, d\xi \right\}^{1/2} < \infty$
where psi^hat(xi) is the FT of psi(t). Equation 3.18 implies that psi^hat(0) = 0, which is
Equation 3.19: $\hat{\psi}(0) = \int \psi(t)\,dt = 0$
As stated above, Equation 3.19 is not a very restrictive requirement, since many wavelet functions can be found whose integral is zero. For Equation 3.19 to be satisfied, the wavelet must be oscillatory.
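The zero-mean condition of Equation 3.19, and the finiteness required by Equation 3.18, can be checked numerically for a concrete wavelet. A sketch (my own addition, using the Mexican hat shape quoted above; the FFT-based approximation of psi^hat is crude but sufficient to see the point):

```python
import numpy as np

sigma = 1.0
t = np.linspace(-10, 10, 4001)
dt = t[1] - t[0]
psi = (1.0 / (np.sqrt(2.0 * np.pi) * sigma ** 3)) * \
      (t ** 2 / sigma ** 2 - 1.0) * np.exp(-t ** 2 / (2.0 * sigma ** 2))

# Equation 3.19: the wavelet must integrate to zero (psi_hat(0) = 0).
print(np.trapz(psi, t))                          # ~ 0

# Equation 3.18: the admissibility integrand |psi_hat(xi)|^2 / |xi| must integrate to a finite value.
xi = np.fft.rfftfreq(len(t), d=dt)
psi_hat = np.fft.rfft(psi) * dt                  # crude approximation of the FT
integrand = np.abs(psi_hat[1:]) ** 2 / xi[1:]    # skip xi = 0 to avoid dividing by zero
print(np.trapz(integrand, xi[1:]))               # a finite number -> admissible
```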
In today's world, computers are used to do most computations (well,... ok... almost all computations). It is apparent that neither the FT, nor the STFT, nor the CWT can be practically computed by using analytical equations, integrals, etc. It is therefore necessary to discretize the transforms. As in the FT and STFT, the most intuitive way of doing this is simply sampling the time-frequency (scale) plane. Again intuitively, sampling the plane with a uniform sampling rate sounds like the most natural choice. However, in the case of the WT, the scale change can be used to reduce the sampling rate.
At higher scales (lower frequencies), the sampling rate can be decreased, according to Nyquist's rule. In other words, if the time-scale plane needs to be sampled with a sampling rate of N_1 at scale s_1, the same plane can be sampled with a sampling rate of N_2 at scale s_2, where s_1 < s_2 (corresponding to frequencies f_1 > f_2) and N_2 < N_1. The actual relationship between N_1 and N_2 is
Equation 3.20: $N_{2} = \dfrac{s_{1}}{s_{2}}\,N_{1}$
or
Equation 3.21: $N_{2} = \dfrac{f_{2}}{f_{1}}\,N_{1}$
In other words, at lower frequencies the sampling rate can be decreased, which will save a considerable amount of computation time.
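Assuming the proportional relation suggested by the text (the number of samples scales with s_1/s_2, equivalently with f_2/f_1), the saving is easy to quantify (a sketch with made-up numbers):

```python
# Hypothetical numbers, only to illustrate Equations 3.20-3.21.
s1, N1 = 2, 1024            # 1024 translation samples at scale 2
for s2 in (4, 8, 16, 32):
    N2 = N1 * s1 // s2      # N2 = (s1 / s2) * N1 = (f2 / f1) * N1
    print(f"scale {s2:2d}: {N2} samples")
# scale 4: 512, scale 8: 256, scale 16: 128, scale 32: 64
```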
It should be noted at this time, however, that the discretization can be done in any way without any restriction as far as the analysis of the signal is concerned. If synthesis is not required, even the Nyquist criterion does not need to be satisfied. The restrictions on the discretization and the sampling rate become important if, and only if, signal reconstruction is desired. Nyquist's sampling rate is the minimum sampling rate that allows the original continuous-time signal to be reconstructed from its discrete samples. The basis vectors that were mentioned earlier are of particular importance for this reason.
As mentioned earlier, the wavelet psi(tau,s) satisfying Equation 3.18 allows reconstruction of the signal by Equation 3.17. However, this is true for the continuous transform. The question is: can we still reconstruct the signal if we discretize the time and scale parameters? The answer is ``yes'', under certain conditions (as they always say in commercials: certain restrictions apply!!!).
The scale parameter s is discretized first on a logarithmic grid. The time parameter is then discretized with respect to the scale parameter, i.e., a different sampling rate is used for every scale. In other words, the sampling is done on the dyadic sampling grid shown in Figure 3.17:
Figure 3.17
Think of the area covered by the axes as the entire time-scale plane. The CWT assigns a value to the continuum of points on this plane. Therefore, there are an infinite number of CWT coefficients. First consider the discretization of the scale axis. Among that infinite number of points, only a finite number are taken, using a logarithmic rule. The base of the logarithm depends on the user. The most common value is 2 because of its convenience. If 2 is chosen, only the scales 2, 4, 8, 16, 32, 64, ... etc. are computed. If the value were 3, the scales 3, 9, 27, 81, 243, ... etc. would have been computed. The time axis is then discretized according to the discretization of the scale axis. Since the discrete scale changes by factors of 2, the sampling rate of the time axis is reduced by a factor of 2 at every scale.
Note that at the lowest scale (s=2), only 32 points of the time axis are sampled (for the particular case given in Figure 3.17). At the next scale value, s=4, the sampling rate of the time axis is reduced by a factor of 2 since the scale is increased by a factor of 2, and therefore only 16 samples are taken. At the next step, s=8 and 8 samples are taken in time, and so on.
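The dyadic grid itself is easy to generate (a sketch of my own, matching the counts quoted above for a 64-sample time axis):

```python
import numpy as np

def dyadic_grid(n_samples=64, n_levels=5):
    """Translation points at each dyadic scale: the scale doubles and the number
    of time samples halves at every level (cf. Figure 3.17)."""
    grid = {}
    for level in range(1, n_levels + 1):
        scale = 2 ** level                             # s = 2, 4, 8, 16, 32
        grid[scale] = np.arange(0, n_samples, scale)   # the time step grows with the scale
    return grid

for s, taus in dyadic_grid().items():
    print(f"scale {s:2d}: {len(taus)} time samples")
# scale 2: 32, scale 4: 16, scale 8: 8, scale 16: 4, scale 32: 2
```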
Although it is called the time-scale plane, it is more accurate to call it the translation-scale plane, because ``time'' in the transform domain actually corresponds to the shifting of the wavelet in time. For the wavelet series, the actual time is still continuous.
Similar to the relationship between the continuous Fourier transform, the Fourier series, and the discrete Fourier transform, there is a continuous wavelet transform, a semi-discrete wavelet transform (also known as the wavelet series), and a discrete wavelet transform.
Expressing the above discretization procedure in mathematical terms, the scale discretization is s = s_0^j, and the translation discretization is tau = k·s_0^j·tau_0, where s_0 > 1 and tau_0 > 0. Note how the translation discretization depends on the scale discretization through s_0.
The continuous wavelet function
Equation 3.22: $\psi_{\tau,s}(t) = \dfrac{1}{\sqrt{|s|}}\,\psi\!\left(\dfrac{t-\tau}{s}\right)$
becomes

Equation 3.23: $\psi_{j,k}(t) = s_0^{-j/2}\,\psi\!\left(s_0^{-j}\,t - k\,\tau_0\right)$
by inserting s = s_0^j and tau = k·s_0^j·tau_0.
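In code, the discretized family looks like this (a sketch; the unnormalized Mexican-hat shape is an arbitrary stand-in for the mother wavelet):

```python
import numpy as np

def psi_jk(psi, t, j, k, s0=2.0, tau0=1.0):
    """Discretized wavelet psi_{j,k}(t) = s0**(-j/2) * psi(s0**(-j) * t - k * tau0),
    obtained from (1/sqrt(s)) * psi((t - tau)/s) with s = s0**j and tau = k * s0**j * tau0."""
    return s0 ** (-j / 2.0) * psi(s0 ** (-j) * t - k * tau0)

mexican_hat = lambda u: (1.0 - u ** 2) * np.exp(-u ** 2 / 2.0)

t = np.linspace(-30, 30, 6001)
for j in (0, 1, 2):
    w = psi_jk(mexican_hat, t, j=j, k=1)
    print(f"j={j}: energy = {np.trapz(np.abs(w) ** 2, t):.4f}")   # roughly the same at every scale
```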
If {psi_(j,k)} constitutes an orthonormal basis, the wavelet series transform becomes
Equation 3.24: $\Psi_{x}^{\psi_{j,k}} = \langle x, \psi_{j,k} \rangle$
or
Equation 3.25: $\Psi_{x}^{\psi_{j,k}} = \int x(t)\,\psi_{j,k}^{*}(t)\,dt$
A wavelet series requires that {psi_(j,k)} be either orthonormal, biorthogonal, or a frame. If {psi_(j,k)} are not orthonormal, Equation 3.24 becomes
Equation 3.26: $\Psi_{x}^{\psi_{j,k}} = \int x(t)\,\hat{\psi}_{j,k}^{*}(t)\,dt$
where hat{psi}_{j,k}^*(t) is either the dual biorthogonal basis or the dual frame (note that * denotes the complex conjugate).
If {psi_(j,k)} are orthonormal or biorthogonal, the transform will be non-redundant, whereas if they form a frame, the transform will be redundant. On the other hand, it is much easier to find frames than it is to find orthonormal or biorthogonal bases.
The following analogy may clarify this concept. Consider the whole process as looking at a particular object. The human eyes first determine the coarse view, which depends on the distance of the eyes to the object. This corresponds to adjusting the scale parameter s_0^(-j). When looking at a very close object, with great detail, j is negative and large (low scale, high frequency, analyzing the detail in the signal). Moving the head (or eyes) very slowly and with very small increments (of angle, of distance, depending on the object that is being viewed) corresponds to small values of tau = k·s_0^j·tau_0. Note that when j is negative and large, it corresponds to small changes in time, tau, (high sampling rate) and large changes in s_0^(-j) (low scale, high frequencies, where the sampling rate is high). The scale parameter can also be thought of as magnification.
How low can the sampling rate be and still allow reconstruction of the signal? This is the main question to be answered to optimize the procedure. The most convenient value (in terms of programming) is found to be ``2'' for s_0 and ``1'' for tau. Obviously, when the sampling rate is forced to be as low as possible, the number of available orthonormal wavelets is also reduced.
The continuous wavelet transform examples that were given in this chapter were actually the wavelet series of the given signals. The parameters were chosen depending on the signal. Since reconstruction was not needed, the sampling rates were sometimes far below the critical value; s_0 varied from 2 to 10, and tau_0 varied from 2 to 8, for different examples.
This concludes Part III of this tutorial. I hope you now have a basic understanding of what the wavelet transform is all about. There is one thing left to be discussed, however. Even though the discretized wavelet transform can be computed on a computer, this computation may take anywhere from a couple of seconds to a couple of hours depending on your signal size and the resolution you want. An amazingly fast algorithm is actually available to compute the wavelet transform of a signal. The discrete wavelet transform (DWT) is introduced in the final chapter of this tutorial, in Part IV.
Let's meet at the grand finale, shall we?