Cos Phi Per 4

Signal processing technique

In statistical signal processing, the goal of
spectral density estimation
(SDE) or simply
spectral estimation
is to estimate the spectral density (also known as the power spectral density) of a signal from a sequence of time samples of the signal.[1]
Intuitively speaking, the spectral density characterizes the frequency content of the signal. One purpose of estimating the spectral density is to detect any periodicities in the data, by observing peaks at the frequencies corresponding to these periodicities.

Some SDE techniques assume that a signal is composed of a limited (usually small) number of generating frequencies plus noise and seek to find the location and intensity of the generated frequencies. Others make no assumption on the number of components and seek to estimate the whole generating spectrum.

Overview

Example of voice waveform and its frequency spectrum

A periodic waveform (triangle wave) and its frequency spectrum, showing a “fundamental” frequency at 220 Hz followed by multiples (harmonics) of 220 Hz.

The power spectral density of a segment of music is estimated by two different methods, for comparison.

Spectrum analysis, also referred to as frequency domain analysis or spectral density estimation, is the technical process of decomposing a complex signal into simpler parts. As described above, many physical processes are best described as a sum of many individual frequency components. Any process that quantifies the various amounts (e.g. amplitudes, powers, intensities) versus frequency (or phase) can be called
spectrum analysis.

Spectrum analysis can be performed on the entire signal. Alternatively, a signal can be broken into short segments (sometimes called
frames), and spectrum analysis may be applied to these individual segments. Periodic functions (such as

${\displaystyle \sin(t)}$

) are particularly well-suited for this sub-division. General mathematical techniques for analyzing non-periodic functions fall into the category of Fourier analysis.

The Fourier transform of a function produces a frequency spectrum which contains all of the information about the original signal, but in a different form. This means that the original function can be completely reconstructed (synthesized) by an inverse Fourier transform. For perfect reconstruction, the spectrum analyzer must preserve both the amplitude and phase of each frequency component. These two pieces of information can be represented as a two-dimensional vector, as a complex number, or as magnitude (amplitude) and phase in polar coordinates (i.e., as a phasor). A common technique in signal processing is to consider the squared amplitude, or power; in this case the resulting plot is referred to as a power spectrum.
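The round trip described here is easy to check numerically. Below is a minimal sketch (the 8 kHz sampling rate and the 220 Hz test tone are illustrative assumptions, not from the article) using NumPy's FFT: the complex spectrum keeps amplitude and phase, the power spectrum discards phase, and the inverse transform recovers the original samples.

```python
import numpy as np

# Illustrative signal: a 220 Hz tone plus a weaker 440 Hz harmonic,
# sampled at an assumed rate of 8 kHz.
fs = 8000
t = np.arange(2048) / fs
x = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 440 * t)

X = np.fft.rfft(x)            # complex spectrum: amplitude AND phase per frequency
power = np.abs(X) ** 2        # squared magnitude -> power spectrum (phase discarded)

# With both amplitude and phase preserved, the inverse transform
# reconstructs the original samples.
x_rec = np.fft.irfft(X, n=len(x))
print(np.allclose(x, x_rec))  # True
```

Reconstruction from `power` alone would fail, since the phase information has been discarded.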

Because of reversibility, the Fourier transform is called a
representation
of the function, in terms of frequency instead of time; thus, it is a frequency domain representation. Linear operations that could be performed in the time domain have counterparts that can often be performed more easily in the frequency domain. Frequency analysis also simplifies the understanding and interpretation of the effects of various time-domain operations, both linear and non-linear. For example, only non-linear or time-variant operations can create new frequencies in the frequency spectrum.

In practice, nearly all software and electronic devices that generate frequency spectra utilize a discrete Fourier transform (DFT), which operates on samples of the signal, and which provides a mathematical approximation to the full integral solution. The DFT is almost invariably implemented by an efficient algorithm called the
fast Fourier transform
(FFT). The array of squared-magnitude components of a DFT is a type of power spectrum called the periodogram, which is widely used for examining the frequency characteristics of noise-free functions such as filter impulse responses and window functions. But the periodogram does not provide processing-gain when applied to noiselike signals or even sinusoids at low signal-to-noise ratios. In other words, the variance of its spectral estimate at a given frequency does not decrease as the number of samples used in the computation increases. This can be mitigated by averaging over time (Welch's method[2]) or over frequency (smoothing). Welch's method is widely used for spectral density estimation (SDE). However, periodogram-based techniques introduce small biases that are unacceptable in some applications. So other alternatives are presented in the next section.
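The variance behaviour described above can be demonstrated with SciPy's `periodogram` and `welch` (the tone frequency, noise level, and segment length below are assumed for illustration):

```python
import numpy as np
from scipy.signal import periodogram, welch

rng = np.random.default_rng(0)
fs = 1000
t = np.arange(20_000) / fs
# A sinusoid buried in white noise.
x = np.sin(2 * np.pi * 123.0 * t) + rng.normal(scale=2.0, size=t.size)

# Raw periodogram over the whole record: its variance at each frequency
# does not shrink as the record grows.
f_p, P_p = periodogram(x, fs=fs)

# Welch's method: average periodograms of overlapping windowed segments.
f_w, P_w = welch(x, fs=fs, nperseg=1024)

# Relative spread of the noise-floor estimate, away from the 123 Hz tone.
spread_p = P_p[f_p > 200].std() / P_p[f_p > 200].mean()
spread_w = P_w[f_w > 200].std() / P_w[f_w > 200].mean()
print(spread_w < spread_p)  # True: averaging reduces the variance
```

The averaging trades frequency resolution (each segment is shorter than the full record) for a far less erratic estimate.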

Techniques

Many other techniques for spectral estimation have been developed to mitigate the disadvantages of the basic periodogram. These techniques can generally be divided into
non-parametric,
parametric,
and more recently semi-parametric (also called sparse) methods.[3]
The non-parametric approaches explicitly estimate the covariance or the spectrum of the process without assuming that the process has any particular structure. Some of the most common estimators in use for basic applications (e.g. Welch's method) are non-parametric estimators closely related to the periodogram. By contrast, the parametric approaches assume that the underlying stationary stochastic process has a certain structure that can be described using a small number of parameters (for example, using an auto-regressive or moving average model). In these approaches, the task is to estimate the parameters of the model that describes the stochastic process. When using the semi-parametric methods, the underlying process is modeled using a non-parametric framework, with the additional assumption that the number of non-zero components of the model is small (i.e., the model is sparse). Similar approaches may also be used for missing data recovery
[4]
as well as signal reconstruction.


Following is a partial list of non-parametric spectral density estimation techniques:

• Periodogram, the modulus squared of the discrete Fourier transform
• Lomb–Scargle periodogram, for which data need not be equally spaced
• Bartlett's method is the average of the periodograms taken of multiple segments of the signal to reduce variance of the spectral density estimate
• Welch's method, a windowed version of Bartlett's method that uses overlapping segments
• Multitaper is a periodogram-based method that uses multiple tapers, or windows, to form independent estimates of the spectral density to reduce variance of the spectral density estimate
• Least-squares spectral analysis, based on least squares fitting to known frequencies
• Non-uniform discrete Fourier transform is used when the signal samples are unevenly spaced in time
• Singular spectrum analysis is a nonparametric method that uses a singular value decomposition of the covariance matrix to estimate the spectral density
• Short-time Fourier transform
• Critical filter is a nonparametric method based on information field theory that can deal with noise, incomplete data, and instrumental response functions

Below is a partial list of parametric techniques:

• Autoregressive model (AR) estimation, which assumes that the
nth sample is correlated with the previous
p
samples.
• Moving-average model (MA) estimation, which assumes that the
nth sample is correlated with noise terms in the previous
p
samples.
• Autoregressive moving average (ARMA) estimation, which generalizes the AR and MA models.
• MUltiple SIgnal Classification (MUSIC) is a popular superresolution method.
• Maximum entropy spectral estimation is an
all-poles
method useful for SDE when singular spectral features, such as sharp peaks, are expected.

And finally some examples of semi-parametric techniques:

• Sparse Iterative Covariance-based Estimation (SPICE),[3]
and the more generalized

${\displaystyle (r,q)}$

-SPICE.[5]
• Iterative Adaptive Approach (IAA) estimation.[6]
• Lasso, similar to least-squares spectral analysis but with a sparsity enforcing penalty.[7]

Parametric estimation

In parametric spectral estimation, one assumes that the signal is modeled by a stationary process which has a spectral density function (SDF)

${\displaystyle S(f;a_{1},\ldots ,a_{p})}$

that is a function of the frequency

${\displaystyle f}$

and

${\displaystyle p}$

parameters

${\displaystyle a_{1},\ldots ,a_{p}}$

.[8]
The estimation problem then becomes one of estimating these parameters.

The most common form of parametric SDF estimation uses as a model an autoregressive model

${\displaystyle {\text{AR}}(p)}$

of order

${\displaystyle p}$

.[8]

: 392

A signal sequence

${\displaystyle \{Y_{t}\}}$

obeying a zero mean

${\displaystyle {\text{AR}}(p)}$

process satisfies the equation

${\displaystyle Y_{t}=\phi _{1}Y_{t-1}+\phi _{2}Y_{t-2}+\cdots +\phi _{p}Y_{t-p}+\epsilon _{t},}$

where the

${\displaystyle \phi _{1},\ldots ,\phi _{p}}$

are fixed coefficients and

${\displaystyle \epsilon _{t}}$

is a white noise process with zero mean and
innovation variance

${\displaystyle \sigma _{p}^{2}}$

. The SDF for this process is

${\displaystyle S(f;\phi _{1},\ldots ,\phi _{p},\sigma _{p}^{2})={\frac {\sigma _{p}^{2}\Delta t}{\left|1-\sum _{k=1}^{p}\phi _{k}e^{-2i\pi fk\Delta t}\right|^{2}}}\qquad |f|<f_{N},}$

with

${\displaystyle \Delta t}$

the sampling time interval and

${\displaystyle f_{N}}$

the Nyquist frequency.

There are a number of approaches to estimating the parameters

${\displaystyle \phi _{1},\ldots ,\phi _{p},\sigma _{p}^{2}}$

of the

${\displaystyle {\text{AR}}(p)}$

process and thus the spectral density:[8]

: 452-453

Alternative parametric methods include fitting to a moving-average model (MA) and to a full autoregressive moving-average model (ARMA).
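One standard route to the AR(p) parameters is the Yule–Walker method, which solves a Toeplitz system of sample autocovariances; the estimates can then be plugged into the SDF formula above. The sketch below simulates an AR(2) process with assumed coefficients (0.75, −0.5) and Δt = 1:

```python
import numpy as np
from scipy.linalg import solve_toeplitz

rng = np.random.default_rng(0)
# Simulate an AR(2) process: Y_t = 0.75 Y_{t-1} - 0.5 Y_{t-2} + eps_t.
phi_true = np.array([0.75, -0.5])
n = 50_000
eps = rng.normal(size=n)
y = np.zeros(n)
for t in range(2, n):
    y[t] = phi_true[0] * y[t - 1] + phi_true[1] * y[t - 2] + eps[t]

# Yule-Walker: solve the Toeplitz system R phi = r of sample autocovariances.
p = 2
r = np.array([y[: n - k] @ y[k:] / n for k in range(p + 1)])
phi_hat = solve_toeplitz(r[:p], r[1 : p + 1])
sigma2_hat = r[0] - phi_hat @ r[1 : p + 1]   # innovation variance estimate

# Plug the estimates into the AR(p) spectral density (Delta t = 1).
f = np.linspace(0, 0.5, 256)                 # up to the Nyquist frequency
denom = np.abs(1 - phi_hat[0] * np.exp(-2j * np.pi * f)
                 - phi_hat[1] * np.exp(-4j * np.pi * f)) ** 2
S = sigma2_hat / denom
print(np.round(phi_hat, 2))  # close to [0.75, -0.5]
```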

Frequency estimation

Frequency estimation
is the process of estimating the frequency, amplitude, and phase-shift of a signal in the presence of noise given assumptions about the number of the components.[10]
This contrasts with the general methods above, which do not make prior assumptions about the components.

Single tone

If one only wants to estimate the single loudest frequency, one can use a pitch detection algorithm. If the dominant frequency changes over time, then the problem becomes the estimation of the instantaneous frequency as defined in the time–frequency representation. Methods for instantaneous frequency estimation include those based on the Wigner–Ville distribution and higher order ambiguity functions.[11]
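A minimal single-tone estimator simply windows the signal and picks the largest periodogram peak (the 137 Hz tone and noise level below are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1000.0
t = np.arange(4096) / fs
# One dominant tone at 137 Hz plus white noise.
x = np.sin(2 * np.pi * 137.0 * t + 0.3) + rng.normal(scale=0.5, size=t.size)

# Window to reduce leakage, then locate the largest spectral peak.
X = np.fft.rfft(x * np.hanning(len(x)))
freqs = np.fft.rfftfreq(len(x), d=1 / fs)
f_hat = freqs[np.argmax(np.abs(X))]
print(f_hat)  # within one FFT bin (~0.24 Hz) of 137 Hz
```

The accuracy of this simple estimator is limited by the FFT bin spacing fs/N; interpolating around the peak or using longer records refines it.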

If one wants to know
all
the (possibly complex) frequency components of a received signal (including transmitted signal and noise), one uses a multiple-tone approach.

Multiple tones

A typical model for a signal

${\displaystyle x(n)}$

consists of a sum of

${\displaystyle p}$

complex exponentials in the presence of white noise,

${\displaystyle w(n)}$

${\displaystyle x(n)=\sum _{i=1}^{p}A_{i}e^{jn\omega _{i}}+w(n)}$

.

The power spectral density of

${\displaystyle x(n)}$

is composed of

${\displaystyle p}$

impulse functions in addition to the spectral density function due to noise.

The most common methods for frequency estimation involve identifying the noise subspace to extract these components. These methods are based on eigen decomposition of the autocorrelation matrix into a signal subspace and a noise subspace. After these subspaces are identified, a frequency estimation function is used to find the component frequencies from the noise subspace. The most popular methods of noise subspace based frequency estimation are Pisarenko's method, the multiple signal classification (MUSIC) method, the eigenvector method, and the minimum norm method.

Pisarenko’s method

${\displaystyle {\hat {P}}_{\text{PHD}}\left(e^{j\omega }\right)={\frac {1}{\left|\mathbf {e} ^{H}\mathbf {v} _{\text{min}}\right|^{2}}}}$

MUSIC

${\displaystyle {\hat {P}}_{\text{MU}}\left(e^{j\omega }\right)={\frac {1}{\sum _{i=p+1}^{M}\left|\mathbf {e} ^{H}\mathbf {v} _{i}\right|^{2}}}}$

,
Eigenvector method

${\displaystyle {\hat {P}}_{\text{EV}}\left(e^{j\omega }\right)={\frac {1}{\sum _{i=p+1}^{M}{\frac {1}{\lambda _{i}}}\left|\mathbf {e} ^{H}\mathbf {v} _{i}\right|^{2}}}}$

Minimum norm method

${\displaystyle {\hat {P}}_{\text{MN}}\left(e^{j\omega }\right)={\frac {1}{\left|\mathbf {e} ^{H}\mathbf {a} \right|^{2}}};\ \mathbf {a} =\lambda \mathbf {P} _{n}\mathbf {u} _{1}}$
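As a concrete illustration, the MUSIC pseudospectrum above can be evaluated directly with NumPy. Everything in this sketch (two complex exponentials at angular frequencies 0.9 and 1.6 rad/sample, an 8 × 8 autocorrelation matrix estimated from sliding snapshots) is an assumed toy setup:

```python
import numpy as np

rng = np.random.default_rng(0)
p, M, N = 2, 8, 1024                  # 2 tones, M x M correlation matrix, N samples
w_true = [0.9, 1.6]                   # assumed angular frequencies (rad/sample)
n = np.arange(N)
x = sum(np.exp(1j * (w * n + rng.uniform(0, 2 * np.pi))) for w in w_true)
x = x + 0.1 * (rng.normal(size=N) + 1j * rng.normal(size=N))

# Sample autocorrelation matrix R from length-M snapshots.
snaps = np.lib.stride_tricks.sliding_window_view(x, M)
R = snaps.T @ snaps.conj() / snaps.shape[0]

# Eigendecompose: the M - p smallest eigenvectors span the noise subspace.
vals, vecs = np.linalg.eigh(R)        # eigenvalues in ascending order
V_noise = vecs[:, : M - p]

# MUSIC pseudospectrum: 1 / sum_i |e(w)^H v_i|^2 over the noise eigenvectors.
w_grid = np.linspace(0, np.pi, 2000)
E = np.exp(1j * np.outer(np.arange(M), w_grid))    # steering vectors e(w)
P_mu = 1.0 / np.sum(np.abs(V_noise.conj().T @ E) ** 2, axis=0)

# The two sharpest peaks land near the true frequencies.
mask = (P_mu[1:-1] > P_mu[:-2]) & (P_mu[1:-1] > P_mu[2:])
peaks = w_grid[1:-1][mask]
top2 = np.sort(peaks[np.argsort(P_mu[1:-1][mask])[-2:]])
print(np.round(top2, 2))  # close to [0.9, 1.6]
```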

Example calculation

Suppose

${\displaystyle x_{n}}$

, from

${\displaystyle n=0}$

to

${\displaystyle N-1}$

is a time series (discrete time) with zero mean. Suppose that it is a sum of a finite number of periodic components (all frequencies are positive):

{\displaystyle {\begin{aligned}x_{n}&=\sum _{k}A_{k}\sin(2\pi \nu _{k}n+\phi _{k})\\&=\sum _{k}A_{k}\left(\sin(\phi _{k})\cos(2\pi \nu _{k}n)+\cos(\phi _{k})\sin(2\pi \nu _{k}n)\right)\\&=\sum _{k}\left(\overbrace {a_{k}} ^{A_{k}\sin(\phi _{k})}\cos(2\pi \nu _{k}n)+\overbrace {b_{k}} ^{A_{k}\cos(\phi _{k})}\sin(2\pi \nu _{k}n)\right)\end{aligned}}}

The variance of

${\displaystyle x_{n}}$

is, for a zero-mean function as above, given by

${\displaystyle {\frac {1}{N}}\sum _{n=0}^{N-1}x_{n}^{2}.}$

If these data were samples taken from an electrical signal, this would be its average power (power is energy per unit time, so it is analogous to variance if energy is analogous to the amplitude squared).

Now, for simplicity, suppose the signal extends infinitely in time, so we pass to the limit as

${\displaystyle N\to \infty .}$

If the average power is bounded, which is almost always the case in reality, then the following limit exists and is the variance of the data.

${\displaystyle \lim _{N\to \infty }{\frac {1}{N}}\sum _{n=0}^{N-1}x_{n}^{2}.}$

Again, for simplicity, we will pass to continuous time, and assume that the signal extends infinitely in time in both directions. Then these two formulas become

${\displaystyle x(t)=\sum _{k}A_{k}\sin(2\pi \nu _{k}t+\phi _{k})}$

and

${\displaystyle \lim _{T\to \infty }{\frac {1}{2T}}\int _{-T}^{T}x(t)^{2}dt.}$

The root mean square of

${\displaystyle \sin }$

is

${\displaystyle 1/{\sqrt {2}}}$

, so the variance of

${\displaystyle A_{k}\sin(2\pi \nu _{k}t+\phi _{k})}$

is

${\displaystyle {\tfrac {1}{2}}A_{k}^{2}.}$

Hence, the contribution to the average power of

${\displaystyle x(t)}$

coming from the component with frequency

${\displaystyle \nu _{k}}$

is

${\displaystyle {\tfrac {1}{2}}A_{k}^{2}.}$

All these contributions add up to the average power of

${\displaystyle x(t).}$

Then the power as a function of frequency is

${\displaystyle {\tfrac {1}{2}}A_{k}^{2},}$

and its statistical cumulative distribution function

${\displaystyle S(\nu )}$

will be

${\displaystyle S(\nu )=\sum _{k:\nu _{k}<\nu }{\frac {1}{2}}A_{k}^{2}.}$

${\displaystyle S}$

is a step function, monotonically non-decreasing. Its jumps occur at the frequencies of the periodic components of

${\displaystyle x}$

, and the value of each jump is the power or variance of that component.
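These per-component contributions can be checked numerically. In the sketch below the amplitudes and frequencies are assumed, with each frequency completing a whole number of periods over the sampled window so that cross terms average out:

```python
import numpy as np

# Three sinusoidal components; each frequency completes an integer
# number of periods over the 100 s interval.
t = np.linspace(0, 100, 1_000_000, endpoint=False)
A = [1.0, 2.0, 0.5]
nu = [0.7, 1.3, 2.9]   # Hz
x = sum(a * np.sin(2 * np.pi * f * t + 0.4) for a, f in zip(A, nu))

avg_power = np.mean(x ** 2)                 # variance of the zero-mean signal
per_component = sum(a ** 2 / 2 for a in A)  # sum of A_k^2 / 2
print(avg_power, per_component)             # both approximately 2.625
```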

The variance is the covariance of the data with itself. If we now consider the same data but with a lag of

${\displaystyle \tau }$

, we can take the covariance of

${\displaystyle x(t)}$

with

${\displaystyle x(t+\tau )}$

, and define this to be the autocorrelation function

${\displaystyle c}$

of the signal (or data)

${\displaystyle x}$

:

${\displaystyle c(\tau )=\lim _{T\to \infty }{\frac {1}{2T}}\int _{-T}^{T}x(t)x(t+\tau )dt.}$

If it exists, it is an even function of

${\displaystyle \tau .}$

If the average power is bounded, then

${\displaystyle c}$

exists everywhere, is finite, and is bounded by

${\displaystyle c(0),}$

which is the average power or variance of the data.

It can be shown that

${\displaystyle c}$

can be decomposed into periodic components with the same periods as

${\displaystyle x}$

:

${\displaystyle c(\tau )=\sum _{k}{\frac {1}{2}}A_{k}^{2}\cos(2\pi \nu _{k}\tau ).}$

This is in fact the spectral decomposition of

${\displaystyle c}$

over the different frequencies, and is related to the distribution of power of

${\displaystyle x}$

over the frequencies: the amplitude of a frequency component of

${\displaystyle c}$

is its contribution to the average power of the signal.
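The decomposition of the autocorrelation function can likewise be verified on a sampled signal. The sketch below uses an assumed two-component signal and a circular shift (valid here because the components complete whole periods over the window) in place of the infinite-time average:

```python
import numpy as np

# Two components whose periods divide the 200 s window exactly.
T = 200.0
N = 1_000_000
t = np.linspace(0, T, N, endpoint=False)
A, nu = [1.0, 0.5], [0.25, 1.5]   # amplitudes and frequencies (Hz)
x = sum(a * np.sin(2 * np.pi * f * t + 1.0) for a, f in zip(A, nu))

lag = 2000                        # samples; tau = lag * T / N = 0.4 s
tau = lag * (T / N)
c_num = np.mean(x * np.roll(x, -lag))   # circular autocorrelation at lag tau
c_theory = sum(0.5 * a ** 2 * np.cos(2 * np.pi * f * tau) for a, f in zip(A, nu))
print(np.isclose(c_num, c_theory))      # True
```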

The power spectrum of this example is not continuous, and therefore does not have a derivative, and therefore this signal does not have a power spectral density function. In general, the power spectrum will usually be the sum of two parts: a line spectrum such as in this example, which is not continuous and does not have a density function, and a residue, which is absolutely continuous and does have a density function.

See also

• Multidimensional spectral estimation
• Periodogram
• SigSpec
• Spectrogram
• Time–frequency analysis
• Time–frequency representation
• Whittle likelihood
• Spectral power distribution

References

1. ^

P Stoica and R Moses, Spectral Analysis of Signals, Prentice Hall, 2005.

2. ^

Welch, P. D. (1967), “The use of Fast Fourier Transform for the estimation of power spectra: A method based on time averaging over short, modified periodograms”,
IEEE Transactions on Audio and Electroacoustics, AU-15 (2): 70–73, Bibcode:1967ITAE...15...70W, doi:10.1109/TAU.1967.1161901

3. ^

a

b

Stoica, Petre; Babu, Prabhu; Li, Jian (January 2011). “New Method of Sparse Parameter Estimation in Separable Models and Its Use for Spectral Analysis of Irregularly Sampled Data”.
IEEE Transactions on Signal Processing.
59
(1): 35–47. Bibcode:2011ITSP...59...35S. doi:10.1109/TSP.2010.2086452. ISSN 1053-587X. S2CID 15936187.

4. ^

Stoica, Petre; Li, Jian; Ling, Jun; Cheng, Yubo (April 2009). “Missing data recovery via a nonparametric iterative adaptive approach”.
2009 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE: 3369–3372. doi:10.1109/icassp.2009.4960347. ISBN 978-1-4244-2353-8.

5. ^

Sward, Johan; Adalbjornsson, Stefan Ingi; Jakobsson, Andreas (March 2017). “A generalization of the sparse iterative covariance-based estimator”.
2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE: 3954–3958. doi:10.1109/icassp.2017.7952898. ISBN 978-1-5090-4117-6. S2CID 5640068.

6. ^

Yardibi, Tarik; Li, Jian; Stoica, Petre; Xue, Ming; Baggeroer, Arthur B. (Jan 2010). “Source Localization and Sensing: A Nonparametric Iterative Adaptive Approach Based on Weighted Least Squares”.
IEEE Transactions on Aerospace and Electronic Systems.
46
(1): 425–443. Bibcode:2010ITAES..46..425Y. doi:10.1109/TAES.2010.5417172. hdl:1721.1/59588. ISSN 0018-9251. S2CID 18834345.

7. ^

Panahi, Ashkan; Viberg, Mats (February 2011). “On the resolution of The LASSO-based DOA estimation method”.
2011 International ITG Workshop on Smart Antennas. IEEE: 1–5. doi:10.1109/wsa.2011.5741938. ISBN 978-1-61284-075-8. S2CID 7013162.

8. ^

a

b

c

d

Percival, Donald B.; Walden, Andrew T. (1992).
Spectral Analysis for Physical Applications. Cambridge University Press. ISBN 9780521435413.

9. ^

Burg, J.P. (1967) “Maximum Entropy Spectral Analysis”,
Proceedings of the 37th Meeting of the Society of Exploration Geophysicists, Oklahoma City, Oklahoma.

10. ^

Hayes, Monson H.,
Statistical Digital Signal Processing and Modeling, John Wiley & Sons, Inc., 1996. ISBN 0-471-59431-8.

11. ^

Lerga, Jonatan. “Overview of Signal Instantaneous Frequency Estimation Methods”
(PDF). University of Rijeka. Retrieved
22 March
2014
.

• Porat, B. (1994).
Digital Processing of Random Signals: Theory & Methods. Prentice Hall. ISBN 978-0-13-063751-2.

• Priestley, Thousand.B. (1991).
Spectral Analysis and Time Series. Academic Press. ISBN 978-0-12-564922-3.

• Stoica, P.; Moses, R. (2005).
Spectral Analysis of Signals. Prentice Hall. ISBN 978-0-13-113956-5.

• Thomson, D. J. (1982). “Spectrum estimation and harmonic analysis”.
Proceedings of the IEEE.
70
(9): 1055–1096. Bibcode:1982IEEEP..70.1055T. CiteSeerX 10.1.1.471.1278. doi:10.1109/PROC.1982.12433. S2CID 290772.

Source: https://en.wikipedia.org/wiki/Spectral_density_estimation
