Cos Phi Per 4
In statistical signal processing, the goal of
spectral density estimation
(SDE) or simply
spectral estimation
is to estimate the spectral density (also known as the power spectral density) of a signal from a sequence of time samples of the signal.^{[1]}
Intuitively speaking, the spectral density characterizes the frequency content of the signal. One purpose of estimating the spectral density is to detect any periodicities in the data, by observing peaks at the frequencies corresponding to these periodicities.
Some SDE techniques assume that a signal is composed of a limited (usually small) number of generating frequencies plus noise and seek to find the location and intensity of the generated frequencies. Others make no assumption on the number of components and seek to estimate the whole generating spectrum.
Overview
Spectrum analysis, also referred to as frequency domain analysis or spectral density estimation, is the technical process of decomposing a complex signal into simpler parts. As described above, many physical processes are best described as a sum of many individual frequency components. Any process that quantifies the various amounts (e.g. amplitudes, powers, intensities) versus frequency (or phase) can be called
spectrum analysis.
Spectrum analysis can be performed on the entire signal. Alternatively, a signal can be broken into short segments (sometimes called
frames), and spectrum analysis may be applied to these individual segments. Periodic functions (such as
$\sin(t)$
) are particularly well-suited for this subdivision. General mathematical techniques for analyzing non-periodic functions fall into the category of Fourier analysis.
The Fourier transform of a function produces a frequency spectrum which contains all of the information about the original signal, but in a different form. This means that the original function can be completely reconstructed (synthesized) by an inverse Fourier transform. For perfect reconstruction, the spectrum analyzer must preserve both the amplitude and phase of each frequency component. These two pieces of information can be represented as a two-dimensional vector, as a complex number, or as magnitude (amplitude) and phase in polar coordinates (i.e., as a phasor). A common technique in signal processing is to consider the squared amplitude, or power; in this case the resulting plot is referred to as a power spectrum.
Because of reversibility, the Fourier transform is called a
representation
of the function, in terms of frequency instead of time; thus, it is a frequency domain representation. Linear operations that could be performed in the time domain have counterparts that can often be performed more easily in the frequency domain. Frequency analysis also simplifies the understanding and interpretation of the effects of various time-domain operations, both linear and nonlinear. For example, only nonlinear or time-variant operations can create new frequencies in the frequency spectrum.
In practice, nearly all software and electronic devices that generate frequency spectra utilize a discrete Fourier transform (DFT), which operates on samples of the signal, and which provides a mathematical approximation to the full integral solution. The DFT is almost invariably implemented by an efficient algorithm called the
fast Fourier transform
(FFT). The array of squared-magnitude components of a DFT is a type of power spectrum called the periodogram, which is widely used for examining the frequency characteristics of noise-free functions such as filter impulse responses and window functions. But the periodogram does not provide processing gain when applied to noise-like signals or even sinusoids at low signal-to-noise ratios. In other words, the variance of its spectral estimate at a given frequency does not decrease as the number of samples used in the computation increases. This can be mitigated by averaging over time (Welch's method^{[2]}) or over frequency (smoothing). Welch's method is widely used for spectral density estimation (SDE). However, periodogram-based techniques introduce small biases that are unacceptable in some applications. So other alternatives are presented in the next section.
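The contrast between the raw periodogram and time averaging can be sketched in plain NumPy (an illustrative sketch, not a reference implementation; the 50 Hz test tone, sampling rate, and segment length are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1000.0                        # sampling rate in Hz (illustrative)
t = np.arange(8192) / fs
# a 50 Hz sinusoid buried in white noise
x = np.sin(2 * np.pi * 50.0 * t) + rng.normal(scale=2.0, size=t.size)

def periodogram(x, fs):
    """Basic periodogram: squared magnitude of the DFT, scaled to a one-sided density."""
    X = np.fft.rfft(x)
    psd = np.abs(X) ** 2 / (fs * len(x))
    psd[1:-1] *= 2                 # fold negative frequencies into positive ones
    return np.fft.rfftfreq(len(x), 1 / fs), psd

def welch(x, fs, nperseg=1024):
    """Welch-style estimate: average the periodograms of 50%-overlapping,
    Hann-windowed segments, trading frequency resolution for reduced variance."""
    win = np.hanning(nperseg)
    step = nperseg // 2
    segs = [x[i:i + nperseg] * win for i in range(0, len(x) - nperseg + 1, step)]
    psd = np.mean([np.abs(np.fft.rfft(s)) ** 2 for s in segs], axis=0)
    psd /= fs * (win ** 2).sum()
    psd[1:-1] *= 2
    return np.fft.rfftfreq(nperseg, 1 / fs), psd

f, p = welch(x, fs)
print(f"peak near {f[np.argmax(p)]:.1f} Hz")   # the 50 Hz line stands out
```

Both estimators locate the spectral line near 50 Hz; averaging over the fifteen segments lowers the variance of the noise floor, at the cost of coarser frequency resolution, which is exactly the trade-off described above.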
Techniques
Many other techniques for spectral estimation have been developed to mitigate the disadvantages of the basic periodogram. These techniques can generally be divided into
non-parametric,
parametric,
and more recently semi-parametric (also called sparse) methods.^{[3]}
The non-parametric approaches explicitly estimate the covariance or the spectrum of the process without assuming that the process has any particular structure. Some of the most common estimators in use for basic applications (e.g. Welch's method) are non-parametric estimators closely related to the periodogram. By contrast, the parametric approaches assume that the underlying stationary stochastic process has a certain structure that can be described using a small number of parameters (for example, using an autoregressive or moving average model). In these approaches, the task is to estimate the parameters of the model that describes the stochastic process. When using the semi-parametric methods, the underlying process is modeled using a non-parametric framework, with the additional assumption that the number of non-zero components of the model is small (i.e., the model is sparse). Similar approaches may also be used for missing data recovery^{[4]}
as well as signal reconstruction.
Following is a partial list of non-parametric spectral density estimation techniques:
- Periodogram, the modulus squared of the discrete Fourier transform
- Lomb–Scargle periodogram, for which the data need not be equally spaced
- Bartlett's method, the average of the periodograms taken of multiple segments of the signal to reduce variance of the spectral density estimate
- Welch's method, a windowed version of Bartlett's method that uses overlapping segments
- Multitaper, a periodogram-based method that uses multiple tapers, or windows, to form independent estimates of the spectral density to reduce variance of the spectral density estimate
- Least-squares spectral analysis, based on least squares fitting to known frequencies
- Non-uniform discrete Fourier transform, used when the signal samples are unevenly spaced in time
- Singular spectrum analysis, a nonparametric method that uses a singular value decomposition of the covariance matrix to estimate the spectral density
- Short-time Fourier transform
- Critical filter, a nonparametric method based on information field theory that can deal with noise, incomplete data, and instrumental response functions
Below is a partial list of parametric techniques:
- Autoregressive model (AR) estimation, which assumes that the nth sample is correlated with the previous p samples.
- Moving-average model (MA) estimation, which assumes that the nth sample is correlated with noise terms in the previous p samples.
- Autoregressive moving average (ARMA) estimation, which generalizes the AR and MA models.
- MUltiple SIgnal Classification (MUSIC), a popular superresolution method.
- Maximum entropy spectral estimation, an all-poles method useful for SDE when singular spectral features, such as sharp peaks, are expected.
And finally some examples of semi-parametric techniques:
- SParse Iterative Covariance-based Estimation (SPICE),^{[3]} and the more generalized $(r,q)$ SPICE.^{[5]}
- Iterative Adaptive Approach (IAA) estimation.^{[6]}
- Lasso, similar to least-squares spectral analysis but with a sparsity enforcing penalty.^{[7]}
Parametric estimation
In parametric spectral estimation, one assumes that the signal is modeled by a stationary process which has a spectral density function (SDF)
$S(f;a_{1},\ldots ,a_{p})$
that is a function of the frequency
$f$
and
$p$
parameters
$a_{1},\ldots ,a_{p}$
.^{[8]}
The estimation problem then becomes one of estimating these parameters.
The most common form of parametric SDF estimate uses as a model an autoregressive model
${\text{AR}}(p)$
of order
$p$
.^{[8]}^{: 392 }
A signal sequence
$\{Y_{t}\}$
obeying a zero mean
${\text{AR}}(p)$
process satisfies the equation

$Y_{t}=\phi _{1}Y_{t-1}+\phi _{2}Y_{t-2}+\cdots +\phi _{p}Y_{t-p}+\epsilon _{t},$
where the
$\phi _{1},\ldots ,\phi _{p}$
are fixed coefficients and
$\epsilon _{t}$
is a white noise process with zero mean and
innovation variance
$\sigma _{p}^{2}$
. The SDF for this process is

$S(f;\phi _{1},\ldots ,\phi _{p},\sigma _{p}^{2})={\frac {\sigma _{p}^{2}\Delta t}{\left|1-\sum _{k=1}^{p}\phi _{k}e^{-2i\pi fk\Delta t}\right|^{2}}}\qquad |f|<f_{N},$
with
$\Delta t$
the sampling time interval and
$f_{N}$
the Nyquist frequency.
There are a number of approaches to estimating the parameters
$\phi _{1},\ldots ,\phi _{p},\sigma _{p}^{2}$
of the
${\text{AR}}(p)$
process and thus the spectral density.^{[8]}^{: 452–453 }
Alternative parametric methods include fitting to a moving average model (MA) and to a full autoregressive moving average model (ARMA).
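One such approach, the Yule–Walker equations, can be sketched in plain NumPy (the AR(2) coefficients below are hypothetical, chosen only to give a stationary test process):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a zero-mean AR(2) process: Y_t = 0.5 Y_{t-1} - 0.7 Y_{t-2} + eps_t
phi_true, n = np.array([0.5, -0.7]), 20000
y, eps = np.zeros(n), rng.normal(size=n)
for t in range(2, n):
    y[t] = phi_true[0] * y[t - 1] + phi_true[1] * y[t - 2] + eps[t]

def toeplitz(c):
    # symmetric Toeplitz matrix built from its first column
    idx = np.abs(np.subtract.outer(np.arange(len(c)), np.arange(len(c))))
    return c[idx]

def yule_walker(y, p):
    """Estimate the AR(p) coefficients and innovation variance from the
    sample autocovariances via the Yule-Walker equations."""
    y = y - y.mean()
    r = np.array([y[: len(y) - k] @ y[k:] for k in range(p + 1)]) / len(y)
    phi = np.linalg.solve(toeplitz(r[:p]), r[1 : p + 1])
    sigma2 = r[0] - phi @ r[1 : p + 1]
    return phi, sigma2

def ar_sdf(f, phi, sigma2, dt=1.0):
    """Evaluate S(f) = sigma2*dt / |1 - sum_k phi_k exp(-2i pi f k dt)|^2."""
    k = np.arange(1, len(phi) + 1)
    resp = 1 - np.exp(-2j * np.pi * np.outer(f, k) * dt) @ phi
    return sigma2 * dt / np.abs(resp) ** 2

phi_hat, s2_hat = yule_walker(y, 2)
print(phi_hat, s2_hat)   # close to [0.5, -0.7] and 1.0
```

Plugging the fitted parameters into `ar_sdf` gives the parametric SDF estimate; by construction, the Yule–Walker fit makes the integral of this SDF over the Nyquist band match the sample variance of the data.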
Frequency estimation
Frequency estimation
is the process of estimating the frequency, amplitude, and phase-shift of a signal in the presence of noise given assumptions about the number of components.^{[10]}
This contrasts with the general methods above, which do not make prior assumptions about the components.
Single tone
If one only wants to estimate the single loudest frequency, one can use a pitch detection algorithm. If the dominant frequency changes over time, then the problem becomes the estimation of the instantaneous frequency as defined in the time–frequency representation. Methods for instantaneous frequency estimation include those based on the Wigner–Ville distribution and higher order ambiguity functions.^{[11]}
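A minimal sketch of one such pitch detection idea, using the autocorrelation of the signal (the 220 Hz tone and lag cutoff are illustrative choices, and real pitch detectors handle harmonics and noise far more carefully):

```python
import numpy as np

fs = 8000.0
t = np.arange(2048) / fs
x = np.sin(2 * np.pi * 220.0 * t)      # a pure 220 Hz tone (illustrative)

# Autocorrelation-based pitch detection: the lag of the first major
# autocorrelation peak estimates the period of the dominant tone.
ac = np.correlate(x, x, mode="full")[x.size - 1:]
lag_min = int(fs / 1000.0)             # ignore lags shorter than 1 ms
lag = lag_min + np.argmax(ac[lag_min:])
print(f"pitch near {fs / lag:.1f} Hz")
```

Because the true period (about 36.4 samples) is not an integer number of samples, the estimate is quantized to the nearest lag; interpolating around the peak would refine it.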
If one wants to know
all
the (possibly complex) frequency components of a received signal (including transmitted signal and noise), one uses a multiple-tone approach.
Multiple tones
A typical model for a signal
$x(n)$
consists of a sum of
$p$
complex exponentials in the presence of white noise,
$w(n)$

$x(n)=\sum _{i=1}^{p}A_{i}e^{jn\omega _{i}}+w(n)$
.
The power spectral density of
$x(n)$
is composed of
$p$
impulse functions in addition to the spectral density function due to noise.
The most common methods for frequency estimation involve identifying the noise subspace to extract these components. These methods are based on eigen decomposition of the autocorrelation matrix into a signal subspace and a noise subspace. After these subspaces are identified, a frequency estimation function is used to find the component frequencies from the noise subspace. The most popular methods of noise subspace based frequency estimation are Pisarenko's method, the multiple signal classification (MUSIC) method, the eigenvector method, and the minimum norm method.
- Pisarenko's method

${\hat {P}}_{\text{PHD}}\left(e^{j\omega }\right)={\frac {1}{\left|\mathbf {e} ^{H}\mathbf {v} _{\text{min}}\right|^{2}}}$
- MUSIC

${\hat {P}}_{\text{MU}}\left(e^{j\omega }\right)={\frac {1}{\sum _{i=p+1}^{M}\left|\mathbf {e} ^{H}\mathbf {v} _{i}\right|^{2}}}$
- Eigenvector method

${\hat {P}}_{\text{EV}}\left(e^{j\omega }\right)={\frac {1}{\sum _{i=p+1}^{M}{\frac {1}{\lambda _{i}}}\left|\mathbf {e} ^{H}\mathbf {v} _{i}\right|^{2}}}$
- Minimum norm method

${\hat {P}}_{\text{MN}}\left(e^{j\omega }\right)={\frac {1}{\left|\mathbf {e} ^{H}\mathbf {a} \right|^{2}}};\ \mathbf {a} =\lambda \mathbf {P} _{n}\mathbf {u} _{1}$
Example calculation
Suppose
$x_{n}$
, from
$n=0$
to
$N-1$
is a time series (discrete time) with zero mean. Suppose that it is a sum of a finite number of periodic components (all frequencies are positive):

${\begin{aligned}x_{n}&=\sum _{k}A_{k}\sin(2\pi \nu _{k}n+\phi _{k})\\&=\sum _{k}A_{k}\left(\sin(\phi _{k})\cos(2\pi \nu _{k}n)+\cos(\phi _{k})\sin(2\pi \nu _{k}n)\right)\\&=\sum _{k}\left(\overbrace {a_{k}} ^{A_{k}\sin(\phi _{k})}\cos(2\pi \nu _{k}n)+\overbrace {b_{k}} ^{A_{k}\cos(\phi _{k})}\sin(2\pi \nu _{k}n)\right)\end{aligned}}$
The variance of
$x_{n}$
is, for a zero-mean function as above, given by

${\frac {1}{N}}\sum _{n=0}^{N-1}x_{n}^{2}.$
If these data were samples taken from an electrical signal, this would be its average power (power is energy per unit time, so it is analogous to variance if energy is analogous to the amplitude squared).
Now, for simplicity, suppose the signal extends infinitely in time, so we pass to the limit as
$N\to \infty .$
If the average power is bounded, which is almost always the case in reality, then the following limit exists and is the variance of the data.

$\lim _{N\to \infty }{\frac {1}{N}}\sum _{n=0}^{N-1}x_{n}^{2}.$
Again, for simplicity, we will pass to continuous time, and assume that the signal extends infinitely in time in both directions. Then these two formulas become

$x(t)=\sum _{k}A_{k}\sin(2\pi \nu _{k}t+\phi _{k})$
and

$\lim _{T\to \infty }{\frac {1}{2T}}\int _{-T}^{T}x(t)^{2}dt.$
The root mean square of
$\sin$
is
$1/{\sqrt {2}}$
, so the variance of
$A_{k}\sin(2\pi \nu _{k}t+\phi _{k})$
is
${\tfrac {1}{2}}A_{k}^{2}.$
Hence, the contribution to the average power of
$x(t)$
coming from the component with frequency
$\nu _{k}$
is
${\tfrac {1}{2}}A_{k}^{2}.$
All these contributions add up to the average power of
$x(t).$
Then the power as a function of frequency is
${\tfrac {1}{2}}A_{k}^{2},$
and its statistical cumulative distribution function
$S(\nu )$
will be

$S(\nu )=\sum _{k:\nu _{k}<\nu }{\frac {1}{2}}A_{k}^{2}.$
$S$
is a step function, monotonically non-decreasing. Its jumps occur at the frequencies of the periodic components of
$x$
, and the value of each jump is the power or variance of that component.
The variance is the covariance of the data with itself. If we now consider the same data but with a lag of
$\tau$
, we can take the covariance of
$x(t)$
with
$x(t+\tau )$
, and define this to be the autocorrelation function
$c$
of the signal (or data)
$x$
:

$c(\tau )=\lim _{T\to \infty }{\frac {1}{2T}}\int _{-T}^{T}x(t)x(t+\tau )dt.$
If it exists, it is an even function of
$\tau .$
If the average power is bounded, then
$c$
exists everywhere, is finite, and is bounded by
$c(0),$
which is the average power or variance of the data.
It can be shown that
$c$
can be decomposed into periodic components with the same periods as
$x$
:

$c(\tau )=\sum _{k}{\frac {1}{2}}A_{k}^{2}\cos(2\pi \nu _{k}\tau ).$
This is in fact the spectral decomposition of
$c$
over the different frequencies, and is related to the distribution of power of
$x$
over the frequencies: the amplitude of a frequency component of
$c$
is its contribution to the average power of the signal.
The power spectrum of this example is not continuous, and therefore does not have a derivative, and therefore this signal does not have a power spectral density function. In general, the power spectrum will usually be the sum of two parts: a line spectrum such as in this example, which is not continuous and does not have a density function, and a residue, which is absolutely continuous and does have a density function.
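The bookkeeping in this example is easy to verify numerically. The sketch below (arbitrary amplitudes, frequencies, and phases; a long finite sum stands in for the limit) checks that the average power is the sum of the terms $\tfrac{1}{2}A_{k}^{2}$ and that a lagged covariance matches the cosine decomposition of $c(\tau)$:

```python
import numpy as np

# two periodic components with arbitrary amplitudes, frequencies, phases
A = np.array([3.0, 1.5])
nu = np.array([0.081, 0.227])      # cycles per sample
phi = np.array([0.4, 2.1])
N = 500_000
n = np.arange(N)
x = sum(a * np.sin(2 * np.pi * v * n + ph) for a, v, ph in zip(A, nu, phi))

# average power (1/N) sum x_n^2 versus sum of (1/2) A_k^2
power = (x ** 2).mean()
print(power, (A ** 2 / 2).sum())   # both close to 5.625

# lagged covariance c(tau) versus sum_k (1/2) A_k^2 cos(2 pi nu_k tau)
tau = 7
c_tau = (x[:-tau] * x[tau:]).mean()
print(c_tau, (0.5 * A ** 2 * np.cos(2 * np.pi * nu * tau)).sum())
```

The small residual discrepancy comes from truncating the average at a finite N rather than taking the limit; it shrinks as N grows.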
Meet also
- Multidimensional spectral estimation
- Periodogram
- SigSpec
- Spectrogram
- Time–frequency analysis
- Time–frequency representation
- Whittle likelihood
- Spectral power distribution
References

^
P. Stoica and R. Moses, Spectral Analysis of Signals, Prentice Hall, 2005. 
^
Welch, P. D. (1967), "The use of Fast Fourier Transform for the estimation of power spectra: A method based on time averaging over short, modified periodograms",
IEEE Transactions on Audio and Electroacoustics, AU-15 (2): 70–73, Bibcode:1967ITAE...15...70W, doi:10.1109/TAU.1967.1161901

^
^{ a }
^{ b }
Stoica, Petre; Babu, Prabhu; Li, Jian (January 2011). "New Method of Sparse Parameter Estimation in Separable Models and Its Use for Spectral Analysis of Irregularly Sampled Data".
IEEE Transactions on Signal Processing.
59
(1): 35–47. Bibcode:2011ITSP...59...35S. doi:10.1109/TSP.2010.2086452. ISSN 1053-587X. S2CID 15936187.

^
Stoica, Petre; Li, Jian; Ling, Jun; Cheng, Yubo (April 2009). "Missing data recovery via a nonparametric iterative adaptive approach".
2009 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE: 3369–3372. doi:10.1109/icassp.2009.4960347. ISBN 978-1-4244-2353-8.

^
Sward, Johan; Adalbjornsson, Stefan Ingi; Jakobsson, Andreas (March 2017). "A generalization of the sparse iterative covariance-based estimator".
2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE: 3954–3958. doi:10.1109/icassp.2017.7952898. ISBN 978-1-5090-4117-6. S2CID 5640068.

^
Yardibi, Tarik; Li, Jian; Stoica, Petre; Xue, Ming; Baggeroer, Arthur B. (January 2010). "Source Localization and Sensing: A Nonparametric Iterative Adaptive Approach Based on Weighted Least Squares".
IEEE Transactions on Aerospace and Electronic Systems.
46
(1): 425–443. Bibcode:2010ITAES..46..425Y. doi:10.1109/TAES.2010.5417172. hdl:1721.1/59588. ISSN 0018-9251. S2CID 18834345.

^
Panahi, Ashkan; Viberg, Mats (February 2011). "On the resolution of the LASSO-based DOA estimation method".
2011 International ITG Workshop on Smart Antennas. IEEE: 1–5. doi:10.1109/wsa.2011.5741938. ISBN 978-1-61284-075-8. S2CID 7013162.

^
^{ a }
^{ b }
^{ c }
^{ d }
Percival, Donald B.; Walden, Andrew T. (1992).
Spectral Analysis for Physical Applications. Cambridge University Press. ISBN 978-0-521-43541-3.

^
Burg, J.P. (1967) "Maximum Entropy Spectral Analysis",
Proceedings of the 37th Meeting of the Society of Exploration Geophysicists, Oklahoma City, Oklahoma. 
^
Hayes, Monson H.,
Statistical Digital Signal Processing and Modeling, John Wiley & Sons, Inc., 1996. ISBN 0-471-59431-8. 
^
Lerga, Jonatan. "Overview of Signal Instantaneous Frequency Estimation Methods"
(PDF). University of Rijeka. Retrieved
22 March 2014.
Further reading

Porat, B. (1994).
Digital Processing of Random Signals: Theory & Methods. Prentice Hall. ISBN 978-0-13-063751-2.

Priestley, M.B. (1991).
Spectral Analysis and Time Series. Academic Press. ISBN 978-0-12-564922-3.

Stoica, P.; Moses, R. (2005).
Spectral Analysis of Signals. Prentice Hall. ISBN 978-0-13-113956-5.

Thomson, D. J. (1982). "Spectrum estimation and harmonic analysis".
Proceedings of the IEEE.
70
(9): 1055–1096. Bibcode:1982IEEEP..70.1055T. CiteSeerX 10.1.1.471.1278. doi:10.1109/PROC.1982.12433. S2CID 290772.
Source: https://en.wikipedia.org/wiki/Spectral_density_estimation