Basic signal processing

Basic signal processing:
  • linear time-invariant system
  • A system can be viewed as taking an input signal and generating an output signal.
  • The condition of linearity implies that scaling holds: R(cx) = cR(x). In other words, processing a scaled version of the input signal is equivalent to scaling the processing of the input signal.
  • The condition of linearity also implies that additivity holds: R(x) + R(y) = R(x + y). In other words, summing the processed signal x and the processed signal y is equivalent to processing the sum of the original signals x and y.
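The two linearity conditions above can be checked numerically. A minimal sketch, assuming a 3-point moving-average filter as the hypothetical LTI system (any fixed convolution kernel would do):

```python
import numpy as np

# Hypothetical LTI system: a 3-point moving-average filter
# (an arbitrary choice for illustration).
kernel = np.array([0.25, 0.5, 0.25])

def R(signal):
    """Apply the LTI system: convolve the input with the kernel."""
    return np.convolve(signal, kernel, mode="same")

rng = np.random.default_rng(0)
x = rng.standard_normal(100)
y = rng.standard_normal(100)
c = 3.7

# Scaling: R(c*x) == c*R(x)
print(np.allclose(R(c * x), c * R(x)))     # True
# Additivity: R(x) + R(y) == R(x + y)
print(np.allclose(R(x) + R(y), R(x + y)))  # True
```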
  • Time-invariant (or shift-invariant) means the system processes the signal in the same way regardless of time (or spatial) shifts: delaying the input simply delays the output by the same amount.
  • Linear time-invariant systems can be fully characterized as the result of convolution with some particular kernel.
  • convolution - refers to computing the dot product of the base signal with a flipped version of a kernel, and then repeating this process for all possible shifts.
  • A fundamental fact is that the convolution of x and y is identical to IFFT(FFT(x) .* FFT(y)), provided the signals are zero-padded to the full output length (the FFT implements circular convolution). In other words, convolution in the time domain (or space domain) is equivalent to multiplication in the Fourier domain.
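A quick numpy check of the convolution theorem. Note the zero-padding to the full output length, which turns the FFT's circular convolution into ordinary linear convolution:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(50)
y = rng.standard_normal(20)

# Linear convolution computed directly in the time domain.
direct = np.convolve(x, y)

# FFT-based: pad both signals to the full output length first,
# because the FFT by itself implements *circular* convolution.
n = len(x) + len(y) - 1
fft_based = np.fft.ifft(np.fft.fft(x, n) * np.fft.fft(y, n)).real

print(np.allclose(direct, fft_based))  # True
```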
  • kernel - the particular shape of a kernel corresponds to a specific shape in the Fourier domain. For example, the broader a Gaussian kernel in the time domain, the narrower the width in the Fourier domain.
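The time-frequency tradeoff for Gaussian kernels can be demonstrated by measuring the spread of the FFT magnitude. A sketch (the kernel widths 4 and 16 are arbitrary choices):

```python
import numpy as np

n = 1024
t = np.arange(n) - n // 2  # centered time axis

def spectral_width(sigma):
    """Magnitude-weighted std of the spectrum of a Gaussian kernel
    whose time-domain std is `sigma`."""
    g = np.exp(-t**2 / (2 * sigma**2))
    mag = np.abs(np.fft.fft(g))        # magnitude is shift-invariant
    f = np.fft.fftfreq(n) * n          # frequency index, -n/2 .. n/2-1
    return np.sqrt(np.sum(mag * f**2) / np.sum(mag))

# Broader in the time domain --> narrower in the Fourier domain.
print(spectral_width(16) < spectral_width(4))  # True
```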
  • sine and cosine functions
  • basic characteristics of a sinusoid: frequency, amplitude, phase
  • Any weighted sum of sin and cos at the same frequency (e.g. a*sin + b*cos) is just another sinusoid at that frequency.
  • A phase shift to, say, a sin function (e.g. functions described by the form sin(x+phase)) is all you need to generate all possible sinusoids (at a given frequency).
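The identity behind the two bullets above is a*sin(x) + b*cos(x) = R*sin(x + phase), with R = sqrt(a^2 + b^2) and phase = atan2(b, a). A numerical check (the values of a and b are arbitrary):

```python
import numpy as np

a, b = 2.0, -1.5
x = np.linspace(0, 4 * np.pi, 1000)

R = np.hypot(a, b)           # amplitude of the combined sinusoid
phase = np.arctan2(b, a)     # phase of the combined sinusoid

lhs = a * np.sin(x) + b * np.cos(x)
rhs = R * np.sin(x + phase)
print(np.allclose(lhs, rhs))  # True
```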
  • Notice that sin and cos also bear simple relationships to each other: sin(alpha) = cos(90-alpha), with alpha in degrees.
  • The Fourier transform generalizes to any number of dimensions. In 1D, think time series. In 2D, think image. In 3D, think volume.
  • Fourier transform
  • In mathematical terms, you can think of the Fourier transform as simply a change of basis. The set of sinusoids that makes up the new basis is orthogonal and complete.
  • We use complex numbers for mathematical convenience, where the real component corresponds to cos and the imaginary component corresponds to sin. In particular, exp(j*theta) = cos(theta)+j*sin(theta). Note that this is a unit-length vector (in the complex plane).
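Euler's formula and the unit-length property can be verified directly (the angle is an arbitrary choice):

```python
import numpy as np

theta = 0.73  # arbitrary angle in radians
z = np.exp(1j * theta)

print(np.isclose(z.real, np.cos(theta)))  # True: real part is cos
print(np.isclose(z.imag, np.sin(theta)))  # True: imaginary part is sin
print(np.isclose(abs(z), 1.0))            # True: unit length in the complex plane
```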
  • amplitude spectrum vs. phase spectrum
  • Filtering can be thought of as simply upweighting and downweighting certain frequencies. Filters can be considered low-pass (keep low, discard high), band-pass (discard low and high, keep the middle), or high-pass (discard low, keep high).
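A low-pass filter sketched as upweighting/downweighting frequencies: zero out all Fourier coefficients above a cutoff. The signal frequencies and cutoff here are arbitrary assumptions; note that because the mask is real and symmetric, this is also a zero-phase filter.

```python
import numpy as np

n = 1000
t = np.arange(n) / n
# Low-frequency signal (2 cycles) plus high-frequency "noise" (200 cycles).
signal = np.sin(2 * np.pi * 2 * t)
noisy = signal + 0.5 * np.sin(2 * np.pi * 200 * t)

# Low-pass: keep frequencies at or below the cutoff, discard the rest.
X = np.fft.fft(noisy)
freqs = np.fft.fftfreq(n, d=1 / n)  # cycles over the signal duration
cutoff = 10                         # assumed cutoff for illustration
X[np.abs(freqs) > cutoff] = 0
filtered = np.fft.ifft(X).real

# The high-frequency component is removed, recovering the signal.
print(np.allclose(filtered, signal))  # True
```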
  • Windowing a signal (i.e. multiplying it with a smoothly varying window) is a method for reducing edge/wraparound effects that are implicit in the Fourier transform.
  • sinc function - sin(x)/x
  • Fourier theory implies that the underlying function implied by a set of measurements can be perfectly reconstructed through a sum of weighted sinc functions, provided the underlying function is band-limited (contains no frequencies beyond the Nyquist limit). Sinc interpolation is interpolation according to that mindset.
  • Nyquist limit - the maximum frequency that you can represent with a set of n data points is n/2 cycles (over the span of those points). If there are frequencies in the signal beyond the Nyquist limit, you are going to suffer from aliasing.
  • aliasing - if you don't sample fast enough, high frequencies are "aliased" or "bleed" into low frequencies, leading to corruption of your data.
  • antialiasing - the idea of smoothing (low-pass filtering) your signal before you sample it. This helps reduce aliasing artifacts.
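The aliasing idea can be demonstrated in a few lines: a 9 Hz sine sampled at 10 Hz (Nyquist limit 5 Hz) is indistinguishable from a 1 Hz sine. The specific frequencies are arbitrary choices for illustration.

```python
import numpy as np

fs = 10                       # sampling rate (Hz); Nyquist limit is fs/2 = 5 Hz
t = np.arange(0, 2, 1 / fs)   # 2 seconds of samples

# A 9 Hz sine sampled at 10 Hz aliases down to 10 - 9 = 1 Hz:
samples_9hz = np.sin(2 * np.pi * 9 * t)
samples_1hz = np.sin(2 * np.pi * 1 * t)

# The two sampled sequences are identical up to a sign flip.
print(np.allclose(samples_9hz, -samples_1hz))  # True
```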
  • zero-padding - compute the Fourier transform, add zeros in place of the higher-frequency components, and then compute the inverse Fourier transform. This is an easy way to achieve sinc interpolation and upsample your data.
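A sketch of zero-padding in the Fourier domain; `sinc_upsample` is a hypothetical helper name. To keep the code short it assumes an odd-length input, which avoids having to split the Nyquist bin.

```python
import numpy as np

def sinc_upsample(x, factor):
    """Upsample by zero-padding the Fourier spectrum (sinc interpolation).
    Assumes len(x) is odd, which sidesteps the Nyquist-bin bookkeeping."""
    n = len(x)
    X = np.fft.fft(x)
    Y = np.zeros(n * factor, dtype=complex)
    half = (n + 1) // 2
    Y[:half] = X[:half]          # positive frequencies
    Y[-(n - half):] = X[half:]   # negative frequencies
    # Scale by `factor` so amplitudes are preserved.
    return np.fft.ifft(Y).real * factor

rng = np.random.default_rng(2)
x = rng.standard_normal(33)      # odd length
y = sinc_upsample(x, 4)

# The upsampled signal passes exactly through the original samples.
print(np.allclose(y[::4], x))  # True
```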
  • Subsampling - just keeping every Nth data point. This is simple, but there are two drawbacks. First, it doesn't "use" all the data you have (i.e., you forgo the SNR benefit of averaging neighboring points, such as a local running average). Second, it risks aliasing artifacts.
  • Resampling - changing the resolution of your data; this implies that you are doing something sophisticated like first applying an anti-aliasing filter and then subsampling
  • Downsampling - sort of synonymous to either subsampling or resampling, depending on the context. But be careful about the choice of terminology, as things can get confusing!
  • Filtering is generally useful for getting rid of noise. For example, if you know your noise largely lives within a certain range of frequencies and if you know that your signal does not reside (much) within those frequencies, you can elect to suppress power at those frequencies.
  • Zero-phase filters - filters that might change the amplitude at various Fourier components but do not change the phase. Changing the phase will "shift" or "move" content around, and is typically not desired.
  • Smoothing - just low-pass filtering. But use of the term 'smoothing' sort of implies that the goal is to help reduce noise.
  • Filtering as regression. You can think of filtering as simply fitting a basis set of sinusoids to your data and deciding to zero-out (discard) some of the beta coefficients before reconstructing the model fit.
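The filtering-as-regression view can be verified numerically: fit a complete basis of sinusoids by least squares, zero out the betas above a cutoff, and confirm the result matches an FFT low-pass filter. The signal length and cutoff are arbitrary assumptions.

```python
import numpy as np

n = 64
t = np.arange(n)
rng = np.random.default_rng(3)
data = rng.standard_normal(n)

# Complete basis of sinusoids: constant, cos/sin pairs, Nyquist term.
cols = [np.ones(n)]
for k in range(1, n // 2):
    cols.append(np.cos(2 * np.pi * k * t / n))
    cols.append(np.sin(2 * np.pi * k * t / n))
cols.append(np.cos(np.pi * t))  # Nyquist frequency (n/2 cycles)
basis = np.column_stack(cols)   # n x n, orthogonal columns

# Regression: fit the basis, then zero out betas above the cutoff.
betas, *_ = np.linalg.lstsq(basis, data, rcond=None)
cutoff = 8                       # assumed cutoff frequency (in cycles)
keep = np.zeros(n, dtype=bool)
keep[:1 + 2 * cutoff] = True     # constant + cos/sin pairs for k <= cutoff
regression_filtered = basis @ (betas * keep)

# Equivalent FFT low-pass: zero all Fourier coefficients above the cutoff.
X = np.fft.fft(data)
freqs = np.abs(np.fft.fftfreq(n, d=1 / n))
X[freqs > cutoff] = 0
fft_filtered = np.fft.ifft(X).real

print(np.allclose(regression_filtered, fft_filtered))  # True
```

The equivalence holds because the sinusoid basis is orthogonal and complete, so discarding betas is exactly the same operation as zeroing Fourier coefficients.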
  • Further reading:
  • https://www.sciencedirect.com/science/article/pii/S0896627319301746