Signal Processing
Recommended Reading: 【Signal Processing】 Signal Processing Theory Table of Contents
1. Overview
2. Transformation
3. Nyquist Theorem
4. Filter
1. Overview
⑴ Definition
① Signal: Refers to a sequence {x(tₙ)} defined at given points t₁, …, tₙ (e.g., time).
② Signal Processing: Extracting information from a signal.
③ Signal Preprocessing: Obtaining a clean signal by removing noise or trend components from raw data.
⑵ Types of Signals
① Type 1. Continuous Signal: A signal where both t and x are continuous.
○ Storage, acquisition, and processing are extremely difficult.
○ Example: To store, acquire, and process the irrational number π, infinite capacity and time are required.
② Type 2. Discrete Signal: A signal where t is discrete, and x is continuous.
③ Type 3. Digital Signal: A signal where both t and x are discrete.
④ Sampling (digitization): Converting continuous signals into discrete signals.
⑤ Nyquist Theorem: If the sampling rate is more than twice the maximum frequency of the signal, the signal can be sampled without aliasing error.
⑶ Signals can have more than two dimensions.
① 2D Signal: Defined as an image when the dimension of t is 2.
② 3D Signal: For example, coronary angiogram.
③ 4D Signal: For example, radiotherapy in cancer.
⑷ Time, as a domain, can be transformed in infinitely many ways (e.g., Fourier Transform, converting time to days of the week).
2. Transformation
⑴ Overview: There are infinitely many ways to transform signals.
⑵ Fourier Transformation
① Definition: A representative method to obtain frequency information from a time-domain signal.
② When a signal x(t) is given in the time domain, the transformed frequency-domain signal X(f) satisfies the following relationship:
○ X(f) = ∫ x(t) exp(−j2πft) dt and, inversely, x(t) = ∫ X(f) exp(j2πft) df.
○ Notation: X(f) = F{x(t)}
○ Fourier Transform is a special case of the Laplace Transform.
○ Proof of the Relationship Between Fourier Transform and Its Inverse.
③ Properties of Fourier Transform
○ Property 1. Linearity: F{ax1(t) + bx2(t)} = aF{x1(t)} + bF{x2(t)} = aX1(f) + bX2(f).
○ Linearity = proportionality + superposition.
○ While all systems are fundamentally nonlinear, they satisfy quasi-linearity in micro-intervals.
○ Property 2. Scaling: F{x(at)} = (1/|a|) × X(f/a). Related to the uncertainty principle.
○ Property 3. Shifting: F{x(t − a)} = X(f) exp(−j2πaf).
○ ∴ |F{x(t − a)}| = |X(f)|: a time shift changes only the phase, not the magnitude.
○ Meaning of phase: Indicates when the signal starts, but it is difficult to interpret, so only the magnitude is usually considered.
○ Property 4. Convolution
○ x₁(t) ⊗ x₂(t) = ∫ x₁(τ) x₂(t − τ) dτ = ∫ x₁(t − τ) x₂(τ) dτ by definition.
○ F{x1(t) ⨂ x2(t)} = F{x1(t)} · F{x2(t)} = X1(f) · X2(f).
○ Property 5. If x(t) is a real function, F{x(−t)} = X*(f) holds, where X* is the complex conjugate of X.
④ Fourier Transforms of Major Functions
| Time Function | Frequency Function |
| --- | --- |
| exp(−t), t ≥ 0 | 1 / (1 + j2πf) |
| exp(−at²) | √(π/a) · exp(−π²f²/a) |
| δ(t) (Dirac delta function) | 1 |
| dx(t)/dt | (j2πf) X(f) |
| x(t − t₀) | exp(−j2πf t₀) X(f) |
| exp(j2πf₀t) x(t) | X(f − f₀) |
| x₁(t) ⊗ x₂(t) | X₁(f) · X₂(f) |
| x₁(t) · x₂(t) | X₁(f) ⊗ X₂(f) |
| t x(t) | (j/2π) · dX(f)/df |
| ∫ \|x(t)\|² dt (energy) | ∫ \|X(f)\|² df (energy) |
Table 1. Fourier Transforms of Major Functions.
○ The Fourier Transform of a Gaussian signal is another Gaussian: it is one of infinitely many signals whose shape is preserved by the transform.
⑤ 2D Fourier Transform: Applications in image processing, tomography, etc.
⑥ Continuous Fourier Transform (CFT)
○ Applications: Piezoelectricity, ultrasound analysis (cf. Fresnel and Fraunhofer Diffraction).
⑦ Discrete Fourier Transform (DFT)
○ FFT (Fast Fourier Transform): Efficiently calculates the DFT by dividing it into smaller Fourier Transforms (a short numerical sketch appears at the end of this subsection).
⑧ Drawbacks
○ Computationally intensive and requires careful setting of the fundamental frequency f.
○ Cannot determine the chronological order of signals on the frequency-amplitude curve.
○ Reason: Fourier Transform assumes eternal periodic signal input, not local signal input.
○ However, phase analysis can determine the chronological order.
⑨ Applications
○ Non-invasive Tests: Tomography, Spectroscopy, etc.
○ Protein Modeling.
○ Speech and Acoustic Recognition.
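As a minimal numerical sketch of the DFT/FFT (the 5 Hz test tone, 100 Hz sampling rate, and 1 s duration are arbitrary choices), NumPy's FFT routines recover the dominant frequency of a sampled sinusoid:

```python
import numpy as np

fs = 100.0                       # sampling frequency [Hz] (assumed for illustration)
t = np.arange(0, 1.0, 1.0 / fs)  # discrete sampling instants t_n
x = np.sin(2 * np.pi * 5.0 * t)  # time-domain signal: a 5 Hz sinusoid

X = np.fft.rfft(x)                       # DFT computed via the FFT (real-input variant)
f = np.fft.rfftfreq(len(x), d=1.0 / fs)  # corresponding frequency axis [Hz]

# The magnitude spectrum |X(f)| peaks at the signal frequency; the phase of X
# carries the timing information that the magnitude alone discards.
print(f"dominant frequency: {f[np.argmax(np.abs(X))]:.1f} Hz")   # -> 5.0 Hz
```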
⑶ Wavelet Transform
① Background: Developed to address the shortcomings of traditional Fourier Transform.
② Principle: Separates signals of similar frequencies by inner products with a few sampling frequencies.
③ Type 1. STFT (Short-Time Fourier Transform): A preliminary step to general wavelet transformation.
○ Definition: Sliding window method using gate g and its complex conjugate g*.
○ Advantage: Provides signal information in both time and frequency domains.
○ Disadvantage: If the shape and length of g are chosen poorly, the analysis becomes ineffective or the signal is distorted.
④ Type 2. Continuous Wavelet Transform (CWT)
○ Formulation: CWT(a, b) = (1/√|a|) ∫ x(t) Ψ*((t − b)/a) dt; the signal is recovered from CWT(a, b) up to the constant CΨ⁻¹.
○ Ψ: Mother wavelet (mother basis function).
○ Ψ*: Complex conjugate of Ψ.
○ b: Shifting (translation) parameter.
○ a: Scaling (resolution) parameter.
○ CΨ⁻¹: Normalization constant used in the inverse transform.
○ (Reference) Time Domain: High precision in time, low (or no) precision in frequency.
○ (Reference) Frequency Domain: High precision in frequency, low (or no) precision in time.
○ Wavelet Domain: Multi-resolution precision in both time and frequency.
○ Types of Mother Wavelets: Select a mother wavelet that resembles the given signal most.
○ Daubechies (dbX).
○ Haar.
○ Mexican Hat (Sombrero).
○ Coiflets.
○ Symlets.
○ Morlet.
○ Meyer.
⑤ Type 3. Discrete Wavelet Transform (DWT)
○ Overview:
○ Differs significantly from CWT in analysis.
○ Uses Mallat pyramidal algorithm or QMF (Quadrature Mirror Filter).
○ Advantage: Requires less computation than Fourier Transform, leading to faster processing time.
○ Principle:
○ Signal is subdivided into high and low components like a decision tree: Utilizes high-pass and low-pass filters.
Figure 1. Principle of DWT.
○ Design of wavelet and filter functions: Can be designed with a basic filter function h.
○ IDWT: The decomposed signals can be up-sampled and recombined to reconstruct the original signal (see the Haar sketch at the end of this Wavelet Transform subsection).
Figure 2. Principle of IDWT.
○ 2D DWT: Alternates between rows and columns to decompose signals.
Figure 3. Principle of 2D DWT.
○ Types:
○ General Wavelet Decomposition: Filters only in the low-frequency approximation direction.
○ Wavelet Packet Decomposition: Filters further in the high-frequency detail direction, constructing a tree.
○ Arbitrary Path Decomposition: Arbitrary choice of low or high-frequency filtering.
○ UWT (Undecimated Wavelet Transform): Maintains the original data count while calculating approximation or detail coefficients.
○ Applications: Denoising, wavelet compression for images.
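As a minimal sketch of one DWT analysis/synthesis level, the following implements the Haar wavelet's low-pass/high-pass split and its inverse directly (the example signal is arbitrary; practical work would typically rely on a wavelet library):

```python
import numpy as np

def haar_dwt(x):
    """One Haar DWT level: low-pass (approximation) and high-pass (detail) branches."""
    x = np.asarray(x, dtype=float)
    even, odd = x[0::2], x[1::2]              # implicit down-sampling by 2
    return (even + odd) / np.sqrt(2), (even - odd) / np.sqrt(2)

def haar_idwt(approx, detail):
    """Inverse of one Haar DWT level: up-sample and apply the synthesis filters."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2)
    x[1::2] = (approx - detail) / np.sqrt(2)
    return x

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])   # even-length test signal
a, d = haar_dwt(x)
print(np.allclose(haar_idwt(a, d), x))   # True: perfect reconstruction
```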
⑷ Numerical Integration: Computes the area under a waveform (a short sketch of the trapezoidal and Simpson rules follows this list).
① Trapezoidal Rule: Basic numerical integration technique that approximates each interval by a trapezoid.
② Simpson’s Rule: More accurate than the trapezoidal rule; fits a parabola through each pair of adjacent intervals.
③ Simpson’s 3/8 Rule: A variant of Simpson’s rule that fits a cubic over three subintervals (comparable order of accuracy).
④ Boole’s Rule (often cited as Bode’s Rule).
⑤ Moving Average Method.
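A minimal sketch of the trapezoidal and Simpson's 1/3 rules on equally spaced samples of sin(x) over [0, π] (whose exact integral is 2); the sample count is an arbitrary choice:

```python
import numpy as np

def trapezoid(y, h):
    """Composite trapezoidal rule for equally spaced samples y with spacing h."""
    return h * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

def simpson(y, h):
    """Composite Simpson's 1/3 rule; needs an even number of intervals (odd sample count)."""
    assert len(y) % 2 == 1
    return h / 3 * (y[0] + 4 * y[1:-1:2].sum() + 2 * y[2:-2:2].sum() + y[-1])

x = np.linspace(0, np.pi, 101)          # 100 equal intervals
y = np.sin(x)                           # exact area under the curve is 2
h = x[1] - x[0]
print(trapezoid(y, h), simpson(y, h))   # Simpson's rule is much closer to 2
```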
⑸ MSC (Multiplicative Scatter Correction)
① Corrects scattering effects by regressing each spectrum against a reference spectrum.
② SNV (Standard Normal Variate): Normalization correction so that each spectrum has a mean of 0 and a standard deviation of 1 (a minimal sketch follows).
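A minimal SNV sketch (MSC itself additionally regresses each spectrum against a reference, which is omitted here); the example spectrum is arbitrary:

```python
import numpy as np

def snv(spectrum):
    """Standard Normal Variate: rescale a spectrum to mean 0 and standard deviation 1."""
    spectrum = np.asarray(spectrum, dtype=float)
    return (spectrum - spectrum.mean()) / spectrum.std()

s = np.array([1.2, 1.5, 1.1, 2.0, 1.8])           # hypothetical raw spectrum
z = snv(s)
print(np.round(z.mean(), 6), np.round(z.std(), 6))  # -> 0.0 1.0
```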
⑹ DCT (Discrete Cosine Transformation)
① A transform closely related to the Fourier transform that uses only cosine basis functions.
② The classic (DCT-based) JPEG standard reduced image sizes by up to about 90%.
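A minimal sketch of the 1-D DCT-II computed directly from its defining cosine sum, X_k = Σ_n x_n cos[π/N (n + 1/2) k] (unnormalized; library routines such as SciPy's scipy.fft.dct return the same transform up to scaling); the input vector is arbitrary:

```python
import numpy as np

def dct2_1d(x):
    """Unnormalized 1-D DCT-II evaluated directly from the cosine-sum definition."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    n = np.arange(N)            # sample index
    k = n.reshape(-1, 1)        # frequency index (one row per output coefficient)
    return np.cos(np.pi / N * (n + 0.5) * k) @ x

x = np.array([1.0, 2.0, 3.0, 4.0])
print(dct2_1d(x))   # X[0] is the DC (sum) term; higher k capture finer cosine detail
```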
⑺ Hough Transformation
① Each high-signal (edge) pixel votes for every line, parameterized by position and angle, that could pass through it; lines in the image appear as peaks where many votes intersect in parameter space.
② Can be used for line detection, angle detection, etc.
③ Circle Hough Transformation: The same voting analysis applied to circles, parameterized by center position and radius.
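A minimal sketch of the line Hough transform: every high-signal (edge) pixel casts one vote per candidate angle along ρ = x·cosθ + y·sinθ, and peaks in the accumulator correspond to lines. The test image (the main diagonal of a 50×50 array) and the 1° angle resolution are arbitrary choices:

```python
import numpy as np

def hough_lines(edge_img, n_theta=180):
    """Accumulate votes in (rho, theta) space for every edge pixel."""
    h, w = edge_img.shape
    diag = int(np.ceil(np.hypot(h, w)))                 # largest possible |rho|
    thetas = np.deg2rad(np.arange(n_theta))             # candidate angles, 1 degree apart
    acc = np.zeros((2 * diag + 1, n_theta), dtype=int)  # (rho, theta) accumulator
    ys, xs = np.nonzero(edge_img)
    for x, y in zip(xs, ys):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1       # one vote per angle
    return acc, thetas

img = np.zeros((50, 50), dtype=bool)
np.fill_diagonal(img, True)                             # edge pixels on the line y = x
acc, thetas = hough_lines(img)
rho_i, th_i = np.unravel_index(acc.argmax(), acc.shape)
print(np.rad2deg(thetas[th_i]))   # 135.0: the normal angle of the detected diagonal line
```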
⑻ Radon Transformation
① Similar to the Hough transformation: integrates image intensity along lines parameterized by position and angle.
3. Nyquist Theorem
⑴ Aliasing Artifact: Distortion that occurs when the sampling frequency is too low relative to the signal frequency (below the Nyquist rate).
Figure 4. Aliasing Artifact.
⑵ Nyquist Theorem: For perfect reconstruction of a signal, the sampling frequency must be at least twice the maximum frequency of the signal.
⑶ Proof of Nyquist Theorem:
① Assume a signal x(t) and its Fourier transform X(f) are given. Define the maximum frequency contained in X(f) as fm (i.e., X(f) = 0 for |f| > fm).
Figure 5. x(t) and X(f).
② Assume a sampling function H(t) that is 1 at regular time intervals Ts and 0 elsewhere, together with its Fourier transform H(f).
Figure 6. H(t) and H(f).
③ Multiplying x(t) with H(t) at each time is equivalent to sampling x(t) at points where H(t) = 1.
④ Over-Sampling: When the sampling frequency is high enough, copies of X(f) appear at intervals of fs = 1 / Ts without overlapping.
Figure 7. Over-Sampling Situation.
⑤ Under-Sampling: When the sampling frequency is too low, the copies of X(f) overlap and the signal is distorted.
Figure 8. Under-Sampling Situation.
⑥ Condition to Avoid Under-Sampling: With fs = 1 / Ts, the copies do not overlap when fs − fm > fm ⇔ fs > 2 × fm.
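A numerical sketch of the theorem with a 9 Hz cosine (an arbitrary test tone): sampling at 40 Hz (above the Nyquist rate of 18 Hz) places the spectral peak at 9 Hz, while sampling at 12 Hz folds the tone down to |9 − 12| = 3 Hz, an aliasing artifact:

```python
import numpy as np

f_sig = 9.0                           # signal frequency [Hz]
for fs in (40.0, 12.0):               # 40 Hz > 2*9 Hz, 12 Hz < 2*9 Hz
    n = np.arange(0, 1, 1.0 / fs)     # one second of sampling instants
    x = np.cos(2 * np.pi * f_sig * n)
    X = np.abs(np.fft.rfft(x))        # magnitude spectrum of the sampled signal
    f = np.fft.rfftfreq(len(x), d=1.0 / fs)
    print(f"fs = {fs:4.0f} Hz -> spectral peak at {f[np.argmax(X)]:.1f} Hz")
# Output: the peak appears at 9.0 Hz for fs = 40 Hz but at 3.0 Hz for fs = 12 Hz.
```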
4. Filter
⑴ Passive Filter Circuits
① [Low-Pass Filter](https://jb243.github.io/pages/845): Allows only low-frequency signals to pass. A disadvantage is blurring. Smoothing filter.
Figure 9. Low-Pass Filter.
② [High-Pass Filter](https://jb243.github.io/pages/845): Allows only high-frequency signals to pass. Sharpening filter.
Figure 10. High-Pass Filter.
③ [Band-Pass Filter](https://jb243.github.io/pages/845): Allows only a specific band of frequencies to pass.
Figure 11. Band-Pass Filter.
④ Band-Stop Filter: Selectively blocks specific frequencies.
Figure 12. Band-Stop Filter.
⑵ Digital Filter Circuits
① FIR Filter
○ Convolution
For raw data {x_n}, the processed data {y_n} is expressed as:
y_n = b_0 x_n + b_1 x_{n-1} + … + b_M x_{n-M} = Σ_{k=0}^{M} b_k x_{n-k}
Here, b_k is called a parameter or coefficient. Additionally, define the discrete unit impulse:
δ(n) = 0, if n ≠ 0
δ(n) = 1, if n = 0
The impulse response h_n (the output when the input is the unit impulse) then satisfies:
h_n = Σ_{k=0}^{M} b_k δ(n-k) = b_n, if n = 0, 1, …, M
h_n = Σ_{k=0}^{M} b_k δ(n-k) = 0, otherwise
y_n = Σ_{k=0}^{M} b_k x_{n-k} = Σ_{k=-∞}^{∞} h_k x_{n-k} = h_n * x_n = x_n * h_n
Hence, y_n is the convolution of h_n and x_n.
○ Frequency Response
Any signal can be expressed as an (infinite) sum of trigonometric functions.
For an arbitrary angular frequency ω and −∞ < n < ∞, consider the input x_n = A e^{jφ} e^{jωn}. (ref)
For k = 0, 1, …, M, y_n is expressed as:
y_n = Σ_k b_k x_{n-k} = Σ_k b_k e^{-jωk} A e^{jφ} e^{jωn} = ( Σ_k b_k e^{-jωk} ) x_n = H(e^{jω}) x_n
H(e^{jω}) is called the frequency response of the FIR system and is a function of ω.
y_n can be further expressed as:
y_n = |H(e^{jω})| e^{j∠H(e^{jω})} x_n
Thus, the amplitude of the raw data is scaled by |H(e^{jω})|, and the phase is shifted by ∠H(e^{jω}).
FIR filters allow for linear phase shifts, enabling phase correction.
FIR filters are particularly used in RF fields where phase information is crucial or in control applications requiring added user logic.
○ Example : Low-Pass Filter
If {b_k} = {1, 2, 1}, i.e., y_n = x_n + 2x_{n-1} + x_{n-2}:
H(e^{jω}) = Σ_{k=0}^{M} b_k e^{-jωk} = 1 + 2e^{-jω} + e^{-2jω} = (2 + 2cos ω) e^{-jω}
|H(e^{jω})| = 2 + 2cos ω, which is largest at ω = 0 and vanishes at ω = π, i.e., a low-pass filter (see the sketch at the end of this FIR block).
○ Z Transformation
FIR filters are designed by analyzing system characteristics and stability using Z transformation to determine poles and zeros.
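The sketch below works through the {1, 2, 1} low-pass example referenced above: it evaluates H(e^{jω}) on a small frequency grid, checks the closed form 2 + 2cos ω, and confirms that the impulse response equals the coefficients. The grid and the test input are arbitrary choices:

```python
import numpy as np

b = np.array([1.0, 2.0, 1.0])          # FIR coefficients {b_k} = {1, 2, 1}

# Frequency response H(e^{jw}) = sum_k b_k e^{-jwk}, evaluated on a small grid of w.
w = np.linspace(0, np.pi, 5)
H = np.array([np.sum(b * np.exp(-1j * wi * np.arange(len(b)))) for wi in w])

print(np.abs(H).round(3))                              # 4 at w = 0, 0 at w = pi (low-pass)
print(np.allclose(np.abs(H), 2 + 2 * np.cos(w)))       # True: matches the closed form

# Filtering y_n = x_n + 2 x_{n-1} + x_{n-2} is a convolution; a unit impulse input
# therefore returns the impulse response h_n = b_n.
print(np.convolve(np.array([1.0, 0.0, 0.0, 0.0]), b))  # [1. 2. 1. 0. 0. 0.]
```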
② IIR Filter
IIR filters define the processed data {y_n} recursively as (a minimal first-order instance is sketched at the end of this block):
y_n = Σ_{i=0}^{M} b_i x_{n-i} + Σ_{j=1}^{N} a_j y_{n-j}
Like FIR filters, IIR filters determine frequency characteristics through frequency response.
Z transformation is used to analyze poles and zeros to design the system.
IIR filters are harder to design and analyze than FIR filters, and their nonlinear phase response makes exact phase restoration impossible.
IIR filters are not used in RF communications, where phase information is critical, but they allow for more ideal amplitude characteristics.
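A sketch of the minimal first-order instance mentioned above, y_n = b_0 x_n + a_1 y_{n-1} (an exponential-smoothing filter); the coefficient values and the noisy test signal are arbitrary assumptions:

```python
import numpy as np

def iir_first_order(x, b0=0.1, a1=0.9):
    """y_n = b0*x_n + a1*y_{n-1}: the feedback term gives an infinitely long impulse response."""
    y = np.zeros(len(x))
    for n, xn in enumerate(x):
        y[n] = b0 * xn + (a1 * y[n - 1] if n > 0 else 0.0)
    return y

rng = np.random.default_rng(0)
x = 1.0 + 0.2 * rng.standard_normal(200)     # noisy constant level around 1.0
y = iir_first_order(x)
print(y[-5:].round(3))   # smoothed output settles near b0 / (1 - a1) * 1.0 = 1.0
```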
③ Kalman Filter
○ Definition : A linear filter that provides the optimal estimate when both the input (process) and the output (measurement) are corrupted by Gaussian noise.
○ Type 1. One-Dimensional Kalman Filter (a minimal sketch follows this block):
○ Measurement uncertainty: variance of the measurement noise.
○ Current belief uncertainty: variance of the current estimate.
○ Kalman gain : Determines the relative weight given to the new measurement versus the current belief.
○ Type 2. Multi-Dimensional Kalman Filter.
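A minimal sketch of a one-dimensional Kalman filter estimating a constant quantity from noisy measurements (the prediction/process-noise step is omitted for brevity; the true value, noise level, and initial uncertainty are arbitrary assumptions):

```python
import numpy as np

def kalman_1d(measurements, meas_var, init_est=0.0, init_var=1e6):
    """Scalar Kalman filter for a constant state observed in Gaussian noise."""
    est, var = init_est, init_var             # current belief and its uncertainty
    for z in measurements:
        gain = var / (var + meas_var)         # Kalman gain: weight of the new measurement
        est = est + gain * (z - est)          # correct the belief toward the measurement
        var = (1.0 - gain) * var              # belief uncertainty shrinks after each update
    return est, var

rng = np.random.default_rng(1)
true_value, meas_std = 5.0, 0.5
z = true_value + meas_std * rng.standard_normal(100)   # noisy measurements
print(kalman_1d(z, meas_std ** 2))                      # estimate converges near 5.0
```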
⑶ Active Filter Circuits
① Sallen-Key Filter
○ High-Pass Sallen-Key Filter
Figure 13. High-Pass Sallen-Key Filter.
○ Low-Pass Sallen-Key Filter
Figure 14. Low-Pass Sallen-Key Filter.
② Butterworth Filter
Figure 15. Butterworth Filter.
○ Formula: |H(jω)| = 1 / √(1 + ω^(2n)) (evaluated numerically in the sketch at the end of this block).
○ n : Filter order (the number of filter stages/poles).
○ As n increases, the frequency response approaches the ideal brick-wall (rectangular) characteristic.
○ ω : Angular frequency, with the cutoff frequency normalized to 1 rad/s.
○ Advantage : Maximally flat (ripple-free) pass-band, ideal when constant amplitude in the pass-band is critical.
○ Disadvantage : Weak roll-off rate.
○ Application : Anti-aliasing.
○ Roll-off Rate : The rate at which attenuation increases beyond the cutoff frequency, i.e., how quickly the response falls toward 0 past the pass-band.
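A numerical sketch of the Butterworth magnitude formula above, evaluated at a few frequencies for several orders n (the chosen frequencies and orders are arbitrary); actual pole-zero design would typically use a filter-design library:

```python
import numpy as np

# |H(jw)| = 1 / sqrt(1 + w^(2n)), with the cutoff frequency normalized to 1 rad/s.
w = np.array([0.5, 1.0, 2.0])              # below, at, and above the cutoff
for n in (1, 2, 4, 8):
    mag = 1.0 / np.sqrt(1.0 + w ** (2 * n))
    print(f"n = {n}: |H| = {np.round(mag, 3)}")
# The gain at w = 1 is always 1/sqrt(2) (-3 dB); as n grows, the pass-band gain
# approaches 1 and the stop-band gain approaches 0 (the brick-wall ideal).
```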
③ Chebyshev Filter
Figure 16. Chebyshev Type I Filter.
Figure 17. Chebyshev Type II Filter.
○ Type I Chebyshev Filter Formula: |H(jω)| = 1 / √(1 + ε² Tₙ²(ω)), with the cutoff frequency normalized to 1 rad/s.
○ n : Filter order.
○ ε : Ripple size in the pass-band.
○ Tₙ : nth-order Chebyshev polynomial.
○ Type II (Inverse) Chebyshev Filter Formula: |H(jω)| = 1 / √(1 + 1 / (ε² Tₙ²(1/ω))), which places the ripple in the stop-band instead.
○ Advantage : Steep roll-off rate, beneficial for frequency preservation.
○ Disadvantage : Ripple appears in the pass-band (Type I) or the stop-band (Type II).
④ Elliptic Filter
Figure 18. Elliptic Filter.
○ Advantage : Steepest roll-off rate.
○ Disadvantage : Ripple appears in both the pass-band and the stop-band.
⑤ Bessel Filter
Figure 19. Bessel Filter.
○ Advantage : Best phase response.
○ Disadvantage : Weakest roll-off rate.
⑥ Median Filter
○ Non-linear filter, smoothing filter.
○ Unlike low-pass filters, which cause edge blurring during noise removal, median filters avoid blurring.
○ Reason : Uses the median instead of the mean.
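A minimal 1-D sketch contrasting a 3-point median filter with a 3-point moving average on a signal containing one impulse and one step edge (the window size and test signal are arbitrary choices):

```python
import numpy as np

def median_filter_1d(x, k=3):
    """Sliding-window median with an odd window k; edges handled by replication padding."""
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    return np.array([np.median(xp[i:i + k]) for i in range(len(x))])

x = np.array([0.0, 0.0, 0.0, 9.0, 0.0, 0.0, 5.0, 5.0, 5.0, 5.0])  # impulse + step edge
mean3 = np.convolve(x, np.ones(3) / 3, mode="same")               # 3-point moving average

print(median_filter_1d(x))   # impulse removed, the 0 -> 5 step stays sharp
print(mean3.round(2))        # impulse smeared and the step edge blurred
```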
⑦ High-Boost Filter (HBF)
○ Linear filter. Can also be implemented as a passive filter. Sharpening filter.
○ HBF = (A - 1) × original + high-pass.
⑧ Derivative Filter
○ Linear filter. Can also be implemented as a passive filter. Sharpening filter.
○ Types : Roberts mask, Prewitt mask, Sobel mask.
Figure 20. Roberts Mask, Prewitt Mask, Sobel Mask.
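A minimal sketch applying the Sobel masks to a synthetic image containing a vertical step edge, using a naive sliding-window implementation (the test image and the 'valid' windowing are illustrative assumptions):

```python
import numpy as np

# 3x3 Sobel masks estimating horizontal (Gx) and vertical (Gy) intensity gradients.
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
sobel_y = sobel_x.T

def apply_mask(img, kernel):
    """Slide the mask over the image ('valid' region only) and sum the products."""
    kh, kw = kernel.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

img = np.zeros((6, 6))
img[:, 3:] = 1.0                             # vertical step edge at column 3
gx, gy = apply_mask(img, sobel_x), apply_mask(img, sobel_y)
print(np.hypot(gx, gy).max())                # 4.0: strong gradient response at the edge
```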
Input: 2018.04.29 20:35
Revised: 2024.02.18 19:20