Müjdat Çetin IMA Computational Imaging Workshop, October 14-18, 2019
Computational Radar Imaging
Müjdat Çetin
Associate Professor, Department of Electrical and Computer Engineering
Interim Director, Goergen Institute for Data Science
University of Rochester, Rochester, NY
Contributors: Burak Alver, Ammar Saleem, Özben Önhon, Sadegh Samadi, M. Javad Hasankhan, Abdurrahim Soğanlı, Emre Güven, Alper Güngör, Ivana Stojanovic,
Kush Varshney, W. Clem Karl, Randy Moses, Lee Potter, Alan S. Willsky.
Radar Imaging Basics
• All-weather
• Day and night operation
• Superposition of response from scatterers – tomographic measurements
• Synthetic aperture radar (SAR)
• Computational imaging problem: obtain a spatial map of reflectivity from radar returns
Outline
• Sparsity-driven radar imaging
– Point-enhanced and region-enhanced imaging
– ADMM & proximal operators in the case of complex-valued fields
– Wide-angle imaging and anisotropy characterization
– Model errors and autofocusing
– Moving-object imaging
• Machine learning for radar imaging
– Dictionary learning
– Deep learning-based priors
Initial motivation for our work (circa 1999*)
Some challenges for automatic decision-making from SAR images:
• Accurate localization of dominant scatterers
– Limited resolution
– Clutter and artifact energy
• Region separability
– Speckle
– Object boundaries
• Limited or sparse apertures
* This slide was found in an archaeological excavation site and is believed to be ~2000 deep-learning-years old.
How I got attracted to sparsity…
SAR Ground-plane Geometry
• Scalar 2-D complex-valued reflectivity field
• Transmitted chirp signal:
• Received, demodulated return from circular patch:
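The chirp expression itself did not survive transcription. A standard linear-FM (chirp) transmit signal, as commonly used in spotlight-mode SAR, has the form below; the carrier frequency $\omega_0$, chirp rate $2\alpha$, and pulse duration $T_p$ are conventional symbols, not necessarily the slide's exact notation:

```latex
s(t) = \operatorname{Re}\left\{ e^{\,j\left(\omega_0 t + \alpha t^2\right)} \right\},
\qquad |t| \le \frac{T_p}{2}
```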
SAR Observation Model
• Observations are related to projections of the field:
• SAR observations are band-limited slices from the 2-D Fourier transform of the reflectivity field:
• Discrete tomographic SAR observation model (combining all measurements):

  y = H f + n

  where y is the observed data, H the SAR forward model, f the unknown reflectivity field, and n the noise.
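Given a discrete model of the form y = Hf + n, point-enhanced (sparsity-driven) imaging can be posed as minimizing ½‖y − Hf‖₂² + λ‖f‖₁ over the complex field f. A minimal ISTA sketch; the operator H here is a random complex stand-in (a real SAR operator is band-limited Fourier sampling), and λ, the scene, and the iteration count are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n_meas, n_pix = 120, 64

# Stand-in complex "forward model"; a real SAR H samples the 2-D Fourier transform
H = (rng.standard_normal((n_meas, n_pix))
     + 1j * rng.standard_normal((n_meas, n_pix))) / np.sqrt(n_meas)

# Sparse complex reflectivity: a few dominant scatterers
f_true = np.zeros(n_pix, dtype=complex)
f_true[[5, 20, 47]] = [2.0, -1.5j, 1.0 + 1.0j]
y = H @ f_true + 0.01 * (rng.standard_normal(n_meas) + 1j * rng.standard_normal(n_meas))

lam = 0.05
step = 1.0 / np.linalg.norm(H, 2) ** 2   # 1/L, L = Lipschitz const. of the data-fit gradient

def soft(z, t):
    """Complex soft threshold: shrink the magnitude, keep the phase."""
    mag = np.abs(z)
    return np.where(mag > t, (1 - t / np.maximum(mag, 1e-12)) * z, 0)

f = np.zeros(n_pix, dtype=complex)
for _ in range(500):
    f = soft(f - step * (H.conj().T @ (H @ f - y)), step * lam)
```

The complex-valued soft threshold (shrink the magnitude, preserve the phase) is the proximal operator of the ℓ₁ norm of a complex vector, which is the key ingredient the outline's "ADMM & proximal operators in the case of complex-valued fields" refers to.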
Conventional Image Formation
• Given SAR returns, create an estimate of the reflectivity field f
Polar format algorithm:
• Each pulse gives slice of 2-D Fourier transform of field
• Polar to rectangular resampling
• 2-D inverse DFT
Support of observed data in the spatial frequency domain
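The three steps above can be sketched end to end. In this toy version the polar-to-rectangular resampling is simplified to nearest-neighbor gridding, and the geometry (aspect angles, frequency band, scene) is illustrative rather than a real collection:

```python
import numpy as np

N = 64
scene = np.zeros((N, N))
scene[20, 30] = 1.0                      # a single point scatterer

F = np.fft.fft2(scene)                   # full 2-D Fourier transform of the field

# Each "pulse" contributes a slice at one aspect angle over a radial frequency band
angles = np.deg2rad(np.linspace(-15, 15, 61))
radii = np.arange(8, 28)                 # band-limited support (min..max spatial freq.)
kx = np.rint(np.outer(np.cos(angles), radii)).astype(int) % N
ky = np.rint(np.outer(np.sin(angles), radii)).astype(int) % N

# Polar-to-rectangular resampling (nearest-neighbor gridding for simplicity)
F_grid = np.zeros((N, N), dtype=complex)
F_grid[kx, ky] = F[kx, ky]

# 2-D inverse DFT gives the (band-limited, hence blurred) image
img = np.abs(np.fft.ifft2(F_grid))
peak = np.unravel_index(np.argmax(img), img.shape)   # brightest pixel
```

Because the sampled spatial-frequency support is only a narrow annular sector, the reconstruction is a blurred (sinc-like) blob, but its peak still sits on the scatterer.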
Synthesis-based Sparsity-Driven Autofocus (SDA)
[Figure: reference conventional image without phase errors; conventional image with phase errors; proposed approach with phase errors. Wavelet dictionary used for the synthesis-based representation.]
Moving-Target Imaging
• SAR platform position uncertainties cause space-invariant defocusing of the reconstructed image, i.e., the amount of defocusing is the same for all points in the scene.
• Motion of a target in the scene can also be modeled as a phase error over the phase history data corresponding to a stationary scene.
• Moving targets in the scene cause artifacts including defocusing around the spatial neighborhood of the target in the scene.
• Moving targets thus cause space-variant defocusing: one needs to keep an account of the contributions from each spatial location to the phase error at each aperture position.
Sparsity-Driven Moving-Target Imaging
$$(\hat{f}, \hat{\beta}) = \arg\min_{f,\beta} J(f,\beta) = \arg\min_{f,\beta} \|y - A(\beta) f\|_2^2 + \lambda_1 \|f\|_1 + \lambda_2 \|\beta - \mathbf{1}\|_1 \quad \text{s.t. } |\beta_i| = 1 \;\; \forall i$$

where, for each aperture position $m = 1, \ldots, M$, the motion-induced phase terms enter through

$$\beta(m) = \left[e^{j\phi_1(m)}, e^{j\phi_2(m)}, \ldots, e^{j\phi_I(m)}\right]^T$$

with $\phi_i(m)$ the phase error induced at aperture position $m$ by scene location $i$, and $A(\beta)$ the forward model incorporating these terms.
• Involves sparsity constraints both on the reflectivity field and on the motion field.
• We have also developed a more efficient and potentially more robust version based on constructing regions of interest (ROIs) and performing space-invariant focusing within each.
• Each module has batch normalization and ReLU layers
[1] K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, “Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising,” IEEE Transactions on Image Processing, 26(7):3142-3155, 2017.
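The residual learning idea in [1] is that the network R(·) predicts the noise, so the clean estimate is x − R(x). A toy numpy forward pass of one conv → batch-norm → ReLU module (weights random and untrained; the real DnCNN stacks many such modules and ends with a plain convolution, omitted here for brevity):

```python
import numpy as np

rng = np.random.default_rng(1)

def conv2d(x, w):
    """'Same' 3x3 single-channel convolution with zero padding."""
    xp = np.pad(x, 1)
    out = np.zeros_like(x)
    for i in range(3):
        for j in range(3):
            out += w[i, j] * xp[i:i + x.shape[0], j:j + x.shape[1]]
    return out

def module(x, w, gamma=1.0, beta=0.0, eps=1e-5):
    """One DnCNN-style module: convolution -> batch normalization -> ReLU."""
    z = conv2d(x, w)
    z = gamma * (z - z.mean()) / np.sqrt(z.var() + eps) + beta
    return np.maximum(z, 0.0)

x = rng.standard_normal((16, 16))                  # noisy input patch
residual = module(x, rng.standard_normal((3, 3)))  # network's noise estimate R(x)
denoised = x - residual                            # residual learning: clean = x - R(x)
```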
Training the CNN – Synthetic Scenes
Using 64×64 ground truth images for CNN training:
• Add random phase to training images and obtain the phase histories.
• Apply complex-valued additive noise to the phase histories.
– Magnitude uniformly distributed over [0, σ_y], where σ_y is the standard deviation of the magnitude of the phase history data
– Phase uniformly distributed over [−π, π]
• Perform conventional reconstructions.
• Extract 16×16 overlapping patches from these conventional images and their corresponding ground truths, to construct input–output pairs.
• Augment the pairs of images through rotation by [90°, 180°, 270°].
• Train the network using these augmented pairs of images, with image reconstructed from noisy data as input and ground truth image as output.
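The data-generation recipe above can be sketched as follows. A 2-D FFT stands in for the actual phase-history (forward) operator, and `rel_sigma` plays the role of the 0.1 or 1 factor multiplying σ_y; both are illustrative choices, not the authors' exact pipeline:

```python
import numpy as np

rng = np.random.default_rng(2)

def noisy_phase_history(f_mag, rel_sigma=1.0):
    """Random image phase, then complex noise with uniform magnitude in
    [0, rel_sigma * sigma_y] and uniform phase over [-pi, pi]."""
    # Step 1: add random phase to the (magnitude) ground truth image
    f = f_mag * np.exp(1j * rng.uniform(-np.pi, np.pi, f_mag.shape))
    y = np.fft.fft2(f)                      # stand-in for the phase history
    # Step 2: complex-valued additive noise, scaled by the data's magnitude std
    sigma_y = np.std(np.abs(y)) * rel_sigma
    mag = rng.uniform(0.0, sigma_y, y.shape)
    phase = rng.uniform(-np.pi, np.pi, y.shape)
    return y + mag * np.exp(1j * phase)

gt = rng.random((64, 64))                   # one 64x64 ground truth image
y_noisy = noisy_phase_history(gt)           # noisy phase history for training
```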
Synthetic Data Experiments – Training Set
Test Set
• 5 different phase history data availability levels:
100%, 87.89%, 76.56%, 56.25%, 25%
• 2 different noise levels (σ_n ∈ {0.1, 1}·σ_y)
• Rectangular band-limitation for data reduction
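The five availability levels appear consistent with a centered rectangular mask that keeps 100%, 93.75%, 87.5%, 75%, and 50% of each frequency axis, since the retained fraction of samples is the per-axis fraction squared (the per-axis fractions are an inference, not stated on the slide):

```python
import numpy as np

def rect_mask(n, keep_frac):
    """Centered rectangular mask keeping `keep_frac` of each axis."""
    k = int(round(n * keep_frac))
    m1d = np.zeros(n, dtype=bool)
    start = (n - k) // 2
    m1d[start:start + k] = True
    return np.outer(m1d, m1d)            # keeps keep_frac**2 of all samples

n = 64
for frac in (1.0, 0.9375, 0.875, 0.75, 0.5):
    mask = rect_mask(n, frac)
    print(f"{frac:.4f} per axis -> {100 * mask.mean():.2f}% of data")
    # prints 100.00, 87.89, 76.56, 56.25, 25.00 respectively
```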
Qualitative Results: Image 7, 𝜎𝑛 = 0.1𝜎𝑦 and full data
Qualitative Results: Image 7, 𝜎𝑛 = 0.1𝜎𝑦 and 87.89% data
Qualitative Results: Image 7, 𝜎𝑛 = 0.1𝜎𝑦 and 76.56% data
Qualitative Results: Image 7, 𝜎𝑛 = 0.1𝜎𝑦 and 56.25% data
Qualitative Results: Image 7, 𝜎𝑛 = 0.1𝜎𝑦 and 25% data
Quantitative Results
Preliminary Results on Real Scenes from TerraSAR-X
• Training based on the Netherlands Rotterdam Harbor Staring Spotlight SAR image (1041 × 1830)
– Split into 448 non-overlapping 64 × 64 ‘‘windows’’
– 1075648 overlapping 16 × 16 patches extracted from windows
– Patches augmented with rotations of 90°, 180°, 270°
• Test set: 751 selected windows extracted from the Panama High Resolution Spotlight SAR image (2375 × 3375)
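The window and patch counts above are consistent with non-overlapping 64×64 tiling of the image and stride-1 extraction of 16×16 patches within each window (the stride is an assumption; the slide does not state it):

```python
# Window and patch bookkeeping for the TerraSAR-X training set
h, w = 1041, 1830                    # Rotterdam Harbor Staring Spotlight image
win, patch = 64, 16

n_windows = (h // win) * (w // win)          # non-overlapping 64x64 windows
per_window = (win - patch + 1) ** 2          # stride-1 overlapping 16x16 patches
n_patches = n_windows * per_window

print(n_windows, n_patches)                  # prints 448 1075648
```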
Qualitative Results: Image 608, 𝜎𝑛 = 𝜎𝑦, full data
Qualitative Results: Image 608, 𝜎𝑛 = 𝜎𝑦, 87.89% data
Qualitative Results: Image 608, 𝜎𝑛 = 𝜎𝑦, 76.56% data
Qualitative Results: Image 608, 𝜎𝑛 = 𝜎𝑦, 56.25% data
Qualitative Results: Image 608, 𝜎𝑛 = 𝜎𝑦, 25% data
Conclusion
• A line of inquiry that involves:
– Radar sensing
– Computational imaging
– Signal representation, compressed sensing
– Machine learning
• Sparsity is a useful asset for radar imaging, especially in nonconventional data collection scenarios (e.g., when the data are sparse, irregular, or limited)
• Deep learning methods may have the potential to learn complicated spatial patterns and enable their incorporation as priors into computational radar imaging
• Radar imaging offers a rich set of inference problems that can benefit from machine learning
– Reflectivity field, anisotropy, model (parameters), object motion