Dr. Lu FANG
Associate Professor, Smart Imaging Lab
Tsinghua-Berkeley Shenzhen Institute
Homepage: www.luvision.net
Email: fanglu at sz.tsinghua.edu.cn
Research interests: Computational Photography and Visual Computing

The Smart Imaging Lab aims to integrate optics, quantum science, and information science to explore new computational photography theories and key computational techniques, breaking through deep-rooted assumptions and physical limitations in conventional imaging and sensing, such as rectilinear light propagation in geometrical optics, infinite light speed, and the diffraction limit of optical imaging systems. We study the intrinsic representation of cross-dimensional visual data and reveal the redundancy and sparsity in such visual big data.

Project: Multi-Dimension Multi-Scale High-Resolution Computational Instrument
An interdisciplinary project spanning Neural Science, Optics, Computational Imaging, Biomedicine, and Signal Processing

The development of imaging tools is increasingly crucial for biological study. For example, large-scale imaging of neural-network activity in vivo, one of the major goals of the BRAIN Initiative, is fundamental to neuroscience. However, conventional microscopes cannot achieve both high optical resolution and a large field of view (FoV) simultaneously. We broke through this bottleneck by developing a novel video-rate, sub-gigapixel macroscope with centimeter-scale FoV and sub-micron resolution, named the Real-time, Ultra-large-Scale imaging at High resolution (RUSH) macroscope. It consists mainly of an objective with large FoV and high resolution, and a camera array for high-throughput image sensing. To the best of our knowledge, RUSH currently offers the largest FoV and highest throughput of any high-resolution macroscope.
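To make the camera-array idea concrete, here is a minimal sketch of assembling per-sensor tiles into one large image. This is a hypothetical illustration, not the RUSH pipeline: it assumes pre-registered, equal-sized, non-overlapping tiles in a row-major grid, whereas a real camera-array instrument must also correct per-sensor distortion and blend overlapping fields of view.

```python
import numpy as np

def assemble_mosaic(tiles, grid_shape):
    """Place a row-major list of equal-sized sensor tiles into one mosaic.

    Hypothetical sketch: assumes tiles are already registered and do not
    overlap; real pipelines also handle distortion correction and blending.
    """
    rows, cols = grid_shape
    th, tw = tiles[0].shape
    mosaic = np.zeros((rows * th, cols * tw), dtype=tiles[0].dtype)
    for idx, tile in enumerate(tiles):
        r, c = divmod(idx, cols)  # row-major tile position
        mosaic[r * th:(r + 1) * th, c * tw:(c + 1) * tw] = tile
    return mosaic

# Toy example: a 2x2 grid of 4x4 tiles yields an 8x8 mosaic.
tiles = [np.full((4, 4), i, dtype=np.uint8) for i in range(4)]
big = assemble_mosaic(tiles, (2, 2))
```

The point of the sketch is the throughput structure: each sensor contributes an independent tile, so capture bandwidth scales with the number of cameras rather than with any single sensor.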
With this novel macroscope, we demonstrated various applications including high-content pathological slide screening, high-content drug screening, and large-scale in vivo imaging of neural-network activity, neurovascular coupling, and neuron-immune interaction across the brain cortex.

Project: Multiscale Camera Array for Gigapixel Videography
An interdisciplinary project spanning Computational Imaging, Optical Design, Signal Processing, and Computer Vision

Traditionally, video systems have assumed that the resolution of the camera matches the resolution of the display: HD video uses HD cameras and displays, 4K video uses 4K cameras and displays, and so on. The recent development of gigapixel and VR video systems has illustrated the potential of, and need for, systems in which the camera captures substantially more image information than the display can show. These systems use tiled multiscale image structures that let viewers interactively explore the captured image stream. Size, weight, power, and cost are the central challenges in gigapixel video. To this end, we present a method for gigapixel videography that is efficient in budget, sensor bandwidth, and setup labor, using a novel multiscale camera array. Our capture system consists of a reference camera with a short-focus lens, which captures a reference video with a comparably large field of view, and a parallel set of local-view cameras, each with a long-focus lens, which capture high-definition local-view videos. This configuration enables gigapixel videography by independently warping each local-view video to the reference video, and allows a flexible, adaptive, and movable local-view camera arrangement.
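The core per-camera operation above, warping a local view into reference coordinates, can be sketched with a planar homography and inverse warping. This is a simplified illustration under stated assumptions: `H` is a known 3x3 homography mapping reference pixel coordinates to local-view coordinates, sampling is nearest-neighbour, and out-of-view pixels are left at zero. A real gigapixel pipeline would estimate the mapping per camera (e.g. from feature matches) and use higher-order interpolation and blending.

```python
import numpy as np

def warp_to_reference(local_img, H, ref_shape):
    """Inverse-warp a local-view image into reference-camera coordinates.

    Hypothetical sketch: H maps reference pixels (x, y, 1) to local-view
    coordinates; nearest-neighbour sampling; unmapped pixels stay zero.
    """
    h, w = ref_shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])  # homogeneous coords
    mapped = H @ pts
    u = np.round(mapped[0] / mapped[2]).astype(int)  # local x
    v = np.round(mapped[1] / mapped[2]).astype(int)  # local y
    out = np.zeros(ref_shape, dtype=local_img.dtype)
    lh, lw = local_img.shape
    valid = (u >= 0) & (u < lw) & (v >= 0) & (v < lh)
    out.ravel()[valid] = local_img[v[valid], u[valid]]
    return out

# Toy example: a pure translation by (1, 1) from reference to local view.
local = np.arange(16, dtype=np.uint8).reshape(4, 4)
H = np.array([[1, 0, 1], [0, 1, 1], [0, 0, 1]], dtype=float)
warped = warp_to_reference(local, H, (4, 4))
```

Because each local view is warped independently against the shared reference, cameras can be added, removed, or repointed without recalibrating the whole array, which is what makes the flexible, movable camera arrangement practical.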