Spatial resolution is one of the fundamental bottlenecks in time-resolved imaging. Since each pixel measures a scene-dependent time profile, there is a technological limit on the size of the pixel arrays that can be used to perform measurements simultaneously. To overcome this barrier, in this paper we propose a low-complexity, one-bit sensing scheme. On the data-capture front, the time-resolved measurements are mapped to a sequence of +1 and -1 values. This leads to an extremely simple implementation while at the same time introducing a new form of information loss. On the image-recovery front, our one-bit time-resolved imaging scheme is complemented by a non-iterative recovery algorithm that handles both single and multiple light paths. Extensive computer simulations and physical experiments, benchmarked against conventional Time-of-Flight imaging data, corroborate our theoretical framework. Our low-complexity alternative to time-resolved imaging can thus potentially lead to a new imaging methodology.

Camera sensors rely on global or rolling shutter functions to expose an image. This fixed-function approach severely limits a sensor's ability to capture high-dynamic-range (HDR) scenes and resolve high-speed dynamics. Spatially varying pixel exposures have been introduced as a powerful computational photography approach to optically encode irradiance on a sensor and computationally recover additional information about a scene, but existing approaches rely on heuristic coding schemes and bulky spatial light modulators to optically implement these exposure functions. Here, we introduce neural sensors as a methodology for optimizing per-pixel shutter functions jointly with a differentiable image processing method, such as a neural network, in an end-to-end fashion. Moreover, we demonstrate how to leverage emerging programmable and re-configurable sensor-processors to implement the optimized exposure functions directly on the sensor. Our system takes specific limitations of the sensor into account to optimize physically feasible optical codes, and we demonstrate state-of-the-art performance for HDR and high-speed compressive imaging both in simulation and experimentally with real scenes.

Lensless cameras, while extremely useful for imaging in constrained scenarios, struggle to resolve scenes with large depth variations. To address this, we propose imaging with a set of mask patterns displayed on a programmable mask and introduce a computational focusing operator that helps resolve the depth of scene points. As a result, the proposed imager can resolve dense scenes with large depth variations, allowing for more practical applications of lensless cameras. We also present a fast reconstruction algorithm for scenes at multiple depths that reduces reconstruction time by two orders of magnitude. Finally, we build a prototype to show that the proposed method improves both the image quality and the depth resolution of lensless cameras.
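As a rough illustration of the multi-depth reconstruction idea in the lensless-imaging abstract above, the sketch below assumes a simple convolutional forward model in which each programmable-mask pattern produces a depth-dependent point spread function; a computational focusing operator can then be approximated by correlating every measurement with the candidate-depth PSF and averaging. The forward model and all names here are assumptions made for illustration, not the paper's actual algorithm.

    import numpy as np

    def focus_at_depth(measurements, psfs):
        # Correlate each masked measurement with the PSF its mask would produce
        # for a point source at the candidate depth, then average over masks.
        # Content at that depth adds coherently; other depths blur out.
        acc = np.zeros(measurements[0].shape)
        for y, h in zip(measurements, psfs):
            Y = np.fft.fft2(y)
            H = np.fft.fft2(h, s=y.shape)
            acc += np.real(np.fft.ifft2(Y * np.conj(H)))  # FFT-based correlation
        return acc / len(measurements)

    # Depth sweep (usage sketch): refocus at every candidate depth and pick,
    # per pixel, the depth whose refocused image responds most strongly.
    # stack = np.stack([focus_at_depth(meas, psf_bank[d]) for d in range(n_depths)])
    # depth_map = stack.argmax(axis=0)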
Fuzzy objects composed of hair, fur, or feathers are impossible to scan even with the latest active or passive 3D scanners. We present a novel and practical neural rendering (NR) technique, called neural opacity point cloud (NOPC), that allows high-quality rendering of such fuzzy objects from any viewpoint. NOPC employs a learning-based scheme to extract geometric and appearance features, including opacity, on 3D point clouds. It then maps the 3D features onto virtual viewpoints, where a new U-Net-based NR module handles noisy and incomplete geometry while maintaining translation equivariance. Comprehensive experiments on existing and new datasets show that NOPC produces photorealistic renderings from multi-view setups such as a turntable system for hair and furry-toy captures.

Tensor Principal Component Pursuit (TPCP) is a powerful approach to Tensor Robust Principal Component Analysis (TRPCA), where the goal is to decompose a data tensor into a low-tubal-rank part plus a sparse residual. TPCP is known to be effective under certain tensor incoherence conditions, which can be restrictive in practice. In this paper, we propose Modified-TPCP, which incorporates prior subspace information into the analysis. With the aid of this prior information, the proposed method is able to recover the low-tubal-rank and sparse components under a significantly weaker incoherence assumption. We further design an efficient algorithm to implement Modified-TPCP based on the Alternating Direction Method of Multipliers (ADMM). The promising performance of the proposed method is supported by simulations and real-data applications.

Recovering the shape and reflectance of non-Lambertian surfaces remains a challenging problem in computer vision, since view-dependent appearance invalidates the traditional photo-consistency constraint. In this paper, we introduce a novel concentric multi-spectral light field (CMSLF) design that recovers the shape and reflectance of surfaces of various materials in a single shot. Our CMSLF system consists of an array of cameras arranged on concentric circles, where each ring captures a specific spectrum. Coupled with a multi-spectral ring light, we sample viewpoint and lighting variations in a single shot via spectral multiplexing. We further show that our concentric camera and light-source arrangement results in a unique single-peak pattern in specularity variation across viewpoints. This property enables robust depth estimation for specular points. To estimate depth and the multi-spectral reflectance map, we formulate a physics-based reflectance model for the CMSLF under the surface camera (S-Cam) representation. Extensive synthetic and real experiments show that our method outperforms state-of-the-art shape reconstruction methods, especially for non-Lambertian surfaces.

We propose DistSurf-OF, a novel optical flow method for neuromorphic cameras. Neuromorphic cameras (or event detection cameras) are an emerging sensor modality that uses dynamic vision sensors (DVS) to asynchronously report log-intensity changes (called "events") exceeding a predefined threshold at each pixel. In the absence of an intensity value at each pixel location, we introduce the notion of a "distance surface" (the distance transform computed from the detected events) as a proxy for object texture. The distance surface is then used as input to intensity-based optical flow methods to recover two-dimensional pixel motion. Real-sensor experiments verify that the proposed DistSurf-OF accurately estimates the angle and speed of each event.
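To make the distance-surface idea in the DistSurf-OF abstract concrete, here is a minimal sketch: it builds the distance transform of a binary event map and feeds two consecutive distance surfaces to an off-the-shelf intensity-based flow estimator. OpenCV's Farneback method stands in for the intensity-based flow step, and the function names and parameters below are illustrative assumptions, not the authors' implementation.

    import numpy as np
    import cv2
    from scipy.ndimage import distance_transform_edt

    def distance_surface(event_mask):
        # Distance transform of the event map: each pixel stores its distance
        # to the nearest event, giving dense "texture" where raw events are sparse.
        # distance_transform_edt measures distance to the nearest zero, so the
        # boolean mask is inverted first (events become zeros).
        return distance_transform_edt(~event_mask).astype(np.float32)

    def distsurf_flow(events_prev, events_curr):
        # Run a conventional intensity-based optical flow method on two
        # consecutive distance surfaces instead of intensity images.
        d0 = cv2.normalize(distance_surface(events_prev), None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        d1 = cv2.normalize(distance_surface(events_curr), None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        # Farneback arguments: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags.
        return cv2.calcOpticalFlowFarneback(d0, d1, None, 0.5, 3, 15, 3, 5, 1.2, 0)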