Solving inverse problems using methods from deep learning: An application in tomography
Ozan Öktem, Department of Mathematics KTH
Time: Thu 2017-10-12 09.15 - 10.00
Location: Air&Fire, SciLifeLab
Abstract: Research in theory and algorithms for solving (ill-posed) inverse problems has progressed rapidly during the last three decades. Current state-of-the-art methods for reconstruction are based on solving an optimisation problem in which the need to fit measured data is balanced against the need to avoid overfitting, using a priori information. These generic, yet adaptable, approaches provide the best results in terms of "quality" when they are properly set up, as shown for example in some stunning applications of compressed sensing. On the other hand, such variational approaches come with three major drawbacks: (1) a heavy computational burden, (2) difficulty in accounting for more complex, task-related a priori information, and (3) the need for proper weighting of the regularisation to avoid overfitting. It is largely for these reasons that such methods are only now entering clinical practice for tomographic image reconstruction. Meanwhile, a series of papers in recent years demonstrated the successful application of convolutional neural networks, leading to state-of-the-art results in practically any image processing task. Key aspects were the use of many network layers, huge amounts of training data, GPU-accelerated implementations, and well-chosen optimisation algorithms. It was clearly tempting to use such techniques also for reconstruction, but early attempts proved futile.
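As a toy illustration (not part of the talk itself), the variational approach described above can be sketched as minimising a data-fit term plus a weighted regulariser. The example below, a rough sketch under simplifying assumptions, uses a Gaussian-blur matrix as a stand-in for a tomographic forward operator and plain Tikhonov regularisation solved by gradient descent; all names and parameter values are illustrative.

```python
import numpy as np

# Toy ill-posed inverse problem: recover x from y = A x + noise.
# A is a smoothing (Gaussian-blur) operator, a stand-in for e.g. the
# ray transform in tomography; smoothing makes naive inversion unstable.
rng = np.random.default_rng(0)
n = 50
A = np.array([[np.exp(-0.5 * ((i - j) / 2.0) ** 2) for j in range(n)]
              for i in range(n)])
A /= A.sum(axis=1, keepdims=True)

x_true = np.zeros(n)
x_true[15:35] = 1.0                          # piecewise-constant "image"
y = A @ x_true + 0.01 * rng.standard_normal(n)   # simulated noisy data

# Variational reconstruction: minimise  ||A x - y||^2 + lam * ||x||^2
# by plain gradient descent.  The weight lam balances data fit against
# overfitting to noise -- drawback (3) is exactly the choice of lam.
lam, step = 0.05, 0.5
x = np.zeros(n)
for _ in range(500):
    grad = 2.0 * A.T @ (A @ x - y) + 2.0 * lam * x
    x -= step * grad

print("reconstruction error:", np.linalg.norm(x - x_true))
```

In practice the quadratic penalty would be replaced by an edge-preserving regulariser such as total variation, and the iteration count makes the computational burden (drawback 1) apparent.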
The talk will outline a recently developed approach that renders (deep) convolutional neural networks applicable to a wide range of inverse problems, illustrated here by image reconstruction in tomography. A key element is to embed physics models for the data and the noise into the neural network. The resulting approach outperforms the current state-of-the-art in terms of "quality", while also addressing the three drawbacks that come with variational methods. Furthermore, the amount of training data and the network size can be kept surprisingly small.