John E. Tansley, Martin J. Oldfield and
David J.C. MacKay
We examine the problem of deconvolving blurred text. This is a task
in which there is strong prior knowledge (e.g., font
characteristics) that is hard to express computationally. These priors
are implicit, however, in mock data for which the true image is known.
When trained on such mock data, a neural network can learn a
solution to the image deconvolution problem that exploits
this implicit prior knowledge. Prior knowledge of image positivity can
be hard-wired into the functional architecture of the network, but we
leave it to the network to learn most of the parameters of the task
from the data. We do not need to tell the network about the point
spread function, the intrinsic correlation function, or the noise process.
The networks have been compared with the optimal linear filter and
with the Bayesian algorithm MemSys on a variety of problems. Once
trained, the networks reconstructed images faster than MemSys, with
similar performance.
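
The abstract does not specify the network's architecture, so the following toy 1-D example is only an illustrative sketch of the two ideas it states: positivity hard-wired via an output nonlinearity (here softplus, an assumption), and a deconvolution mapping learned purely from mock (blurred, true) pairs. The Gaussian point spread function, noise level, layer sizes, and training details are all hypothetical; note the PSF is used only to generate data and is never shown to the network.

```python
# Illustrative sketch (not the paper's architecture): a one-layer network
# that learns 1-D deconvolution from mock (blurred, true) pairs.
# Positivity is hard-wired by passing the output through softplus; the
# point spread function below is an assumption used only to make data.
import numpy as np

rng = np.random.default_rng(0)
n = 32                                    # signal length (hypothetical)
psf = np.exp(-0.5 * (np.arange(-3, 4) / 1.5) ** 2)
psf /= psf.sum()                          # assumed Gaussian blur, unit area

def mock_pair(batch):
    """Sparse positive 'true' signals and their noisy blurred versions."""
    true = rng.random((batch, n)) * (rng.random((batch, n)) < 0.1)
    blur = np.array([np.convolve(t, psf, mode="same") for t in true])
    blur += 0.01 * rng.standard_normal(blur.shape)  # assumed noise level
    return blur, true

def softplus(z):
    return np.logaddexp(0.0, z)           # log(1 + e^z), always positive

# One linear layer followed by softplus -> reconstruction is positive
# for any weight setting, so the constraint need not be learned.
W = 0.01 * rng.standard_normal((n, n))
b = np.zeros(n)
lr = 1.0

for step in range(3000):
    x, t = mock_pair(64)
    z = x @ W.T + b
    y = softplus(z)
    # Mean-squared-error loss; backpropagate through the softplus,
    # whose derivative is the logistic sigmoid of z.
    grad_z = 2.0 * (y - t) * (1.0 / (1.0 + np.exp(-z))) / y.size
    W -= lr * grad_z.T @ x
    b -= lr * grad_z.sum(axis=0)

x, t = mock_pair(1)
print("reconstruction error:", np.mean((softplus(x @ W.T + b) - t) ** 2))
```

The point of the sketch is architectural: the softplus output guarantees positivity by construction, while everything about the point spread function and the noise process is absorbed implicitly from the training pairs rather than being told to the network.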