Q-MM: A Python toolbox for Quadratic Majorization-Minimization

./docs/qmm.png

Q-MM is a Python implementation of Majorize-Minimize Quadratic optimization algorithms. The algorithms provided here come from

[1] C. Labat and J. Idier, "Convergence of Conjugate Gradient Methods with a
Closed-Form Stepsize Formula," J Optim Theory Appl, p. 18, 2008.

and

[2] E. Chouzenoux, J. Idier, and S. Moussaoui, "A Majorize–Minimize Strategy
for Subspace Optimization Applied to Image Restoration," IEEE Trans. on
Image Process., vol. 20, no. 6, pp. 1517–1528, Jun. 2011, doi:
10.1109/TIP.2010.2103083.

See the documentation for more background. If you use this code, please cite the references above; a citation of this toolbox will also be appreciated. You can also star ⭐ the repo.

@software{qmm,
 title = {Q-MM: The Quadratic Majorize-Minimize Python toolbox},
 author = {Orieux, Fran\c{c}ois and Abirizk, Ralph},
 url = {https://github.com/forieux/qmm},
}

Quadratic Majorize-Minimize

The Q-MM optimization algorithms compute the minimizer of objective functions of the form

J(x) = ∑k μk ψk(Vk·x - ωk)

where x is the unknown vector, Vk a linear operator, ωk a fixed data vector, μk a scalar hyperparameter, ψk(u) = ∑i φk(ui), and φk a function that must be differentiable, even, coercive, with φk(√·) concave and 0 < φk'(u) / u < +∞.

The optimization is done thanks to quadratic surrogate functions. In particular, no line search or sub-iteration is necessary, and closed-form formulas for the step are used, with guaranteed convergence.
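To illustrate, here is a small self-contained sketch (independent of the qmm API, with an assumed Huber normalization that may differ from qmm.Huber) of the quadratic majorant that follows from the concavity of φ(√·), using the Huber potential:

import numpy as np

delta = 1.0

def huber(u):
    # Huber potential: quadratic near 0, linear in the tails (one common normalization)
    return np.where(np.abs(u) <= delta, u**2 / 2, delta * np.abs(u) - delta**2 / 2)

def huber_prime(u):
    # derivative of the Huber potential
    return np.where(np.abs(u) <= delta, u, delta * np.sign(u))

def majorant(u, v):
    # quadratic surrogate of φ at the current point v:
    # q(u; v) = φ(v) + φ'(v)/(2v) · (u² - v²), tangent at u = ±v and above φ elsewhere
    return huber(v) + huber_prime(v) / (2 * v) * (u**2 - v**2)

u = np.linspace(-6, 6, 1001)
v = 2.5
assert np.isclose(majorant(v, v), huber(v))        # tangency at the current point
assert np.all(majorant(u, v) >= huber(u) - 1e-12)  # majorization everywhere on the grid

Minimizing such a quadratic surrogate has a closed-form solution, which is what removes the need for a line search.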

A classical example, illustrated in the figure below with an image deconvolution problem, is the resolution of an inverse problem by minimizing

J(x) = ||y - H·x||² + μ ψ(V·x)

where H is a low-pass forward model, V a regularization operator that approximates the gradient (a kind of high-pass filter), and ψ an edge-preserving function such as the Huber function. The above objective is obtained with k ∈ {1, 2}, ψ1(·) = ||·||², V1 = H, ω1 = y, μ1 = 1, ψ2 = ψ, V2 = V, μ2 = μ, and ω2 = 0.
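For concreteness, here is a tiny self-contained NumPy sketch of this two-term criterion on a 1-D signal, with explicit matrices standing in for H and V (all names, sizes, and values below are illustrative, not taken from the toolbox):

import numpy as np

rng = np.random.default_rng(0)
n = 32
H = sum(np.eye(n, k=k) for k in (-1, 0, 1)) / 3  # moving-average blur (low-pass)
V = np.eye(n) - np.eye(n, k=1)                   # first-order differences (high-pass)
x_true = np.cumsum(rng.standard_normal(n))       # piecewise-smooth ground truth
y = H @ x_true + 0.05 * rng.standard_normal(n)   # blurred, noisy data

def huber(u, delta=0.1):
    # edge-preserving potential: quadratic near 0, linear in the tails
    return np.where(np.abs(u) <= delta, u**2 / 2, delta * (np.abs(u) - delta / 2))

def crit(x, mu=0.5):
    # J(x) = ||y - H·x||² + μ ψ(V·x) with ψ(u) = ∑i φ(ui)
    return np.sum((y - H @ x) ** 2) + mu * np.sum(huber(V @ x))

print(crit(x_true), crit(np.zeros(n)))  # typically much smaller at the true signal than at zero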

./docs/image.png

Features

  • The mmmg, Majorize-Minimize Memory Gradient algorithm. See documentation and [2] for details.
  • The mmcg, Majorize-Minimize Conjugate Gradient algorithm. See documentation and [1] for details.
  • No line search: the step is obtained from a closed-form formula without sub-iteration.
  • No conjugacy choice: a conjugacy strategy is not necessary thanks to the subspace nature of the algorithms. The mmcg algorithm uses a Polak-Ribière formula.
  • Generic and flexible: there is no restriction on the number of regularizers, their types, etc., nor on the data adequacy term.
  • Provided base classes for objectives and losses allow easy and fast implementation.
  • Just one file if you like quick and dirty installation, but also available with pip.
  • Comes with examples of implemented linear operators.

Installation and documentation

Q-MM is essentially just one file qmm.py. We recommend using poetry for installation:

poetry add qmm

The package can also be installed with pip. More options are described in the documentation.
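Assuming the same package name on PyPI, the equivalent pip command is

pip install qmm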

Q-MM only depends on NumPy and Python ≥ 3.6.

Example

The demo.py presents an example of image deconvolution. The first step is to implement the operators V and the adjoint Vᵀ as callables (functions or methods). The user is in charge of these operators, and each callable must accept a single NumPy array x and return a single value (functools.partial from the standard library is useful here). There are no constraints on the shape; everything is vectorized internally.
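As an illustration (not taken from demo.py), such callables could be built with an FFT-based circular convolution for H and circular first-order differences for V; the psf, shape, and variable names below are illustrative assumptions:

import functools
import numpy as np

shape = (64, 64)                        # image size (illustrative)
psf = np.ones((5, 5)) / 25              # simple box blur as the point-spread function
kernel_fft = np.fft.fft2(psf, s=shape)  # transfer function of the blur

def conv2(x, tf):
    # circular 2-D convolution computed in the Fourier domain
    return np.real(np.fft.ifft2(np.fft.fft2(x) * tf))

# Forward model H, its adjoint Hᵀ, and the composition HᵀH, as callables of x only
H = functools.partial(conv2, tf=kernel_fft)
Ht = functools.partial(conv2, tf=kernel_fft.conj())
HtH = functools.partial(conv2, tf=np.abs(kernel_fft) ** 2)

def V(x):
    # circular first-order differences along both axes, stacked on a new leading axis
    return np.stack([x - np.roll(x, 1, axis=0), x - np.roll(x, 1, axis=1)])

def Vt(u):
    # exact adjoint of V for these circular differences
    return (u[0] - np.roll(u[0], -1, axis=0)) + (u[1] - np.roll(u[1], -1, axis=1))

These H, Ht, HtH, V, and Vt are the callables passed to QuadObjective and Objective below.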

After importing qmm, the user must instantiate Potential objects that implement φ and Objective objects that implement μ ψ(V·x - ω):

import qmm
phi = qmm.Huber(delta=10)  # φ
data_adeq = qmm.QuadObjective(H, Ht, HtH, data=data)  # ||y - H·x||²
prior = qmm.Objective(V, Vt, phi, hyper=0.01)  # μ ψ(V·x) = μ ∑i φ(viᵀ·x)

Then you can run the algorithm:

res = qmm.mmmg([data_adeq, prior], init, max_iter=200)

where [data_adeq, prior] means that the two objective functions are summed. For more details, see the documentation.

Contribute

Author

If you are having issues, please let us know:

orieux AT l2s.centralesupelec.fr

More information about me here. F. Orieux and R. Abirizk are affiliated with the Signal and Systems Laboratory L2S.

Acknowledgement

The authors would like to thank J. Idier, S. Moussaoui, and É. Chouzenoux. É. Chouzenoux also has a MATLAB package implementing 3MG for image deconvolution, which can be found on her webpage.

License

The project is licensed under the GPLv3 license.
