Commit ef6edc27 authored by Simon Maretzke

bit of documentation

parent efec6821
function [result, stats] = fADMM(proxH, proxG, v_hat_start, settings)
% Implements the accelerated variant of the Alternating-Direction Method of Multipliers (ADMM)
% proposed in the article
%
%    Goldstein, T., O'Donoghue, B., Setzer, S., & Baraniuk, R. (2014).
%    Fast alternating direction optimization methods.
%    SIAM Journal on Imaging Sciences, 7(3), 1588-1623.
%
% (The method is described there in a more general form than implemented here.)
%
% The algorithm iteratively solves a minimization problem of the form
%
%    result = argmin_x H(x) + G(x)
%
% where H and G are convex functionals for which implementations of the proximal operators
%
%    proxH(y_0, sigma) = argmin_y H(y) + 1/(2*sigma) * ||y - y_0||_2^2
%    proxG(x_0, sigma) = argmin_x G(x) + 1/(2*sigma) * ||x - x_0||_2^2
%
% are available (||x||_2 denotes the Euclidean 2-norm of x).
%
% Last modified on April 24 2019 by Simon Maretzke
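%
% Example (a minimal usage sketch, not part of the toolbox): denoise a signal b by solving
% argmin_x 0.5*||x - b||_2^2 + lambda*||x||_1, identifying the quadratic term with H and the
% l1-term with G. Only the maxIt field of the settings structure is taken from the defaults
% below; all other choices, and the use of zeros as the starting guess v_hat_start, are
% illustrative assumptions.
%
%    b = randn(256, 1);                                                  % noisy data
%    lambda = 0.5;                                                       % regularization weight
%    proxH = @(y0, sigma) (sigma * b + y0) / (1 + sigma);                % prox of 0.5*||y - b||_2^2
%    proxG = @(x0, sigma) sign(x0) .* max(abs(x0) - lambda * sigma, 0);  % soft-thresholding: prox of lambda*||x||_1
%    settings.maxIt = 100;
%    x = fADMM(proxH, proxG, zeros(size(b)), settings);
%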
% HoloTomoToolbox
% Copyright (C) 2019 Institut fuer Roentgenphysik, Universitaet Goettingen
%
% This program is free software: you can redistribute it and/or modify
% it under the terms of the GNU General Public License as published by
% the Free Software Foundation, either version 3 of the License, or
% (at your option) any later version.
%
% This program is distributed in the hope that it will be useful,
% but WITHOUT ANY WARRANTY; without even the implied warranty of
% MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
% GNU General Public License for more details.
%
% You should have received a copy of the GNU General Public License
% along with this program. If not, see <http://www.gnu.org/licenses/>.
% Complete settings with default parameters
defaults.maxIt = 1e2;
......
function [result, stats] = projectedGradientDescent(F, initialGuess, settings)
% Computes a (local) minimum of a functional F by projected/proximal gradient-descent
% iterations. The implementation uses Barzilai-Borwein stepsizes, a non-monotone linesearch
% and a stopping rule based on the relative residual, as proposed in the article:
%
% Goldstein, T., Studer, C., & Baraniuk, R. (2014). A field guide to forward-backward
% splitting with a FASTA implementation. arXiv preprint arXiv:1411.3406.
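%
% As a rough sketch (generic proximal gradient / forward-backward splitting; the exact
% interface assumed by this function may differ), one iteration for minimizing f(x) + g(x)
% with smooth f and proximable g reads
%
%    x_new = prox_g(x - tau * grad_f(x), tau);
%
% where the stepsize tau is chosen e.g. by the Barzilai-Borwein rule (see bbStepsize below).
%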
% HoloTomoToolbox
% Copyright (C) 2019 Institut fuer Roentgenphysik, Universitaet Goettingen
%
% This program is free software: you can redistribute it and/or modify
% it under the terms of the GNU General Public License as published by
% the Free Software Foundation, either version 3 of the License, or
% (at your option) any later version.
%
% This program is distributed in the hope that it will be useful,
% but WITHOUT ANY WARRANTY; without even the implied warranty of
% MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
% GNU General Public License for more details.
%
% You should have received a copy of the GNU General Public License
% along with this program. If not, see <http://www.gnu.org/licenses/>.
% Default parameters
defaults.initialStepsize = 1;
......
function [stepsize, cache] = bbStepsize(F, x, y, s, cache)
% Input arguments:
%
% F: Objective functional
% x: Evaluation point of the gradient descent step
% y: Value of the functional at x, y = F(x)
% s: Computed gradient descent direction s = grad(F)(x).
% cache: Cache structure that stores the required information about the preceding iterate
%
% Output arguments:
%
% stepsize: The computed stepsize
% cache: Updated cache structure
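%
% A minimal sketch of the two standard Barzilai-Borwein formulas (the variant and the
% safeguards used by this implementation may differ): with dx = x - cache.x and
% ds = s - cache.s,
%
%    stepsize_BB1 = (dx(:)' * dx(:)) / (dx(:)' * ds(:));
%    stepsize_BB2 = (dx(:)' * ds(:)) / (ds(:)' * ds(:));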
if ~isfield(cache, 'bb_last_step')
    cache.bb_last_step = 0;
    stepsize = cache.stepsize;
......
cache.s = s;
cache.stepsize = stepsize;
end
function [stepsize, cache] = constantStepsize(F, x, y, s, cache)
% Input arguments:
%
% F: Objective functional
% x: Evaluation point of the gradient descent step
% y: Value of the functional at x, y = F(x)
% s: Computed gradient descent direction s = grad(F)(x).
% cache: Cache structure that stores the required information about the preceding iterate
%
% Output arguments:
%
% stepsize: The computed stepsize
% cache: Updated cache structure
stepsize = cache.stepsize;
cache.x = x;
cache.s = s;
......