blackjax.optimizers.dual_averaging#

Module Contents#

Classes#

DualAveragingState

State carried through the dual averaging procedure.

Functions#

dual_averaging(→ tuple[Callable, Callable, Callable])

Find the state that minimizes an objective function using a primal-dual subgradient method.

class DualAveragingState[source]#

State carried through the dual averaging procedure.

log_x

The logarithm of the current state.

log_x_avg

The time-weighted average of the values that the logarithm of the state has taken so far.

step

The current iteration step.

avg_error

The time average of the value of the quantity \(H_t\), the difference between the target acceptance rate and the current acceptance rate.

mu

Arbitrary point towards which the values of log_x are shrunk. Chosen to be \(\log(10 \epsilon_0)\), where \(\epsilon_0\) is the step size given by the find_reasonable_step_size procedure in the step size adaptation context.

log_x: float[source]#
log_x_avg: float[source]#
step: int[source]#
avg_error: float[source]#
mu: float[source]#
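
For reference, a minimal sketch of the state container implied by the annotations above, assuming a NamedTuple representation (the actual blackjax definition may differ in details):

    from typing import NamedTuple

    class DualAveragingState(NamedTuple):
        # Sketch of the documented state container; field meanings follow
        # the attribute descriptions above.
        log_x: float       # logarithm of the current iterate
        log_x_avg: float   # time-weighted average of log_x
        step: int          # current iteration step
        avg_error: float   # time average of the error H_t
        mu: float          # point the iterates are shrunk towards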
dual_averaging(t0: int = 10, gamma: float = 0.05, kappa: float = 0.75) tuple[Callable, Callable, Callable][source]#

Find the state that minimizes an objective function using a primal-dual subgradient method.

See [Nes09] for a detailed explanation of the algorithm and its mathematical properties.
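
In the notation of [HG+14], writing \(x_t\) for the quantity being optimized and \(H_i\) for the error fed to the scheme at step \(i\), the updates take roughly the following form (a sketch; the implementation may differ in indexing details), with \(t_0\), \(\gamma\), \(\kappa\) and \(\mu\) as described below:

\[
\log x_{t+1} = \mu - \frac{\sqrt{t}}{\gamma\,(t + t_0)} \sum_{i=1}^{t} H_i,
\qquad
\overline{\log x}_{t+1} = t^{-\kappa} \log x_{t+1} + \left(1 - t^{-\kappa}\right) \overline{\log x}_t.
\]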

Parameters:
  • t0 (float >= 0) – Free parameter that stabilizes the initial iterations of the algorithm. Large values may slow down convergence. Introduced in [HG+14] with a default value of 10.

  • gamma – Controls the speed of convergence of the scheme. The authors of [HG+14] recommend a value of 0.05.

  • kappa (float in ]0.5, 1]) – Controls the weights of past steps in the current update. The scheme will quickly forget earlier steps for small values of kappa. Introduced in [HG+14] with a recommended value of 0.75.

Returns:

  • init – A function that initializes the state of the dual averaging scheme.

  • update – A function that updates the state of the dual averaging scheme given the latest value of the error.

  • final – A function that returns, from the final state of the scheme, the value that minimizes the objective function.
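
A minimal usage sketch on a toy problem. The exact call signatures are assumptions for illustration: init is assumed to take the initial value of the quantity being adapted, update the current state and the latest error \(H_t\), and final to return the time-averaged optimum.

    import jax.numpy as jnp

    from blackjax.optimizers.dual_averaging import dual_averaging

    init, update, final = dual_averaging(t0=10, gamma=0.05, kappa=0.75)

    # Toy problem: tune eps so that a simulated "acceptance rate" exp(-eps)
    # matches a target of 0.8, i.e. drive the error H_t towards zero.
    target = 0.8
    state = init(1.0)  # assumed: init takes the initial value of eps
    for _ in range(500):
        eps = jnp.exp(state.log_x)
        acceptance = jnp.exp(-eps)                  # stand-in for an observed statistic
        state = update(state, target - acceptance)  # assumed: update takes (state, error)

    eps_opt = final(state)  # assumed: final returns the averaged optimum
    # eps_opt should approach -log(0.8) ≈ 0.223, where acceptance == target.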