Optimization

qnetvo.gradient_descent(cost, init_settings, num_steps=150, step_size=0.1, sample_width=25, grad_fn=None, verbose=True, interface='autograd', optimizer=None, optimizer_kwargs={})

Performs a numerical gradient descent optimization on the provided cost function. The optimization is seeded with the (typically random) init_settings, which are then varied to minimize the cost. A usage sketch follows the parameter list below.

Parameters:
  • cost (function) – The cost function to be minimized with gradient descent.

  • init_settings (array-like[float]) – The initial settings for the optimization; must be a valid input to the cost function.

  • num_steps (int, optional) – The number of gradient descent iterations, defaults to 150.

  • step_size (float, optional) – The learning rate for the gradient descent, defaults to 0.1.

  • sample_width (int, optional) – The number of steps between “sampled” costs, which are printed and returned to the user; defaults to 25.

  • grad_fn (function, optional) – A custom gradient function; defaults to None, in which case the standard numerical gradient is applied.

  • verbose (bool, optional) – If True, progress is printed during the optimization, defaults to True.

  • interface (str, optional) – Specifies the optimizer software, either "autograd" or "tf" (TensorFlow); defaults to "autograd".

  • optimizer (str, optional) – Specifies the PennyLane optimizer to use; defaults to qml.GradientDescentOptimizer. Set to "adam" to use qml.AdamOptimizer; note that this requires interface="autograd".

  • optimizer_kwargs (dict, optional) – Keyword arguments passed to the specified optimizer.
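A minimal usage sketch is given here; the quadratic cost and initial settings are hypothetical placeholders standing in for a network cost function and its scan settings:

    from pennylane import numpy as np
    import qnetvo

    # Hypothetical stand-in for a network cost function; any function
    # accepting the settings array and returning a scalar will do.
    def cost(settings):
        return (settings[0] - 1) ** 2 + settings[1] ** 2

    # Trainable initial settings for the default "autograd" interface.
    init_settings = np.array([0.5, -0.5], requires_grad=True)

    opt_dict = qnetvo.gradient_descent(
        cost,
        init_settings,
        num_steps=50,
        step_size=0.1,
        sample_width=10,
        verbose=False,
    )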

Returns:

Data regarding the gradient descent optimization.

Return type:

dictionary containing the following keys:

  • opt_score (float) - The maximized reward, equal to -(min_cost).

  • opt_settings (array-like[float]) - The setting for which the optimum is achieved.

  • scores (array[float]) - The list of rewards sampled during the gradient descent.

  • samples (array[int]) - A list containing the iteration number of each sampled step.

  • settings_history (array[array-like]) - A list of the settings found at each intermediate step of the gradient descent.

  • datetime (string) - The date and time in UTC when the optimization occurred.

  • step_times (list[float]) - The time elapsed during each sampled optimization step.

  • step_size (float) - The learning rate of the optimization.
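Continuing the sketch above, the returned dictionary can be inspected by key; the names below match the keys listed above:

    print(opt_dict["opt_score"])     # maximized reward, -(min_cost)
    print(opt_dict["opt_settings"])  # settings achieving the optimum

    # Iteration numbers and rewards for each sampled step.
    for step, score in zip(opt_dict["samples"], opt_dict["scores"]):
        print(step, score)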

Warning

The gradient_descent function minimizes the cost function; however, the general use case within this project is to maximize the violation of a Bell inequality. This maximization is assumed within gradient_descent and is applied by multiplying the cost by (-1). This is an abuse of the function name and will be resolved in a future commit by having gradient_descent return the minimized cost rather than the maximized reward; gradient_descent will then be wrapped by a gradient_ascent function which maximizes a reward function equivalent to -(cost).
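As a consequence of this sign convention, a quantity to be maximized is negated before being passed as the cost. A short sketch, using a hypothetical reward function and the init_settings from the earlier example:

    def reward(settings):
        # Hypothetical quantity to maximize, standing in for a Bell score.
        return -((settings[0] - 2) ** 2)

    def cost(settings):
        return -reward(settings)  # gradient_descent minimizes this cost

    results = qnetvo.gradient_descent(cost, init_settings, num_steps=50, verbose=False)
    # results["opt_score"] approximates the maximal reward, i.e., -(min cost).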

Raises:

ValueError – If the interface is not supported.