qailab.torch.autograd#

Autograd functions for VQCs

Summary#

Classes:

ArgMax

ArgMax function.

ExpVQCFunction

Class implementing the forward and backward calculations for ExpQLayer.

Reference#

class qailab.torch.autograd.ExpVQCFunction(*args, **kwargs)[source]#

Bases: Function

Class implementing the forward and backward calculations for ExpQLayer.

static forward(fn_in: Tensor, weight: Tensor, launcher_forward: QuantumLauncher, launcher_backward: QuantumLauncher) → Tensor[source]#

Calculation of forward pass.

Parameters:
  • fn_in (torch.Tensor) – Input tensor.

  • weight (torch.Tensor) – Layer weights.

  • launcher_forward (QuantumLauncher) – Qlauncher with the forward pass algorithm.

  • launcher_backward (QuantumLauncher) – Qlauncher with the backward pass algorithm. Not used in forward, but needed here as it will get passed to setup_context().

Returns:

Distribution of forward pass.

Return type:

torch.Tensor
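
As with any torch.autograd.Function, ExpVQCFunction is not meant to be called through forward() directly; it is invoked through .apply(), which runs forward(), then setup_context(), and registers backward() on the autograd graph. A minimal usage sketch follows; the tensor shapes are illustrative only and the two launcher variables are placeholders, since constructing a QuantumLauncher is not covered on this page (the quantum calls are therefore left commented).

   import torch
   from qailab.torch.autograd import ExpVQCFunction

   # Placeholder launchers -- building QuantumLauncher objects is
   # project-specific and not documented here.
   # launcher_fwd = ...   # Qlauncher with the forward pass algorithm
   # launcher_bwd = ...   # Qlauncher with the backward pass algorithm

   x = torch.rand(4)                      # input tensor (illustrative shape)
   w = torch.rand(8, requires_grad=True)  # layer weights (illustrative shape)

   # dist = ExpVQCFunction.apply(x, w, launcher_fwd, launcher_bwd)
   # dist.sum().backward()                # populates w.grad via backward()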

static setup_context(ctx, inputs, output)[source]#

Called after forward(); saves arguments from forward() to be used later in backward().

Parameters:
  • ctx – Context object that holds information.

  • inputs – Arguments passed to forward().

  • output – Output returned by forward().

static backward(ctx, grad_output: Tensor) → tuple[Tensor, Tensor, None, None][source]#

Calculation of backward pass.

Parameters:
  • ctx – Context object supplied by autograd. Contains saved tensors and qlaunchers.

  • grad_output (torch.Tensor) – Grad from next layer.

Returns:

Gradient for the input, gradient for the weights, and None for the two launchers (backward must return one value per forward argument, but the launchers need no gradient).

Return type:

tuple[torch.Tensor, torch.Tensor, None, None]
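
The return contract of backward() mirrors the signature of forward(): one entry per forward argument, in order, with None for arguments that are not differentiated (here, the two launchers). The toy Function below is invented purely for illustration (it performs a plain element-wise scaling, no quantum circuit) but follows the same forward/setup_context/backward structure, so it can be run without any QuantumLauncher.

   import torch

   class ScaleFunction(torch.autograd.Function):
       """Toy Function with the same structure as ExpVQCFunction: a tensor
       input, a weight tensor and a non-differentiable extra argument."""

       @staticmethod
       def forward(fn_in, weight, extra):
           # `extra` plays the role of the launchers: available to the
           # computation but never differentiated.
           return fn_in * weight

       @staticmethod
       def setup_context(ctx, inputs, output):
           fn_in, weight, extra = inputs
           ctx.save_for_backward(fn_in, weight)
           ctx.extra = extra

       @staticmethod
       def backward(ctx, grad_output):
           fn_in, weight = ctx.saved_tensors
           # One return value per forward argument; `extra` gets None.
           return grad_output * weight, grad_output * fn_in, None

   x = torch.rand(3, requires_grad=True)
   w = torch.rand(3, requires_grad=True)
   out = ScaleFunction.apply(x, w, {"note": "not differentiated"})
   out.sum().backward()   # x.grad and w.grad are now populated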

class qailab.torch.autograd.ArgMax(*args, **kwargs)[source]#

Bases: Function

ArgMax function. Propagates the sum of the incoming gradient to the argmax index; the gradient is zero everywhere else.

https://discuss.pytorch.org/t/differentiable-argmax/33020

static forward(fn_in)[source]#

Forward run.

Parameters:

fn_in (torch.Tensor) – Input tensor.

Returns:

First index(es) of maximum elements.

Return type:

torch.Tensor

static setup_context(ctx, inputs, output)[source]#

Saves tensors for the backward pass.

static backward(ctx, grad_output: Tensor) → tuple[Tensor][source]#

Calculation of backward pass.

Parameters:
  • ctx – Context object supplied by autograd.

  • grad_output (torch.Tensor) – Grad from next layer.

Returns:

Grad w.r.t. input.

Return type:

tuple[torch.Tensor]
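
For illustration, the snippet below is a minimal reimplementation sketch of the behaviour described above (the summed incoming gradient is routed to the argmax index, zero elsewhere), written against the same setup_context-style Function API. ArgMaxSketch is not the qailab class; it assumes a 2-D input and argmax over the last dimension, which the actual implementation may handle differently.

   import torch

   class ArgMaxSketch(torch.autograd.Function):
       """Argmax whose backward places the summed gradient at the argmax index."""

       @staticmethod
       def forward(fn_in):
           # Cast to the input dtype so the output can carry a gradient.
           return torch.argmax(fn_in, dim=-1, keepdim=True).to(fn_in.dtype)

       @staticmethod
       def setup_context(ctx, inputs, output):
           (fn_in,) = inputs
           ctx.save_for_backward(fn_in, output)

       @staticmethod
       def backward(ctx, grad_output):
           fn_in, idx = ctx.saved_tensors
           grad_in = torch.zeros_like(fn_in)
           # Sum of the incoming gradient at the argmax position, zero elsewhere.
           grad_in.scatter_(-1, idx.long(), grad_output.sum(dim=-1, keepdim=True))
           return (grad_in,)

   x = torch.tensor([[0.1, 0.7, 0.2]], requires_grad=True)
   y = ArgMaxSketch.apply(x)   # tensor([[1.]])
   y.sum().backward()
   print(x.grad)               # tensor([[0., 1., 0.]])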