nn.build_gated_equivariant_mlp

nn.build_gated_equivariant_mlp(n_in: int, n_out: int, n_hidden: int | Sequence[int] | None = None, n_gating_hidden: int | Sequence[int] | None = None, n_layers: int = 2, activation: Callable = torch.nn.functional.silu, sactivation: Callable = torch.nn.functional.silu)

Build a neural network analogous to an MLP, but composed of `GatedEquivariantBlock`s instead of dense layers.

Parameters:
  • n_in – number of input nodes.

  • n_out – number of output nodes.

  • n_hidden – number of hidden layer nodes. If an integer, the same number of nodes is used for all hidden layers, resulting in a rectangular network. If None, the number of neurons is halved after each layer, starting from n_in, resulting in a pyramidal network.

  • n_gating_hidden – number of nodes in the hidden layer of the gating network inside each `GatedEquivariantBlock`. Accepts the same formats as n_hidden; if None, the input width of each block is used.

  • n_layers – number of layers.

  • activation – Activation function used in the gating network.

  • sactivation – Activation function for scalar outputs. All hidden layers use this activation function; the output layer does not apply any activation.
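
Example — a minimal usage sketch, assuming `schnetpack` is installed. The input convention of scalar features shaped [..., n_in] paired with vector features shaped [..., 3, n_in] follows `GatedEquivariantBlock`; the tensor sizes below are purely illustrative:

```python
import torch
from schnetpack.nn import build_gated_equivariant_mlp

# Illustrative sizes: 5 atoms, 128 input features, 1 output per atom.
n_atoms, n_in, n_out = 5, 128, 1

# With n_hidden=None the hidden widths are halved after each layer
# starting from n_in (pyramidal); pass an int for a rectangular network.
net = build_gated_equivariant_mlp(n_in=n_in, n_out=n_out, n_layers=2)

scalars = torch.randn(n_atoms, n_in)      # invariant (scalar) features
vectors = torch.randn(n_atoms, 3, n_in)   # equivariant (vector) features

# Each GatedEquivariantBlock consumes and returns a (scalar, vector) tuple.
s_out, v_out = net((scalars, vectors))
print(s_out.shape)  # torch.Size([5, 1])
print(v_out.shape)  # torch.Size([5, 3, 1])
```

Since the final block applies no sactivation to its scalar output, the network can produce unbounded values, as needed for regression targets.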