gnp.models module

gnp.models.gnp

class gnp.models.gnp.BlockGNP(node_dim: int, edge_dim: int, out_dim: int, layers: List[int], num_channels: int, neurons: int, nonlinearity: str, device: device)[source]

Bases: Module

Block Geometric Neural Operator

Parameters:
  • node_dim (int) -- Dimension of the input node features.

  • edge_dim (int) -- Dimension of the input edge features.

  • out_dim (int) -- Dimension of the output node features.

  • layers (list of int) -- List of integers representing the dimensions of the hidden layers.

  • num_channels (int) -- Number of blocks in the kernel factorization.

  • neurons (int) -- Number of neurons in the kernel MLP.

  • nonlinearity (str) -- Nonlinearity to use in the hidden layers.

  • device (torch.device) -- Device to run the model on.

forward(data)[source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance itself instead of this method, since the former takes care of running the registered hooks while the latter silently ignores them.
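
Example

A minimal usage sketch based on the constructor signature above. The layout of the data argument is an assumption: it is taken to be a torch_geometric.data.Data object carrying x, edge_index, and edge_attr, mirroring the BlockConv.forward signature documented below; all tensor sizes are illustrative.

    import torch
    from torch_geometric.data import Data
    from gnp.models.gnp import BlockGNP

    device = torch.device("cpu")
    model = BlockGNP(
        node_dim=3,          # input node feature dimension
        edge_dim=4,          # input edge feature dimension
        out_dim=1,           # output node feature dimension
        layers=[64, 64],     # hidden layer dimensions
        num_channels=4,      # blocks in the kernel factorization
        neurons=32,          # neurons in the kernel MLP
        nonlinearity="ReLU",
        device=device,
    )

    # Toy graph: 10 nodes with 3 features, 20 edges with 4 features (assumed layout).
    data = Data(
        x=torch.randn(10, 3),
        edge_index=torch.randint(0, 10, (2, 20)),
        edge_attr=torch.randn(20, 4),
    ).to(device)

    out = model(data)  # node outputs; expected shape (10, out_dim)

Calling the instance (model(data)) rather than model.forward(data) ensures the registered hooks run, per the note above.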

class gnp.models.gnp.PatchGNP(model: Module, out_dim: int, device: device)[source]

Bases: Module

Patch Geometric Neural Operator

Parameters:
  • model (torch.nn.Module) -- Model to use for processing the input data.

  • out_dim (int) -- Dimension of the output basis.

  • device (torch.device) -- Device to run the model on.

forward(data)[source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance itself instead of this method, since the former takes care of running the registered hooks while the latter silently ignores them.
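
Example

A sketch of wrapping a graph model, grounded only in the constructor signature above. That PatchGNP passes the same Data object through to the wrapped model is an assumption suggested by the model parameter, not confirmed on this page.

    import torch
    from gnp.models.gnp import BlockGNP, PatchGNP

    device = torch.device("cpu")
    inner = BlockGNP(
        node_dim=3, edge_dim=4, out_dim=8, layers=[64, 64],
        num_channels=4, neurons=32, nonlinearity="ReLU", device=device,
    )
    # out_dim here is the dimension of the output basis.
    patch = PatchGNP(model=inner, out_dim=8, device=device)
    # basis = patch(data)  # data as in the BlockGNP example above (assumed)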

gnp.models.layers

class gnp.models.layers.BlockConv(edge_dim, d_in, d_out, num_channels, neurons, nonlinearity='ReLU')[source]

Bases: MessagePassing

Edge convolution layer with the block-factorized kernel.

Parameters:
  • edge_dim (int) -- Dimension of the input edge features.

  • d_in (int) -- Dimension of the input node features.

  • d_out (int) -- Dimension of the output node features.

  • num_channels (int) -- Number of blocks in the kernel factorization.

  • neurons (int) -- Number of neurons in the kernel MLP.

  • nonlinearity (str) -- Nonlinearity to use in the hidden layers.

forward(x, edge_index, edge_attr)[source]

Runs the forward pass of the module.

message(x_j, edge_attr)[source]

Constructs messages from node \(j\) to node \(i\) in analogy to \(\phi_{\mathbf{\Theta}}\) for each edge in edge_index. This function can take as input any argument that was initially passed to propagate(). Furthermore, tensors passed to propagate() can be mapped to the respective nodes \(i\) and \(j\) by appending _i or _j to the variable name, e.g. x_i and x_j.

update(aggr_out)[source]

Updates node embeddings in analogy to \(\gamma_{\mathbf{\Theta}}\) for each node \(i \in \mathcal{V}\). Takes the output of aggregation as its first argument, along with any argument that was initially passed to propagate().
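
Example

The layer follows the standard PyTorch Geometric MessagePassing calling convention given by forward(x, edge_index, edge_attr) above; the sizes below are illustrative.

    import torch
    from gnp.models.layers import BlockConv

    conv = BlockConv(edge_dim=4, d_in=16, d_out=16, num_channels=4, neurons=32)

    x = torch.randn(10, 16)                     # node features, d_in = 16
    edge_index = torch.randint(0, 10, (2, 20))  # 20 directed edges
    edge_attr = torch.randn(20, 4)              # edge features, edge_dim = 4

    out = conv(x, edge_index, edge_attr)        # expected shape (10, d_out)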

class gnp.models.layers.BlockKernel(edge_dim, d_in, d_out, num_channels, neurons, nonlinearity='ReLU')[source]

Bases: Module

MLP to compute the block-factorized kernel.

Parameters:
  • edge_dim (int) -- Dimension of the input edge features.

  • d_in (int) -- Dimension of the input node features.

  • d_out (int) -- Dimension of the output node features.

  • num_channels (int) -- Number of blocks in the kernel factorization.

  • neurons (int) -- Number of neurons in the kernel MLP.

  • nonlinearity (str, optional) -- Nonlinearity to use in the hidden layers (default is 'ReLU').

forward(x)[source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance itself instead of this method, since the former takes care of running the registered hooks while the latter silently ignores them.
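
Example

A sketch assuming forward(x) consumes per-edge features of dimension edge_dim and returns the per-edge block-factorized kernel weights; the exact output shape is not documented here, so it is only printed.

    import torch
    from gnp.models.layers import BlockKernel

    kernel = BlockKernel(edge_dim=4, d_in=16, d_out=16, num_channels=4, neurons=32)

    edge_attr = torch.randn(20, 4)  # 20 edges, edge_dim = 4
    k = kernel(edge_attr)           # block-factorized kernel per edge (assumed)
    print(k.shape)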