particlenet

Implementation of the ParticleNet GNN model architecture.

class graphnet.models.gnn.particlenet.ParticleNeT(*args, **kwargs)[source]

Bases: GNN

ParticleNeT (dynamic edge convolution) model.

Inspired by: https://arxiv.org/abs/1902.08570

Construct ParticleNeT.

Parameters:
  • nb_inputs (int) – Number of input features on each node.

  • nb_neighbours (int, default: 16) – Number of neighbours to use in the k-nearest-neighbours clustering, which is performed after each (dynamic) edge convolution.

  • features_subset (Union[List[int], slice, None], default: None) – The subset of latent features on each node that are used as metric dimensions when performing the k-nearest-neighbours clustering. If None, defaults to [0, 1, 2].

  • dynamic (bool, default: True) – Whether or not to update the edges after every DynEdgeConv block.

  • dynedge_layer_sizes (Optional[List[Tuple[int, ...]]], default: [(64, 64, 64), (128, 128, 128), (256, 256, 256)]) – The layer sizes, or latent feature dimensions, used in the DynEdgeConv layers. Each entry in dynedge_layer_sizes corresponds to a single DynEdgeConv layer; the integers in the corresponding tuple correspond to the layer sizes in the multi-layer perceptron (MLP) that is applied within each DynEdgeConv layer. That is, a list of size-three tuples means that all DynEdgeConv layers contain a three-layer MLP. Defaults to [(64, 64, 64), (128, 128, 128), (256, 256, 256)].

  • readout_layer_sizes (Optional[List[int]], default: [256]) – Hidden layer sizes in the MLP following the post-processing _and_ optional global pooling. As this is the last layer in the model, it yields the model output. Defaults to [256,].

  • global_pooling_schemes (Union[str, List[str], None], default: 'mean') – The list of global pooling schemes to use. Options are: “min”, “max”, “mean”, and “sum”. Defaults to “mean”.

  • activation_layer (Optional[str], default: 'relu') – The activation function to use in the model. Defaults to “relu”.

  • add_batchnorm_layer (bool, default: True) – Whether to add a batch-normalization layer after each linear layer. Defaults to True.

  • dropout_readout (float, default: 0.1) – Dropout value to use in the readout layer(s). Defaults to 0.1.

  • skip_readout (bool, default: False) – Whether to skip the readout layer(s). If True, the output of the last DynEdgeConv block is returned directly.

  • args (Any)

  • kwargs (Any)

Return type:

object

forward(data)[source]

Apply learnable forward pass.

Parameters:

data (Data)

Return type:

Tensor