layers¶
Class(es) implementing layers to be used in graphnet models.
- class graphnet.models.components.layers.DynEdgeConv(nn, aggr, nb_neighbors, features_subset, **kwargs)[source]¶
Bases: EdgeConv, LightningModule
Dynamical edge convolution layer.
Construct DynEdgeConv.
- Parameters:
  - nn (Callable) – The MLP/torch.Module to be used within the EdgeConv.
  - aggr (str, default: 'max') – Aggregation method to be used with EdgeConv.
  - nb_neighbors (int, default: 8) – Number of neighbours to be clustered after the EdgeConv operation.
  - features_subset (Union[Sequence[int], slice, None], default: None) – Subset of features in Data.x that should be used when dynamically performing the new graph clustering after the EdgeConv operation. Defaults to all features.
  - **kwargs (Any) – Additional arguments to be passed to EdgeConv.
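A minimal construction sketch (not taken from the graphnet docs themselves): it assumes the standard PyTorch Geometric EdgeConv convention that the inner MLP receives the concatenation [x_i, x_j - x_i], i.e. twice the node-feature dimension, and the feature count of 7 and the spatial features_subset are purely illustrative.

```python
import torch.nn as nn
from graphnet.models.components.layers import DynEdgeConv

n_features = 7  # hypothetical number of node features in Data.x

# EdgeConv concatenates [x_i, x_j - x_i], so the MLP takes 2 * n_features inputs.
mlp = nn.Sequential(
    nn.Linear(2 * n_features, 128),
    nn.ReLU(),
    nn.Linear(128, 128),
)

conv = DynEdgeConv(
    nn=mlp,
    aggr="max",
    nb_neighbors=8,
    features_subset=slice(0, 3),  # e.g. re-cluster on the first three (spatial) features
)
```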
- class graphnet.models.components.layers.EdgeConvTito(nn, aggr, **kwargs)[source]¶
Bases: MessagePassing, LightningModule
Implementation of the EdgeConvTito layer used in the TITO solution for the ‘IceCube - Neutrinos in Deep Ice’ Kaggle competition.
Construct EdgeConvTito.
- Parameters:
  - nn (Callable) – The MLP/torch.Module to be used within the EdgeConvTito.
  - aggr (str, default: 'max') – Aggregation method to be used with EdgeConvTito.
  - **kwargs (Any) – Additional arguments to be passed to EdgeConvTito.
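A construction sketch along the same lines, again assuming the EdgeConv convention of a doubled input dimension for the inner MLP (all sizes are illustrative):

```python
import torch.nn as nn
from graphnet.models.components.layers import EdgeConvTito

# Inner MLP acting on the concatenated [x_i, x_j - x_i] pairs (assumed convention).
mlp = nn.Sequential(nn.Linear(2 * 7, 64), nn.ReLU(), nn.Linear(64, 64))
conv = EdgeConvTito(nn=mlp, aggr="max")
```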
- class graphnet.models.components.layers.DynTrans(layer_sizes, aggr, features_subset, n_head, **kwargs)[source]¶
Bases: EdgeConvTito, LightningModule
Implementation of the dynTrans1 layer used in the TITO solution for the ‘IceCube - Neutrinos in Deep Ice’ Kaggle competition.
Construct DynTrans.
- Parameters:
  - layer_sizes (Optional[List[int]], default: None) – List of layer sizes to be used in DynTrans.
  - aggr (str, default: 'max') – Aggregation method to be used with DynTrans.
  - features_subset (Union[Sequence[int], slice, None], default: None) – Subset of features in Data.x that should be used when dynamically performing the new graph clustering after the EdgeConv operation. Defaults to all features.
  - n_head (int, default: 8) – Number of heads to be used in the multi-head attention models.
  - **kwargs (Any) – Additional arguments to be passed to DynTrans.
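A construction sketch using only the documented arguments; the assumption that the first entry of layer_sizes matches the incoming feature dimension is not stated on this page and may not hold:

```python
from graphnet.models.components.layers import DynTrans

dyn_trans = DynTrans(
    layer_sizes=[256, 256, 256],  # hypothetical widths for the internal layers
    aggr="max",
    features_subset=slice(0, 4),  # e.g. cluster on the first four features
    n_head=8,                     # heads for the multi-head attention step
)
```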
- class graphnet.models.components.layers.DropPath(drop_prob)[source]¶
Bases: LightningModule
Drop paths (Stochastic Depth) per sample.
Construct DropPath.
- Parameters:
  - drop_prob (float, default: 0.0) – Probability of dropping a path during training. If 0.0, no paths are dropped. Defaults to 0.0.
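A usage sketch, assuming the module's forward simply takes and returns a tensor of the same shape (as in common stochastic-depth implementations, where surviving samples are typically rescaled by 1 / (1 - drop_prob)):

```python
import torch
from graphnet.models.components.layers import DropPath

drop_path = DropPath(drop_prob=0.1)
drop_path.train()

x = torch.randn(32, 128)  # hypothetical residual-branch activations per sample
out = drop_path(x)        # in training mode, roughly 10% of samples are zeroed out
```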
- class graphnet.models.components.layers.Mlp(in_features, hidden_features, out_features, activation=torch.nn.GELU, dropout_prob)[source]¶
Bases: LightningModule
Multi-Layer Perceptron (MLP) module.
Construct Mlp.
- Parameters:
  - in_features (int) – Number of input features.
  - hidden_features (Optional[int], default: None) – Number of hidden features. Defaults to None. If None, it is set to the value of in_features.
  - out_features (Optional[int], default: None) – Number of output features. Defaults to None. If None, it is set to the value of in_features.
  - activation (Module, default: torch.nn.GELU) – Activation layer. Defaults to nn.GELU.
  - dropout_prob (float, default: 0.0) – Dropout probability. Defaults to 0.0.
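A small usage sketch; the three-dimensional input shape is illustrative, and leaving hidden_features or out_features as None would fall back to in_features per the defaults above:

```python
import torch
from torch import nn
from graphnet.models.components.layers import Mlp

mlp = Mlp(
    in_features=128,
    hidden_features=512,
    out_features=128,
    activation=nn.GELU,   # passed as a class, matching the documented default
    dropout_prob=0.1,
)

x = torch.randn(32, 196, 128)  # (batch, tokens, features) -- assumed layout
out = mlp(x)                   # same leading dimensions, 128 output features
```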
- class graphnet.models.components.layers.Block_rel(input_dim, num_heads, mlp_ratio, qkv_bias, qk_scale, dropout, attn_drop, drop_path, init_values, activation=torch.nn.GELU, norm_layer=torch.nn.LayerNorm, attn_head_dim)[source]¶
Bases: LightningModule
Implementation of BEiTv2 Block.
Construct ‘Block_rel’.
- Parameters:
  - input_dim (int) – Dimension of the input tensor.
  - num_heads (int) – Number of attention heads to use in the Attention_rel layer.
  - mlp_ratio (float, default: 4.0) – Ratio of the hidden size of the feedforward network to the input size in the Mlp layer.
  - qkv_bias (bool, default: False) – Whether or not to include bias terms in the query, key, and value matrices in the Attention_rel layer.
  - qk_scale (Optional[float], default: None) – Scaling factor for the dot product of the query and key matrices in the Attention_rel layer.
  - dropout (float, default: 0.0) – Dropout probability to use in the Mlp layer.
  - attn_drop (float, default: 0.0) – Dropout probability to use in the Attention_rel layer.
  - drop_path (float, default: 0.0) – Probability of applying drop path regularization to the output of the layer.
  - init_values (Optional[float], default: None) – Initial value to use for the gamma_1 and gamma_2 parameters if not None.
  - activation (Module, default: torch.nn.GELU) – Activation function to use in the Mlp layer.
  - norm_layer (Module, default: torch.nn.LayerNorm) – Normalization layer to use.
  - attn_head_dim (Optional[int], default: None) – Dimension of the attention head outputs in the Attention_rel layer.
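A construction sketch limited to the arguments documented above; the forward interface (which presumably also accepts a relative-position bias) is not documented here, so it is not shown:

```python
from torch import nn
from graphnet.models.components.layers import Block_rel

block = Block_rel(
    input_dim=192,
    num_heads=12,        # input_dim is assumed to be divisible by num_heads
    mlp_ratio=4.0,       # feedforward hidden size of 4 * input_dim
    qkv_bias=True,
    drop_path=0.1,
    init_values=1e-5,    # enables the learnable gamma_1 / gamma_2 scaling
    activation=nn.GELU,
    norm_layer=nn.LayerNorm,
)
```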
- class graphnet.models.components.layers.Attention_rel(input_dim, num_heads, qkv_bias, qk_scale, attn_drop, proj_drop, attn_head_dim)[source]¶
Bases: LightningModule
Attention mechanism with relative position bias.
Construct ‘Attention_rel’.
- Parameters:
  - input_dim (int) – Dimension of the input tensor.
  - num_heads (int, default: 8) – Number of attention heads to use. Defaults to 8.
  - qkv_bias (bool, default: False) – Whether to add bias to the query, key, and value projections. Defaults to False.
  - qk_scale (Optional[float], default: None) – Scaling factor that multiplies the dot product of query and key vectors. Defaults to None. If None, computed as head_dim^(-1/2).
  - attn_drop (float, default: 0.0) – Dropout probability for the attention weights. Defaults to 0.0.
  - proj_drop (float, default: 0.0) – Dropout probability for the output of the attention module. Defaults to 0.0.
  - attn_head_dim (Optional[int], default: None) – Feature dimensionality of each attention head. Defaults to None. If None, computed as input_dim // num_heads.
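A construction sketch with the documented defaults spelled out (the choice of input_dim is illustrative):

```python
from graphnet.models.components.layers import Attention_rel

attn = Attention_rel(
    input_dim=192,
    num_heads=8,      # head dimension becomes input_dim // num_heads = 24
    qkv_bias=True,
    qk_scale=None,    # None -> head_dim^(-1/2)
    attn_drop=0.0,
    proj_drop=0.0,
)
```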
- class graphnet.models.components.layers.Block(input_dim, num_heads, mlp_ratio, dropout, attn_drop, drop_path, init_values, activation=torch.nn.GELU, norm_layer=torch.nn.LayerNorm)[source]¶
Bases: LightningModule
Transformer block.
Construct ‘Block’.
- Parameters:
  - input_dim (int) – Dimension of the input tensor.
  - num_heads (int) – Number of attention heads to use in the MultiheadAttention layer.
  - mlp_ratio (float, default: 4.0) – Ratio of the hidden size of the feedforward network to the input size in the Mlp layer.
  - dropout (float, default: 0.0) – Dropout probability to use in the Mlp layer.
  - attn_drop (float, default: 0.0) – Dropout probability to use in the MultiheadAttention layer.
  - drop_path (float, default: 0.0) – Probability of applying drop path regularization to the output of the layer.
  - init_values (Optional[float], default: None) – Initial value to use for the gamma_1 and gamma_2 parameters if not None.
  - activation (Module, default: torch.nn.GELU) – Activation function to use in the Mlp layer.
  - norm_layer (Module, default: torch.nn.LayerNorm) – Normalization layer to use.
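A sketch of constructing and applying the plain transformer Block; treating the forward pass as a call on a (batch, tokens, input_dim) tensor is an assumption about its interface, not something stated on this page:

```python
import torch
from torch import nn
from graphnet.models.components.layers import Block

block = Block(
    input_dim=192,
    num_heads=8,
    mlp_ratio=4.0,
    drop_path=0.1,
    init_values=None,       # no gamma_1 / gamma_2 scaling
    activation=nn.GELU,
    norm_layer=nn.LayerNorm,
)

x = torch.randn(2, 128, 192)  # (batch, tokens, input_dim) -- assumed layout
out = block(x)                # same shape expected
```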