pretraining_maskpred¶
Experimental feature.
Self-supervised pretraining using BERT-style mask prediction.
- class graphnet.models.pretraining_maskpred.standard_maskpred_net(*args, **kwargs)[source]¶
Bases: Model
A small NN that is used as a default.
Construct the default NN.
- Parameters:
args (Any)
kwargs (Any)
- Return type:
object
- class graphnet.models.pretraining_maskpred.default_mask_augment(*args, **kwargs)[source]¶
Bases: Model
A module that produces masked nodes, a target, a mask, and a charge summary.
Construct the augmentation.
- Parameters:
args (Any)
kwargs (Any)
- Return type:
object
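To illustrate what an augmentation like this produces, here is a minimal pure-Python sketch of BERT-style node masking. The function name, arguments, and return layout are illustrative assumptions, not the graphnet API; the real module operates on graph `Data` objects and also emits a charge summary.

```python
import random

def mask_nodes(node_features, mask_fraction=0.15, mask_value=0.0, seed=0):
    """Mask a random subset of node features, BERT-style.

    Returns:
        masked:  features with masked positions replaced by ``mask_value``
        targets: the original values at the masked positions
        mask:    boolean list marking which positions were masked

    Illustrative only -- not the graphnet implementation.
    """
    rng = random.Random(seed)
    n = len(node_features)
    n_mask = max(1, int(n * mask_fraction))  # mask at least one node
    masked_idx = set(rng.sample(range(n), n_mask))

    masked = [mask_value if i in masked_idx else f
              for i, f in enumerate(node_features)]
    mask = [i in masked_idx for i in range(n)]
    targets = [f for i, f in enumerate(node_features) if i in masked_idx]
    return masked, targets, mask

# Example: mask one node out of six (6 * 0.15 rounds down to 0, clamped to 1).
features = [0.5, 1.2, -0.3, 0.8, 2.1, -1.0]
masked, targets, mask = mask_nodes(features)
```

The targets and mask are kept alongside the masked input so that a loss can later be computed only on the positions the encoder did not see.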
- class graphnet.models.pretraining_maskpred.default_loss_calc(*args, **kwargs)[source]¶
Bases: Model
Applies the default loss logic that matches the default augment.
Construct the loss calc.
- Parameters:
args (Any)
kwargs (Any)
- Return type:
object
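The matching loss logic can be sketched as a mean squared error restricted to the masked positions. This is a hypothetical stand-in for the default loss calculation, assuming the augment's targets and mask layout from above; the actual graphnet module works on tensors.

```python
def masked_mse(predictions, targets, mask):
    """MSE computed only on masked positions.

    ``predictions`` is one value per node, ``targets`` holds the
    original values at masked positions, ``mask`` marks them.
    Illustrative only -- not the graphnet implementation.
    """
    masked_preds = [p for p, m in zip(predictions, mask) if m]
    assert len(masked_preds) == len(targets)
    return sum((p - t) ** 2 for p, t in zip(masked_preds, targets)) / len(targets)
```

Pairing the loss with the augment that produced the mask is what ties the two default modules together.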
- class graphnet.models.pretraining_maskpred.mask_pred_frame(*args, **kwargs)[source]¶
Bases: EasySyntax
The BERT-style mask-prediction module.
Should be compatible with any module, as long as it does not change the length of the input data in its dense representation.
One needs to provide the encoder, i.e. the model to be pretrained, and an UnsupervisedTask, which is constructed from an augmentation-like module and a loss calculation (see the defaults above).
Construct the pretraining framework.
- Parameters:
args (Any)
kwargs (Any)
- Return type:
object
- forward(data)[source]¶
Forward pass: produce a latent view and compare it against the target.
By default, predicts the summary value.
- Parameters:
data (Data | List[Data])
- Return type:
List[Tensor]
- shared_step(batch, batch_idx)[source]¶
Perform shared step.
Applies the forward pass and the subsequent loss calculation, shared between the training and validation steps.
- Parameters:
batch (List[Data])
batch_idx (int)
- Return type:
Tensor
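The overall wiring described above (encoder plus augmentation plus loss calculation, with a step shared between training and validation) can be sketched with plain Python objects. Class and attribute names here are illustrative, not the graphnet API, and the trivial encoder/augment/loss stand in for real modules.

```python
class MaskPredFrameSketch:
    """Minimal sketch of the mask-prediction pretraining wiring.

    An encoder (the model being pretrained), an augmentation that masks
    the input, and a loss computed on the masked positions.
    Illustrative only -- not the graphnet implementation.
    """

    def __init__(self, encoder, augment, loss_calc):
        self.encoder = encoder
        self.augment = augment
        self.loss_calc = loss_calc

    def shared_step(self, batch):
        # Mask the input, run the encoder on the masked view,
        # then score predictions against the hidden targets.
        masked, targets, mask = self.augment(batch)
        predictions = self.encoder(masked)
        return self.loss_calc(predictions, targets, mask)


# Toy stand-ins: mask the first element, identity encoder, squared error.
def toy_augment(xs):
    masked = [0.0 if i == 0 else x for i, x in enumerate(xs)]
    return masked, [xs[0]], [i == 0 for i in range(len(xs))]

def toy_loss(preds, targets, mask):
    masked_preds = [p for p, m in zip(preds, mask) if m]
    return sum((p - t) ** 2 for p, t in zip(masked_preds, targets))

frame = MaskPredFrameSketch(encoder=lambda xs: xs,
                            augment=toy_augment,
                            loss_calc=toy_loss)
loss = frame.shared_step([1.0, 2.0])
```

Because the loss depends only on the masked positions, any encoder that preserves the length of the dense input can be dropped into this frame, which matches the compatibility note above.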