pretraining_maskpred

Experimental feature.

Self-supervised pretraining using BERT-style mask prediction.

class graphnet.models.pretraining_maskpred.standard_maskpred_net(*args, **kwargs)[source]

Bases: Model

A small neural network used as the default.

Construct the default NN.

Parameters:
  • args (Any)

  • kwargs (Any)

Return type:

object

forward(data)[source]

Forward pass: linear layers followed by a final projection.

Return type:

Tensor

Parameters:

data (Data | Tensor)
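
The "linear layers plus final projection" behaviour can be sketched as a small MLP head. This is a framework-agnostic illustration only; the layer count, sizes, and activation below are assumptions, not the module's actual configuration:

```python
import numpy as np

def maskpred_head(x, hidden=64, out_dim=4, seed=0):
    """Illustrative stand-in for a small prediction network:
    two linear layers with ReLU, then a final projection.
    Sizes and activation are assumptions, not graphnet's defaults."""
    rng = np.random.default_rng(seed)
    d = x.shape[-1]
    w1 = rng.normal(scale=d ** -0.5, size=(d, hidden))
    w2 = rng.normal(scale=hidden ** -0.5, size=(hidden, hidden))
    w3 = rng.normal(scale=hidden ** -0.5, size=(hidden, out_dim))
    h = np.maximum(x @ w1, 0.0)    # linear + ReLU
    h = np.maximum(h @ w2, 0.0)    # linear + ReLU
    return h @ w3                  # final projection

nodes = np.ones((10, 8))           # 10 nodes, 8 features each
out = maskpred_head(nodes)         # shape (10, 4)
```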

class graphnet.models.pretraining_maskpred.default_mask_augment(*args, **kwargs)[source]

Bases: Model

A module that produces the masked nodes, the target, the mask, and a charge summary.

Construct the augmentation.

Parameters:
  • args (Any)

  • kwargs (Any)

Return type:

object

forward(data)[source]

Forward pass.

Return type:

Tuple[Any, Any, Any, Any]

Parameters:

data (Data)
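
Conceptually, a BERT-style augmentation selects a fraction of nodes, blanks their features, and keeps the original values as the reconstruction target. The sketch below is a minimal illustration of that idea; the masking fraction, the zero-fill convention, and the charge column index are all assumptions, not graphnet's actual defaults:

```python
import numpy as np

def mask_augment(nodes, mask_frac=0.15, charge_col=0, seed=0):
    """Illustrative BERT-style augmentation: randomly pick a fraction of
    nodes, zero out their features, and return the masked nodes, the
    original values as the target, the boolean mask, and a per-event
    charge summary.  Fraction and column index are assumptions."""
    rng = np.random.default_rng(seed)
    mask = rng.random(len(nodes)) < mask_frac
    target = nodes[mask].copy()        # values to reconstruct
    masked = nodes.copy()
    masked[mask] = 0.0                 # "[MASK]" the selected nodes
    charge_summary = nodes[:, charge_col].sum()
    return masked, target, mask, charge_summary

nodes = np.arange(20.0).reshape(5, 4)  # 5 nodes, 4 features each
masked, target, mask, q = mask_augment(nodes, mask_frac=0.5)
```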

class graphnet.models.pretraining_maskpred.default_loss_calc(*args, **kwargs)[source]

Bases: Model

Applies the default loss logic matching the default augmentation.

Construct the loss calc.

Parameters:
  • args (Any)

  • kwargs (Any)

Return type:

object

forward(pred, data, aux)[source]

Forward pass.

Return type:

Tensor

Parameters:
  • pred (Tuple[Tensor, Tensor])

  • data (Data)

  • aux (Tuple[Tensor, Tensor, Tensor])
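
The defining property of a loss that "matches the default augment" is that it is scored only on the masked positions. A minimal sketch of that idea, using mean squared error as an assumed stand-in for the actual loss function:

```python
import numpy as np

def masked_mse(pred, target, mask):
    """Illustrative loss: mean squared error computed only on the
    masked positions.  MSE is an assumption; the point is that
    unmasked nodes contribute nothing to the loss."""
    diff = pred[mask] - target         # compare predictions to the
    return float((diff ** 2).mean())   # original (pre-mask) values

pred = np.zeros((5, 4))                        # predictions for all nodes
target = np.ones((2, 4))                       # true values of masked nodes
mask = np.array([True, False, True, False, False])
loss = masked_mse(pred, target, mask)          # 1.0
```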

class graphnet.models.pretraining_maskpred.mask_pred_frame(*args, **kwargs)[source]

Bases: EasySyntax

The BERT-style mask-prediction module.

Should be compatible with any module as long as it does not change the length of the input data in the dense representation.

One needs to provide the encoder, i.e. the model to be pretrained, and an UnsupervisedTask, which is constructed from an augmentation_like module and a loss calculation (see the defaults above).

Construct the pretraining framework.

Parameters:
  • args (Any)

  • kwargs (Any)

Return type:

object

forward(data)[source]

Forward pass: produce a latent view and compare it against the target.

By default, predicts the summary value.

Return type:

List[Tensor]

Parameters:

data (Data | List[Data])
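
Putting the pieces together, the forward pass of the framework amounts to: augment, encode, predict, and score against the target. The sketch below wires identity stand-ins through that pipeline; it is an illustration of the data flow only, not graphnet's implementation, and every component name here is hypothetical:

```python
import numpy as np

def pretrain_step(nodes, encoder, head, mask_frac=0.15, seed=0):
    """Illustrative end-to-end mask-prediction step: augment the input,
    encode the masked view, predict the masked features, and score the
    prediction against the target.  All components are stand-ins."""
    rng = np.random.default_rng(seed)
    mask = rng.random(len(nodes)) < mask_frac
    if not mask.any():
        mask[0] = True                  # ensure at least one masked node
    target = nodes[mask].copy()
    masked = nodes.copy()
    masked[mask] = 0.0
    latent = encoder(masked)            # the model being pretrained
    pred = head(latent)                 # prediction head (e.g. a small NN)
    diff = pred[mask] - target
    return float((diff ** 2).mean())    # loss on masked nodes only

# Identity stand-ins for encoder and head, just to run the step:
nodes = np.random.default_rng(1).normal(size=(32, 8))
loss = pretrain_step(nodes, encoder=lambda x: x, head=lambda z: z)
```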

validate_tasks()[source]

Verify that self._tasks contains compatible elements.

Return type:

None

shared_step(batch, batch_idx)[source]

Perform shared step.

Applies the forward pass and the subsequent loss calculation; shared between the training and validation steps.

Return type:

Tensor

Parameters:
  • batch (List[Data])

  • batch_idx (int)

give_encoder_model()[source]

Return the pretrained encoder model.

Return type:

Model

save_pretrained_model(save_path)[source]

Save the pretrained encoder to the given path.

Return type:

None

Parameters:

save_path (str)