base_diffusion_model

Module Contents

class BaseDiffusionModel(diffuser)[source]

Initializes the object with the specified diffuser.

BaseDiffusionModel is an abstract base class for different diffusion model implementations. It defines the interface that all diffusion models should adhere to.

Parameters:

diffuser (BaseDiffuser) – The diffuser to use for the diffusion model.

Warning

Do not instantiate this class directly. Instead, build your own diffusion model by inheriting from BaseDiffusionModel (see SimpleUnet for an example).
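The subclassing pattern can be sketched as follows. This is an illustrative stand-in in plain Python, not the real class or the real SimpleUnet: the stub base class here only reproduces the abstract interface, and ZeroNoiseModel is a hypothetical toy subclass.

```python
from abc import ABC, abstractmethod

# Stand-in for the real BaseDiffusionModel: only the abstract interface matters here.
class BaseDiffusionModel(ABC):
    def __init__(self, diffuser):
        self.diffuser = diffuser

    @abstractmethod
    def forward(self, x, timestep):
        """Predict the noise in x at the given timestep."""

# Toy concrete model: predicts zero noise for every image in the batch.
# A real subclass (e.g. SimpleUnet) would run the batch through network layers.
class ZeroNoiseModel(BaseDiffusionModel):
    def forward(self, x, timestep):
        return [0.0 for _ in x]

model = ZeroNoiseModel(diffuser=None)
predicted = model.forward([0.3, 0.7], timestep=5)
```

Attempting `BaseDiffusionModel(diffuser=None)` directly raises a TypeError, since the abstract forward method is unimplemented, which is why the warning above says to subclass instead.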

diffuser: BaseDiffuser[source]

A diffuser to be used by the diffusion model.

diffuse(images)[source]

Diffuse a batch of images.

Parameters:

images (Tensor) – A tensor containing a batch of images.

Returns:

A tuple containing three tensors:

  • images: Diffused batch of images.

  • noise: Noise added to the images.

  • timesteps: Timesteps used for diffusion.

Return type:

Tuple[Tensor, Tensor, Tensor]
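A rough sketch of what the forward (noising) step produces, with Python floats standing in for tensors. The additive blend and the hard-coded timestep range are assumptions for illustration; the real method delegates the noise schedule to the configured BaseDiffuser.

```python
import random

# Illustrative sketch of diffuse(): for each image, pick a random timestep,
# sample noise, and return (noisy images, noise, timesteps).
def diffuse(images, num_timesteps=1000):
    timesteps = [random.randrange(num_timesteps) for _ in images]
    noise = [random.gauss(0.0, 1.0) for _ in images]
    # Forward process: blend each image with its noise (a stand-in for the
    # diffuser's actual variance schedule).
    noisy = [img + n for img, n in zip(images, noise)]
    return noisy, noise, timesteps

noisy, noise, timesteps = diffuse([0.2, 0.5, 0.9])
```

The three return values line up index-by-index: `noisy[i]` is `images[i]` corrupted by `noise[i]` at step `timesteps[i]`, which is exactly the pairing a training loop needs to supervise the noise prediction.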

denoise(images)[source]

Denoise a batch of images.

Parameters:

images (Tensor) – A tensor containing a batch of images to denoise.

Returns:

A list of tensors containing a batch of denoised images.

Return type:

List[Tensor]
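The reverse process behind denoise can be sketched like this; the subtraction update, the step count, and the predict_noise callable are simplifying stand-ins for the model's forward pass and the diffuser's actual sampling rule.

```python
# Sketch of the reverse process: starting from noisy images, repeatedly
# predict the noise and remove it, keeping each intermediate batch.
def denoise(images, predict_noise, steps=3):
    history = []
    for t in reversed(range(steps)):
        noise = predict_noise(images, t)
        images = [img - n for img, n in zip(images, noise)]
        history.append(images)
    return history  # one batch per timestep, ending with the cleanest images

trajectory = denoise([1.0, 2.0], predict_noise=lambda xs, t: [0.1] * len(xs))
```

Returning the whole list rather than only the final batch lets callers visualize how the images sharpen across timesteps; `trajectory[-1]` holds the final denoised batch.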

abstract forward(x, timestep)[source]

Forward pass of the diffusion model.

Predicts the noise added to each image in the batch at a single diffusion step.

Parameters:
  • x (Tensor) – A batch of noisy images.

  • timestep (Tensor) – The timestep of each image in the batch.

Returns:

A tensor representing the noise predicted for each image.

Return type:

Tensor

to(device='cpu')[source]

Moves the model to the specified device.

This behaves similarly to PyTorch's to method, moving the DiffusionModel and all related artifacts to the specified device.

Parameters:

device (str) – The device to which the method should move the object. Default is “cpu”.
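A sketch of why to is overridden rather than inherited as-is: besides the network weights, the diffuser's own state must land on the same device. The classes below are illustrative stand-ins, assuming a diffuser that exposes a to method of its own.

```python
# Stand-in diffuser: a real one would move its precomputed schedule tensors.
class Diffuser:
    def __init__(self):
        self.device = "cpu"

    def to(self, device):
        self.device = device

# Stand-in model: to() moves both the "weights" and the diffuser together,
# mirroring how the real method moves all related artifacts.
class DiffusionModel:
    def __init__(self, diffuser):
        self.diffuser = diffuser
        self.device = "cpu"

    def to(self, device="cpu"):
        self.device = device       # stand-in for moving network weights
        self.diffuser.to(device)   # keep the diffuser on the same device
        return self

model = DiffusionModel(Diffuser()).to("cuda")
```

Returning self mirrors PyTorch's convention and allows chained calls such as `DiffusionModel(...).to("cuda")`.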

compile(*args, **kwargs)[source]

Compiles the diffusion model.

This behaves similarly to PyTorch's compile method, wrapping the model for optimized execution.

Returns:

A compiled diffusion model.