diffusion_models.gaussian_diffusion.ddimm_diffuser
Module Contents
- class DdimDiffuser(beta_scheduler, mode=DenoisingMode.Quadratic, number_of_steps=20)[source]
Initializes the class instance.
- Parameters:
beta_scheduler (BaseBetaScheduler) – The beta scheduler instance to be used.
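A minimal construction sketch; the beta_scheduler shown is a placeholder for whichever BaseBetaScheduler implementation the library provides, and the number_of_steps value is illustrative only:
>>> # beta_scheduler: any BaseBetaScheduler instance (placeholder)
>>> ddim_diffuser = DdimDiffuser(beta_scheduler, number_of_steps=50)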
- classmethod from_checkpoint(checkpoint)[source]
Instantiate a DDIM Diffuser from a training checkpoint.
- Parameters:
checkpoint (Checkpoint) – The training checkpoint object containing the trained model’s parameters and configuration.
- Returns:
An instance of the DdimDiffuser class initialized with the parameters loaded from the given checkpoint.
- Return type:
DdimDiffuser
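A usage sketch, assuming a Checkpoint object has already been obtained from a training run (how the checkpoint itself is loaded is not covered here):
>>> # checkpoint: a Checkpoint from a completed training run (assumed available)
>>> ddim_diffuser = DdimDiffuser.from_checkpoint(checkpoint)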
- to(device='cpu')[source]
Moves the data to the specified device.
This performs a similar behaviour to the to method of PyTorch, moving the GaussianDiffuser and the BetaScheduler to the specified device.
- Parameters:
device (str) – The device to which the method should move the object. Default is “cpu”.
Example
>>> ddim_diffuser = DdimDiffuser()
>>> ddim_diffuser = ddim_diffuser.to(device="cuda")
- diffuse_batch(images)[source]
Diffuse a batch of images.
Diffuse the given batch of images by adding noise based on the beta scheduler.
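A hedged sketch, assuming images is a batch tensor and that the method returns the noised batch; the exact return value and tensor shape are not documented above and are placeholders here:
>>> # images: a batch of clean images, e.g. shape (N, C, H, W)
>>> noisy_images = ddim_diffuser.diffuse_batch(images)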
- denoise_batch(images, model)[source]
Denoise a batch of images.
This denoises a batch of images; it is the image generation process.
- Parameters:
images (Tensor) – A batch of noisy images.
model (BaseDiffusionModel) – The model to be used for denoising.
- Returns:
A list of tensors containing a batch of denoised images.
- Return type:
List[Tensor]
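A generation sketch, assuming the process starts from pure Gaussian noise and that model is a trained BaseDiffusionModel; the tensor shape below is a placeholder:
>>> import torch
>>> noise = torch.randn(4, 3, 32, 32)  # placeholder batch of pure noise
>>> generated = ddim_diffuser.denoise_batch(noise, model)  # returns List[Tensor]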