diffusion_models.diffusion_inference

Module Contents

class DiffusionInference(model, reverse_transforms=lambda x: ..., image_shape=(3, 64), device='cuda')[source]

A diffusion inference framework.

This is a simplified inference framework that makes it easy to start generating images with a trained diffusion model.

Parameters:
  • model (BaseDiffusionModel) – The trained diffusion model.

  • reverse_transforms (Callable) – A set of reverse transforms.

  • image_shape (Tuple[int, int]) – The shape of the image to produce, as a tuple (channels, size). Images are assumed to be square, so size is both the height and the width.

  • device (str) – The device to run the inference on.
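The exact `reverse_transforms` callable depends on the forward transforms used during training. A common setup normalizes training images to [-1, 1], in which case the reverse transform maps a model output back to a displayable uint8 array. A minimal sketch with NumPy (the function name and CHW layout are assumptions, not part of this API):

```python
import numpy as np

def reverse_transforms(x: np.ndarray) -> np.ndarray:
    """Map a model output in [-1, 1] with shape (C, H, W) back to a
    displayable uint8 (H, W, C) image.

    This assumes training images were scaled to [-1, 1]; adapt it to
    whatever forward transforms your pipeline actually used.
    """
    x = np.clip((x + 1.0) / 2.0, 0.0, 1.0)    # [-1, 1] -> [0, 1]
    x = (x * 255.0).round().astype(np.uint8)  # [0, 1]  -> [0, 255]
    return np.transpose(x, (1, 2, 0))         # CHW -> HWC

# Example: a random "denoised" sample matching image_shape=(3, 64).
sample = np.random.uniform(-1.0, 1.0, size=(3, 64, 64))
img = reverse_transforms(sample)
print(img.shape, img.dtype)  # (64, 64, 3) uint8
```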

image_channels[source]

The number of channels of the image.

image_size[source]

The size of the image.

model[source]

The trained diffusion model.

reverse_transforms[source]

The set of reverse transforms.

device[source]

The device to run the inference on.

generate(number_of_images, save_gif=False)[source]

Generate a batch of images.

Parameters:
  • number_of_images (int) – The number of images to generate.

  • save_gif (bool) – Whether to save the generation process as a GIF.

Returns:

A PIL image of the generated images stacked together.

Return type:

PIL.Image.Image
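The source does not specify how the generated images are stacked into one PIL image. One plausible approach is tiling same-sized images into a grid; the helper below is a hypothetical sketch of that idea, not the library's implementation (it assumes Pillow is installed):

```python
from PIL import Image

def stack_images(images: list, columns: int = 4) -> Image.Image:
    """Tile same-sized PIL images into a single grid image,
    filling rows left to right."""
    w, h = images[0].size
    rows = (len(images) + columns - 1) // columns
    grid = Image.new("RGB", (columns * w, rows * h))
    for i, im in enumerate(images):
        grid.paste(im, ((i % columns) * w, (i // columns) * h))
    return grid

# Example with 6 solid-color 64x64 placeholder images.
tiles = [Image.new("RGB", (64, 64), (40 * i, 0, 0)) for i in range(6)]
sheet = stack_images(tiles, columns=3)
print(sheet.size)  # (192, 128)
```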

get_generator(number_of_images=1)[source]


An image generator.

This method is a generator that produces a batch of images. On each iteration, it denoises the batch by one more step until the images are fully generated. This can be particularly useful for running the image generation step by step or for a streaming API.

Parameters:

number_of_images (int) – The number of images the generator should generate.
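The denoise-one-step-per-iteration pattern described above can be illustrated with a toy stand-in. The function below is not the actual implementation: a real model would predict and remove noise at each reverse-diffusion step, while this sketch simply decays random noise so the yield-per-step structure is visible:

```python
import numpy as np

def denoising_generator(number_of_images=1, steps=10,
                        image_shape=(3, 64), seed=0):
    """Toy stand-in for get_generator: start from Gaussian noise and
    yield the batch after each of `steps` denoising steps.

    The decay below is a placeholder for one reverse-diffusion step.
    """
    rng = np.random.default_rng(seed)
    channels, size = image_shape
    x = rng.standard_normal((number_of_images, channels, size, size))
    for step in range(steps):
        x = x * 0.5  # placeholder for a real denoising update
        yield step, x

# Consume the generator step by step, as a streaming API might.
frames = list(denoising_generator(number_of_images=2, steps=5))
print(len(frames))          # 5
print(frames[-1][1].shape)  # (2, 3, 64, 64)
```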