Getting started

Whether you are prototyping on a laptop or orchestrating large multi-GPU jobs, this guide walks you through installing GAN-Engine and running your first experiment.

Installation

1. Choose your environment

GAN-Engine targets Python 3.10–3.12 and PyTorch Lightning 1.9–2.x. Decide whether you need a lightweight inference environment or the full training stack.

2. Install from PyPI (inference & quick experiments)

python -m pip install gan-engine

This installs the package with minimal dependencies. Use it to load checkpoints, run inference scripts, or fine-tune existing models.

3. Install from source (training & development)

git clone https://github.com/simon-donike/GAN-Engine.git
cd GAN-Engine
python -m pip install -r requirements.txt
pre-commit install

4. Verify the installation

python -c "import gan_engine; print(gan_engine.__version__)"

First configuration

Configuration lives in gan_engine/configs/. Start by copying a template:

cp gan_engine/configs/config_MRI.yaml my_experiment.yaml

Open my_experiment.yaml and update:

  • Data.root to point to your dataset directory.
  • Data.dataset if you use a different selector.
  • Normalisation.stats_file to reference your statistics.
  • Model.Generator.in_channels/out_channels to match your modality.
  • Optional conditioning blocks (prompts, masks, class_labels) if you are exploring the upcoming inpainting or generative presets.
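For orientation, the edits above might look like the following fragment of my_experiment.yaml. All paths and values here are placeholders; only the key names come from the list above, and your template may nest them differently:

```yaml
Data:
  root: /data/my_dataset        # placeholder: your dataset directory
  dataset: mri_slices           # placeholder: your dataset selector
Normalisation:
  stats_file: /data/my_dataset/stats.json   # placeholder: your statistics file
Model:
  Generator:
    in_channels: 1              # match your modality, e.g. 1 for single-channel MRI
    out_channels: 1
```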

Running training

python -m gan_engine.train --config my_experiment.yaml

Training writes logs, checkpoints, and validation panels to Project.output_dir.
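If you script around training runs, a small helper can pick up the newest checkpoint from Project.output_dir. This is a sketch, not part of GAN-Engine; the .ckpt extension is an assumption based on PyTorch Lightning's default naming:

```python
from pathlib import Path


def latest_checkpoint(output_dir: str, pattern: str = "*.ckpt"):
    """Return the most recently modified checkpoint under output_dir, or None.

    Searches recursively, since Lightning often nests checkpoints in
    per-run subdirectories.
    """
    candidates = sorted(
        Path(output_dir).rglob(pattern),
        key=lambda p: p.stat().st_mtime,
    )
    return candidates[-1] if candidates else None
```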

Monitoring progress

By default, the trainer logs to Weights & Biases when credentials are available. Otherwise, it falls back to TensorBoard or CSV depending on configuration. Expect:

  • Scalar plots for each loss component.
  • Validation image grids for LR/HR/SR comparisons.
  • Histograms of pixel distributions and discriminator logits.
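The fallback order described above can be illustrated with a small helper. This is a sketch rather than GAN-Engine's actual logic: it treats a WANDB_API_KEY entry as the credentials check and takes the set of installed backends as an argument for clarity:

```python
def pick_logger(env: dict, available: set[str]) -> str:
    """Choose a logging backend from what is installed and configured.

    Weights & Biases wins when credentials are present, then TensorBoard,
    then CSV as the last resort.
    """
    if env.get("WANDB_API_KEY") and "wandb" in available:
        return "wandb"
    if "tensorboard" in available:
        return "tensorboard"
    return "csv"
```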

Running inference

Use the same config file to run inference once you have a checkpoint:

python -m gan_engine.inference \
  --config my_experiment.yaml

Troubleshooting

  • CUDA mismatch – Install the PyTorch wheel that matches your driver (see pytorch.org).
  • Missing dependencies – Some modality-specific loaders (DICOM, SAFE, Zarr) require optional packages; install the corresponding extras or packages for your data format.
  • Convergence issues – Refer to the training guideline for tips on warm-ups, ramps, and loss tuning.
  • Normalisation drift – Double-check statistics files and ensure LR/HR branches share compatible scaling.
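When a loader fails, it helps to first confirm which optional packages are importable in your environment. This sketch assumes typical package names (pydicom for DICOM, zarr for Zarr); the extras GAN-Engine actually declares may differ:

```python
import importlib.util


def missing_optional(packages=("pydicom", "zarr")) -> list[str]:
    """Return the subset of packages that cannot be imported here."""
    return [name for name in packages if importlib.util.find_spec(name) is None]
```

Run it before filing an issue: an empty list means the optional loaders should at least import cleanly.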

Next steps

  • Explore the Configuration reference to fine-tune settings.
  • Read the Training chapter for optimisation strategies.
  • Learn about Inference to deploy your models.
  • Share your configs and findings with the community via issues or pull requests.