This organization is part of the NeurIPS 2021 demonstration "Training Transformers Together".

In this demo, we trained a model similar to OpenAI DALL-E: a Transformer "language model" that generates images from text descriptions. The training was collaborative: volunteers from all over the Internet contributed using whatever hardware they had available. We used LAION-400M, the world's largest openly available image-text pair dataset, with 400 million samples. Our model is based on the dalle-pytorch implementation by Phil Wang, with a few tweaks to make the training communication-efficient.
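For a rough idea of what such a model looks like in code, here is a minimal sketch of setting up and training a DALL-E-style model with dalle-pytorch, adapted from that library's README. The hyperparameters below are illustrative placeholders, not the configuration used in this run:

```python
import torch
from dalle_pytorch import DiscreteVAE, DALLE

# A discrete VAE that compresses each image into a grid of discrete tokens.
# All hyperparameters here are illustrative, not this run's configuration.
vae = DiscreteVAE(
    image_size=256,
    num_layers=3,
    num_tokens=8192,
    codebook_dim=512,
    hidden_dim=64,
)

# The Transformer models a sequence of text tokens followed by image tokens.
dalle = DALLE(
    dim=1024,
    vae=vae,                 # image sequence length and vocab are inferred from the VAE
    num_text_tokens=10000,   # text vocabulary size
    text_seq_len=256,
    depth=12,
    heads=16,
    dim_head=64,
)

# One training step on a dummy batch of text/image pairs.
text = torch.randint(0, 10000, (4, 256))
images = torch.randn(4, 3, 256, 256)
loss = dalle(text, images, return_loss=True)
loss.backward()
```

In the collaborative setting, the gradient step after `loss.backward()` is where the communication-efficiency tweaks come in: gradients from many volunteer peers are accumulated and averaged over the Internet before the weights are updated.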

For details on how it works, see our website.

This organization brings together the participants of the collaborative training run and provides links to the related materials.

The materials below were available during the training run itself:

  • 👉 Starter kits for Google Colab and Kaggle (an easy way to join the training)
  • 👉 Dashboard (the current training state: loss, number of peers, etc.)
  • 👉 Weights & Biases plots for aux peers (aggregating the metrics) and actual trainers (contributing their GPUs)

Feel free to reach us on Discord if you have any questions 🙂