Frameworks and libraries to deploy generative AI models
An Instruction-following LLaMA Model
Alpaca is an instruction-following language model developed at Stanford. It was fine-tuned from Meta’s LLaMA 7B model on 52K instruction-following demonstrations generated with OpenAI’s text-davinci-003 in the style of self-instruct. The authors released their training recipe and data, and intend to release the model weights in the future; an interactive demo is also available. Because it is based on LLaMA, Alpaca is intended only for academic research, and any commercial use is prohibited.
Open Multilingual Language Model
Multilingual open LLM trained in complete transparency. With its 176 billion parameters, BLOOM is able to generate text in 46 natural languages and 13 programming languages. Any individual or institution who agrees to the terms of the model’s Responsible AI License (developed during the BigScience project itself) can use and build upon the model on a local machine or on a cloud provider. Since BLOOM is embedded in the Hugging Face ecosystem, this is as easy as importing it with transformers and running it with accelerate.
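The workflow described above can be sketched with the transformers API. This is a minimal sketch, not production code: the smaller bigscience/bloom-560m checkpoint stands in for the full 176B bigscience/bloom, which needs multi-GPU sharding via accelerate.

```python
# Minimal sketch of running a BLOOM checkpoint with transformers + accelerate.
# "bigscience/bloom-560m" is used so the example fits on modest hardware; swap in
# "bigscience/bloom" only with enough GPUs for the full 176B model.
MODEL_ID = "bigscience/bloom-560m"

def generate(prompt: str, max_new_tokens: int = 50) -> str:
    """Download the model on first use and continue the prompt."""
    # Imports are deferred so the module can be inspected without the heavy
    # dependencies installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    # device_map="auto" lets accelerate place the weights on available devices.
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

if __name__ == "__main__":
    print(generate("BigScience is a project that"))
```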
Versatile conversational chatbot
ChatGPT is a chatbot based on large language models, developed by OpenAI. The dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests. ChatGPT can be used for a wide variety of purposes, such as writing and debugging computer programs, composing music, writing text such as fairy tales, student essays or poetry, answering test questions, emulating a Linux system, or playing games like tic-tac-toe.
Collection of parallel components that lets anybody write their own distributed deep learning models
Colossal-AI is designed as a unified system that provides an integrated set of training skills and utilities to the user. It includes common training utilities such as mixed-precision training and gradient accumulation, and offers an array of parallelism strategies, including data, tensor and pipeline parallelism. It also provides different pipeline-parallelism methods that let the user scale a model across nodes efficiently, as well as more advanced features such as offloading.
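Colossal-AI’s own API is not reproduced here; as a generic illustration of one of the utilities mentioned, the numpy sketch below shows why gradient accumulation works: averaging the gradients of equal-sized micro-batches reproduces the full-batch gradient, so a large effective batch size fits in limited device memory.

```python
import numpy as np

def mse_grad(X, y, w):
    """Gradient of the mean-squared-error loss (1/n)*||Xw - y||^2 w.r.t. w."""
    return 2.0 / len(y) * X.T @ (X @ w - y)

def accumulated_grad(X, y, w, micro_batches):
    """Accumulate per-micro-batch gradients and average them, as a training
    framework does when the full batch does not fit in memory."""
    acc = np.zeros_like(w)
    for Xi, yi in zip(np.array_split(X, micro_batches),
                      np.array_split(y, micro_batches)):
        acc += mse_grad(Xi, yi, w)
    return acc / micro_batches

# Toy data: 8 samples split into 4 micro-batches of 2.
rng = np.random.default_rng(0)
X, y, w = rng.normal(size=(8, 3)), rng.normal(size=8), rng.normal(size=3)
```

With equal-sized micro-batches, the averaged accumulated gradient matches the full-batch gradient exactly, which is what makes the technique a memory optimisation rather than an approximation.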
CycleGAN is a technique for training unsupervised image-translation models via the GAN architecture, using unpaired collections of images from two different domains. It has been demonstrated on a range of applications including season translation, object transfiguration, style transfer, and generating photos from paintings. The reference implementation is in PyTorch; other implementations are available, including in TensorFlow.
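The heart of the technique is the cycle-consistency loss: translating an image to the other domain and back should recover the original. A minimal numpy sketch, with toy stand-in “generators” G and F rather than real networks:

```python
import numpy as np

def cycle_consistency_loss(G, F, x, y):
    """L1 cycle loss: x -> G(x) -> F(G(x)) should return to x, and likewise for y."""
    forward = np.mean(np.abs(F(G(x)) - x))   # X -> Y -> X cycle
    backward = np.mean(np.abs(G(F(y)) - y))  # Y -> X -> Y cycle
    return forward + backward

# Toy "generators": perfectly inverse shifts between the two domains,
# so the cycle loss is exactly zero.
G = lambda a: a + 1.0   # domain X -> Y
F = lambda b: b - 1.0   # domain Y -> X
x = np.array([0.0, 1.0, 2.0])
y = np.array([1.0, 2.0, 3.0])
```

In the real model, G and F are convolutional networks and this loss is added to the usual adversarial losses of the two discriminators.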
AI system to create realistic images and art from a text description
DALL-E is a tool based on deep learning models, developed by OpenAI to generate digital images from natural language descriptions. DALL-E can generate imagery in multiple styles, including photorealistic imagery, paintings, and emojis. It can manipulate and rearrange objects in its images, it can correctly place design elements in novel compositions without explicit instruction, and it can "fill in the blanks" to infer appropriate details without specific prompts. Given an existing image, DALL-E 2 can produce variations of the image as unique outputs based on the original, as well as edit the image to modify or expand upon it.
Open source instruction-following LLM allowing commercial use
Dolly is an instruction-following large language model trained on the Databricks machine learning platform that is licensed for commercial use. dolly-v2-12b is a 12 billion parameter causal language model derived from EleutherAI’s Pythia-12b and fine-tuned on a ~15K record instruction corpus generated by Databricks employees and released under a permissive licence (CC-BY-SA). The model was trained for tasks including brainstorming, classification, closed QA, generation, information extraction, open QA and summarization.
A large language model that can store, combine and reason about scientific knowledge
Galactica is an artificial intelligence system developed by Meta, based on a large language model trained on over 48 million papers, textbooks, reference materials, compounds, proteins and other sources of scientific knowledge. It can be used to explore the literature, ask scientific questions and write scientific code, among other uses.
Fully open-source and commercially usable LLM
h2oGPT is a fully open-source LLM released by H2O AI. It provides a layer of interpretability that allows users to ask why a certain answer was given. Users of h2oGPT can also choose from a variety of open models and datasets, see response scores, flag issues and adjust output length, among other things.
H2O LLM Studio
Framework and no-code GUI for fine-tuning LLMs
A framework and no-code GUI developed by H2O AI, designed for fine-tuning state-of-the-art large language models (LLMs). LLM Studio provides a graphical user interface (GUI) specially designed for large language models, allowing users to fine-tune LLMs easily and effectively without any coding experience, using a large variety of hyperparameters. It supports recent fine-tuning techniques such as Low-Rank Adaptation (LoRA) and 8-bit model training with a low memory footprint, provides advanced evaluation metrics to judge the answers generated by the model, and lets users track and compare model performance visually.
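Low-Rank Adaptation itself is simple to state: the frozen pretrained weight W is augmented with a trainable low-rank product B·A, scaled by α/r. The numpy sketch below illustrates the idea with toy dimensions; it is not LLM Studio’s actual code.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 16, 32, 4, 8       # toy sizes; real LLM layers are far larger

W = rng.normal(size=(d_out, d_in))         # frozen pretrained weight (never updated)
A = rng.normal(size=(r, d_in)) * 0.01      # trainable low-rank factor
B = np.zeros((d_out, r))                   # initialised to zero: adapter starts as a no-op

def lora_forward(x):
    """y = W@x + (alpha/r) * B@A@x  — only A and B receive gradient updates."""
    return W @ x + (alpha / r) * (B @ (A @ x))

# Trainable parameters drop from d_out*d_in to r*(d_in + d_out).
full_params = d_out * d_in          # 512 in this toy example
lora_params = r * (d_in + d_out)    # 192 in this toy example
```

Because B starts at zero, the adapted model initially reproduces the base model exactly, and only the small A and B matrices need to be stored per fine-tuning task.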
Open source conversation chatbot
HuggingChat is an open source alternative to ChatGPT, developed by Hugging Face. It currently runs OpenAssistant, a model created by the Large-scale Artificial Intelligence Open Network (LAION) and based on the Large Language Model Meta AI (LLaMA). It can be accessed and used online without logging in or creating an account.
A Dialogue Model for Academic Research
Chatbot developed at Berkeley for academic research. The language model was trained by fine-tuning Meta’s LLaMA on dialogue data gathered from the web, which includes high-quality responses to user queries from other large language models, as well as question-answering datasets and human-feedback datasets. The resulting model, Koala-13B, shows performance competitive with existing models, as suggested by human evaluation on real-world user prompts.
State-of-the-art foundational large language model for non-commercial use
The LLaMA model is a collection of foundation language models ranging from 7B to 65B parameters. The models were released by Meta for the research community, under a licence that does not allow commercial use.
AI program to create images from textual descriptions
Midjourney is an AI program that creates images from textual descriptions. It is based on a machine learning model that takes a natural language description as input and produces an image matching that description. Developed by an independent research lab, the tool is currently in open beta and is only accessible through a Discord bot.
Chat-based and open-source assistant
OpenAssistant is an open source chat-based assistant that understands tasks, can interact with third-party systems, and can retrieve information dynamically. The project aims to build a large language model that can run on a single high-end consumer GPU, and is backed by a worldwide crowdsourcing effort involving over 13,500 volunteers who have created 600k human-generated data points.
Open-source base to create both specialised and general purpose chatbots for various applications
OpenChatKit consists of four key components: an instruction-tuned large language model, customisation recipes to fine-tune the model, an extensible retrieval system to augment the model with live-updating information, and a moderation model to filter inappropriate or out-of-domain questions. The base model of OpenChatKit is GPT-NeoXT-Chat-Base-20B, a 20 billion parameter large language model based on EleutherAI’s GPT-NeoX model. It is fine-tuned with the OIG-43M dataset, focusing on several tasks such as multi-turn dialogue, question answering, classification, extraction, and summarisation.
An Open Reproduction of LLaMA
A permissively licensed open-source reproduction of Meta AI’s LLaMA large language model. This release includes a public preview of the 7B OpenLLaMA model trained on 200 billion tokens. PyTorch and JAX weights of the pre-trained OpenLLaMA models are provided, together with evaluation results and a comparison against the original LLaMA models.
A Suite for Analysing Large Language Models Across Training and Scaling
The Pythia Scaling Suite is a collection of models developed to facilitate interpretability research. It contains two sets of eight models of sizes 70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B and 12B: for each size, one model trained on the Pile and one trained on the Pile after the dataset has been globally deduplicated. All eight model sizes are trained on exactly the same data, in exactly the same order, and all Pythia models are available on Hugging Face. The suite was deliberately designed to promote scientific research on large language models, especially interpretability research.
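A feature that matters for interpretability work is that intermediate training checkpoints are published alongside the final models and can be selected through the revision argument in transformers. A sketch, assuming the standard Hugging Face repository naming and shown for the smallest deduplicated model:

```python
def load_pythia(size: str = "70m", step: int = 3000, deduped: bool = True):
    """Load a Pythia checkpoint at a given training step for interpretability work."""
    # Deferred import so the helper can be defined without transformers installed.
    from transformers import AutoTokenizer, GPTNeoXForCausalLM
    name = f"EleutherAI/pythia-{size}" + ("-deduped" if deduped else "")
    revision = f"step{step}"  # intermediate checkpoints are stored as repo branches
    model = GPTNeoXForCausalLM.from_pretrained(name, revision=revision)
    tokenizer = AutoTokenizer.from_pretrained(name, revision=revision)
    return model, tokenizer

if __name__ == "__main__":
    model, tokenizer = load_pythia(size="70m", step=3000)
```

Loading the same model at successive steps makes it possible to study how capabilities and internal representations emerge over the course of training.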
Project to create fully open source, state-of-the-art LLMs
RedPajama is a collaborative project aimed at producing a reproducible, fully open, leading language model. It started by reproducing the LLaMA training dataset of over 1.2 trillion tokens.
High-Resolution Image Synthesis with Latent Diffusion Models
Stable Diffusion is a deep learning text-to-image model that can generate detailed images based on text descriptions. It may also be used for inpainting, outpainting, and creating image-to-image translations driven by a text prompt.
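Generation is typically driven through the diffusers library; below is a minimal text-to-image sketch. The checkpoint id and the CUDA GPU are illustrative assumptions, not requirements of the model itself.

```python
def text_to_image(prompt: str, out_path: str = "out.png") -> None:
    """Generate one image from a text prompt and save it to disk."""
    # Deferred imports: the heavy dependencies load only when actually generating.
    import torch
    from diffusers import StableDiffusionPipeline
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",  # illustrative checkpoint id
        torch_dtype=torch.float16,
    )
    pipe = pipe.to("cuda")  # assumes a CUDA GPU is available
    image = pipe(prompt).images[0]
    image.save(out_path)

if __name__ == "__main__":
    text_to_image("an astronaut riding a horse on the moon")
```

Inpainting and image-to-image use sibling pipelines in the same library, with the source image and a mask passed as additional inputs.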
Large-scale open source conversational chatbot trained via RLHF
StableVicuna, released by Stability AI, is a further instruction-fine-tuned version of Vicuna v0 13B, which is itself an instruction-fine-tuned LLaMA 13B model. Unlike other open-source chatbot alternatives to ChatGPT, StableVicuna has been trained via reinforcement learning from human feedback (RLHF).
Machine learning model for high quality image generation
StyleGAN is a generative adversarial network introduced by Nvidia researchers, whose source code has been made available. The model can produce an unlimited number of portraits of fake human faces that are hard to distinguish from real ones.
Open-Source conversational Chatbot with 13 billion parameters
Vicuna is an open-source chatbot based on LLaMA and fine-tuned on about 70K user-shared conversations collected from the ShareGPT website. According to the authors’ initial assessments, which used GPT-4 as a judge, Vicuna-13B achieves more than 90% of the quality of OpenAI’s ChatGPT.
MediaFutures is funded by the European Union's Horizon 2020 Programme, under grant agreement number 951962. MediaFutures is a Europe-wide consortium. This website is managed on behalf of the consortium by Eurecat, whose main address is Carrer de Bilbao, 72, 08013 Barcelona (Spain).