Huadong Chen howardchenhd
Stars
An open-source implementation for fine-tuning the Qwen-VL series by Alibaba Cloud.
Official release of InternLM series (InternLM, InternLM2, InternLM2.5, InternLM3).
The simplest, fastest repository for training/finetuning medium-sized GPTs.
Secrets of RLHF in Large Language Models Part I: PPO
Large-scale, Informative, and Diverse Multi-round Chat Data (and Models)
TigerBot: A multi-language multi-task LLM
A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF)
LLM training code for Databricks foundation models
GPT4All: Run Local LLMs on Any Device. Open-source and available for commercial use.
OpenLLaMA, a permissively licensed open source reproduction of Meta AI’s LLaMA 7B trained on the RedPajama dataset
The official gpt4free repository | a collection of powerful language models | o4, o3, DeepSeek R1, GPT-4.1, Gemini 2.5
The RedPajama-Data repository contains code for preparing large datasets for training large language models.
An open-source tool-augmented conversational language model from Fudan University
Chinese LLaMA & Alpaca large language models, with local CPU/GPU training and deployment (Chinese LLaMA & Alpaca LLMs)
UBC ARBERT and MARBERT: Deep Bidirectional Transformers for Arabic
Share, discover, and collect prompts from the community. Free and open source — self-host for your organization with complete privacy.
Let ChatGPT teach your own chatbot in hours with a single GPU!
Large language models (LLMs) made easy: EasyLM is a one-stop solution for pre-training, fine-tuning, evaluating, and serving LLMs in JAX/Flax.
ChatLLaMA 📢 Open source implementation of a LLaMA-based ChatGPT runnable on a single GPU. 15x faster training process than ChatGPT
Instruction Tuning with GPT-4
An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
BELLE: Be Everyone's Large Language Model Engine (open-source Chinese conversational LLM)
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
Making large AI models cheaper, faster and more accessible
[ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters
Instruct-tune LLaMA on consumer hardware
Code and documentation to train Stanford's Alpaca models, and generate the data.