BLOOM (language model)
| BLOOM | |
|---|---|
| Original author | BigScience research workshop |
| Initial release | July 12, 2022 |
| Repository | huggingface |
| Written in | Python |
| License | BigScience Responsible AI License (RAIL) v1.0 |
| Website | bigscience |
The BigScience Large Open-science Open-access Multilingual Language Model (BLOOM) is an open-access large language model (LLM).[1] It was created by a volunteer-driven research effort to provide a transparently created alternative to proprietary AI models.[2]
With 176 billion parameters, BLOOM is a transformer-based autoregressive model designed to generate text in 46 natural languages and 13 programming languages. The model, its source code, and the data used to train it are all distributed under free licences, allowing public research and use.[3][4]
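Because the model weights are published on the Hugging Face Hub, BLOOM can be used with standard open-source tooling. The following is a minimal illustrative sketch, not taken from the article: it assumes the `transformers` library, and it loads the smaller public variant `bigscience/bloom-560m` because the full 176-billion-parameter checkpoint requires hundreds of gigabytes of memory; the prompt and generation settings are arbitrary examples.

```python
# Illustrative sketch (assumption, not from the article): loading a BLOOM
# checkpoint with the Hugging Face "transformers" library and generating text
# autoregressively (one token at a time).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "bigscience/bloom-560m"  # smaller public BLOOM variant used for demonstration
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "BLOOM is a multilingual language model that"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```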
Development
BLOOM is the main outcome of the BigScience initiative, a one-year research workshop that ran from May 2021 to May 2022.[5] The project was led by Hugging Face and involved several hundred volunteer researchers and engineers from academia and the private sector. The model was trained between March and July 2022 on the Jean Zay public supercomputer in France, which is managed by GENCI and IDRIS (CNRS).[6]
BLOOM's training corpus, named ROOTS, combines data extracted from the then-latest version of the web-based OSCAR corpus (38% of ROOTS) with newly collected data from a manually selected and documented list of language data sources. In total, the model was trained on approximately 366 billion tokens, drawn from roughly 1.6 TB of text.[7][8]
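The token counts above refer to units produced by BLOOM's multilingual subword tokenizer rather than to raw bytes or words. A minimal sketch, assuming the `transformers` library and the publicly hosted `bigscience/bloom` tokenizer (only the tokenizer files are downloaded, not the model), shows how text in different languages is split into such tokens; the sample strings are illustrative.

```python
# Illustrative sketch (assumption, not from the article): counting BLOOM
# tokens for short text samples using the published tokenizer.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom")

samples = {
    "English": "Large language models learn from text.",
    "French": "Les grands modèles de langue apprennent à partir de texte.",
    "Python": "def add(a, b):\n    return a + b",
}

for name, text in samples.items():
    ids = tokenizer(text)["input_ids"]
    print(f"{name}: {len(ids)} tokens")
```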
References
- ^ "BigScience Large Open-science Open-access Multilingual Language Model". Retrieved October 1, 2022.
- ^ Heikkilä, Melissa (July 12, 2022). "BLOOM: Inside the radical new project to democratize AI". MIT Technology Review. Retrieved December 26, 2023.
- ^ "The BigScience RAIL license". Retrieved January 10, 2024.
- ^ Le Scao T, Fan A, Akiki C, Pavlick E, Ilić S, Hesslow D, Castagné R, Luccioni A, Yvon F, Gallé M, Tow J, Rush AM, Biderman S, Webson A, Sasanka Ammanamanchi P, Wang T, Sagot B, Muennighoff N, Villanova del Moral A, Ruwase O, Bawden R, Bekman S, McMillan-Major A, Beltagy I, Nguyen H, Saulnier L, Tan S, Ortiz Suarez P, Sanh V, Laurençon H, Jernite Y, Launay J, Mitchell M, Raffel C, et al. (2022). "BLOOM: A 176B-Parameter Open-Access Multilingual Language Model". arXiv:2211.05100 [cs.CL].
- ^ "BigScience" . Retrieved 2024年01月10日.
- ^ "Release of largest trained open-science multilingual language model ever". French National Centre for Scientific Research . 2022年07月12日. Retrieved 2023年12月26日.
- ^ Laurençon H, Saulnier L, Wang T, Akiki C, Villanova del Moral A, Le Scao T, Von Werra L, Mou C, González Ponferrada C, Nguyen H, Frohberg J, Šaško M, Lhoest Q, McMillan-Major A, Dupont G, Biderman S, Rogers A, Ben allal L, De Toni F, Pistilli G, Nguyen O, Nikpoor S, Masoud M, Colombo P, de la Rosa J, Villegas P, Thrush T, Longpre S, Nagel S, Weber L, Muñoz M, Zhu J, Van Strien D, Alyafeai Z, Almubarak K, Vu MC, Gonzalez-Dios I, Soroa A, Lo K, Dey M, Ortiz Suarez P, Gokaslan A, Bose S, Adelani D, Phan L, Tran H, Yu I, Pai S, Chim J, Lepercq V, Ilic S, Mitchell M, Luccioni S, Jernite Y (2022). "The BigScience ROOTS Corpus: A 1.6TB Composite Multilingual Dataset". arXiv:2303.03915 [cs.CL].
- ^ Heikkilä, Melissa (July 12, 2022). "BLOOM: Inside the radical new project to democratize AI". MIT Technology Review. Retrieved December 26, 2023.