The official GitHub page for the survey paper "A Survey of Large Language Models".
An open-source framework for training large multimodal models.
Painter & SegGPT Series: Vision Foundation Models from BAAI
A trend that started with "Chain-of-Thought Prompting Elicits Reasoning in Large Language Models".
PyTorch implementation of VALL-E (zero-shot text-to-speech); reproduced demo at https://lifeiteng.github.io/valle/index.html
Awesome resources for in-context learning and prompt engineering with LLMs such as ChatGPT, GPT-3, and FlanT5, with up-to-date and cutting-edge updates.
A summary of Prompt & LLM papers, open-source datasets & models, and AIGC applications.
Emu: An Open Multimodal Generalist
Collection of papers and resources on Reasoning in Large Language Models (LLMs), including Chain-of-Thought (CoT), Instruction-Tuning, and others.
OpenICL is an open-source framework to facilitate research, development, and prototyping of in-context learning.
This repository contains a collection of papers and resources on Reasoning in Large Language Models.
Papers and Datasets on Instruction Learning / Instruction Tuning.
A sample for envisioning intelligent apps built on Microsoft's Copilot stack for AI-infused product experiences.
[ICLR 2023] Code for the paper "Binding Language Models in Symbolic Languages"
Awesome-LLM-Robustness: a curated list of resources on uncertainty, reliability, and robustness in Large Language Models.
A paper list for pre-trained models in recommender systems.
Research Trends in LLM-guided Multimodal Learning.
Official Code for "Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents"
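Every entry above centers on in-context learning. As a minimal, self-contained sketch of the core idea (the task, template, and demonstrations below are hypothetical illustrations, not taken from any listed repository): a few labeled examples are concatenated into a prompt, followed by the unlabeled query, and the language model is expected to complete the pattern.

```python
# Minimal sketch of few-shot in-context learning (ICL) prompt construction.
# No model call is made here; this only shows how the prompt is assembled.

def build_icl_prompt(demonstrations, query):
    """Concatenate labeled demonstrations, then append the unlabeled query.

    demonstrations: list of (text, label) pairs used as in-context examples.
    query: the new input the model should label by completing the prompt.
    """
    parts = [
        f"Review: {text}\nSentiment: {label}"
        for text, label in demonstrations
    ]
    parts.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(parts)

demos = [
    ("A delightful film.", "positive"),
    ("Dull and overlong.", "negative"),
]
prompt = build_icl_prompt(demos, "Surprisingly moving.")
print(prompt)
```

The resulting string ends with an open "Sentiment:" slot, so a language model continuing the text performs the classification without any parameter updates, which is the setting the frameworks and paper collections above study.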