🌟 The Multi-Agent Framework: given a one-line requirement, returns PRD, design, tasks, and repo
Your GenAI second brain 🧠 A personal productivity assistant (RAG) ⚡️🤖 Chat with your docs (PDF, CSV, ...) and apps using Langchain with GPT-3.5/4 turbo, Anthropic, VertexAI, Ollama, and other LLMs, and share it with users! A local, private alternative to OpenAI GPTs and ChatGPT, powered by retrieval-augmented generation
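Several of these projects are built around retrieval-augmented generation (RAG): retrieve the passages most similar to the user's question, then prepend them as context to the LLM prompt. A minimal, illustrative sketch of that retrieve-then-prompt loop follows; it uses a toy bag-of-words similarity instead of real embeddings, and all function and document names are hypothetical, not taken from any repository above.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": a bag-of-words term-frequency vector.
    # Real systems use dense embeddings from a neural model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    # Retrieved passages become context for the LLM call (stubbed out here).
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

docs = [
    "Quarterly revenue grew 12 percent year over year.",
    "The office is closed on public holidays.",
    "Revenue growth was driven by the cloud segment.",
]
print(build_prompt("what drove revenue growth?", docs))
```

The same shape underlies the RAG engines above: only the embedding model, the vector index, and the final LLM call differ.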
LlamaIndex (formerly GPT Index) is a data framework for your LLM applications
A cloud-native vector database, storage for next generation AI applications
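The core operation a vector database provides is nearest-neighbor search: store (key, vector) pairs and return the keys whose vectors are most similar to a query vector. A deliberately tiny in-memory sketch is below; the class and method names are hypothetical, and production systems add persistence, sharding, and approximate-nearest-neighbor indexes on top of this idea.

```python
import heapq
import math

class TinyVectorStore:
    """Illustrative in-memory store using exact cosine-similarity search."""

    def __init__(self) -> None:
        self._items: list[tuple[str, list[float]]] = []

    def insert(self, key: str, vector: list[float]) -> None:
        self._items.append((key, vector))

    def search(self, query: list[float], k: int = 3) -> list[str]:
        # Exact search: score every stored vector against the query.
        def cos(a: list[float], b: list[float]) -> float:
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(x * x for x in b))
            return dot / (na * nb) if na and nb else 0.0

        best = heapq.nlargest(k, self._items, key=lambda it: cos(query, it[1]))
        return [key for key, _ in best]

store = TinyVectorStore()
store.insert("cat", [1.0, 0.0, 0.1])
store.insert("dog", [0.9, 0.1, 0.0])
store.insert("car", [0.0, 1.0, 0.9])
print(store.search([1.0, 0.0, 0.0], k=2))  # → ['cat', 'dog']
```

Exact scoring scans every vector, which is fine at toy scale; the databases above replace it with approximate indexes (e.g. graph- or quantization-based) to stay fast at billions of vectors.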
Langchain-Chatchat (formerly Langchain-ChatGLM): a local knowledge-base question-answering app built with Langchain and LLMs such as ChatGLM
Build AI 🤖 using SQL
A WeChat chatbot built on large language models, supporting WeChat, WeChat Work, Official Accounts, and Feishu integrations. Choose among GPT-3.5/GPT-4.0/Claude/ERNIE Bot/iFlytek Spark/Tongyi Qianwen/Gemini/LinkAI; it handles text, voice, and images, can access the operating system and the internet, and supports custom enterprise customer service backed by your own knowledge base.
Drag & drop UI to build your customized LLM flow
Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
Chinese LLaMA & Alpaca large language models, with local CPU/GPU training and deployment
Integrate cutting-edge LLM technology quickly and easily into your apps
Enable everyone to develop, optimize, and deploy AI models natively on their own devices.
ChatGLM2-6B: an open-source bilingual (Chinese-English) chat LLM
🤖 The free, Open Source OpenAI alternative. Self-hosted, community-driven and local-first. Drop-in replacement for OpenAI running on consumer-grade hardware. No GPU required. Runs ggml, gguf, GPTQ, onnx, TF compatible models: llama, llama2, rwkv, whisper, vicuna, koala, cerebras, falcon, dolly, starcoder, and many others
<⚡️> SuperAGI - A dev-first open-source autonomous AI agent framework, enabling developers to build, manage, and run useful autonomous agents quickly and reliably.
An Open-Source Assistants API and GPTs alternative. Dify.AI is an LLM application development platform. It integrates the concepts of Backend as a Service and LLMOps, covering the core tech stack required for building generative AI-native applications, including a built-in RAG engine.
A high-throughput and memory-efficient inference and serving engine for LLMs