ggml-org / llama.cpp
LLM inference in C/C++
Hyprland is an independent, highly customizable, dynamic tiling Wayland compositor that doesn't sacrifice on its looks.
Scripting platform, modding framework and VR support for all RE Engine games
JSON for Modern C++
The new Windows Terminal and the original Windows console host, all in the same place!
Super repo for ROCm systems projects
MNN: A blazing-fast, lightweight inference engine battle-tested by Alibaba, powering high-performance on-device LLMs and Edge AI.
Free self-driving car stack - fully open-source ADAS and autonomous driving system
A PSP emulator for Android, Windows, Mac and Linux, written in C++. Want to contribute? Join us on Discord at https://discord.gg/5NJB6dD or just send pull requests / issues. For discussion use the forums at forums.ppsspp.org.
Filament is a real-time physically based rendering engine for Android, iOS, Windows, Linux, macOS, and WebGL2
11.210% - Decompilation of Minecraft: Legacy Console Edition
MLX: An array framework for Apple silicon
OpenVINO™ is an open source toolkit for optimizing and deploying AI inference
FlatBuffers: Memory Efficient Serialization Library
An MCP-based chatbot
RenderDoc is a stand-alone graphics debugging tool.
Godot reverse engineering tools
Embedded Template Library
A desktop floating-window app that displays current network speed, CPU, and memory usage; it also supports taskbar display and switchable skins.
Feed reader (podcast player and also Gemini protocol client) which supports RSS/ATOM/JSON and many web-based feed services.