AI inside Unity is getting real, and MCP is a big part of what makes it practical. In this Open Source Friday, we’re joined by Andy Tsen to talk about Unity MCP (Model Context Protocol for Unity). We dig into how MCP helps tools and agents talk to Unity in a structured way, what “context” means in a game engine, and how devs can start experimenting with AI-assisted workflows. Repo: https://lnkd.in/eEf92CDH
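For context on what "talking to Unity in a structured way" looks like: MCP clients and servers exchange JSON-RPC 2.0 messages, and a tool invocation is just a structured request. A minimal sketch below, assuming a hypothetical `create_gameobject` tool — this is illustrative of the MCP message shape, not the Unity MCP repo's actual tool names or arguments:

```python
import json

# MCP tool calls ride on JSON-RPC 2.0: the client names a tool the
# server advertised and passes structured arguments. The tool name
# and arguments here are hypothetical, for illustration only.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "create_gameobject",  # hypothetical Unity-side tool
        "arguments": {"name": "Player", "position": [0, 1, 0]},
    },
}

# The wire format is plain JSON, which is what makes the protocol
# easy for any agent or editor plugin to speak.
print(json.dumps(request, indent=2))
```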
GitHub
Software Development
San Francisco, CA · 5,988,626 followers
The home of software development
About us
As the global home for all developers, GitHub is the complete AI-powered developer platform to build, scale, and deliver secure software. Over 100 million people, including developers from 90 of the Fortune 100 companies, use GitHub to build amazing things together across 330+ million repositories. With all the collaborative features of GitHub, it has never been easier for individuals and teams to write faster, better code.
- Website: https://github.com
- Industry: Software Development
- Company size: 501-1,000 employees
- Headquarters: San Francisco, CA
- Type: Privately Held
- Founded: 2008
Locations
- Primary: 88 Colin P Kelly Jr St, San Francisco, CA 94107, US
Updates
GitHub Security Lab just open sourced an AI-powered vulnerability scanning framework, and it's finding real, high-impact bugs. 🔍 The Taskflow Agent is designed to detect auth bypasses, IDORs (insecure direct object references), token leaks, and other vulnerabilities that often slip through standard tooling. It's agentic, meaning it can reason through complex code paths to surface issues that rule-based scanners miss. Try the framework on your own project. ⬇️ https://lnkd.in/de-fz3-N
Is AI making us all use the same tools, or is it empowering us to try new things? 🤔 The Head of GitHub Next, Idan Gazit, sees two trends colliding:
• Consolidation around popular frameworks where AI excels
• Lower barriers to programming languages you've never written
What do you predict will win out? Gather more insights here. ⬇️ https://lnkd.in/gPKXGwv2
GitHub reposted this
You know what feels good on a Friday morning? Closing two of the top 10 most requested community items on your product.
✅ Support timezones in Crons
✅ Support environments without creating deploys
You know what feels better? Also shipping multi-label on ARC, as we can finally get to what the community asked for there as well 🎉 Being able to come back to the community and just 'engage and fix' what people have been asking for for 4 years is amazing.
Next up:
🔃 Parallel steps
🔃 Splitting composite Action steps
🔃 Improve concurrency to not cancel pending jobs
And that's just the experience pieces. Watch out for a blog post soon on what we have coming next on the security-focused side of Actions 💪
Since its launch, there have been 60 million Copilot code reviews (and counting). 👀 As AI-accelerated development increases the rate of code changes, keeping up with review is getting harder. Copilot code review helps teams close that gap, catching issues early without becoming a bottleneck. If your team is shipping more code faster than ever, this blog digs into how Copilot code review fits into that workflow. 🤖 https://lnkd.in/d5Q5QP73
This week on Open Source Friday, we're talking with Abhi Aiyer, CTO at Mastra, about building AI applications with TypeScript. Mastra is a TypeScript-first framework for creating AI applications. We'll dig into how it works, what problems it solves, and what's next for the project.
Links:
Mastra GitHub: https://lnkd.in/dVuKrvHu
Mastra Website: https://mastra.ai/
About Open Source Friday: Every Friday, we highlight open source projects and the people building them.
Open Source Friday with Mastra
GitHub reposted this
Choice has always been core to GitHub. Build how you want, where you want, with the right tools, languages, models, and now agents to get the job done.
With the rise of models, there’s been a lot of discussion around benchmarks and rankings. They’re useful, but they don’t answer a harder question: how does a model behave inside a real repo, under real revision pressure?
We looked at aggregated GitHub Copilot telemetry across 23M+ product requests and examined code survivability, or the percentage of AI-generated code that ultimately makes it into commits. Here are a few observations:
1️⃣ Frontier models all show meaningful code survivability. None cluster exclusively in low‑survival buckets.
2️⃣ The gap between models is narrower than benchmarks often imply. Across real repositories, most cluster surprisingly close in terms of code that survives.
3️⃣ Workflow posture appears to matter more than model choice. Exploratory coding naturally produces lower survival rates than execution-oriented work.
Said plainly: when you look at millions of real development sessions, the differences between models flatten more than the discourse suggests. This isn’t about declaring a winner, but rather understanding how models actually behave once they enter the messy reality of real software projects. Long after launch hype fades, this is the signal we care about: does the code survive?
Regardless of what the data says, or which model ships next, our role hasn't changed. It’s the same thing it’s always been: making sure developers have access to the best tools. Choice has never been just a feature for us. It’s the whole point (and it informs our whole approach to key tools like Copilot CLI, which lets you set multi-model fleets of subagents loose on any given task).
The team’s been cooking up additional insights here. More to come if this is interesting to folks. 👀 The data certainly surprised us. 😄
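The survivability metric described above can be made concrete with a small sketch. Assumptions to flag: this is an illustrative definition (share of suggested lines still present at commit time), not GitHub's actual telemetry pipeline, and `survivability` is a name invented here:

```python
def survivability(suggested_lines: set[str], committed_lines: set[str]) -> float:
    """Fraction of AI-suggested lines that survive into the final commit.

    A toy stand-in for the metric in the post: suggested code that is
    deleted or rewritten before commit does not count as surviving.
    """
    if not suggested_lines:
        return 0.0
    surviving = suggested_lines & committed_lines
    return len(surviving) / len(suggested_lines)


# Three lines suggested, two kept at commit time -> 2/3 survive.
suggested = {"def add(a, b):", "    return a + b", "print(add(1, 2))"}
committed = {"def add(a, b):", "    return a + b"}
print(survivability(suggested, committed))  # 0.666...
```

Treating the metric as a set intersection is the simplest interpretation; real telemetry would also have to handle edited (rather than kept-or-deleted) lines, which this sketch ignores.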
Come hang and hack with us!
Rubber Duck Thursday!
Raycast users: Did you know? 💡 Learn more about what the Copilot coding agent can do for you. ⬇️ https://lnkd.in/g2egquxU