Issues: meta-llama/llama-stack
#223: Unable to run llama on Windows: "'llama' is not recognized as an internal or external command" (opened Oct 9, 2024 by thecoderok)
#203: Running llama-stack with 8B llama on an AWS CPU-only instance throws an error (opened Oct 7, 2024 by ShadiCopty)
#194: I am puzzled as to why stack needs to bind to the address [::ffff:0.0.2.208] (opened Oct 6, 2024 by Itime-ren)
#191: Are there any available tools that can convert the original .pth to safetensors? (opened Oct 5, 2024 by Itime-ren)
#190: stack tool cannot support large models with a .pth extension downloaded from Meta (opened Oct 5, 2024 by Itime-ren)
#184: Cannot run llama-stack on Windows due to termios dependency; build fails with poetry (opened Oct 4, 2024 by ShadiCopty)
#183: ollama inference should verify models are downloaded before serving [good first issue] (opened Oct 4, 2024 by dltn)
#180: llama-stack run with meta reference inference provider fails with ModuleNotFoundError (opened Oct 3, 2024 by romilbhardwaj)
#168: [functionality] Implement completion() methods [good first issue] (opened Oct 2, 2024 by ashwinb)
#164: fbgemm-gpu isn't officially supported on Mac; make it an optional dependency? (opened Oct 1, 2024 by vinooganesh)