Share buffers between CPU and GPU #1696
Conversation
Ah very interesting!
Currently, it's not the case without
@ggerganov I took my best shot at it, but I'm definitely stepping into unfamiliar territory. Even without this code, these buffers seemed to be page-aligned when running with
Hi, I tried this and haven't hit disk swap on an M2 Pro with 16GB, using llama-13b-supercot.ggmlv3.q4_0.bin.
Great work!
After this PR was merged, I now get this error. Any idea why?
@Johnhersh The error you shared was actually added in #1706. Was it running without issues before #1696? I suspect that before that it was actually failing silently when creating that buffer, e.g. hanging or producing garbage output. The root cause is that macOS is refusing to create a buffer big enough to hold the model you're loading. It seems macOS has an upper limit on MTLBuffer size that is roughly half the total unified memory, which a 30B model exceeds on machines with 32GB or less. Since the limit appears to apply to individual buffer size rather than total memory usage, last night I started playing with some ideas for how to split the model data across multiple smaller buffers.
Yes, you're correct: before these changes it was outputting ?? repeatedly. So this error message is correct, then. Thank you!
I checked this to confirm it is not specific to one creation method. Not too surprising; I just wanted to make sure that the MTLDevice.maxBufferLength limit applies to both methods of creating a MTLBuffer. So @ggerganov, it seems we might need to figure out how to split up buffers that exceed this limit.
This updates ggml_metal_add_buffer to use MTLDevice.newBufferWithBytesNoCopy to share buffers between the CPU and GPU rather than re-allocating a separate copy for the GPU.
I've been following #1642 and noticed that this might help some of the swapping-related issues some have been seeing with larger models and devices with < 96GB memory. With this change, I'm no longer seeing any swapping or odd/corrupted output.
Apologies if I missed any contribution steps, or if this change is missing something obvious. I'm not sure whether it covers all possible cases. One thing to note is that newBufferWithBytesNoCopy requires a page-aligned source buffer. This seems to hold with mmap, but I'm not sure it will remain true for all possible configurations / code paths.