There is a fundamental conflict between the git model and what you are trying to do here. git clone makes a full copy of the repository on your local machine; the whole idea of git is to keep a complete copy of the repository locally, so most commands involve almost no communication with the server. The only time there is communication is when you git fetch (pull down branch changes from the server), git push (send local branch changes to the remote repository), or, of course, when you make a complete copy with git clone.
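A minimal sketch of that model, using a local bare repository in a temp directory as a stand-in for the remote server (all paths and names here are made up for illustration):

```shell
#!/bin/sh
# Demonstrate that git works on a full local copy and only contacts the
# "server" on clone, fetch, and push. A local bare repo plays the remote.
set -e
tmp=$(mktemp -d)

# Create a bare repository to act as the remote.
git init --bare -q "$tmp/remote.git"

# clone: the one-time full copy of the repository.
git clone -q "$tmp/remote.git" "$tmp/local"
cd "$tmp/local"

# Committing happens entirely on the local copy; no server contact.
git -c user.email=a@b -c user.name=a commit -q --allow-empty -m "first"

# Only push and fetch actually talk to the remote.
git push -q origin HEAD:main
git fetch -q origin
echo done
```

Everything between the clone and the push touches only the local copy, which is exactly why a clone has to bring the whole repository down first.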
So really, I think you are stuck. What I gather is that you want to pull down lots of different projects, run analysis on each, then throw the local copy away. That's not something git was designed for: it is fundamentally about keeping and modifying a local copy of a repository over a long period of time.
I do question your assumption that pulling to memory would even make a difference. Network operations are a couple of orders of magnitude slower than disk operations. A decent non-SSD drive will give you around 150 MB/s, and an SSD will roughly triple that. Unless you have a much better connection than I do, pulling to memory would not speed things up at all, because your OS spends nearly all of its time waiting on network requests to the git server, not on disk writes.
If you are working with GitHub, you may be better off with the "Download ZIP" method on each project page. This downloads a snapshot of one branch without any of the extra branch/history information, which should be faster than doing a git pull for cases where you only need the latest version of one branch.