FAQ, statement_cache_size=0... #507
We would need a bit more information to triage this, such as sample code and steps to reproduce. Also, you don't need to use explicit prepared statements in most cases: asyncpg maintains an automatic prepared-statement cache, which is usually sufficient.
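To make the idea of an automatic, query-keyed statement cache concrete, here is a minimal toy sketch, not asyncpg's actual implementation: the first use of a query text "prepares" it once, and later calls with the same text reuse the cached statement, with least-recently-used eviction when the cache is full.

```python
from collections import OrderedDict

class StatementCacheSketch:
    """Toy model of a query-keyed LRU prepared-statement cache.

    Illustrative only -- NOT asyncpg's internals. The point: callers
    never name statements themselves; repeated query text reuses the
    statement prepared on first use.
    """

    def __init__(self, max_size=100):
        self.max_size = max_size
        self._cache = OrderedDict()
        self.prepare_calls = 0  # how many simulated server round trips

    def get(self, query: str):
        if query in self._cache:
            self._cache.move_to_end(query)   # mark as recently used
            return self._cache[query]
        self.prepare_calls += 1              # simulate preparing on the server
        stmt = f"__stmt_{self.prepare_calls}__"
        self._cache[query] = stmt
        if len(self._cache) > self.max_size:
            self._cache.popitem(last=False)  # evict least recently used
        return stmt

cache = StatementCacheSketch(max_size=2)
a1 = cache.get("SELECT 1")
a2 = cache.get("SELECT 1")   # cache hit: no second prepare
```

With a cache like this, application code simply calls `fetch()`/`execute()` with query text and never manages statement lifetimes by hand.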
Thanks for your timely reply. I wrote a Python class that prepares named statements (this was discussed in another thread that you may have seen). You don't have to examine all the code; the key part is my `PreparedStatement` class and its `__aenter__` and `__aexit__` methods: https://gitlab.com/osfda/asyncpg_utility/blob/master/asyncpg_utility.py I set up a prepared statement in `__aenter__`, assigning it to a member of the class.
Examining your source, I did not see any way to release a prepared statement, so in one version I had `__aexit__` do nothing (figuring the connection release would clean it up); then, just to force the issue, I also tried an explicit `del` of the prepared statement in `__aexit__` (which is what you see at the link above). The code that calls the statement-preparing class is simply this:
The `DuplicatePreparedStatement` error is raised in the `__aenter__` of my `PreparedStatement` class, when it makes the `prepare` call. It works a number of times, just not every time, reliably!
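The pattern being described can be sketched as an async context manager like the one below. This is a hypothetical reconstruction against a stubbed connection (the `FakeConnection` class and the name/query arguments are mine, not the linked code): prepare in `__aenter__`, release in `__aexit__`. Note the comment on cleanup: a server-side statement only disappears when it is deallocated or the session ends; `del` on the Python object alone does not release it.

```python
import asyncio

class FakeConnection:
    """Stand-in for an asyncpg connection, for illustration only."""
    def __init__(self):
        self.prepared = {}

    async def prepare(self, name, query):
        # A real server raises DuplicatePreparedStatement here if the
        # name is already taken on this session.
        if name in self.prepared:
            raise RuntimeError(f"prepared statement {name!r} already exists")
        self.prepared[name] = query
        return name

class PreparedStatement:
    """Sketch of the pattern described above (names and signatures
    here are hypothetical): prepare in __aenter__, release in __aexit__."""
    def __init__(self, conn, name, query):
        self.conn, self.name, self.query = conn, name, query
        self.stmt = None

    async def __aenter__(self):
        self.stmt = await self.conn.prepare(self.name, self.query)
        return self.stmt

    async def __aexit__(self, exc_type, exc, tb):
        # The server only drops a named statement on DEALLOCATE or at
        # session end; a Python-side `del` does not release it. Here we
        # simulate an explicit deallocation.
        self.conn.prepared.pop(self.name, None)
        self.stmt = None
        return False

async def demo():
    conn = FakeConnection()
    async with PreparedStatement(conn, "get_user", "SELECT 1") as stmt:
        assert stmt == "get_user"
    return conn.prepared  # empty again after __aexit__

result = asyncio.run(demo())
```

If `__aexit__` never deallocates, re-entering the same name on the same session is exactly what triggers the duplicate-statement error.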
asyncpg uses a simple monotonic counter to generate prepared statement names. The only two times when I've seen …
I have multiple functions concurrently calling `fetch_something` (each one independently executing its own `run_until_complete`). Is `pool.acquire` thread-safe? (I had reasoned it would be.) The pool is stored globally at the main-module level, and that module is a Falcon app served via uwsgi; uwsgi references an `app` global variable:
uwsgi will run MULTIPLE instances of the app; do you think that might be a problem? I think each app module should have its own distinct global pool variable...
Well, I did a rewrite. In Falcon, each URL endpoint gets a class to handle it, so I added a dedicated pool to each endpoint class and dropped the module-level pool. In testing: so far, so good. Having the endpoint classes share a module-level pool did not work out (I had presumed that `pool.acquire` would handle the concurrency issues there). In the process of restructuring, the code also got more efficient (I eliminated some redundant connection acquisitions). So far I am not seeing a performance gain from using asyncpg in Falcon's synchronous context; but if I were to dispatch asynchronous queries while simultaneously doing other asynchronous network tasks, then coordinate completion of those tasks in the synchronous endpoint routine, I would get the benefit (I'll do that eventually).
I have done a `create_pool`, passing the argument `statement_cache_size=0`; I am STILL periodically getting the `DuplicatePreparedStatement` error on statements prepared from a connection acquired from that pool.
Any other options? Every other facility in your driver appears to work reliably except this critically important prepared-statement facility.
Running your latest driver against Postgres 12; the application runs in the context of a uwsgi server app, connecting to Postgres via a socket file (in /var/run...).