I'm running concurrent Postgres queries like this:
UPDATE foo SET bar = bar + 1 WHERE baz = 1234
Each query affects a fixed number of rows, K, and since I can't find a way to enforce the order in which the rows are updated, I end up with deadlocks. Currently I work around the problem by enforcing the order by hand, but this means executing many more queries than I otherwise would, while also raising the search complexity from O(log N + K) to O(K log N).
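For reference, the single-statement workaround I know of (a sketch, assuming `foo` has a unique `id` column) is to lock the matching rows in a deterministic order before updating them, so that concurrent transactions acquire the row locks in the same order:

```sql
BEGIN;
-- Lock the K matching rows in id order first; every concurrent
-- transaction acquires the locks in the same order, so they
-- queue behind each other instead of deadlocking.
SELECT id FROM foo WHERE baz = 1234 ORDER BY id FOR UPDATE;
-- The rows are already locked, so the update itself is safe.
UPDATE foo SET bar = bar + 1 WHERE baz = 1234;
COMMIT;
```

This still pays an extra scan per statement, which is exactly the overhead I'd like to avoid.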
Is there a way to improve performance without becoming vulnerable to deadlocks again? I suspect that replacing the (baz) index with a (baz, id) index might work, provided that Postgres updates the rows in the same order in which it scanned them. Is this an approach worth pursuing?
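Concretely, the change I have in mind (a sketch; the index name is made up) would be:

```sql
-- Hypothetical: drop the single-column index and replace it with a
-- composite one, hoping index-scan order implies update order.
DROP INDEX IF EXISTS foo_baz_idx;
CREATE INDEX foo_baz_id_idx ON foo (baz, id);
```

My worry is that PostgreSQL makes no guarantee that an UPDATE visits rows in index-scan order (for example, the planner may pick a bitmap heap scan, which visits rows in physical heap order), so the composite index alone may not rule out deadlocks.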