It would be much more efficient to store your values in a normalized schema. That said, you can also make it work with your current setup.
Assumptions
Assuming this table definition:
CREATE TABLE tbl (tbl_id int, usr jsonb);
"user" is a reserved word and would require double quoting to be used as column name. Don't do that. I'm naming the column usr
instead.
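To illustrate, some hypothetical sample data with the assumed shape: a jsonb array of objects, each carrying an "_id" and a "count" key (the values are made up for demonstration):

```sql
-- Hypothetical sample rows; keys and values are assumptions for illustration:
INSERT INTO tbl VALUES
  (1, '[{"_id": "1", "count": "42"}, {"_id": "2", "count": "7"}]')
, (2, '[{"_id": "3", "count": "5"}]');
```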
Query
The query is not as trivial as the comments make it seem:
SELECT tbl_id, obj.value->>'count' AS count
FROM  (
   SELECT tbl_id, usr
   FROM   tbl
   WHERE  usr @> '[{"_id":"1"}]'
   ) u
JOIN   LATERAL jsonb_array_elements(usr) obj(value) ON obj.value->>'_id' = '1';
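The derived table is not strictly required; Postgres typically flattens simple subqueries like this anyway. An equivalent formulation without the subquery (a sketch with the same assumed semantics):

```sql
SELECT t.tbl_id, obj.value->>'count' AS count
FROM   tbl t
JOIN   LATERAL jsonb_array_elements(t.usr) obj(value)
       ON obj.value->>'_id' = '1'
WHERE  t.usr @> '[{"_id":"1"}]';
```

The WHERE clause still allows the index to identify qualifying rows before the LATERAL join unnests them.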
There are three basic steps:
- Find qualifying rows (quickly!).
Note how usr @> '[{"_id":"1"}]' can find objects in the array with any number of additional keys.
You can support this with a generic GIN index on the jsonb column, or with the more specialized operator class jsonb_path_ops:
CREATE INDEX tbl_usr_gin_idx ON tbl
USING gin (usr jsonb_path_ops);
- Extract matching objects from each qualifying row.
I achieve this by unnesting with jsonb_array_elements(). (unnest() is only good for Postgres arrays, not for JSON arrays.) And since we are only interested in actually matching objects, I add a join condition right away.
- Extract the value for the key 'count'.
After qualifying objects have been extracted, this is simply: obj.value->>'count'.
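To verify that step 1 can actually use the index, a quick sketch (table and index names as above; the plan shape depends on table size and statistics):

```sql
EXPLAIN (COSTS OFF)
SELECT tbl_id
FROM   tbl
WHERE  usr @> '[{"_id":"1"}]';
-- With enough rows, expect a Bitmap Index Scan on tbl_usr_gin_idx;
-- on tiny tables Postgres may prefer a sequential scan instead.
```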
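The unnesting in steps 2 and 3 can be tried in isolation with a literal array (hypothetical values):

```sql
SELECT value->>'_id'   AS id
     , value->>'count' AS count
FROM   jsonb_array_elements('[{"_id":"1","count":"13"},{"_id":"2","count":"15"}]') AS obj(value);
-- Returns one row per array element:
--  id | count
-- ----+-------
--  1  | 13
--  2  | 15
```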