I have a table with 3 million rows, about 1.3 GB in size, running Postgres 9.3 on my laptop with 4 GB RAM.

explain analyze
select act_owner_id from cnt_contacts where act_owner_id = 2

I have a btree index on cnt_contacts.act_owner_id, defined as:

CREATE INDEX cnt_contacts_idx_act_owner_id 
   ON public.cnt_contacts USING btree (act_owner_id, status_id);

The query runs in about 5 seconds:

Bitmap Heap Scan on cnt_contacts  (cost=2598.79..86290.73 rows=6208 width=4) (actual time=5865.617..5875.302 rows=5444 loops=1)
  Recheck Cond: (act_owner_id = 2)
  ->  Bitmap Index Scan on cnt_contacts_idx_act_owner_id  (cost=0.00..2597.24 rows=6208 width=0) (actual time=5865.407..5865.407 rows=5444 loops=1)
        Index Cond: (act_owner_id = 2)
Total runtime: 5875.684 ms
Why is it taking so long?

work_mem = 1024MB
shared_buffers = 128MB
effective_cache_size = 1024MB
seq_page_cost = 1.0       # measured on an arbitrary scale
random_page_cost = 15.0   # same scale as above
cpu_tuple_cost = 3.0
    
What is the definition of the cnt_contacts_idx_act_owner_id index? – a_horse_with_no_name Feb 23 '14 at 13:06
    
CREATE INDEX cnt_contacts_idx_act_owner_id ON public.cnt_contacts USING btree (act_owner_id, status_id); – Vasil Atanasov Feb 23 '14 at 13:11
    
You should create another index containing only act_owner_id. – frlan Feb 23 '14 at 13:13
    
Won't help; I tried with only act_owner_id as the key and the query results are the same, no difference. – Vasil Atanasov Feb 23 '14 at 13:15
Why did you increase random_page_cost that much? (The default is 4.0 if I'm not mistaken.) That way you are telling Postgres that you have an incredibly slow hard disk with very high latency. And the cpu_tuple_cost also seems very strange (given that the default is 0.01). Even on my pretty old, slow desktop, lowering random_page_cost to 2.5 improved the execution plans Postgres was creating. – a_horse_with_no_name Feb 23 '14 at 13:30
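Following the comment above, the non-default planner costs can be reverted for the current session before re-testing; a minimal sketch, assuming the stock 9.3 defaults (seq_page_cost 1.0, random_page_cost 4.0, cpu_tuple_cost 0.01):

```sql
-- Restore the planner cost settings to their documented defaults
-- for this session only, then re-run the query to compare plans.
SET seq_page_cost = 1.0;
SET random_page_cost = 4.0;
SET cpu_tuple_cost = 0.01;

EXPLAIN ANALYZE
SELECT act_owner_id FROM cnt_contacts WHERE act_owner_id = 2;
```

Session-level SET avoids touching postgresql.conf while experimenting; RESET ALL undoes the changes.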

You are selecting 5444 records scattered over a 1.3 GB table on a laptop. How long do you expect that to take?

It looks like your index is not cached, either because it can't be sustained in the cache, or because this is the first time you used that part of it. What happens if you run the exact same query repeatedly? The same query but with a different constant?
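The repeat-and-vary experiment suggested above could look like this (the alternate owner id 3 is just an illustrative value, not from the question):

```sql
-- First run: likely cold cache, pages read from disk.
EXPLAIN ANALYZE SELECT act_owner_id FROM cnt_contacts WHERE act_owner_id = 2;

-- Second run of the identical query: if the time drops sharply,
-- the first run was dominated by cache misses.
EXPLAIN ANALYZE SELECT act_owner_id FROM cnt_contacts WHERE act_owner_id = 2;

-- Same query shape, different constant: tests whether a different
-- part of the index/heap is also cold.
EXPLAIN ANALYZE SELECT act_owner_id FROM cnt_contacts WHERE act_owner_id = 3;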

Running the query under EXPLAIN (ANALYZE, BUFFERS) would provide additional information, particularly if you turn track_io_timing on first.
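A sketch of that diagnostic run (note that setting track_io_timing per session typically requires superuser privileges):

```sql
-- Adds per-node I/O timing to the plan output (superuser only).
SET track_io_timing = on;

-- BUFFERS shows shared hits vs. reads, i.e. how much came
-- from cache versus disk.
EXPLAIN (ANALYZE, BUFFERS)
SELECT act_owner_id FROM cnt_contacts WHERE act_owner_id = 2;
```

A large "shared read" count relative to "shared hit" in the output would confirm the cold-cache hypothesis.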
