I have around 5 million rows in a Postgres table. I'd like to know how many rows match start_time >= NOW(), but despite having an index on start_time the query is extremely slow (on the order of several hours).
EXPLAIN SELECT COUNT(*) FROM core_event WHERE start_time >= NOW();
Aggregate (cost=449217.81..449217.82 rows=1 width=0)
-> Index Scan using core_event_start_time on core_event (cost=0.00..447750.83 rows=586791 width=0)
Index Cond: (start_time >= now())
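I'm aware EXPLAIN only shows planner estimates; to get actual row counts and timings I'd run the query under EXPLAIN ANALYZE, but that executes the query in full, so in my case it would take the same several hours:

EXPLAIN ANALYZE SELECT COUNT(*) FROM core_event WHERE start_time >= NOW();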
Here's the schema information for the table:
Column | Type | Modifiers
id | integer | not null default nextval('core_event_id_seq'::regclass)
source | character varying(100) | not null
external_id | character varying(100) |
title | character varying(250) | not null
location | geometry | not null
start_time | timestamp with time zone |
stop_time | timestamp with time zone |
thumb | character varying(300) |
image | character varying(100) |
image_thumb | character varying(100) |
address | character varying(300) |
description | text |
venue_name | character varying(100) |
website | character varying(300) |
city_id | integer |
category | character varying(100) |
phone | character varying(50) |
place_id | integer |
image_url | character varying(300) |
event_type | character varying(200) |
hidden | boolean | not null
views | integer | not null
added | timestamp with time zone |
I have indexes on the following fields:
city_id
external_id (unique)
location
location_id
place_id
start_time
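In case the exact index definitions are relevant, they can be listed from the pg_indexes view:

SELECT indexname, indexdef FROM pg_indexes WHERE tablename = 'core_event';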
Is there any easy way to speed up the query (e.g. a partial index), or am I going to have to resort to partitioning the data by date?
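For reference, this is roughly the partial index I had in mind (a sketch only; the index name and cutoff date are made up). As far as I can tell, the predicate can't use NOW() directly, since partial-index predicates must be immutable, so it would need a fixed cutoff that gets rebuilt periodically:

CREATE INDEX core_event_future_start_time
    ON core_event (start_time)
    WHERE start_time >= '2012-01-01';
-- NOW() isn't allowed in the WHERE clause here (it isn't immutable),
-- so the cutoff is a literal timestamp and the index has to be
-- recreated as the cutoff goes stale.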