
I have the following table in PostgreSQL:

CREATE TABLE index_test
(
    id int PRIMARY KEY NOT NULL,
    text varchar(2048) NOT NULL,
    last_modified timestamp NOT NULL,
    value int,
    item_type varchar(2046)
);
CREATE INDEX idx_index_type ON index_test ( item_type );
CREATE INDEX idx_index_value ON index_test ( value );

I make the following selects:

explain select * from index_test r where r.item_type='B';
explain select r.value from index_test r where r.value=56;

The execution plan looks like this:

Seq Scan on index_test r  (cost=0.00..1.04 rows=1 width=1576)
    Filter: ((item_type)::text = 'B'::text)

As far as I understand, this is a full table scan. The question is: why are my indexes not used?

Maybe the reason is that I have too few rows in my table? I have only 20 of them. Could you please provide me with a SQL statement to easily populate my table with random data, so I can verify the index behaviour?

I have found this article: http://it.toolbox.com/blogs/db2luw/how-to-easily-populate-a-table-with-random-data-7888, but it doesn't work for me. The efficiency of the statement does not matter, only the simplicity.

If efficiency doesn't matter, only simplicity, why do you care whether it's using an index or not? Indexes are performance optimisations that don't affect semantics - well, with the exception of implementation details like Pg's UNIQUE indexes as implementation for UNIQUE constraints. – Craig Ringer Mar 18 '13 at 11:06
Craig, "efficiency doesn't matter" relates only to the way of inserting the dummy rows, sorry for misleading phrase. – KutaBeach Mar 18 '13 at 12:17
Answer (accepted, 2 votes):

Maybe the reason is that I have too few rows in my table?

Yes. For a total of 20 rows in a table a seq scan is always going to be faster than an index scan. Chances are that those rows are located in a single database block anyway, so the seq scan would only need a single I/O operation.
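You can check this yourself. As a sketch (using the standard functions `pg_relation_size()` and `current_setting()`), this shows how many 8 kB blocks the table actually occupies:

```sql
-- Number of blocks the table occupies on disk
SELECT pg_relation_size('index_test')
       / current_setting('block_size')::int AS blocks;
```

For 20 narrow rows this will typically report a single block, so a sequential scan is one read and the index cannot possibly beat it.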

If you use

explain (analyze true, verbose true, buffers true) select ....

you can see a bit more detail about what is really going on.
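Applied to the first query from the question, that would be:

```sql
EXPLAIN (ANALYZE true, VERBOSE true, BUFFERS true)
SELECT * FROM index_test r WHERE r.item_type = 'B';
```

The `BUFFERS` option in particular shows how many blocks were read, which makes the "everything fits in one block" situation obvious.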

Btw: you shouldn't use text as a column name, as text is also the name of a datatype in Postgres. It works, but it is confusing and easy to trip over.

@KutaBeach Note that you can force index scans for the purpose of testing whether it's possible to use an index for a particular query by using the enable_ parameters as documented in the manual. These parameters are for testing purposes only and should not generally be used in production. – Craig Ringer Mar 18 '13 at 11:23
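A minimal sketch of that testing approach (`enable_seqscan` is one of the planner parameters Craig refers to; it is session-local and, as noted, for testing only):

```sql
SET enable_seqscan = off;  -- strongly discourage sequential scans (testing only)
EXPLAIN SELECT * FROM index_test r WHERE r.item_type = 'B';
RESET enable_seqscan;      -- restore the default behaviour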

The example you found is for DB2; in PostgreSQL you can use generate_series() to do it. For example like this:

INSERT INTO index_test (id, text, last_modified, value, item_type)
SELECT g,
       md5(random()::text),
       now(),
       floor(random() * 100),
       md5(random()::text)
FROM generate_series(1, 1000) AS g;

SELECT max(value) FROM index_test;

(Note that id has to be supplied explicitly, since your primary key column has no default.)

http://sqlfiddle.com/#!12/52641/3

The second query in the fiddle above should use an index-only scan.
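Index-only scans also depend on the table's visibility map being up to date. If the plan shows a plain index scan (or a bitmap heap scan) instead, vacuuming the table first usually helps. A sketch:

```sql
VACUUM ANALYZE index_test;  -- refreshes the visibility map and planner statistics
EXPLAIN SELECT max(value) FROM index_test;
```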

