
I have a collection with documents of the form { student_id : 1, teachers : [ "....", ... ] }

Steps done in sequence:

1) find by {teachers : "gore"}

2) set the index as { student_id : 1 }

3) find by {teachers : "gore"}

4) set the index as { teachers : 1 }

5) find by {teachers : "gore"}

The time taken barely improves after indexing the teachers array. Can someone explain what is happening? I may be doing something wrong here; please correct me. The results are:

d.find({teachers : "gore"}).explain()

{ "cursor" : "BasicCursor", "nscanned" : 999999, "nscannedObjects" : 999999, "n" : 447055, "millis" : 1623, "nYields" : 0, "nChunkSkips" : 0, "isMultiKey" : false, "indexOnly" : false, "indexBounds" : { } }

d.ensureIndex({student_id : 1})

d.find({teachers : "gore"}).explain()

{ "cursor" : "BasicCursor", "nscanned" : 999999, "nscannedObjects" : 999999, "n" : 447055, "millis" : 1300, "nYields" : 0, "nChunkSkips" : 0, "isMultiKey" : false, "indexOnly" : false, "indexBounds" : { } }

d.ensureIndex({teachers : 1})

d.find({teachers : "gore"}).explain()

{ "cursor" : "BtreeCursor teachers_1", "nscanned" : 447055, "nscannedObjects" : 447055, "n" : 447055, "millis" : 1501, "nYields" : 0, "nChunkSkips" : 0, "isMultiKey" : true, "indexOnly" : false, "indexBounds" : { "teachers" : [ [ "gore", "gore" ] ] } }

    
Check the last explain result: "n" : 447055. That means that many documents matched and were returned by the query, so it took time. –  Abhishek Kumar Jul 28 '14 at 7:12
    
I have a problem with the millis in the second-last and last explain queries, i.e. before indexing it visits 999999 documents in 1300 millis, and after indexing it visits only half as many documents yet takes more time, 1501 millis –  viv Jul 28 '14 at 10:43
    
Further to the first comment on the number of results returned .. an index won't help much (and potentially could be slower) if the range of values is not very selective. Your example is finding 447,055 matches in 999,999 documents, which is almost 50% of the collection. At that ratio it can be faster to read the whole collection and compare results instead of using an index. An effective index would allow queries to narrow results faster by ensuring selectivity and reading less data (relevant index & data). –  Stennie Jul 28 '14 at 10:47
    
Also worth noting: if all the data & indexes have been loaded into memory (and the total size is less than memory) your subsequent queries will execute faster. You should time the query across several runs without using explain() to get an idea of the effective query time -- use the MongoDB query profiler or an elapsed start/stop time. The timing from explain() includes query plan evaluation, which is cached in normal query usage. –  Stennie Jul 28 '14 at 10:54
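A minimal mongo-shell sketch of that timing approach, assuming the same collection handle d and a running mongod (the run count is illustrative):

```
// Time the query over several runs without explain(); itcount() forces
// the cursor to iterate the full result set.
for (var i = 0; i < 5; i++) {
    var start = Date.now();
    d.find({teachers : "gore"}).itcount();
    print("run " + (i + 1) + ": " + (Date.now() - start) + " ms");
}
```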
    
@Stennie thanks, your information helped me understand the use of indexing. But I have a requirement that needs searching both ways, i.e. from student_id to the teachers array and from teachers to student_id, and the data is large so queries should run fast. Is there any other solution for this? –  viv Jul 28 '14 at 11:19
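A sketch of one approach, assuming the same collection handle d and a running mongod: keep one index per lookup direction. Note that because teachers is an array, its index is multikey, and a multikey index cannot cover a query, so each match still requires a document fetch:

```
d.ensureIndex({ student_id : 1 })   // student_id -> teachers lookups
d.ensureIndex({ teachers : 1 })     // teachers -> student_id lookups

// Even when projecting only student_id, indexOnly stays false here,
// because the multikey teachers index cannot cover the query:
d.find({ teachers : "gore" }, { student_id : 1, _id : 0 }).explain()
```

If the low selectivity discussed above holds, the index still won't make the teachers-to-student_id direction dramatically faster; reducing the number of matching documents per query (e.g. adding another filter) matters more.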

1 Answer

Do you have the same data inserted over and over? The BtreeCursor is a positive sign, but nscannedObjects is very large. Is it possible that you have 447055 documents containing the value "gore"? If so, that's why it's taking such a long time.

    
Yes, my real data has the same structure I used for testing. It's also possible that a given teacher appears in more than 50% of the documents. –  viv Jul 28 '14 at 12:51
