I'm developing a location-based application in which I plan to divide the globe into grids; when a client requests information, I will create a table (or similar data model) that expires in 5 minutes, and serve the same data to all clients within the same grid during that period. I explained this in the following StackOverflow question.
http://stackoverflow.com/questions/19667637/caching-location-based-data-cluster
But I came across the term spatial databases today, and now I'd like to know how they differ from the traditional model when searching a database of millions of records, especially in terms of speed. Will a spatial database really speed up retrieval of location-based data? If so, can you explain how?
Problem: I was planning to use a traditional database, but with a traditional DB holding more than a million records, every search query has to search through all million of them. So I planned to:
- Divide the globe into some sort of grid with cells of around 1 sq. km each; when a request comes in, show all the values from that grid cell, then the next nearby cell, and so on, saving the results in some buffer.
- When someone requests data from the same grid cell within the next 5 minutes, I don't want to search the entire DB again; I want to serve the values from the buffer instead of querying the entire DB.
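The two steps above can be sketched roughly as follows. This is a minimal illustration, not a production design: the cell size, the 5-minute TTL, and the `query_db` callback are all assumptions standing in for whatever schema and DB driver you actually use.

```python
import math
import time

CELL_DEG = 0.009    # hypothetical cell size: ~1 km in degrees of latitude
TTL_SECONDS = 300   # 5-minute expiry, as described above

cache = {}  # grid key -> (expiry timestamp, cached rows)

def grid_key(lat, lon, cell=CELL_DEG):
    """Map a coordinate to the integer grid cell containing it."""
    return (math.floor(lat / cell), math.floor(lon / cell))

def get_grid_data(lat, lon, query_db):
    """Serve buffered rows for the cell if still fresh, else hit the DB."""
    key = grid_key(lat, lon)
    now = time.time()
    entry = cache.get(key)
    if entry and entry[0] > now:
        return entry[1]          # cache hit: no DB query at all
    rows = query_db(key)         # cache miss: query only this cell's rows
    cache[key] = (now + TTL_SECONDS, rows)
    return rows
```

Two clients in the same cell within the TTL window share one DB query; a real deployment would likely move the `cache` dict into something like Redis with per-key expiry rather than process memory.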
The main problem here is buffering the data and retrieving it from the buffer, to speed up the process and minimize server load. So I'd like to know: does a spatial DB simplify this process in any way? Or is there a DB already available that provides a grid system like this?
@ike: What I meant by speed is performance, i.e., the number of rows returned versus the number of rows the query has to run over (possibly the entire table of a few million records). I'm trying to find some way to reduce the number of rows (ideally avoiding a scan of the entire table) that the query runs over.
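The row-reduction idea can be made concrete with a toy comparison: a full scan touches every record, while pre-bucketing records by grid cell (which is, in spirit, what a spatial index such as an R-tree or PostGIS's GiST index does for you automatically) means a lookup only touches the handful of rows in the matching cell. The cell size and record layout below are assumptions for illustration only.

```python
import math
from collections import defaultdict

CELL_DEG = 0.009  # hypothetical ~1 km cell

def grid_key(lat, lon):
    return (math.floor(lat / CELL_DEG), math.floor(lon / CELL_DEG))

# Full-scan approach: every query inspects every record.
def scan_all(records, lat, lon):
    key = grid_key(lat, lon)
    return [r for r in records if grid_key(r["lat"], r["lon"]) == key]

# Bucketed approach: records are grouped by cell once, up front,
# so each query only reads its own cell's bucket.
index = defaultdict(list)

def build_index(records):
    index.clear()
    for r in records:
        index[grid_key(r["lat"], r["lon"])].append(r)

def lookup(lat, lon):
    return index.get(grid_key(lat, lon), [])
```

With a million records, `scan_all` does a million comparisons per query, while `lookup` does one hash lookup plus however many rows actually sit in that cell; this is the same asymptotic win a spatial database's index gives you, without the cache layer even entering the picture.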