I have a complex database with around 30 tables. One table has more than 500,000 rows and another more than 15,000. Until today I kept them in two separate databases; I have now decided to consolidate everything into a single one.
Before today, the 500,000-row table was in a MySQL database and the 15,000-row table was in PostgreSQL. On one heavily used page, a PHP benchmark gave this result:
getSimilarAvaiable - 0.0287 s
getUnavaiable - 0.27 s
ProcessDataOfUnavaiable - 1.4701 s
Process - 1.8622 s
TotalPageTime - 3.631 s
After I migrated everything to PostgreSQL, using the same SQL code without any changes, the result for the same page was this:
getSimilarAvaiable - 2.7465 s
getUnavaiableCars - 9.0763 s
ProcesseDataOfUnavaiableCars - 1.4167 s
ProcessCars - 1.7207 s
TotalPageTime - 14.9602 s
I set everything up the same as in MySQL (same indexes, everything), but I can't understand where this huge difference comes from. What should I do to optimize this?
EDIT: Now explained in more detail.
The 500,000-row table has the following structure:
id - bigint (primary key)
plate - varchar(10), unique key
manufacturer - varchar(30)
vin - varchar(30)
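For reference, the column list above corresponds roughly to this DDL (a sketch; the exact definition, constraint names, and types are assumptions based on the list, not copied from the real schema):

```sql
-- Assumed reconstruction of the 500,000-row table described above.
CREATE TABLE imtt_vin (
    id           bigint PRIMARY KEY,
    plate        varchar(10) UNIQUE,
    manufacturer varchar(30),
    vin          varchar(30)
);
```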
The main query is something like this:
SELECT plate, vin, 1 as n, substr(plate,1,2) as l
FROM imtt_vin WHERE substr(plate,1,1) >= 'A' and substr(plate,1,1) <= 'Z' AND
(manufacturer ILIKE '%".self::$Manufacturer."%') AND vin LIKE ?
UNION
SELECT plate, vin, 3 as n, substr(plate,4,2) as l
FROM imtt_vin WHERE substr(plate,4,1) >= 'A' and substr(plate,4,1) <= 'Z' AND
(manufacturer ILIKE '%".self::$Manufacturer."%') AND vin LIKE ?
UNION
SELECT plate, vin, 2 as n, substr(plate,7,2) as l
FROM imtt_vin WHERE substr(plate,7,1) >= 'A' and substr(plate,7,1) <= 'Z' AND
(manufacturer ILIKE '%".self::$Manufacturer."%') AND vin LIKE ?
ORDER BY n, l, plate;
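One likely problem with this query, assuming the existing indexes are plain b-tree indexes on the columns themselves: PostgreSQL's planner cannot use an index on `plate` to satisfy a predicate on `substr(plate, n, 1)`, and a leading-wildcard `ILIKE '%...%'` on `manufacturer` cannot use a b-tree index at all. A sketch of indexes that could help, assuming a PostgreSQL version with the `pg_trgm` extension available:

```sql
-- Expression indexes so the substr() predicates become indexable.
CREATE INDEX idx_plate_pos1 ON imtt_vin (substr(plate, 1, 1));
CREATE INDEX idx_plate_pos4 ON imtt_vin (substr(plate, 4, 1));
CREATE INDEX idx_plate_pos7 ON imtt_vin (substr(plate, 7, 1));

-- A trigram GIN index can serve the leading-wildcard ILIKE filter.
CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE INDEX idx_manufacturer_trgm ON imtt_vin
    USING gin (manufacturer gin_trgm_ops);
```

Two caveats: if `substr(plate, n, 1) BETWEEN 'A' AND 'Z'` matches most of the table, the planner may rightly ignore those indexes (check with `EXPLAIN ANALYZE`); and `UNION` forces a duplicate-elimination sort over all three branches, so `UNION ALL` is cheaper if duplicates are impossible or acceptable.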
EDIT2: I tested with a single, more complex query and reduced the time from 15 to 8-9 seconds. Even so, that is still too much for me.
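The single-query rewrite is not shown, but one possible shape for it is to scan the table once and emit a row per matching plate position, instead of three UNIONed scans. This is an assumed reconstruction, not the actual rewritten query; the `?` placeholders stand in for the interpolated manufacturer and the VIN pattern, and note that, unlike `UNION`, it does not deduplicate rows across positions:

```sql
-- Assumed single-scan rewrite: (pos, n) pairs reproduce the three
-- original branches (position 1 -> n=1, position 4 -> n=3, position 7 -> n=2).
SELECT plate, vin, n, l
FROM (
    SELECT plate, vin, v.n,
           substr(plate, v.pos, 2) AS l,
           substr(plate, v.pos, 1) AS c
    FROM imtt_vin
    CROSS JOIN (VALUES (1, 1), (4, 3), (7, 2)) AS v(pos, n)
    WHERE manufacturer ILIKE '%' || ? || '%' AND vin LIKE ?
) AS s
WHERE c BETWEEN 'A' AND 'Z'
ORDER BY n, l, plate;
```

Passing the manufacturer as a bound parameter (rather than concatenating `self::$Manufacturer` into the SQL string) also closes an SQL-injection hole in the original query.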