I have a Python application that uses both SQLite and PostgreSQL. It has a connector class for each database:
class PostgresqlDatabase(Database):
...
class SQLite(Database):
...
Both classes share the same methods and logic; the only thing that differentiates them is the parametrization of the SQL queries. Most of the queries are even identical, e.g. both have a method called _get_tag:
# PostgreSQL method with %s
def _get_tag(self, tagcipher):
    sql_search = "SELECT ID FROM TAG WHERE DATA = %s"
    self._cur.execute(sql_search, [tagcipher])
    rv = self._cur.fetchone()
    return rv

# SQLite method with ?
def _get_tag(self, tagcipher):
    sql_search = "SELECT ID FROM TAG WHERE DATA = ?"
    self._cur.execute(sql_search, [tagcipher])
    rv = self._cur.fetchone()
    return rv
So what is my problem?
I find maintaining both classes annoying, and I feel a common class would benefit the code in the long run.
However, a common class would make the code more complex. The __init__ would probably have to initialize the correct underlying cursor. That would add a small startup overhead, and a small performance penalty if, for example, I looked up the correct placeholder string on every call, e.g.:
@property
def sql_search(self):
    return "SELECT ID FROM TAG WHERE DATA = {}".format(
        '?' if self.db == 'SQLite' else '%s')

def _get_tag(self, tagcipher):
    self._cur.execute(self.sql_search, [tagcipher])
    rv = self._cur.fetchone()
    return rv
I am also afraid this approach would be harder to understand at first glance.
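For concreteness, here is one sketch of what the common class could look like. Instead of a per-call string comparison, each subclass sets a class-level placeholder attribute, so the lookup cost disappears. This is my own illustration, not the real code: the names Database, SQLiteDatabase, and _PLACEHOLDER are assumptions, and the PostgreSQL subclass is only stubbed out.

```python
import sqlite3

class Database:
    # Subclasses override this with their DB-API driver's paramstyle
    # placeholder ("?" for sqlite3, "%s" for psycopg2).
    _PLACEHOLDER = "?"

    def _get_tag(self, tagcipher):
        # Shared logic: only the placeholder differs between backends.
        sql_search = "SELECT ID FROM TAG WHERE DATA = {}".format(self._PLACEHOLDER)
        self._cur.execute(sql_search, [tagcipher])
        return self._cur.fetchone()

class SQLiteDatabase(Database):
    _PLACEHOLDER = "?"

    def __init__(self, path):
        self._con = sqlite3.connect(path)
        self._cur = self._con.cursor()

class PostgresqlDatabase(Database):
    _PLACEHOLDER = "%s"
    # __init__ would create a psycopg2 connection and cursor here.
```

Note that only the string formatting happens in Python; the value itself is still passed as a bound parameter, so this does not reintroduce SQL injection.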
Leaving my personal example aside, I would like to know what the most acceptable approach is here.
The UNIX principle is that small programs make awesome tools. Does this apply to classes too? Should I keep maintaining both classes, or write one more complicated class that does it all?
Is there a general rule of thumb?