Code Review Stack Exchange is a question and answer site for peer programmer code reviews.

I have a pandas DataFrame and a (MySQL) database with the same columns. The database is not managed by me.

I want to update the values in the database in an "UPDATE... WHERE" style, updating only some columns wherever some other columns match.

Here's my code:

import sqlalchemy as sqla
from sqlalchemy.orm import sessionmaker

def save_to_db(final_df, passwd):
    engine_str = 'mysql+mysqldb://username:{}@localhost/mydb'.format(passwd)
    engine = sqla.create_engine(engine_str)

    sm = sessionmaker(bind=engine)
    session = sm()

    metadata = sqla.MetaData(bind=engine)
    datatable = sqla.Table('AdcsLogForProduct', metadata, autoload=True)

    for ind, row in final_df.iterrows():
        u = sqla.sql.update(datatable) \
            .values({"q_ECI_B_x": row.q_ECI_B_x,
                     "q_ECI_B_y": row.q_ECI_B_y,
                     "q_ECI_B_z": row.q_ECI_B_z,
                     "q_ECI_B_s": row.q_ECI_B_s}) \
            .where(sqla.and_(datatable.c.year == row.year,
                             datatable.c.month == row.month,
                             datatable.c.day == row.day,
                             datatable.c.hours == row.hours,
                             datatable.c.minutes == row.minutes,
                             datatable.c.seconds == row.seconds,
                             datatable.c.milliseconds == row.milliseconds,
                             datatable.c.microseconds == row.microseconds))

        session.execute(u)

    session.flush()
    session.commit()

I'm doing this with plain SQLAlchemy because pandas' built-in SQL functions apparently can't handle "UPDATE ... WHERE" scenarios. However, this is really slow.

Isn't there a more efficient way to do this?

I've now shown the full function, with only minimal editing. Can I take it that the function inputs are as given? Sorry if this takes a few iterations. –  Marco Giancotti Feb 18 at 3:30
Thank you for revising the question. –  200_success Feb 18 at 3:32
Thank you for the help. –  Marco Giancotti Feb 18 at 4:26

1 Answer

You have eight conditions to match for every UPDATE. A typical solution would store timestamps using a DATETIME or TIMESTAMP column, so that there is only one value to match.

For reasonable performance, ensure that the timestamp field is indexed.
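If the schema could be changed as suggested, the eight component columns can be collapsed into one timestamp on the pandas side before matching against a single indexed DATETIME column. A minimal sketch, assuming the question's column names (`combine_timestamp` is a hypothetical helper, not part of the original code):

```python
import pandas as pd

def combine_timestamp(df):
    """Collapse the question's eight time-component columns into one
    datetime64 Series that could be matched against a single indexed
    DATETIME column. Hypothetical helper, assuming the schema could change."""
    # to_datetime assembles a date from 'year'/'month'/'day' columns;
    # the finer components are added as timedeltas.
    base = pd.to_datetime(df[['year', 'month', 'day']])
    return (base
            + pd.to_timedelta(df['hours'], unit='h')
            + pd.to_timedelta(df['minutes'], unit='min')
            + pd.to_timedelta(df['seconds'], unit='s')
            + pd.to_timedelta(df['milliseconds'], unit='ms')
            + pd.to_timedelta(df['microseconds'], unit='us'))
```

With one value to compare, the WHERE clause shrinks to a single indexed equality test instead of eight ANDed conditions.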

Thank you. As I've now included in my question, I'm not in control of the database. Do you think this multiple matching could be the main bottleneck rather than the pandas side (iterrows)? –  Marco Giancotti Feb 18 at 7:37
Ensure that there is an index on (year, …, microseconds). Otherwise, dump final_df to a table using .to_sql() and do one UPDATE AdcsLogForProduct log JOIN tmp ON log.year=tmp.year AND … log.microseconds=tmp.microseconds SET log.q_ECI_B_x = tmp.q_ECI_B_x, log.q_ECI_B_y = tmp.q_ECI_B_y, …. If that giant update is slow, then make whoever is in charge of the database deal with it; you can't blame pandas anymore. Design decisions have consequences, and I think this performance problem is one of them. –  200_success Feb 18 at 7:51
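As an aside on the per-row loop: even without schema changes or a staging table, the round-trips can be cut by issuing one parameterized UPDATE and passing all row parameters at once, which SQLAlchemy executes as an executemany. A minimal sketch under the question's table and column names (the SQLite in-memory engine in the usage below is only to keep the example self-contained; against MySQL you would pass the question's engine):

```python
import pandas as pd
import sqlalchemy as sqla

VALUE_COLS = ('q_ECI_B_x', 'q_ECI_B_y', 'q_ECI_B_z', 'q_ECI_B_s')
KEY_COLS = ('year', 'month', 'day', 'hours', 'minutes',
            'seconds', 'milliseconds', 'microseconds')

def save_to_db_batched(final_df, engine):
    """Update all rows in one executemany round-trip instead of one
    statement per row. Hypothetical rewrite of the question's function."""
    metadata = sqla.MetaData()
    datatable = sqla.Table('AdcsLogForProduct', metadata, autoload_with=engine)

    # One parameterized UPDATE; the 'v_'/'k_' prefixes keep the bind
    # parameter names distinct from the column names they feed.
    stmt = (
        sqla.update(datatable)
        .values({c: sqla.bindparam('v_' + c) for c in VALUE_COLS})
        .where(sqla.and_(*[datatable.c[k] == sqla.bindparam('k_' + k)
                           for k in KEY_COLS]))
    )

    # One parameter dict per DataFrame row; plain Python numbers so any
    # DB-API driver accepts them.
    params = [
        {**{'v_' + c: float(row[c]) for c in VALUE_COLS},
         **{'k_' + k: int(row[k]) for k in KEY_COLS}}
        for _, row in final_df.iterrows()
    ]
    with engine.begin() as conn:
        conn.execute(stmt, params)  # list of dicts -> executemany
```

This still needs the composite index from the comment above to be fast, but it replaces N statement round-trips with one batch.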
