0 votes · 1 answer · 40 views

String replace using concatenated strings from various columns

I'd like to remove a substring in a column via an UPDATE statement. The substring to replace consists of multiple strings from other columns, concatenated in a strict order. The specification says: ...
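One way to approach this (a minimal sketch; the table and column names here are hypothetical, since the question's schema is truncated) is to build the substring inline with the `||` concatenation operator and strip it with `replace()`:

```sql
-- Remove the substring formed by concatenating prefix and code,
-- in that fixed order, from the description column.
UPDATE orders
SET description = replace(description, prefix || '-' || code, '')
WHERE description LIKE '%' || prefix || '-' || code || '%';
```

The WHERE clause is optional but avoids rewriting rows that contain no match.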
2 votes · 2 answers · 157 views

Optimizing bulk update performance in Postgresql

Using PG 9.1 on Ubuntu 12.04. It currently takes up to 24h for us to run a large set of UPDATE statements on a database, which are of the form: UPDATE table SET field1 = constant1, field2 = ...
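A common remedy for this pattern (sketched below with hypothetical table and column names) is to collapse many single-row UPDATEs into one statement that joins against a `VALUES` list, so the table is scanned once instead of once per row:

```sql
-- One UPDATE driven by a VALUES list instead of thousands of
-- individual UPDATE ... WHERE id = ... statements.
UPDATE items AS t
SET field1 = v.field1,
    field2 = v.field2
FROM (VALUES
    (1, 10, 'a'),
    (2, 20, 'b')
) AS v(id, field1, field2)
WHERE t.id = v.id;
```

Wrapping the whole batch in a single transaction also avoids per-statement commit overhead.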
0 votes · 2 answers · 95 views

Use CASE to select columns in UPDATE query?

I can use CASE to choose which columns to display in a SELECT query (Postgres), like so: SELECT CASE WHEN val = 0 THEN column_x WHEN val = 1 THEN column_y ELSE 0 END AS ...
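The usual answer (a sketch, with hypothetical names): CASE cannot choose which column appears after SET, but it can choose the value written to each column, leaving the others unchanged by assigning them to themselves:

```sql
-- Each column is only really modified when its CASE branch fires;
-- otherwise it is reassigned its current value.
UPDATE t
SET column_x = CASE WHEN val = 0 THEN new_value ELSE column_x END,
    column_y = CASE WHEN val = 1 THEN new_value ELSE column_y END;
```

Note that every matched row is still rewritten, even when a column keeps its old value.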
0 votes · 1 answer · 26 views

Comparing two tables for a UUID change and fixing it

I have two tables in PostgreSQL whose differences I'm trying to reconcile. Table A is old and needs updating. Table B is an updated, schema-identical version of Table A which I have the data ...
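If the two tables share some column that did not change between versions, one sketch of a fix (all names hypothetical, since the question is truncated) is an UPDATE ... FROM join on that stable key:

```sql
-- Copy corrected UUIDs from table_b into table_a, matching rows on a
-- column that stayed the same, and only touching rows that differ.
UPDATE table_a AS a
SET id = b.id
FROM table_b AS b
WHERE a.natural_key = b.natural_key
  AND a.id IS DISTINCT FROM b.id;
```

`IS DISTINCT FROM` handles NULLs safely and skips rows whose UUIDs already agree.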
2 votes · 0 answers · 64 views

How to execute a non-table-locking update operation on PostgreSQL? [closed]

Looking for a good way to update the value of a column for all the rows in a database (not huge, but big enough: about 10M records), without locking the whole table, so operations can continue while ...
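Worth noting: a plain UPDATE in PostgreSQL takes row-level locks, not a whole-table lock, but one UPDATE over 10M rows holds those locks for the duration of its transaction. A common sketch (hypothetical names) is to batch by key range so each transaction stays short:

```sql
-- Update one bounded slice per transaction; concurrent writes to
-- other id ranges are never blocked for long.
UPDATE big_table
SET flag = true
WHERE id BETWEEN 1 AND 100000;
-- COMMIT, then repeat with the next id range until the table is done.
```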
2 votes · 2 answers · 326 views

PostgreSQL UPDATE ... SET only where a cast is possible, ignoring errors?

I have 2 columns in a PostgreSQL table. The mac_address_temp column is for migration from a character type to the macaddr type: mac_address | macaddr | mac_address_temp | character ...
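Since PostgreSQL has no "cast or NULL" operator built in, one sketch is to pre-filter with a regular expression so the cast is only attempted on values that look like valid MAC addresses, leaving malformed rows untouched:

```sql
-- Only rows matching the colon-separated MAC pattern are cast;
-- anything else keeps a NULL mac_address_temp.
UPDATE t
SET mac_address_temp = mac_address::macaddr
WHERE mac_address ~ '^([0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2}$';
```

The pattern here assumes colon-separated input; macaddr itself accepts several other formats, so the regex may need widening for real data.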
2 votes · 2 answers · 831 views

Most efficient way to add a serial column to a huge table

What's the fastest way to add a BIGSERIAL column to a huge table (~3 billion rows, ~174 GB)? EDIT: I want the column to hold incremented values for existing rows (NOT NULL). I didn't set a fillfactor ...
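One approach often suggested for this situation (a sketch only; table and sequence names are placeholders, and on a 3-billion-row table each step still takes significant time) is to avoid `ADD COLUMN ... BIGSERIAL`'s single long operation by splitting it into stages:

```sql
-- Stage the serial column manually instead of one monolithic ALTER.
CREATE SEQUENCE t_id_seq;
ALTER TABLE t ADD COLUMN new_id bigint;               -- fast, no rewrite
UPDATE t SET new_id = nextval('t_id_seq');            -- backfill (batch this)
ALTER TABLE t ALTER COLUMN new_id SET DEFAULT nextval('t_id_seq');
ALTER TABLE t ALTER COLUMN new_id SET NOT NULL;       -- full-table check
```

Batching the backfill UPDATE by key range keeps transactions and bloat manageable.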
1 vote · 4 answers · 352 views

Performance degradation while updating tables with tens of millions of records

I want to update tables (maybe 20-30) with tens of millions of records each. The problem is that the update process takes too much time, and while it runs the CPU usage also goes very ...
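One common mitigation for load spikes like this (a sketch with hypothetical names, assuming a flag column exists to mark processed rows) is to update in bounded batches so each transaction's row-version churn and WAL volume stay small:

```sql
-- Update at most 10,000 unprocessed rows per pass; run repeatedly,
-- committing between passes, until zero rows are updated.
UPDATE accounts
SET balance = balance * 1.01,
    adjusted = true
WHERE id IN (
    SELECT id FROM accounts WHERE NOT adjusted LIMIT 10000
);
```

Dropping nonessential indexes before the bulk update and recreating them afterward also tends to help substantially at this scale.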