Updating the primary key of master and child tables for large tables
I have a fairly large database with a master table whose primary key is a single GUID column (generated by a custom GUID-like algorithm), and 8 child tables that have foreign key relationships with this GUID column. All the tables have approximately 3-8 million records each. None of these tables have any BLOB/CLOB/TEXT or other fancy data types, just normal numbers, varchars, dates, and timestamps (about 15-45 columns per table). There are no partitions, and no indexes other than the primary and foreign keys.
Now the custom GUID algorithm has changed, and though there are no collisions, I would like to migrate all the old data to GUIDs generated with the new algorithm. No other columns need to change. Data integrity is the number one priority; performance is secondary.
Some possible solutions I could think of (as you will probably notice, they all revolve around one idea):
- add a new column ngu_id and populate it with the new gu_id; disable constraints; update the child tables, replacing each gu_id with the corresponding ngu_id; rename ngu_id -> gu_id; re-enable constraints
- read one master record and its dependent child records from the child tables; insert them into the same tables with the new gu_id; remove all records with the old gu_ids
- drop constraints; add a trigger to the master table so that all the child tables are updated; start updating old gu_ids with new gu_ids; re-enable constraints
- add a trigger to the master table so that all the child tables are updated; start updating old gu_ids with new gu_ids
- create a new ngu_id column on all master and child tables; create foreign key constraints on the ngu_id columns; add an update trigger to the master table to cascade values to the child tables; insert the new gu_id values into the ngu_id column; remove the old foreign key constraints based on gu_id; remove the gu_id column and rename ngu_id to gu_id; recreate constraints if necessary
- use ON UPDATE CASCADE if available?
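The first option above can be sketched end to end. This is a minimal illustration using SQLite through Python as a stand-in for the real engines; the toy schema, the `new_guid()` function, and the single-row data are assumptions for demonstration only, not part of the question:

```python
import sqlite3

def new_guid(old: str) -> str:
    # Placeholder for the real new GUID algorithm.
    return "NEW-" + old

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = OFF")  # "disable constraints"
con.execute("CREATE TABLE master (gu_id TEXT PRIMARY KEY, payload TEXT)")
con.execute("CREATE TABLE child (id INTEGER PRIMARY KEY, "
            "gu_id TEXT REFERENCES master(gu_id))")
con.execute("INSERT INTO master VALUES ('OLD-1', 'x')")
con.execute("INSERT INTO child VALUES (1, 'OLD-1')")

# Step 1: add the helper column and populate it with the new GUIDs.
con.execute("ALTER TABLE master ADD COLUMN ngu_id TEXT")
for (gu_id,) in con.execute("SELECT gu_id FROM master").fetchall():
    con.execute("UPDATE master SET ngu_id = ? WHERE gu_id = ?",
                (new_guid(gu_id), gu_id))

# Step 2: rewrite the child FK values via a lookup back to the master.
con.execute("""UPDATE child SET gu_id =
               (SELECT ngu_id FROM master WHERE master.gu_id = child.gu_id)""")

# Step 3: move the new value into place. In a real migration you would
# then drop or rename the helper column (ALTER TABLE ... RENAME COLUMN);
# omitted here for portability across SQLite versions.
con.execute("UPDATE master SET gu_id = ngu_id")
con.execute("PRAGMA foreign_keys = ON")  # "re-enable constraints"
con.commit()
```

The exact constraint-toggling syntax differs per vendor (e.g. `SET FOREIGN_KEY_CHECKS = 0` in MySQL, `ALTER TABLE ... DISABLE CONSTRAINT` in Oracle), but the three-step shape is the same.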
My questions are:
- Is there a better way? (I can't bury my head in the sand; I've got to do this)
- What is the most suitable way to do this? (I have to do this in Oracle, SQL Server, and MySQL 4, so vendor-specific hacks are welcome)
- What are the typical points of failure for such an exercise, and how can I minimize them?
If you are with me so far, thank you and hope you can help :)
Your ideas should work; the first is probably the way I would go. Some cautions and things to think about when doing this:

- Do not do this unless you have a current backup.
- I would leave both values in the master table. That way, if you ever have to figure out from some old paperwork which record you need to access, you can still do it.
- Take the database down for maintenance while you do this and put it in single-user mode. The very last thing you need while doing something like this is a user attempting to make changes while you are in midstream. Of course, the first action once in single-user mode is the above-mentioned backup.
- You should probably schedule the downtime for when usage is lightest. Test on dev first! This should also give you an idea of how long you will need to close production, and it lets you try several methods to see which is fastest.
- Be sure to communicate in advance to users that the database will be going down at the scheduled time for maintenance and when they can expect it to be available again. Make sure the timing is OK; it really makes people mad when they plan to stay late to run the quarterly reports and the database is unavailable and they didn't know it.
- With a fairly large number of records, you might want to run the updates of the child tables in batches (one reason not to use cascading updates). This can be faster than trying to update 5 million records with one update. However, don't try to update one record at a time or you will still be here next year doing this task.
- Drop the indexes on the GUID field in all the tables and recreate them after you are done. This should improve the performance of the change.
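The batched-update advice can be sketched like this. It is a hypothetical illustration using SQLite through Python; the `guid_map` table, the `BATCH` size, and the toy data are assumptions, not part of the answer, and the right batch size must be tuned per engine:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE guid_map (old_id TEXT PRIMARY KEY, new_id TEXT);
CREATE TABLE child (id INTEGER PRIMARY KEY, gu_id TEXT);
""")
# 10 old->new GUID mappings, 1000 child rows pointing at the old GUIDs.
con.executemany("INSERT INTO guid_map VALUES (?, ?)",
                [(f"OLD-{i}", f"NEW-{i}") for i in range(10)])
con.executemany("INSERT INTO child VALUES (?, ?)",
                [(i, f"OLD-{i % 10}") for i in range(1000)])

BATCH = 250  # tune per engine; one commit per batch keeps the transaction log small
updated = 0
while True:
    # Update only rows still carrying an old GUID, at most BATCH at a time.
    cur = con.execute("""
        UPDATE child
        SET gu_id = (SELECT new_id FROM guid_map WHERE old_id = gu_id)
        WHERE id IN (SELECT id FROM child
                     WHERE gu_id IN (SELECT old_id FROM guid_map)
                     LIMIT ?)""", (BATCH,))
    con.commit()
    if cur.rowcount == 0:
        break  # nothing left to convert
    updated += cur.rowcount

print(updated)  # 1000
```

Committing between batches is what keeps the per-transaction work bounded; the equivalent vendor pattern is `UPDATE TOP (n)` in SQL Server or `UPDATE ... LIMIT n` in MySQL, looped until zero rows are affected.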