If you found this blog post, you are probably facing the annoying Data Pump error: "ORA-39405: Oracle Data Pump does not support importing from a source database with TSTZ version X into a target database with TSTZ version Y." In this post, I will show how to bypass this error with just a few extra steps if, for whatever reason, you can't or don't want to upgrade the time zone file version of your target database.
Introduction
First, this error happens whenever the time zone version of your SOURCE database is higher than the time zone version of the TARGET, and the reason is pretty obvious: the time zone file is cumulative, so it knows about every past version but nothing about future ones. If you move data to an older TZ release, the target database would have no way to translate the newer time zones correctly.
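If you are not sure which versions you are dealing with, you can check the active time zone file on each database; V$TIMEZONE_FILE is a standard view:

-- Run on both source and target: shows the time zone file in use and its version.
SELECT filename, version FROM v$timezone_file;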
What is painful, however, is that most of the time our dump file has no time zone data at all. So why is Data Pump complaining about that?! Unfortunately, the answer lies in its design. But don't panic! There is already an enhancement request asking for this check to be relaxed. If you want it to be prioritized, you can always open an SR and mention enhancement # 34824144.
Ok, but this blog post is about how to bypass the error. So before I go into the solution, let me explain how the data pump import process works.
- You call the import via impdp or the Data Pump API (DBMS_DATAPUMP).
- Data Pump reads your dump file and creates the Master Control Table (usually named SYS_IMPORT_TABLE_01 or similar) in the schema of the user running the import.
- The Master Control Table holds everything about what was exported, when, and where, plus other metadata such as the source database time zone version.
- Data Pump performs some checks before moving forward. One of them verifies that the target database time zone version is higher than or equal to the version recorded in the dump file. This is where it fails and throws ORA-39405.
- The import process starts.
To fix this problem, I need to inject a "hack" between steps 3 and 4 that modifies this master table and makes Data Pump "think" the exported data came from a database with a lower or equal time zone version, not a higher one. The sketch below shows exactly which row the check looks at.
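For reference, here is a minimal sketch of where that information lives, assuming a hypothetical master table called MY_JOB in your schema: the PROPERTY column of the OPERATION='EXPORT' row holds the source TSTZ version, and the OPERATION='IMPORT' row holds the target's. This is exactly what the code later in this post queries.

-- MY_JOB is a hypothetical master table name; substitute your real job name.
SELECT operation, property
  FROM "MY_JOB"
 WHERE abort_step = 0
   AND operation IN ('EXPORT', 'IMPORT');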
Unsuccessful Attempts
I tried four different approaches, but only one of them worked.
Let me discuss each one before getting to the real solution.
1) Using the abort_step parameter and stopping the data pump job just after the master table is created.
My first try was using the abort_step option on the Data Pump job to kill it immediately after the master table was created.
With that, I could manually change the row of the master table that contains the time zone version and then resume the job (see the sketch below). The problem is that the abort_step parameter actually aborts the job AFTER the checks are performed and before the import process starts (between steps 4 and 5 above). So ORA-39405 is raised first, and the job is killed and left in a state from which it cannot be resumed.
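For illustration, this is roughly what that attempt looked like (credentials and directory are placeholders); abort_step=-1 is the value meant to stop the job right after the master table step, and the plan was to fix the PROPERTY row and then re-attach with the attach parameter:

$ impdp myuser/mypass@mydb directory=mydir dumpfile=mydump.dmp abort_step=-1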
2) Using the master_only parameter and stopping the data pump job just after the master table is created.
This strategy is very similar to the previous one, and unlike the abort_step idea, it actually stopped the job before the error was raised.
However, the Data Pump job then finishes successfully; after all, I asked it only to produce the master table. So the job is left in a state from which it can be neither resumed nor restarted from the point where it stopped (right after the master table is created). No success here either.
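Again for illustration, a sketch of this attempt with placeholder connection details; master_only=yes asks impdp to load only the master table from the dump:

$ impdp myuser/mypass@mydb directory=mydir dumpfile=mydump.dmp master_only=yes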
3) Running "after insert triggers" on the master table to change the TZ version information
This method is a bit more complex. My idea was to create an after-DDL trigger that fires whenever a new table is created in my import schema. If the new table is a Master Table, the trigger would create an after-insert trigger on it.
This second trigger would be FOR EACH ROW and watch for the time zone version row: when that row is inserted, it would change the value on the fly.
This strategy worked well, except that when the Data Pump import process tries to insert into this table, it receives: "ORA-26086: direct path does not support triggers". Since Data Pump loads the Master Table with direct path, my strategy was ruined. A sketch of what I had in mind follows.
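For the record, here is a minimal sketch of the row-level trigger I had in mind, on a hypothetical master table MY_JOB. I show it as a before-insert trigger, since changing :NEW values requires firing before the row is written, and 42 stands for my target TSTZ version. It works for conventional inserts, but a direct-path load bypasses it with ORA-26086:

-- Sketch of attempt 3: rewrite the source TZ version on the fly.
-- MY_JOB and the value 42 are placeholders for this example.
CREATE OR REPLACE TRIGGER "TR_FIX_TSTZ"
BEFORE INSERT ON "MY_JOB"
FOR EACH ROW
WHEN (NEW.OPERATION = 'EXPORT' AND NEW.ABORT_STEP = 0)
BEGIN
  :NEW.PROPERTY := LEAST(:NEW.PROPERTY, 42);
END;
/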
The Solution
Finally, the solution.
First of all, this method is not supported by Oracle, by me, or by anyone else. Use it at your own risk. If your dump file actually contains time zone data and you try this approach, you will most probably corrupt that data.
So, my last try was what I call "The Insane Mode". The strategy, like strategy 3 above, is to track whenever a new Master Table is created in my schema. When one is, instead of creating an after-insert trigger, I fire a background job that runs a tight loop, trying to update the row of the Master Table that holds the time zone information. As soon as it updates the row, or after running for one minute, it stops.
With this strategy, I could finally bypass the check, and now I'm able to load Dump Files from databases with higher TSTZ versions.
Here's the SQL code to create everything I mentioned. Create all objects under the user that runs your Data Pump process.
Example:
$ impdp scott/tiger directory=xxxx .... (in this case, you would create all the objects below on the SCOTT schema).
Below is what needs to be created on that schema to make the "hack" work:
1. Create a table to hold log information.
CREATE TABLE "DP_MASTER_TRACK" ( "ID_DP_MASTER_TRACK" NUMBER GENERATED ALWAYS AS IDENTITY MINVALUE 1 MAXVALUE 9999999999999999999999999999 INCREMENT BY 1 START WITH 1 NOCACHE ORDER NOCYCLE NOKEEP NOSCALE, "TABLE_NAME" VARCHAR2(30 CHAR), "TZ_SOURCE" NUMBER, "TZ_TARGET" NUMBER, "DP_MASTER_CREATED" TIMESTAMP(6), "DP_MOD_JOB_CREATED" TIMESTAMP(6), "DP_MOD_JOB_STARTED" TIMESTAMP(6), "DP_MOD_JOB_FOUND" TIMESTAMP(6), "DP_MOD_JOB_FINISHED" TIMESTAMP(6), "DP_MOD_JOB_TOTAL_LOOPS" NUMBER, "DP_MOD_JOB_FAILED_LOOPS" NUMBER ); ALTER TABLE "DP_MASTER_TRACK" ADD CONSTRAINT "D_DP_MASTER_TRACK_PK" PRIMARY KEY ("ID_DP_MASTER_TRACK");
2. Create the package that contains the logic to change your Master Table.
CREATE OR REPLACE PACKAGE "PC_DATA_PUMP" AS -- Created by dbarj.com.br -- Run job to track changes on Data Pump Master table. PROCEDURE TRACK_JOB ( P_TABLE_NAME IN VARCHAR2 ); -- Modify master table to mask difference on TZ. PROCEDURE MODIFY_DP_MASTER ( P_ID_DP_MASTER_TRACK IN NUMBER ); END PC_DATA_PUMP; / CREATE OR REPLACE PACKAGE BODY "PC_DATA_PUMP" AS -- Created by dbarj.com.br PROCEDURE GET_DP_MASTER_TRACK ( P_DP_MASTER_TRACK IN OUT NOCOPY DP_MASTER_TRACK%ROWTYPE ) AS BEGIN SELECT TABLE_NAME INTO P_DP_MASTER_TRACK.TABLE_NAME FROM DP_MASTER_TRACK WHERE ID_DP_MASTER_TRACK = P_DP_MASTER_TRACK.ID_DP_MASTER_TRACK; EXCEPTION WHEN NO_DATA_FOUND THEN RETURN; END GET_DP_MASTER_TRACK; PROCEDURE UPD_DP_MASTER_TRACK ( P_DP_MASTER_TRACK IN OUT NOCOPY DP_MASTER_TRACK%ROWTYPE, P_INFO_TYPE IN VARCHAR2 -- MASTER_CREATED / JOB_CREATED / JOB_INFO ) AS BEGIN IF P_INFO_TYPE = 'MASTER_CREATED' THEN INSERT INTO DP_MASTER_TRACK ( TABLE_NAME, DP_MASTER_CREATED ) VALUES ( P_DP_MASTER_TRACK.TABLE_NAME, SYSTIMESTAMP ) RETURNING ID_DP_MASTER_TRACK INTO P_DP_MASTER_TRACK.ID_DP_MASTER_TRACK; ELSIF P_INFO_TYPE = 'JOB_CREATED' THEN UPDATE DP_MASTER_TRACK SET DP_MOD_JOB_CREATED = SYSTIMESTAMP WHERE ID_DP_MASTER_TRACK = P_DP_MASTER_TRACK.ID_DP_MASTER_TRACK; ELSIF P_INFO_TYPE = 'JOB_INFO' THEN UPDATE DP_MASTER_TRACK SET DP_MOD_JOB_STARTED = P_DP_MASTER_TRACK.DP_MOD_JOB_STARTED, DP_MOD_JOB_FOUND = P_DP_MASTER_TRACK.DP_MOD_JOB_FOUND, DP_MOD_JOB_FINISHED = P_DP_MASTER_TRACK.DP_MOD_JOB_FINISHED, DP_MOD_JOB_TOTAL_LOOPS = P_DP_MASTER_TRACK.DP_MOD_JOB_TOTAL_LOOPS, DP_MOD_JOB_FAILED_LOOPS = P_DP_MASTER_TRACK.DP_MOD_JOB_FAILED_LOOPS, TZ_SOURCE = P_DP_MASTER_TRACK.TZ_SOURCE, TZ_TARGET = P_DP_MASTER_TRACK.TZ_TARGET WHERE ID_DP_MASTER_TRACK = P_DP_MASTER_TRACK.ID_DP_MASTER_TRACK; ELSE -- Nothing to do. Maybe raise an error later. 
NULL; END IF; COMMIT; END UPD_DP_MASTER_TRACK; PROCEDURE TRACK_JOB ( P_TABLE_NAME IN VARCHAR2 ) AS PRAGMA AUTONOMOUS_TRANSACTION; -- This must be here so we don't create any interference on the session that called the trigger V_JOB_NAME VARCHAR2(30); V_JOBARGS SYS.JOBARG_ARRAY; V_JOB_DEF SYS.JOB_DEFINITION_ARRAY; V_DP_MASTER_TRACK DP_MASTER_TRACK%ROWTYPE; BEGIN V_DP_MASTER_TRACK.TABLE_NAME := P_TABLE_NAME; UPD_DP_MASTER_TRACK(P_DP_MASTER_TRACK => V_DP_MASTER_TRACK, P_INFO_TYPE => 'MASTER_CREATED'); -- Created by Rodrigo Jorge - www.dbarj.com.br -- V_JOB_NAME := DBMS_SCHEDULER.GENERATE_JOB_NAME('JB_MOD_DP_MASTER_'); --- V_JOBARGS := SYS.JOBARG_ARRAY(); V_JOBARGS.EXTEND(1); V_JOBARGS(1) := SYS.JOBARG(ARG_POSITION => 1, ARG_VALUE => V_DP_MASTER_TRACK.ID_DP_MASTER_TRACK); --- V_JOB_DEF := SYS.JOB_DEFINITION_ARRAY(); V_JOB_DEF.EXTEND(1); -- V_JOB_DEF(1) := SYS.JOB_DEFINITION(JOB_NAME => V_JOB_NAME, PROGRAM_NAME => 'PG_MOD_DP_MASTER', ENABLED => TRUE, AUTO_DROP => TRUE, ARGUMENTS => V_JOBARGS, LOGGING_LEVEL => DBMS_SCHEDULER.LOGGING_FAILED_RUNS); V_JOB_DEF(1) := SYS.JOB_DEFINITION(JOB_NAME => V_JOB_NAME, PROGRAM_NAME => 'PG_MOD_DP_MASTER', ENABLED => FALSE, AUTO_DROP => TRUE, ARGUMENTS => V_JOBARGS, LOGGING_LEVEL => DBMS_SCHEDULER.LOGGING_FAILED_RUNS); --- DBMS_SCHEDULER.CREATE_JOBS(JOBDEF_ARRAY => V_JOB_DEF); UPD_DP_MASTER_TRACK(P_DP_MASTER_TRACK => V_DP_MASTER_TRACK, P_INFO_TYPE => 'JOB_CREATED'); --- --DBMS_SCHEDULER.RUN_JOB(JOB_NAME => V_JOB_NAME, USE_CURRENT_SESSION => FALSE); DBMS_SCHEDULER.ENABLE(NAME => V_JOB_NAME); -- As this is insider a trigger, I need to commit or ORA-06519: active autonomous transaction detected and rolled back COMMIT; --- Here I sleep some seconds just to be sure that the job starts before we move with the next steps on DP job. DBMS_SESSION.SLEEP(3); EXCEPTION WHEN OTHERS THEN RAISE_APPLICATION_ERROR(-20000, 'Error on Track Job.' || SQLCODE || ' - ' || SQLERRM); END TRACK_JOB; PROCEDURE MODIFY_DP_MASTER ( P_ID_DP_MASTER_TRACK IN NUMBER ) AS -- PRAGMA AUTONOMOUS_TRANSACTION; -- Not needed as this is called inside a job. 
V_NUM NUMBER; LTIME DATE := SYSDATE; V_DP_MASTER_TRACK DP_MASTER_TRACK%ROWTYPE; V_TOTAL_LOOPS NUMBER := 1; V_FAILED_LOOPS NUMBER := 0; BEGIN V_DP_MASTER_TRACK.DP_MOD_JOB_STARTED := SYSTIMESTAMP; V_DP_MASTER_TRACK.ID_DP_MASTER_TRACK := P_ID_DP_MASTER_TRACK; GET_DP_MASTER_TRACK(P_DP_MASTER_TRACK => V_DP_MASTER_TRACK); IF V_DP_MASTER_TRACK.TABLE_NAME IS NULL THEN RETURN; END IF; WHILE TRUE LOOP BEGIN EXECUTE IMMEDIATE ' SELECT TZ_SRC.PROPERTY, TZ_TGT.PROPERTY FROM "' || DBMS_ASSERT.SQL_OBJECT_NAME(V_DP_MASTER_TRACK.TABLE_NAME) || '" TZ_SRC, "' || DBMS_ASSERT.SQL_OBJECT_NAME(V_DP_MASTER_TRACK.TABLE_NAME) || '" TZ_TGT WHERE TZ_SRC.ABORT_STEP = 0 AND TZ_SRC.OPERATION=''EXPORT'' AND TZ_TGT.ABORT_STEP = 0 AND TZ_TGT.OPERATION=''IMPORT'' AND TZ_TGT.PROPERTY < TZ_SRC.PROPERTY ' INTO V_DP_MASTER_TRACK.TZ_SOURCE, V_DP_MASTER_TRACK.TZ_TARGET; EXCEPTION WHEN NO_DATA_FOUND THEN NULL; END; IF V_DP_MASTER_TRACK.TZ_SOURCE IS NOT NULL THEN BEGIN EXECUTE IMMEDIATE 'UPDATE ' || DBMS_ASSERT.SQL_OBJECT_NAME(V_DP_MASTER_TRACK.TABLE_NAME) || ' SET PROPERTY = ' || V_DP_MASTER_TRACK.TZ_TARGET || ' WHERE PROPERTY = ' || V_DP_MASTER_TRACK.TZ_SOURCE || ' AND ABORT_STEP = 0 AND OPERATION=''EXPORT'''; V_NUM := SQL%ROWCOUNT; COMMIT; IF V_NUM > 0 THEN V_DP_MASTER_TRACK.DP_MOD_JOB_FOUND := SYSTIMESTAMP; EXIT; END IF; EXCEPTION WHEN OTHERS THEN -- Exit on 00942 V_FAILED_LOOPS := V_FAILED_LOOPS + 1; NULL; END; END IF; EXIT WHEN SYSDATE = LTIME + INTERVAL '60' SECOND; -- Maximum loop duration in seconds. V_TOTAL_LOOPS := V_TOTAL_LOOPS + 1; END LOOP; V_DP_MASTER_TRACK.DP_MOD_JOB_FINISHED := SYSTIMESTAMP; V_DP_MASTER_TRACK.DP_MOD_JOB_TOTAL_LOOPS := V_TOTAL_LOOPS; V_DP_MASTER_TRACK.DP_MOD_JOB_FAILED_LOOPS := V_FAILED_LOOPS; UPD_DP_MASTER_TRACK(P_DP_MASTER_TRACK => V_DP_MASTER_TRACK, P_INFO_TYPE => 'JOB_INFO'); END MODIFY_DP_MASTER; END PC_DATA_PUMP; /
3. Create a program that will run the background job to update the Master Table
BEGIN
  -- Created by dbarj.com.br
  DBMS_SCHEDULER.CREATE_PROGRAM(
    PROGRAM_NAME        => 'PG_MOD_DP_MASTER',
    PROGRAM_TYPE        => 'STORED_PROCEDURE',
    PROGRAM_ACTION      => 'PC_DATA_PUMP.MODIFY_DP_MASTER',
    NUMBER_OF_ARGUMENTS => 1,
    ENABLED             => FALSE,
    COMMENTS            => NULL);
  DBMS_SCHEDULER.DEFINE_PROGRAM_ARGUMENT(
    PROGRAM_NAME      => 'PG_MOD_DP_MASTER',
    ARGUMENT_NAME     => 'P_ID_DP_MASTER_TRACK',
    ARGUMENT_POSITION => 1,
    ARGUMENT_TYPE     => 'NUMBER',
    OUT_ARGUMENT      => FALSE);
  DBMS_SCHEDULER.ENABLE(NAME => 'PG_MOD_DP_MASTER');
  COMMIT;
END;
/
4. Create the trigger that will fire when a new Master Table is created
CREATE OR REPLACE TRIGGER "TR_AFTER_DP_MASTER"
AFTER DDL ON SCHEMA
BEGIN
  -- Created by dbarj.com.br
  -- This trigger is a workaround for Data Pump complaining about time zone
  -- data on a dump that does not contain any time zone data.
  -- The job name must start with NOTSTZ_ for this trigger to act.
  IF ORA_SYSEVENT = 'CREATE'
     AND ORA_DICT_OBJ_TYPE = 'TABLE'
     AND ORA_DICT_OBJ_NAME LIKE 'NOTSTZ\_%' ESCAPE '\'
  THEN
    PC_DATA_PUMP.TRACK_JOB(ORA_DICT_OBJ_NAME);
  END IF;
END;
/
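Before running the import, you can optionally confirm that the four pieces are in place; a quick sanity check, run as the importing user:

-- All four objects should be listed with STATUS = 'VALID'.
SELECT object_name, object_type, status
  FROM user_objects
 WHERE object_name IN ('DP_MASTER_TRACK', 'PC_DATA_PUMP',
                       'PG_MOD_DP_MASTER', 'TR_AFTER_DP_MASTER');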
Running it
For the trigger to fire, you must run the import with a job name prefixed with "NOTSTZ_" (check the trigger code above).
Calling impdp:
[opc@myserver ~]$ impdp myuser/mypass@mydb directory=mydir dumpfile=mydump.dmp

Import: Release 21.0.0.0.0 - Production on Tue May 21 15:14:11 2024
Version 21.3.0.0.0

Copyright (c) 1982, 2021, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
ORA-39405: Oracle Data Pump does not support importing from a source database with TSTZ version 43 into a target database with TSTZ version 42.

[opc@myserver ~]$
It failed as expected, since I hadn't provided the job name.
Now running it again with a job name:
[opc@myserver ~]$ impdp myuser/mypass@mydb directory=mydir dumpfile=mydump.dmp job_name=NOTSTZ_JOB

Import: Release 21.0.0.0.0 - Production on Tue May 21 15:15:07 2024
Version 21.3.0.0.0

Copyright (c) 1982, 2021, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Master table "myuser"."NOTSTZ_JOB" successfully loaded/unloaded
Starting "myuser"."NOTSTZ_JOB":  myuser/********@mydb directory=mydir dumpfile=mydump.dmp job_name=NOTSTZ_JOB
Job "myuser"."NOTSTZ_JOB" successfully completed at Tue May 21 15:15:17 2024 elapsed 0 00:00:04

[opc@myserver ~]$
It worked! Cool, isn't it? Now checking the log table:
[opc@myserver ~]$ sql myuser/mypass@mydb

SQLcl: Release 23.2 Production on Tue May 21 15:16:29 2024

Copyright (c) 1982, 2024, Oracle.  All rights reserved.

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.23.0.1.0

SQL> select * from dp_master_track;

   ID_DP_MASTER_TRACK    TABLE_NAME    TZ_SOURCE    TZ_TARGET                  DP_MASTER_CREATED                 DP_MOD_JOB_CREATED                 DP_MOD_JOB_STARTED                   DP_MOD_JOB_FOUND                DP_MOD_JOB_FINISHED    DP_MOD_JOB_TOTAL_LOOPS    DP_MOD_JOB_FAILED_LOOPS
_____________________ _____________ ____________ ____________ __________________________________ __________________________________ __________________________________ __________________________________ __________________________________ _________________________ __________________________
                    1 NOTSTZ_JOB              43           42 21-MAY-24 03.15.07.870555000 PM    21-MAY-24 03.15.08.014651000 PM    21-MAY-24 03.15.08.070978000 PM    21-MAY-24 03.15.15.915644000 PM    21-MAY-24 03.15.15.915655000 PM                        11821                          0
You can see from the log that the job started less than one second after the Master Table was created, took about seven seconds to find and update the time zone entry, and iterated more than 11,000 times in the loop while looking for it. This is why I named this method "The Insane Mode".
And again: Use it at your own risk.
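Once your imports are done, you may also want to remove the helper objects so the DDL trigger stops firing; a cleanup sketch:

-- Remove the workaround objects from the importing schema.
DROP TRIGGER "TR_AFTER_DP_MASTER";
EXEC DBMS_SCHEDULER.DROP_PROGRAM(PROGRAM_NAME => 'PG_MOD_DP_MASTER');
DROP PACKAGE "PC_DATA_PUMP";
DROP TABLE "DP_MASTER_TRACK";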
Further Reading
Kamil also has a blog post about solving this same problem, but using gdb: https://blog.ora-600.pl/2024/02/10/solved-ora-39405-oracle-data-pump-does-not-support-importing-from-a-source-database-with-tstz/
Mike also mentions it here: https://mikedietrichde.com/2024/01/22/ora-39405-with-impdp-when-you-have-mixed-time-zone-versions/
Did you enjoy it? Please leave a comment or give a 👍!
5 comments
Simply incredible solution, even if it is the insane way. Thank you very much for this, and keep up the amazing work of passing your knowledge forward!
Thank you very much for sharing the solution. It solved a huge problem for me: I need to load a dump that is made available only a few times a year, with tables larger than 13 GB, which couldn't be processed in the newer free versions of Oracle. The Oracle SE version I have to work with also had no patch for the dump's TZ version. Your solution was the only salvation...
You made my day! Thanks a lot!
EXCELLENT!!!!! BEST explanation and solution, 10/10
Many thanks....
works like a charm.