Migrating Oracle DBMS from RISC to Linux x86 using Cross-Platform Transportable Tablespaces – Part 3

Introduction

The first part of the article covered the main problem in migrating Oracle DBMS from RISC platforms to Linux x86 – the difference in storage format (endianness) and the resulting need to convert blocks in the data files during migration. It also briefly described the migration technique based on transportable tablespaces, including the variant with incremental backups, which reduces downtime during migration.

The second part of the article described the migration algorithm based on Full Transportable Export/Import (FTEX) technology and the M5 scripts supplied by Oracle.

Despite the significant simplification that Full Transportable Export/Import and the M5 scripts bring, the author's experience shows that a number of problems arise when migrating real production Oracle databases.

In the final, third part of the article, these problems will be described and possible solutions will be given.

Preparing the source database for migration

Before starting the final stage of the migration (switching the tablespaces on the source database to read-only mode and taking the final incremental backup), it is recommended to perform the following actions:

  1. collect dictionary and fixed-object statistics; they are important for Data Pump performance:

    begin
      dbms_stats.gather_fixed_objects_stats;
      dbms_stats.gather_dictionary_stats;
    end;
    /
  2. check that Block Change Tracking (BCT) technology is enabled on the source database to speed up taking the incremental backups (see the sketch after this list);

  3. disable resource manager:

       alter system set resource_manager_plan='';   
  4. disable all maintenance windows in the maintenance window group:

    select window_name from dba_autotask_window_clients;
    exec dbms_scheduler.disable(name => 'SYS.MONDAY_WINDOW');
    exec dbms_scheduler.disable(name => 'SYS.TUESDAY_WINDOW');
    ...
    exec dbms_scheduler.disable(name => 'SYS.SUNDAY_WINDOW');
  5. stop all automatic jobs;

  6. to avoid waits caused by SGA resize operations, check that the STREAMS_POOL_SIZE parameter is not less than 512M (see the sketch below).
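
A minimal sketch of the checks for items 2 and 6 (the BCT file path is hypothetical):

    -- item 2: verify that Block Change Tracking is enabled, and enable it if not
    select status, filename from v$block_change_tracking;
    alter database enable block change tracking
      using file '/u01/oradata/ORCL/bct.chg';

    -- item 6: check the current Streams pool size and raise it if needed
    select round(current_size/1024/1024) as mb
      from v$sga_dynamic_components
     where component = 'streams pool';
    alter system set streams_pool_size = 512M scope = both;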

Long downtime for DBs containing a large number of objects in the dictionary

The problem is that if the database contains a large number of objects (tens of thousands or more) – i.e. has a large Oracle Dictionary – the export/import of metadata at the final stage of the migration takes a long time.

To solve this problem, you should copy the metadata to the target database in advance using application deployment scripts or, if they are not available, using the Oracle Data Pump Export utility (expdp with the CONTENT=METADATA_ONLY parameter). When migrating metadata this way, you should also exclude segment definitions using the EXCLUDE parameter of the expdp utility, as the segments will be migrated in the final stage by plugging the transported tablespaces into the target database on Linux x86.
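
A hedged sketch of such an advance metadata export (the schema and file names are hypothetical, and the exact EXCLUDE list depends on the application):

    expdp system schemas=APP content=metadata_only \
          exclude=TABLE exclude=INDEX \
          directory=DATA_PUMP_DIR dumpfile=app_meta.dmp logfile=app_meta_exp.log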

When using FTEX, the transfer of metadata other than segment definitions can likewise be disabled using the EXCLUDE parameter of the expdp utility.

After plugging the tablespaces into the target database, you should recompile all PL/SQL objects (packages, functions, procedures, triggers, etc.) using the system script utlrp.sql, because these objects are in an invalid state (INVALID) – they were created when the target database did not yet contain the segments (tables and indexes).
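
The recompilation can be done with the standard script shipped in every Oracle home:

    SQL> @?/rdbms/admin/utlrp.sql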

Of course, after the metadata has been copied to the target DB, the source DB must be prevented from changing object definitions. This can be done administratively (by preventing changes from being “rolled out” to the production DB) or by using a DDL trigger that raises an exception on any user DDL operation.
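
A minimal sketch of such a DDL trigger (the trigger name and the list of exempted schemas are assumptions to be adapted per project):

    create or replace trigger block_user_ddl
      before ddl on database
    begin
      -- block DDL in application schemas for the duration of the migration
      if ora_dict_obj_owner not in ('SYS', 'SYSTEM') then
        raise_application_error(-20001,
          'DDL is frozen for the duration of the migration');
      end if;
    end;
    /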

There is a subtle nuance to the above method of reducing downtime, related to sequences.
The thing is that if you create the sequences on the target DB in advance, then by the time you switch over to the DB on Linux x86, their values on the production DB on the RISC server will already be different – they will be greater. After all, after the sequences were created on the Linux x86 server, the production DB on the RISC server kept running!

Fig. 1. Divergence of sequence values during advance copying of metadata

To align the current sequence values on the target database with the values on the source database on the RISC server, the script reset_seq.sql was developed. This script is executed on the target database immediately after the application on the source database has been stopped and the unloading of segment metadata has begun. Starting with Oracle Database 18c, the current sequence value can be shifted with the simple command ALTER SEQUENCE … RESTART START WITH ….
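
A sketch of the adjustment for a single sequence (the sequence name and values are hypothetical):

    -- Oracle Database 18c and later:
    alter sequence app.orders_seq restart start with 1234567;

    -- before 18c, the classic workaround is to jump the sequence once by the gap:
    alter sequence app.orders_seq increment by 999 nocache;  -- 999 = gap to the source value
    select app.orders_seq.nextval from dual;                 -- consume one value to apply the jump
    alter sequence app.orders_seq increment by 1 cache 20;   -- restore the normal settings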

Long downtime for databases containing a large number of segments

If the tablespaces being moved contain a large number of segments (hundreds of thousands or more), exporting/importing the segment metadata (the tablespace unplug/plug operations) leads to a long downtime.
The point is that prior to Oracle Database 21c, export/import of transportable tablespace metadata runs only in a single thread.

This is especially critical for large ERP systems such as SAP R/3 and Oracle E-Business Suite, whose databases contain millions of segments.
For such databases, the operations of unloading and loading segment metadata can take many hours.

To solve this problem, we can recommend several methods that can be used together.

Split the database into several self-contained sets of tablespaces and migrate them in parallel
The idea is to divide all user tablespaces into several self-contained sets and then migrate them simultaneously.
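
Self-containment of each set can be verified with the standard DBMS_TTS package (the tablespace names are hypothetical):

    exec dbms_tts.transport_set_check('TS_DATA1,TS_IDX1', incl_constraints => true);
    select * from transport_set_violations;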

If such sets cannot be carved out as-is – for example, if all segments reside in one large tablespace – you can create a new tablespace and move into it the segments that have no dependencies on segments in other tablespaces.
To move segments online, use the commands alter table … move online, alter index … rebuild online, alter table … move lob …, or the DBMS_REDEFINITION package if the Oracle DBMS version does not support moving segments online.
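
For example (the objects are hypothetical; MOVE … ONLINE requires Oracle Database 12.2 or later):

    alter table app.orders move online tablespace new_ts;
    alter index app.orders_pk rebuild tablespace new_ts online;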

For one of the projects, the author had to develop a set of scripts for online movement of segments from one tablespace to another.

After migration, segments in new tablespaces can be moved back, if such a requirement exists, of course.

It should be noted that Full Transportable Export/Import cannot be used with the above method, since it always migrates all user tablespaces as a whole. In this case, the classic transportable tablespace technology with incremental backups and Oracle's V4 Perl scripts should be used.

Delete small indexes on source DB before final step and recreate on target DB

This method is easy to implement and allows you to significantly reduce the number of segments to migrate, and therefore the downtime for exporting/importing segment metadata at the final stage of migration.

Before taking the final incremental backup on the source database on the RISC server, small indexes whose size does not exceed several gigabytes are deleted (the size threshold is chosen based on the acceptable downtime and the hardware performance of the target database). After the converted tablespaces are plugged in on the Linux server, these indexes are re-created using CREATE INDEX … ONLINE statements.

To automate the process, the author uses his script del_small_ind.sql, which dumps the index-creation DDL statements into a separate file and drops the indexes from the source database, disabling the UNIQUE and PK integrity constraints enforced by an index being dropped. At the same time, del_small_ind.sql generates a script create_small_ind.sql that creates these indexes and re-enables the corresponding integrity constraints on the target database.
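
A hedged sketch of how the candidate indexes might be selected (the 2 GB threshold is illustrative):

    select owner, segment_name, round(bytes/1024/1024) as size_mb
      from dba_segments
     where segment_type = 'INDEX'
       and bytes < 2 * 1024*1024*1024
     order by bytes desc;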

The del_small_ind.sql script is launched after the application on the source database has been stopped, before the tablespaces are switched to read-only mode. The create_small_ind.sql script runs on the target DB after the converted tablespaces have been plugged in.

Delete auxiliary (intermediate) tables on the source DB before the final step and recreate them on the target DB
A large application often has a large set of tables that store only intermediate data. For example, staging tables in information and analytical data warehouses. During the application's operation, the contents of such tables are deleted and “re-uploaded” anew.

Such tables can also be dropped on the source database before the tablespaces are switched to read-only mode, and created in advance on the target database using the appropriate CREATE TABLE commands. To automate this, it is convenient to export their definitions with the expdp utility using the TABLES and CONTENT=METADATA_ONLY parameters, and then import the resulting dump into the target database using the impdp utility.
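
A sketch of this export/import (the schema, table, and file names are hypothetical):

    expdp system tables=APP.STG_ORDERS,APP.STG_ITEMS content=metadata_only \
          directory=DATA_PUMP_DIR dumpfile=stg_meta.dmp logfile=stg_meta_exp.log

    impdp system directory=DATA_PUMP_DIR dumpfile=stg_meta.dmp logfile=stg_meta_imp.log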

Transfer static reference tables in advance
The idea is to migrate static tables to the DB on the Linux x86 server in advance. Static tables are tables that do not change – or at least in which changes can be prohibited for the period of the migration. Examples are a currency directory in a financial application or a service tariff directory in a billing system.

Such tables are removed from the source database before the tablespaces are set to read-only mode. You should create these tables in the target database in advance using the appropriate CREATE TABLE commands and migrate the data from the source database to them.
To automate this operation, it is convenient to use the expdp utility with the TABLES parameter, and then import the resulting dump into the target database using the impdp utility.

Please note that these static tables will have to be loaded into new tablespaces on the target database (after all, the tablespaces from the source database are not there yet!) – the new tablespaces must be created in advance on the target database, and the tables imported into them using the REMAP_TABLESPACE parameter of the impdp utility.
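
For example (the names are hypothetical):

    impdp system directory=DATA_PUMP_DIR dumpfile=ref_tabs.dmp \
          remap_tablespace=APP_DATA:REF_DATA logfile=ref_imp.log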

To ensure that these tables remain unchanged, it is recommended to switch them to read-only mode on the source database.
After switching over to the target database, you must remember to return these tables to read-write mode.
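
For example (the table is hypothetical):

    alter table app.currencies read only;
    -- ... after the switchover, on the target database:
    alter table app.currencies read write;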

Remove unused tables
Very often, in large applications that are constantly evolving, unused tables accumulate in the data schema: backup copies of tables, tables for functionality that has already been removed from the application, and so on. Therefore, before the migration project, it is recommended to clean the application schemas of unused tables.

On one of the author's projects, the problem of long downtime for export/import of segment metadata during migration of Oracle DBMS from RISC to Linux x86 was solved in exactly this way – by deleting unused tables that had accumulated over many years of changes to the application data schema.

Temporarily move the SYSTEM tablespace and online redo logs to a RAM disk
Another original way to reduce the time of importing segment metadata into the target DB is to shrink the SGA of the instance on the target Linux x86 server, create a RAM disk in the freed-up memory, and then move the SYSTEM tablespace files and the online redo logs onto it.

The point is that while new tablespaces are being plugged in and the PL/SQL code is being recompiled in the target DB, the bulk of the changes occur in the SYSTEM tablespace and, obviously, in the online redo logs. Access to them becomes dramatically faster when these files are placed in memory.

At the end of the migration, the SYSTEM tablespace and online redo log files are moved back to regular disk, the instance is stopped, the RAM disk is deleted, the freed memory is returned to the SGA, and the instance is started again.
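
A hedged sketch of the setup step (the paths and size are hypothetical; since Oracle Database 12c data files can be moved online, although for SYSTEM files some releases require the database to be in mounted mode):

    # on the Linux server, as root: create the RAM disk
    mount -t tmpfs -o size=16g tmpfs /u01/ramdisk

    -- in the instance: move the SYSTEM data file onto the RAM disk
    alter database move datafile '/u01/oradata/ORCL/system01.dbf'
      to '/u01/ramdisk/system01.dbf';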

Clearing the recycle bin and dematerializing empty segments on the source DB
Of course, you should not forget to empty the recycle bin if it is enabled:

  SQL> purge dba_recyclebin;

It is also recommended to delete empty segments on the source database, that is, to switch them to deferred segment creation mode:

  exec dbms_space_admin.drop_empty_segments; 

In this case, the segment itself will be removed from the tablespace, but its description in the database dictionary will remain.

Time is saved because the volume of segment metadata shrinks: the segment descriptions do not need to be written to the dump file, and the segments do not need to be created on the target database when the tablespace is plugged in.

Migration of Directories and BFILEs

Obviously, the contents of the file system directories on the source server are not automatically transferred to the target server.

Therefore, if an application or DBA tools use references to file-system directories from inside the Oracle database – DIRECTORY objects and, especially, external LOBs (BFILE) – then their contents must be copied to the target server manually, after first creating the directories on its file system and the corresponding DIRECTORY objects in the database. The contents of the directories in use should be transferred from the source server to the target server only after the application has been stopped.

To automate checking the use of directories in the source database and generating a DDL script for creating them on the target database, the author has developed and uses the script check_dirs.sql.
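
A sketch of the kind of check and DDL generation such a script might perform:

    select directory_name, directory_path from dba_directories;

    select 'create or replace directory ' || directory_name ||
           ' as ''' || directory_path || ''';' as ddl
      from dba_directories;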

Migrating Oracle Advanced Queuing Queues

When migrating with transportable tablespaces and the V4 scripts, only the tables holding the queue messages are migrated; the queue declarations themselves are not transferred by the expdp/impdp operation. To solve this problem, the queues must be recreated anew on the target, binding them to the corresponding message tables that were migrated from the source DB; for details, see the document “How Advanced Queuing (AQ) Objects Are Exported And Imported (Doc ID 2291530.1)”.

Even when using FTEX and the M5 scripts, problems with exporting/importing Advanced Queuing queues arise very often – the queues are frequently not transferred correctly.

It is recommended, in any case, to migrate the AQ queue definitions separately, binding each queue to its corresponding message table after the tablespaces have been plugged into the target DB.

To automate generating the script that creates the queues on the target DB and the script that binds the migrated message tables to the newly created queues, the author developed the script recreate_queues.sql.
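
A hedged sketch of what the generated statements might look like (the queue and queue-table names are hypothetical; the exact recovery steps depend on the AQ configuration – see Doc ID 2291530.1):

    begin
      dbms_aqadm.create_queue(
        queue_name  => 'APP.ORDER_EVENTS_Q',
        queue_table => 'APP.ORDER_EVENTS_QT');
      dbms_aqadm.start_queue(queue_name => 'APP.ORDER_EVENTS_Q');
    end;
    /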

Error while migrating large number of data files

The author encountered this unpleasant problem in one of the projects.
If the database is large and consists of a large number of data files – which happens when tablespaces are created in smallfile mode – then importing the segment metadata from the source database fails with the error ORA-01240: too many data files to add in one command.

The problem is related to the fact that when importing segment metadata, the string with the full names of the data files must fit into one redo entry in the log file. A description of this problem is given in a document on the Oracle Support website: “ORA-39123 ORA-1240 Error On Import Of Transportable Tablespace (Doc ID 740983.1)”.

The solution proposed in that document is simple: shorten the data file names – the file-system path can be shortened by creating a short symbolic link to the directory, after which the data files are renamed inside the DBMS to the new names that go through this symbolic link.
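
For example (the paths are hypothetical; the rename requires the data file to be offline or the database to be in mounted mode):

    # a short symbolic link to the long directory path:
    ln -s /u01/app/oracle/oradata/PROD /o

    -- then, in the DBMS:
    alter database rename file
      '/u01/app/oracle/oradata/PROD/users01.dbf' to '/o/users01.dbf';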

If there are too many data files and the line with the full list of files still does not fit into one redo record, it is recommended to create tablespaces in bigfile mode and move all segments into them. To do this, the author had to develop the script move_segments.sql for moving segments online into new bigfile tablespaces.

Migrate statistics before migrating tablespaces

When using the M5 scripts for migration, there is a small peculiarity related to the transfer of Oracle SQL optimizer statistics: statistics are not transferred to the target database (by default, the exclude=statistics parameter is passed to the expdp utility).
This makes sense, because statistics should be migrated in advance – doing so significantly reduces the downtime during migration.

On the source database, the current, up-to-date statistics should be exported into a staging table.
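
A sketch using the standard DBMS_STATS staging-table API (the owner and table names are hypothetical):

    begin
      dbms_stats.create_stat_table(ownname => 'STATADMIN', stattab => 'STATS_STAGE');
      dbms_stats.export_database_stats(stattab => 'STATS_STAGE', statown => 'STATADMIN');
    end;
    /

    -- on the target DB, after the staging table has arrived:
    exec dbms_stats.import_database_stats(stattab => 'STATS_STAGE', statown => 'STATADMIN');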

Fig. 2. Unloading the statistics and copying them to the target server

The staging table with the statistics is transferred from the source DB to the target DB via a regular Data Pump export, or it is transferred automatically during the migration of the tablespace that contains it.

Verifying Metadata After Migration

After the tablespaces have been successfully plugged into the target DB on Linux x86 and the PL/SQL code has been recompiled, it is critically important to check carefully that nothing was “broken” or “lost along the way”:

  • check the data blocks for corruption using the RMAN utility (validate check logical database) – see the example after this list;

  • check for logical corruption in the Oracle Dictionary using the hcheck.sql script – “Script to Check Data Dictionary for Known Problems (Doc ID 136697.1)”;

  • check the state of all objects in the dictionary: whether any objects ended up in an invalid state (INVALID) and whether all objects were transferred; for this purpose, the author developed the script check_objects.sql, which verifies the state of all objects except segments on the target database and checks it against the source database;

  • check that all segments that existed in the source database are also present in the target database; for this purpose, the author's script check_segments.sql is used;

  • collect statistics on the dictionary and on fixed objects.
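
The RMAN check from the first item might look like this:

    RMAN> validate check logical database;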

Conclusion

A project to migrate Oracle DBMS from a RISC platform to Linux x86 seems like a simple task only at first glance.

Of course, for small databases, for which a long downtime is acceptable, the easiest way to do this is to use Oracle Datapump data export/import technology.

However, for large databases that contain a large number of objects and for which the business allows only a short downtime window, other technologies must be used.

The Oracle Transportable Tablespace technologies with incremental backups and Oracle Full Transportable Export/Import discussed in the article allow you to solve this problem.

These are complex technologies, but with careful preparation, testing and availability of ready-made tools, the migration task is quite solvable.

The issues of target server sizing on Linux x86 and performance testing during migration are beyond the scope of this article.
These are big topics and will likely require a separate article.

The author expresses gratitude to ELKARO for providing a server on the Sun Solaris 11.4 for SPARC platform for testing.

Igor Melnikov

Independent consultant
