Migrating Application Contexts in Oracle Schema-based Migrations

In my current project we are migrating a whole load of highly shared databases to a new platform, with each database hosting schema, service and end-user accounts for multiple applications. In my experience it is usually pretty difficult to align the people, processes and business cycles for each application such that we can migrate shared databases like these using a simple image copy or backup/restore, so we end up doing schema-level migrations, for which Original Export/Import or Data Pump is usually the best solution, assuming they are fast enough.

One challenge with using these logical backup/restore utilities is that whilst export will capture everything owned by the schema, it won’t pick up related pan-database objects, such as roles, public synonyms and public database links. Normally we identify and handle these by careful analysis of the existing databases and consultation with the application support folks.

This week an application threw an unexpected error when it was being configured against a migrated database schema. The error suggested a privileges problem, but it turned out to relate to another of the pan-database object types that a schema-level migration won’t handle – application contexts. So what is an application context, and how do we handle it?

What is an application context?

With our migration hat on we maybe don’t need to worry too much about what application contexts are and do, but we do need to know how to migrate them and troubleshoot the common issues that might arise. The Oracle 11g online documentation obviously provides the low-down, and from that…

“An application context is a set of name-value pairs that Oracle Database stores in memory… You can use application contexts to authenticate both database and nondatabase users”

…it’s a way to enforce data access control. For example, in applications which maintain their own end user account information and log in to the database using a common database account, application contexts provide a way to filter what data application users can get at.
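As a minimal sketch of how the pieces fit together (all names here are hypothetical, not taken from the application in question), a context is created with a trusted package, and only that package is allowed to set the context’s attributes:

```sql
-- Hypothetical names throughout. The context can only be set by the
-- package named in its CREATE CONTEXT statement.
CREATE OR REPLACE CONTEXT app_ctx USING scott.app_ctx_pkg;

CREATE OR REPLACE PACKAGE scott.app_ctx_pkg AS
  PROCEDURE set_user_id(p_user_id VARCHAR2);
END;
/
CREATE OR REPLACE PACKAGE BODY scott.app_ctx_pkg AS
  PROCEDURE set_user_id(p_user_id VARCHAR2) IS
  BEGIN
    -- Populate the name-value pair in the APP_CTX namespace
    DBMS_SESSION.SET_CONTEXT('APP_CTX', 'user_id', p_user_id);
  END;
END;
/

-- Application code (e.g. a view or VPD predicate) can then read the attribute:
SELECT SYS_CONTEXT('APP_CTX', 'user_id') FROM dual;
```

The USING clause is the security-relevant bit: calls to DBMS_SESSION.SET_CONTEXT from anywhere other than the named trusted package are rejected.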

ORA-01031: Insufficient Privileges on DBMS_SESSION

In the particular example I was looking at an application schema had been migrated to the new database with no apparent issues. The database users and roles in that target database had been configured in exactly the same way as they were in the original database, but when the app support folks tried to execute their application installation process it burped with the following error:

[EVT]28-10-2013 10:21:07[D]ERROR[P][A][N]http2001-Processor25[T]com.symyx.ras.db.Database[C]clearAuthorizedUser DBException:    01031: insufficient privileges

ORA-06512: at "SYS.DBMS_SESSION", line 115

From that ORA-01031 error, the immediate thought of my friends in the app support team was that we were missing an execute privilege on DBMS_SESSION, but reading it again it looks like it actually executed DBMS_SESSION and fell over at line 115 because it couldn’t see something else. Time for Google! I found a few references to this issue, and this one on OTN provides a good example…

https://forums.oracle.com/thread/2238829

I never got to the bottom of why that particular error is generated, but in summary, this error appears if you try to execute DBMS_SESSION.SET_CONTEXT when the named context doesn’t exist. In my case, we had forgotten to migrate the contexts.
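The behaviour is easy to reproduce. Assuming no context named MISSING_CTX exists in the database (a hypothetical name), the call below fails inside SYS.DBMS_SESSION with the same ORA-01031:

```sql
-- Hypothetical namespace: ORA-01031 is raised because no such context has
-- been created (the same error also appears if the caller is not the
-- context's trusted package).
BEGIN
  DBMS_SESSION.SET_CONTEXT('MISSING_CTX', 'user_id', '42');
END;
/
```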

Workarounds for missing Application Contexts

The workaround to my particular issue is just to create the missing context(s). In this case we used dbms_metadata on the source database, for example…

SELECT dbms_metadata.get_ddl('CONTEXT', namespace) ddl FROM dba_context WHERE schema = 'SCOTT';

…and executed the output from that on the target database to create the missing context. Following that the application installation completed without any issue.

Identifying Required Application Contexts

But we need to make sure we don’t miss this again in future migrations, so how do we ensure we capture and migrate contexts going forward?

Normally, as part of our initial analysis for each application to be migrated we would generate some reports against the existing databases to gather all the information we will need to prepare the target database and plan the migration. So we need to ensure our reports also tell us about any contexts that exist; here are three examples of queries that we can include…

-- List the DDL statements for all contexts that exist for the schemas we are interested in…

SELECT dbms_metadata.get_ddl('CONTEXT', namespace) ddl FROM dba_context WHERE schema = 'SCOTT'

/

-- List any objects in our application schemas that have dependencies on DBMS_SESSION. This may imply use of contexts, though DBMS_SESSION has many unrelated procedures…

SELECT owner, COUNT(*) FROM dba_dependencies WHERE referenced_name = 'DBMS_SESSION' GROUP BY owner ORDER BY 1

/

-- List any objects in our application schema that make use of CONTEXT-related procedures (if application source code text is visible)…

SELECT name, type, COUNT(*) FROM dba_source WHERE text LIKE '%CONTEXT%' AND owner = 'SCOTT' GROUP BY name, type ORDER BY 1

/

Migrating Application Contexts

Once we know we have some contexts to migrate there are a couple of easy options for migrating them.

Option 1: Use expdp. The data we are missing lives in the SYS schema, so to extract it we need to perform a separate full export using the INCLUDE parameter, thus avoiding any other objects being included in the export set…

expdp … INCLUDE=CONTEXT FULL=Y

This method will get us the contexts from our source database, but the awkward bit is that it will get ALL contexts, not just the ones required for the application schemas we are working with. So after the import we need to either tidy up the target database to remove redundant contexts or just leave them in place. Either way it’s a little messy.
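For reference, a sketch of the export and import pair – the directory, dumpfile and connection details are hypothetical and will differ per site:

```sh
# Full export restricted to context objects only, then a full import of that dump
expdp system@srcdb FULL=Y INCLUDE=CONTEXT DIRECTORY=dp_dir DUMPFILE=contexts.dmp LOGFILE=contexts_exp.log
impdp system@tgtdb FULL=Y DIRECTORY=dp_dir DUMPFILE=contexts.dmp LOGFILE=contexts_imp.log
```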

Option 2: Use the DBMS_METADATA.GET_DDL function as mentioned in the workaround above…

SELECT dbms_metadata.get_ddl('CONTEXT', namespace) ddl FROM dba_context WHERE schema = 'SCOTT'

This is much cleaner and means we will only touch contexts that relate to the schemas we are currently working with.

Export/Import Validation Gotchas

When using Datapump, Original Export/Import or some other logical backup/restore mechanism, we need to validate the migrated databases to ensure all objects have been copied and that they are in the correct state (Valid/Invalid, Enabled/Disabled, etc).

The export and import logs ought to provide enough information for that validation; if we export/import one schema and both export and import have completed without errors then the migration has probably succeeded. However, the log files don’t present information in a particularly helpful format – for example, if an object is imported but won’t compile, the log file will contain some pretty ugly-looking messages. And even if they are error-free, it can be awkward comparing an export log against an import log to identify issues.

So it is normal to present the required information in a standard format using predefined reports based on simple SQL queries running against the data dictionaries of the source and target databases.
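As a minimal sketch of such a report (the schema name is illustrative), run the same query against source and target and compare the counts:

```sql
-- Object counts per type and status for one schema; differences between
-- the source and target output flag missing or invalid objects.
SELECT object_type, status, COUNT(*) cnt
  FROM dba_objects
 WHERE owner = 'SCOTT'
 GROUP BY object_type, status
 ORDER BY object_type, status;
```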

But beware: I recently met issues in a couple of migrations that appeared to show discrepancies between the source and target databases. In both cases they turned out to be due to bugs…

Bug# 9935857 – LOB Indexes are missing from DBA_OBJECTS

Facts:

  • Source and target databases were queried by doing a count per object_type from the DBA_OBJECTS view.
  • Source Database – 11.1
  • Source Database index count for the migrated schema: 629
  • Target Database – 11.2
  • Target Database index count for the migrated schema: 648

Root Cause and Workaround:

I noticed straightaway that the difference in the number of indexes was 19, which happened to be how many LOB objects belonged to that schema. I poked around on MOS and found the bug, which causes LOB indexes to be omitted when querying the DBA_OBJECTS view. The workaround was to count all indexes via the DBA_INDEXES view and UNION that with a count from DBA_OBJECTS that excludes indexes…

SELECT … FROM dba_objects WHERE object_type <> 'INDEX'…

UNION

SELECT … FROM dba_indexes…
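A fuller version of that workaround might look like the sketch below (the schema name is illustrative); the point is that the index count comes from DBA_INDEXES rather than DBA_OBJECTS:

```sql
-- Work around bug# 9935857: on the 11.1 source, DBA_OBJECTS under-counts
-- indexes because LOB indexes are missing, so count indexes separately
-- from DBA_INDEXES.
SELECT object_type, COUNT(*) cnt
  FROM dba_objects
 WHERE owner = 'SCOTT'
   AND object_type <> 'INDEX'
 GROUP BY object_type
UNION ALL
SELECT 'INDEX', COUNT(*)
  FROM dba_indexes
 WHERE owner = 'SCOTT';
```

UNION ALL is used because the two branches cannot overlap, so there is nothing for a plain UNION to deduplicate.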

Reference: MOS Document 1582869.1

Bug# 5523375 – expdp does not export disabled PK Constraints

Facts:

  • Source Database – 11.1
  • Target Database – 11.2
  • Two DISABLED PK constraints in a schema in the source database were missing from the target database after a Data Pump import.
  • No related errors in the export or import log files

Root Cause and Workarounds:

The root cause was found to be bug# 5523375, which is believed to affect releases prior to 11gR2. The bug is on the export side, and the documented workaround is to enable the PK before the export, then disable it again afterwards. The simpler solution is just to prepare a script to recreate the PK in its DISABLED state following the import. Of course, if the PK is DISABLED then it is probably redundant, but in a migration project we cannot make that judgement, and it is important not to get dragged into remediating application design issues if that isn’t in the remit of our project.
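The recreate script is a one-liner per constraint; a hypothetical example (table, column and constraint names are illustrative):

```sql
-- Recreate a PK in its DISABLED state following the import
ALTER TABLE scott.orders
  ADD CONSTRAINT orders_pk PRIMARY KEY (order_id) DISABLE;
```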

Conclusion

My methodology requires that production migrations are rehearsed wherever possible, and these kinds of issues underline the value of doing that – better to have the time to research and explain these discrepancies and prepare a workaround following a rehearsal than to try to investigate and troubleshoot them in the middle of a live migration, with all of the stress and time pressure that implies.