Oracle Goldengate 11g Complete Cookbook

Oracle Goldengate 11g Complete Cookbook: Dig deep into administering Oracle Goldengate 11g using this comprehensive cookbook. From the very basics of installation to advanced features like migration, you'll learn the practical way through code scripts and examples.


Oracle Goldengate 11g Complete Cookbook

Chapter 1. Installation and Initial Setup

The following recipes will be covered in this chapter:

  • Installing Oracle GoldenGate in an x86_64 Linux-based environment

  • Installing Oracle GoldenGate in a Windows environment

  • Enabling supplemental logging in the source database

  • Supported datatypes in Oracle GoldenGate

  • Preparing the source database for GoldenGate setup

  • Preparing the target database for GoldenGate setup

  • Setting up a Manager process

  • Setting up a Classic Capture Extract process

  • Setting up an Integrated Capture Extract process

  • Setting up a Datapump process

  • Setting up a Replicat process

Introduction


Database replication is always an interesting challenge. It requires a complex setup and strong knowledge of the underlying infrastructure, the databases, and the data held in them to replicate the data efficiently without much impact on the enterprise system. Oracle GoldenGate owes a lot of its popularity to the simplicity of its setup. In this chapter we will cover the basic steps to install GoldenGate and set up its various processes.

Installing Oracle GoldenGate in an x86_64 Linux-based environment


This recipe will show you how to install Oracle GoldenGate in an x86_64 Linux-based environment.

Getting ready

In order to install Oracle GoldenGate, you must first download the binaries for your Linux platform from the Oracle Technology Network website. We have used Oracle GoldenGate Version 11.2.0.1.0.1 in this recipe. Ensure that you verify the checksum of the file once you have downloaded it.
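
For example, on Linux you can verify the download against the checksum published on the download page. The following is a minimal sketch; the ZIP file name is only a placeholder for the media pack file you actually downloaded:

# compare the output with the checksum listed on the OTN download page
md5sum <media_pack_file>.zip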

Tip

You can find the Oracle GoldenGate binaries for x86_64 Linux at http://www.oracle.com/technetwork/middleware/GoldenGate/downloads/index.html?ssSourceSiteId=ocomen.

How to do it...

Oracle GoldenGate binaries are installed in a directory called GoldenGate Home. This directory should be owned by the OS user (ggate) that owns the GoldenGate binaries, and this user must be a member of the dba group. After you have downloaded the binaries, you need to uncompress the media pack file using the unzip utility, as given in the following steps:

  1. Log in to the server using the ggate account.

  2. Create a directory with this user as shown in the following command:

    mkdir installation_directory
    
  3. Change the directory to the location where you have copied the media pack file and unzip it. The media pack contains the readme files and the GoldenGate binaries file. The GoldenGate binaries file for the 64-bit x86 Linux platform is called fbs_ggs_Linux_x64_ora11g_64bit.tar.

  4. Extract the contents of this file into the GoldenGate Home directory as shown in the following command:

    tar -xvf fbs_ggs_Linux_x64_ora11g_64bit.tar -C installation_directory
    
  5. Create GoldenGate directories as follows:

    cd installation_directory
    ./ggsci
    create subdirs
    exit
    

    Note

    You must have the Oracle database libraries added to the shared library environment variable $LD_LIBRARY_PATH before you run ggsci. It is also recommended to set $ORACLE_HOME and $ORACLE_SID to the correct Oracle instance.
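
    For example, a minimal environment setup before running ggsci might look like the following; the paths and the instance name are placeholders taken from the examples in this chapter and must be adjusted for your environment:

    # Oracle environment required by ggsci (placeholder values)
    export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
    export ORACLE_SID=DBORATEST
    export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH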

How it works...

Oracle provides GoldenGate binaries in a compressed format. In order to install the binaries, you unzip the compressed file and then expand the archive file into the required directory. This unpacks all the binaries. However, GoldenGate also requires some important subdirectories under GoldenGate Home which are not created by default. These directories are created using the CREATE SUBDIRS command. The following is the list of the subdirectories that get created with this command:

Subdirectory   Contents
dirprm         Parameter files
dirrpt         Report files
dirchk         Checkpoint files
dirpcs         Process status files
dirsql         SQL scripts
dirdef         Database definitions
dirdat         Trail files
dirtmp         Temporary files
dirout         Output files

Note

Oracle GoldenGate binaries need to be installed on both the source and target systems. The procedure for installing the binaries is the same in both environments.

Installing Oracle GoldenGate in a Windows environment


In this recipe we will go through the steps that should be followed to install the GoldenGate binaries in the Windows environment.

Getting ready

In order to install Oracle GoldenGate, you must first download the binaries for your Windows platform from the Oracle Technology Network website. We have used GoldenGate Version 11.2.0.1.0.1 in this recipe. Ensure that you verify the checksum of the file once you have downloaded it.

Tip

You can find the Oracle GoldenGate binaries for x86_64 Windows at http://www.oracle.com/technetwork/middleware/GoldenGate/downloads/index.html?ssSourceSiteId=ocomen.

How to do it...

Oracle GoldenGate binaries are installed in a directory called GoldenGate Home. After you have downloaded the binaries, you need to uncompress the media pack file by using the unzip utility:

  1. Log in to the server as the Administrator user.

  2. Create a directory for GoldenGate Home.

  3. Unzip the contents of the media pack file to the GoldenGate Home directory.

  4. Create GoldenGate directories as shown in the following command:

    cd installation_directory
    ggsci
    create subdirs
    exit
    

How it works...

Oracle provides GoldenGate binaries in a compressed format. The installation involves unzipping the file into a required directory. This unpacks all the binaries. However, GoldenGate also requires some important subdirectories under GoldenGate Home which are not created by default. These directories are created using the CREATE SUBDIRS command. The following is the list of the subdirectories that get created with this command:

Subdirectory   Contents
dirprm         Parameter files
dirrpt         Report files
dirchk         Checkpoint files
dirpcs         Process status files
dirsql         SQL scripts
dirdef         Database definitions
dirdat         Trail files
dirtmp         Temporary files
dirout         Output files

Enabling supplemental logging in the source database


Oracle GoldenGate replication can be used to continuously replicate the changes from the source database to the target database. GoldenGate mines the redo information generated in the source database to extract the changes. In order to update the correct rows in the target database, Oracle needs sufficient information to be able to identify them uniquely. Since it relies on the information extracted from the redo buffers, it requires extra information columns to be logged into the redo records generated in the source database. This is done by enabling supplemental logging in the source database. This recipe explains how to enable supplemental logging in the source database.

Getting ready

We must have a list of the tables that we want to replicate between two environments.

How to do it…

Oracle GoldenGate requires supplemental logging to be enabled at the database level and table level. Use the following steps to enable the required supplemental logging:

  1. Enable database supplemental logging through sqlplus as follows:

    ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
    
  2. Switch a database LOGFILE to bring the changes into effect:

    ALTER DATABASE SWITCH LOGFILE;
    
  3. From the GoldenGate Home, log in to GGSCI:

    ./ggsci
    
  4. Log in to the source database from ggsci using a user which has privileges to alter the source schema tables as shown in the following command:

    GGSCI> DBLOGIN USERID <USER> PASSWORD <PW>
    
  5. Enable supplemental logging at the table level as follows:

    GGSCI> ADD TRANDATA <SCHEMA>.<TABLE_NAME>
    
  6. Repeat step 5 for all the tables that you want to replicate using GoldenGate.

How it works…

Supplemental logging enables the database to add extra columns in the redo data that is required by GoldenGate to correctly identify the rows in the target database. We must enable database-level minimum supplemental logging before we can enable it at the table level. When we enable it at the table level, a supplemental log group is created for the table that consists of the columns on which supplemental logging is enabled. The columns which form a part of this group are decided based on the key constraints present on the table. These columns are decided based on the following priority order:

  1. Primary key

  2. First unique key alphanumerically with no nullable columns

  3. First unique key alphanumerically with nullable columns

  4. All columns

GoldenGate only considers unique keys which don't have any virtual columns, any user-defined types, or any function-based columns. We can also manually specify which columns we want to be a part of the supplemental log group.
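
For example, the ADD TRANDATA command accepts a COLS option to nominate specific columns for the supplemental log group; the following is a minimal sketch using the placeholder convention of this chapter:

GGSCI> ADD TRANDATA <SCHEMA>.<TABLE_NAME>, COLS (<COLUMN_LIST>)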

Tip

You can enable supplemental logging on all tables of a schema using the following single command:

GGSCI> ADD TRANDATA <SCHEMA>.*

If possible, do create a primary key in each source and target table that is part of the replication. The pseudo key consisting of all columns, created by GoldenGate, can be quite inefficient.

There's more…

There are two ways to enable supplemental logging. The first method is to enable it from GGSCI using the ADD TRANDATA command. The second method is to use sqlplus and run the ALTER TABLE ADD SUPPLEMENTAL LOG DATA command. The latter method is more flexible and allows you to specify the name of the supplemental log group. However, when you use Oracle GoldenGate to add supplemental logging, it creates supplemental log group names using the format GGS_<TABLE_NAME>_<OBJECT_NUMBER>. If the overall supplemental log group name is longer than 30 characters, GoldenGate truncates the table name as required. Oracle support recommends that you use the first method for enabling supplemental logging on objects to be replicated using Oracle GoldenGate. The GGS_* supplemental log group format enables GoldenGate to quickly identify the supplemental log groups in the database.
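
As an illustration of the second method, the following is a minimal sketch of creating a named supplemental log group through sqlplus; the table, column, and group names are placeholders only:

ALTER TABLE SCOTT.EMP ADD SUPPLEMENTAL LOG GROUP emp_slg (EMPNO) ALWAYS;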

If you are planning to use GoldenGate to capture all transactions in the source database and convert them into INSERTs in the target database, for example for reporting or auditing purposes, you will need to enable supplemental logging on all columns of the source database tables.
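
A minimal sketch of enabling all-column supplemental logging through sqlplus follows; the table name is a placeholder only:

ALTER TABLE SCOTT.EMP ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;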

See also

  • For information about how to replicate changes to a target database and maintain an audit record, refer to the recipe Mapping the changes to a target table and storing the transaction history in a history table in Chapter 4, Mapping and Manipulating Data

Supported datatypes in Oracle GoldenGate


Oracle GoldenGate has some restrictions in terms of what it can replicate. With every new release, Oracle is adding new datatypes to the list of what is supported. You should check the datatypes of the objects that you plan to replicate against the list of supported datatypes for the GoldenGate version that you plan to install.

Getting ready

You should have identified the various datatypes of the objects that you plan to replicate.

How to do it…

The following is a high-level list of the datatypes that are supported by Oracle GoldenGate v11.2.1.0.1:

  • NUMBER

  • BINARY FLOAT

  • BINARY DOUBLE

  • CHAR

  • VARCHAR2

  • LONG

  • NCHAR

  • NVARCHAR2

  • RAW

  • LONG RAW

  • DATE

  • TIMESTAMP

  • CLOB

  • NCLOB

  • BLOB

  • SECUREFILE and BASICFILE

  • XML datatypes

  • User defined/Abstract datatypes

  • SDO_GEOMETRY, SDO_TOPO_GEOMETRY, and SDO_GEORASTER

How it works…

There are some additional details that one needs to consider while evaluating the supported datatypes for a GoldenGate version. For example, the user-defined datatypes are only supported if the source and target tables have the same structures. Both Classic and Integrated Capture modes support XML types which are stored as XML, CLOB, and XML binary. However, XML type tables stored as Object Relational are only supported in Integrated Capture mode.

There's more…

The support restrictions apply to a few other factors apart from the datatypes. Some of these are as follows:

  • INSERTs, UPDATEs and DELETEs are supported on regular tables, IOTs, clustered tables and materialized views

  • Tables created as EXTERNAL are not supported

  • Extraction from compressed tables is supported only in Integrated Capture mode

  • Materialized views created with ROWID are not supported

  • Oracle GoldenGate supports replication of the sequences only in uni-directional mode

Preparing the source database for GoldenGate setup


The Oracle GoldenGate architecture consists of an Extract process on the source database server. This process mines the redo information and extracts the changes occurring in the source database objects. These changes are then written to the trail files. There are two types of Extract processes: Classic Capture and Integrated Capture. The Extract process requires some setup to be done in the source database, and some of the setup steps differ depending on the type of the Extract process. GoldenGate requires a database user to be created in the source database and various privileges to be granted to this user. This recipe explains how to set up a source database for GoldenGate replication.

Getting ready

You must select a database user ID for the source database setup. For example, GGATE_ADMIN.

How to do it…

Run the following steps in the source database to set up the GoldenGate user:

sqlplus sys/**** as sysdba
CREATE USER GGATE_ADMIN IDENTIFIED BY GGATE_ADMIN;
GRANT CREATE SESSION, ALTER SESSION TO GGATE_ADMIN;
GRANT ALTER SYSTEM TO GGATE_ADMIN;
GRANT CONNECT, RESOURCE TO GGATE_ADMIN;
GRANT SELECT ANY DICTIONARY TO GGATE_ADMIN;
GRANT FLASHBACK ANY TABLE TO GGATE_ADMIN;
GRANT SELECT ANY TABLE TO GGATE_ADMIN;
GRANT SELECT ON DBA_CLUSTERS TO GGATE_ADMIN;
GRANT EXECUTE ON DBMS_FLASHBACK TO GGATE_ADMIN;
GRANT SELECT ANY TRANSACTION TO GGATE_ADMIN;

The following steps are only required for Integrated Capture Extract (Version 11.2.0.2 or higher):

EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE('GGATE_ADMIN');
GRANT SELECT ON SYS.V_$DATABASE TO GGATE_ADMIN;

The following steps are only required for Integrated Capture Extract (Version 11.2.0.1 or earlier):

EXEC DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE('GGATE_ADMIN');
GRANT BECOME USER TO GGATE_ADMIN;
GRANT SELECT ON SYS.V_$DATABASE TO GGATE_ADMIN;

Set up a TNS Entry for the source database in $ORACLE_HOME/network/admin/tnsnames.ora.
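
The following is a minimal sketch of such a TNS entry; the host name, port, and service name are placeholders based on the examples in this chapter and must match your source database listener:

DBORATEST =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = prim1-ol6-112.localdomain)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = DBORATEST))
  )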

How it works…

The preceding commands can be used to set up the GoldenGate user in the source database. The Integrated Capture Extract requires some additional privileges as it needs to interact with the database log mining server.

You will notice that in the previous commands, we have granted SELECT ANY TABLE to the GGATE_ADMIN user. In production environments where least-privilege policies are followed, it is quite unlikely that such a setup would be approved by the compliance team. In such cases, instead of granting this privilege, you can grant the SELECT privilege on the individual tables that are a part of the source replication configuration. You can use dynamic SQL to generate such commands.

In our example schema database, we can generate the commands for all tables owned by the user SCOTT as follows:

select 'GRANT SELECT ON '||owner||'.'||table_name||' to GGATE_ADMIN;' COMMAND from dba_tables where owner='SCOTT';
COMMAND
------------------------------------------------------------------
GRANT SELECT ON SCOTT.DEPT to GGATE_ADMIN;
GRANT SELECT ON SCOTT.EMP to GGATE_ADMIN;
GRANT SELECT ON SCOTT.BONUS to GGATE_ADMIN;
GRANT SELECT ON SCOTT.SALGRADE to GGATE_ADMIN;

There's more…

In this recipe we saw the steps required to set up the GoldenGate user in the database. The Extract process requires various privileges to be able to mine the changes from the redo data. At this stage it is worth discussing the two types of Extract processes and the differences between them.

The Classic Capture mode

The Classic Capture mode is the traditional Extract mode that has been around for a while. In this mode, GoldenGate reads the database redo logs (and archive logs for older transactions) to capture the DML changes occurring on the objects specified in the configuration files. For this, at the OS level, the GoldenGate user must be a member of the group that owns the database redo log files. If the redo logs of the source database are stored in an ASM diskgroup, this capture method reads them from there. This capture mode is available for other RDBMSs as well. However, there are some datatypes that are not supported in Classic Capture mode. One of the biggest limitations of the Classic Capture mode is its inability to read data from compressed tables/tablespaces.

The Integrated Capture mode

In the Integrated Capture mode, GoldenGate works directly with the database log mining server to receive the data changes in the form of logical change records (LCRs). An LCR is a message with a specific format that describes a database change. This mode does not require any special setup for databases using ASM, Transparent Data Encryption, or Oracle RAC. This feature is only available for databases on Version 11.2.0.3 or higher. This capture mode supports extracting data from source databases that use compression. It also supports various object types which were previously not supported by Classic Capture.

Integrated Capture can be configured in an online or downstream mode. In the online mode, the log miner database is configured in the source database itself. In the downstream mode, the log miner database is configured in a separate database which receives archive logs from the source database. This mode offloads the log mining load from the source database and is quite suitable for very busy production databases. If you want to use the Integrated Capture mode with a source database Version 11.2.0.2 or earlier, you must configure the Integrated Capture mode in downstream capture topology, and the downstream mining database must be on Version 11.2.0.3 or higher.

Tip

You will need to apply a Bundle Patch specified in MOS Note 1411356.1 for full support of the datatypes offered by Integrated Capture.

See also

  • Refer to the recipe, Setting up an Integrated Capture Extract process, later in this chapter and the recipe, Creating an Integrated Capture with a downstream database for compressed tables, in Chapter 7, Advanced Administration Tasks – I

Preparing the target database for GoldenGate setup


On the target side of the GoldenGate architecture, the Collector process receives the trail files shipped by the Extract/Datapump processes from the source environment and writes them locally on the target server. For each row that gets updated in the source database, the Extract process generates a record and writes it to the trail file. The Replicat process in the target environment reads these trail files and applies the changes to the target database using native SQL calls. To be able to apply these changes to the target tables, GoldenGate requires a database user to be set up in the target database with some privileges on the target objects. The Replicat process also needs to maintain its status in a table in the target database so that it can resume in case of any failures. This recipe explains the steps required to set up a GoldenGate user in the target database.

Getting ready

You must select a database user ID for the target database setup, for example, GGATE_ADMIN. The GoldenGate user also requires a table in the target database to maintain its status, so it needs some quota assigned on a tablespace to be able to create this table. You might want to create a separate tablespace, grant quota on it, and assign it as the default for the GGATE_ADMIN user. We will assign a GGATE_ADMIN_DAT tablespace to the GGATE_ADMIN user in this recipe.
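
A minimal sketch of creating such a tablespace is shown here; the datafile location and size are placeholders that you would adjust for your environment:

sqlplus sys/**** as sysdba
CREATE TABLESPACE GGATE_ADMIN_DAT
  DATAFILE '/u01/app/oracle/oradata/TGORTEST/ggate_admin_dat01.dbf' SIZE 500M AUTOEXTEND ON;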

How to do it…

Run the following steps in the target database to set up a GoldenGate user:

sqlplus sys/**** as sysdba
CREATE USER GGATE_ADMIN IDENTIFIED BY GGATE_ADMIN DEFAULT TABLESPACE GGATE_ADMIN_DAT;
ALTER USER GGATE_ADMIN QUOTA UNLIMITED ON GGATE_ADMIN_DAT;
GRANT CREATE SESSION, ALTER SESSION TO GGATE_ADMIN;
GRANT CONNECT, RESOURCE TO GGATE_ADMIN;
GRANT SELECT ANY DICTIONARY TO GGATE_ADMIN;
GRANT SELECT ANY TABLE TO GGATE_ADMIN;
GRANT INSERT ANY TABLE, UPDATE ANY TABLE, DELETE ANY TABLE TO GGATE_ADMIN;
GRANT CREATE TABLE TO GGATE_ADMIN;

How it works…

You can use these commands to set up a GoldenGate user in the target database. The GoldenGate user in the target database requires access to the database plus update/insert/delete privileges on the target tables to apply the changes. In the preceding commands, we have granted the SELECT ANY TABLE, UPDATE ANY TABLE, DELETE ANY TABLE, and INSERT ANY TABLE privileges to the GGATE_ADMIN user. However, if your organization follows a least-privilege policy for production databases, you will need to grant these privileges individually on the replicated target tables. If the number of replicated target tables is large, you can use dynamic SQL to generate such commands. In our example database, we can generate these commands for the SCOTT schema objects as follows:

select 'GRANT SELECT, INSERT, UPDATE, DELETE ON '||owner||'.'||table_name||' to GGATE_ADMIN;' COMMAND from dba_tables where owner='SCOTT';
COMMAND
------------------------------------------------------------------
GRANT SELECT, INSERT, UPDATE, DELETE ON SCOTT.DEPT to GGATE_ADMIN;
GRANT SELECT, INSERT, UPDATE, DELETE ON SCOTT.EMP to GGATE_ADMIN;
GRANT SELECT, INSERT, UPDATE, DELETE ON SCOTT.SALGRADE to GGATE_ADMIN;
GRANT SELECT, INSERT, UPDATE, DELETE ON SCOTT.BONUS to GGATE_ADMIN;

There's more…

The replicated changes are applied to the target database on a row-by-row basis. The Replicat process needs to maintain its status so that it can be resumed in case of failure. The checkpoints can be maintained in a database table or in a file on disk. The best practice is to create a Checkpoint table and use it to maintain the Replicat status. This also enhances performance, as the Replicat then applies the changes to the database using an asynchronous COMMIT with the NOWAIT option. If you do not use a Checkpoint table, the Replicat maintains the checkpoint in a file and applies the changes to the database using a synchronous COMMIT with the WAIT option.
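
A minimal sketch of creating a Checkpoint table from GGSCI follows; the table name and the use of the GLOBALS file are assumptions for illustration, and the actual creation step is also covered in the Setting up a Replicat process recipe later in this chapter:

GGSCI> EDIT PARAMS ./GLOBALS
-- add the following line to the GLOBALS file and save it
CHECKPOINTTABLE GGATE_ADMIN.GG_CHECKPOINT

GGSCI> DBLOGIN USERID GGATE_ADMIN@TGORTEST, PASSWORD ******
GGSCI> ADD CHECKPOINTTABLE GGATE_ADMIN.GG_CHECKPOINT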

Setting up a Manager process


The Manager process is a key process of a GoldenGate configuration. This process is the root of the GoldenGate instance and it must exist at each GoldenGate site. It must be running on each system in the GoldenGate configuration before any other GoldenGate processes can be started. This recipe explains how to create a GoldenGate Manager process in a GoldenGate configuration.

Getting ready

Before setting up a Manager process, you must have installed GoldenGate binaries. A Manager process requires a port number to be defined in its configuration. Ensure that you have chosen the port to be used for the GoldenGate manager instance that you are going to set up.

How to do it…

In order to configure a Manager process, you need to create a configuration file. The following are the steps to create a parameter file for the Manager process:

  1. From the GoldenGate Home directory, run the GoldenGate software command line interface (GGSCI):

    ./ggsci
    
  2. Edit the Manager process configuration as follows:

    EDIT PARAMS MGR
    
  3. This command will open an editor window. You need to add the manager configuration parameters in this window as follows:

    PORT <PORT NO>
    DYNAMICPORTLIST <specification>
    AUTOSTART ER*
    AUTORESTART ER*, RETRIES 3, WAITMINUTES 3
    PURGEOLDEXTRACTS <specification>
    

    For example:

    PORT 7809
    DYNAMICPORTLIST 7810-7820, 7830
    AUTOSTART ER t*
    AUTORESTART ER t*, RETRIES 4, WAITMINUTES 4
    PURGEOLDEXTRACTS /u01/app/ggate/dirdat/tt*, USECHECKPOINTS, MINKEEPHOURS 2
    
  4. Save the file and exit the editor window.

  5. Start the Manager process by using the following code:

    GGSCI> START MGR
    

How it works…

All GoldenGate processes use a parameter file for configuration. In these files various parameters are defined. These parameters control the way the process functions. The steps to create the Manager process are broadly described as follows:

  1. Log in to the GoldenGate command line interface.

  2. Create a parameter file.

  3. Start the Manager process.

  4. When you start the Manager process you will get the following output:

    GGSCI (prim1-ol6-112.localdomain) 2> start mgr
    Manager started.
    

    You can check the status of the Manager process using the status command as follows:

    GGSCI (prim1-ol6-112.localdomain) 3> status mgr
    Manager is running (IP port prim1-ol6-112.localdomain.7809).
    

The Manager process performs the following administrative and resource management functions:

  • Monitor and restart Oracle GoldenGate processes

  • Issue threshold reports, for example, when throughput slows down or when synchronization latency increases

  • Maintain trail files and logs

  • Report errors and events

  • Receive and route requests from the user interface

The preceding parameters specified are defined as follows:

  • Port no: This is the port used by the Manager process itself.

  • Dynamic port list: Range of ports to be used by other processes in the GoldenGate instance. For example, Extract, Datapump, Replicat, and Collector processes.

  • Autostart ER*: To start the GoldenGate processes when the Manager process starts.

  • Autorestart ER*: To restart the GoldenGate process in case it fails. The RETRIES option controls the maximum number of restart attempts and the WAITMINUTES option controls the wait interval between each restart attempt in minutes.

  • Purgeoldextracts: To configure the automatic maintenance of GoldenGate trail files. The deletion criteria are specified using MINKEEPHOURS/MINKEEPFILES. The GoldenGate Manager process deletes the old trail files that fall outside these criteria.

There's more…

The Manager process can be configured to perform some more administrative tasks. The following are some other key parameters that can be added to the Manager process configuration (a short sample fragment follows the list):

  • STARTUPVALIDATIONDELAY (secs): Use this parameter to set a delay in seconds after which the Manager process validates that its managed processes have started, following its own startup.

  • LAGREPORTMINUTES/LAGREPORTHOURS: The Manager process writes the lag information of a process to its report file. This parameter controls the interval after which the Manager process performs this function.
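
The following is a minimal sketch of how these parameters might appear in the Manager parameter file; the values are placeholders only:

PORT 7809
STARTUPVALIDATIONDELAY 5
LAGREPORTMINUTES 30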

Setting up a Classic Capture Extract process


A GoldenGate Classic Capture Extract process runs on the source system. This process can be configured for initially loading the source data and for continuous replication. This process reads the redo logs in the source database and looks for changes in the tables that are defined in its configuration file. These changes are then written into a buffer in the memory. When the extract reads a commit command in the redo logs, the changes for that transaction are then flushed to the trail files on disk. In case it encounters a rollback statement for a transaction in the redo log, it discards the changes from the memory. This type of Extract process is available on all platforms which GoldenGate supports. This process cannot read the changes for compressed objects. In this recipe you will learn how to set up a Classic Capture process in a GoldenGate instance.

Getting ready

Before adding the Classic Capture Extract process, ensure that you have completed the following steps in the source database environment:

  1. Enabled database minimum supplemental logging.

  2. Enabled supplemental logging for tables to be replicated.

  3. Set up a manager instance.

  4. Created a directory for the source trail files.

  5. Decided on a two-letter prefix for naming the source trail files.

How to do it…

The following are the steps to configure a Classic Capture Extract process in the source database:

  1. From the GoldenGate Home directory, run the GoldenGate software command line interface (GGSCI) as follows:

    ./ggsci
    
  2. Edit the Extract process configuration as follows:

    EDIT PARAMS EGGTEST1
    
  3. This command will open an editor window. You need to add the extract configuration parameters in this window as follows:

    EXTRACT <EXTRACT_NAME>
    USERID <SOURCE_GG_USER>@SOURCEDB, PASSWORD ******
    EXTTRAIL <specification>
    TABLE <replicated_table_specification>;
    

    For example:

    EXTRACT EGGTEST1
    USERID GGATE_ADMIN@DBORATEST, PASSWORD ******
    EXTTRAIL /u01/app/ggate/dirdat/st
    TABLE scott.*;
    
  4. Save the file and exit the editor window.

  5. Add the Classic Capture Extract to the GoldenGate instance as follows:

    ADD EXTRACT <EXTRACT_NAME>, TRANLOG, <BEGIN_SPEC>
    

    For example:

    ADD EXTRACT EGGTEST1, TRANLOG, BEGIN NOW
    
  6. Add the local trail to the Classic Capture configuration as follows:

    ADD EXTTRAIL /u01/app/ggate/dirdat/st, EXTRACT EGGTEST1
    
  7. Start the Classic Capture Extract process as follows:

    GGSCI> START EXTRACT EGGTEST1
    

How it works…

In the preceding steps we have configured a Classic Capture Extract process to replicate all tables of the SCOTT user. For this we first create an Extract process parameter file and add the configuration parameters to it. Once the parameter file is created, we then add the Extract process to the source GoldenGate instance. This is done using the ADD EXTRACT command in step 5. In step 6, we associate a local trail file with the Extract process, and then we start it. When you start the Extract process you will see the following output:

GGSCI (prim1-ol6-112.localdomain) 11> start extract EGGTEST1
Sending START request to MANAGER ...
EXTRACT EGGTEST1 starting

You can check the status of the Extract process using the following command:

GGSCI (prim1-ol6-112.localdomain) 10> status extract EGGTEST1
EXTRACT EGGTEST1: STARTED

There's more…

There are a few additional parameters that can be specified in the extract configuration, as follows (a short sample fragment follows the list):

  • EOFDELAY secs: This parameter controls how often GoldenGate should check the source database redo logs for new data

  • MEGABYTES <N>: This parameter controls the size of the extract trail file

  • DYNAMICRESOLUTION: Use this parameter to enable extract to build the metadata for each table when the extract encounters its changes for the first time.
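
For illustration, the following is a minimal sketch; the values are placeholders only. EOFDELAY and DYNAMICRESOLUTION go into the Extract parameter file, whereas MEGABYTES is specified when the local trail is added:

EOFDELAY 5
DYNAMICRESOLUTION

GGSCI> ADD EXTTRAIL /u01/app/ggate/dirdat/st, EXTRACT EGGTEST1, MEGABYTES 100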

If your source database is a very busy OLTP production system and you cannot afford to add the additional load of the GoldenGate processes on it, you can offload GoldenGate processing to another server by adding some extra configuration. You will need to configure the source database to ship the redo logs to a standby site and set up a GoldenGate Manager instance on that server. The Extract processes will be configured to read from the archived logs on the standby system. For this you specify an additional parameter as follows:

TRANLOGOPTIONS ARCHIVEDLOGONLY ALTARCHIVEDLOGDEST <path>

Tip

If you are using Classic Capture in ALO mode for the source database using ASM, you must store the archive log files on the standby server outside ASM to allow Classic Capture Extract to read them.

See also

  • The recipe, Configuring an Extract process to read from an Oracle ASM instance and the recipe, Setting up a GoldenGate replication with multiple process groups in Chapter 2, Setting up GoldenGate Replication

Setting up an Integrated Capture Extract process


Integrated Capture is a new form of GoldenGate Extract process which works directly with the database log mining server to receive the data changes in the form of LCRs. This functionality is based on the Oracle Streams technology. For this, the GoldenGate Admin user requires access to the log miner dictionary objects. This Capture mode supports extracting data from the source databases using compression. It also supports some object types that are not supported by the Classic Capture. In this recipe, you will learn how to set up an Integrated Capture process in a GoldenGate instance.

Getting ready

Before adding the Integrated Capture Extract, ensure that you have completed the following steps in the source database environment:

  1. Enabled database minimum supplemental logging.

  2. Enabled supplemental logging for tables to be replicated.

  3. Set up a manager instance.

  4. Created a directory for source trail files.

  5. Decided on a two-letter prefix for naming the source trail files.

  6. Created a GoldenGate Admin database user with extra privileges required for Integrated Capture in the source database.

How to do it…

You can follow the given steps to configure an Integrated Capture Extract process:

  1. From the GoldenGate Home directory, run the GoldenGate software command line interface (GGSCI) as follows:

    ./ggsci
    
  2. Edit the Extract process configuration as follows:

    EDIT PARAMS EGGTEST1
    
  3. This command will open an editor window. You need to add the extract configuration parameters in this window as follows:

    EXTRACT <EXTRACT_NAME>
    USERID <SOURCE_GG_USER>@SOURCEDB, PASSWORD ******
    TRANLOGOPTIONS MININGUSER <MINING_DB_USER>@MININGDB, &
    MININGPASSWORD *****
    EXTTRAIL <specification>
    TABLE <replicated_table_specification>;
    

    For example:

    EXTRACT EGGTEST1
    USERID GGATE_ADMIN@DBORATEST, PASSWORD ******
    TRANLOGOPTIONS MININGUSER OGGMIN@MININGDB, &
    MININGPASSWORD *****
    EXTTRAIL /u01/app/ggate/dirdat/st
    TABLE scott.*;
    
  4. Save the file and exit the editor window.

  5. Register the Integrated Capture Extract process to the database as follows:

    DBLOGIN USERID <SOURCE_GG_USER>@SOURCEDB, PASSWORD ******
    MININGDBLOGIN USERID <MININGUSER>@MININGDB, PASSWORD ******
    REGISTER EXTRACT <EXTRACT_NAME> DATABASE
    
  6. Add the Integrated Capture Extract to the GoldenGate instance as follows:

    ADD EXTRACT <EXTRACT_NAME>, INTEGRATED TRANLOG, <BEGIN_SPEC>
    

    For example:

    ADD EXTRACT EGGTEST1, INTEGRATED TRANLOG, BEGIN NOW
    
  7. Add the local trail to the Integrated Capture configuration as follows:

    ADD EXTTRAIL /u01/app/ggate/dirdat/st, EXTRACT EGGTEST1
    
  8. Start the Integrated Capture Extract process as follows:

    GGSCI> START EXTRACT EGGTEST1
    

How it works…

The steps for configuring an Integrated Capture process are broadly the same as the ones for the Classic Capture process. We first create a parameter file in steps 1 to 4. In step 5, we register the Extract process with the database. In step 6, we add the extract to the GoldenGate instance; in step 7, we add a local extract trail file; and in step 8, we start the Extract process.

When you start the Extract process you will see the following output:

GGSCI (prim1-ol6-112.localdomain) 11> start extract EGGTEST1
Sending START request to MANAGER ...
EXTRACT EGGTEST1 starting

You can check the status of the Extract process using the following command:

GGSCI (prim1-ol6-112.localdomain) 10> status extract EGGTEST1
EXTRACT EGGTEST1: RUNNING

As described earlier, an Integrated Capture process can be configured with the mining dictionary in the source database or in a separate database called a downstream mining database. When you configure the Integrated Capture Extract process in the downstream mining database mode, you need to specify the following parameter in the extract configuration file:

TRANLOGOPTIONS MININGUSER OGGMIN@MININGDB, MININGPASSWORD *****

You will also need to connect to MININGDB using MININGUSER before registering the Extract process:

MININGDBLOGIN USERID <MININGUSER>@MININGDB, PASSWORD ******

This mining user has to be set up in the same way as the GoldenGate Admin user is set up in the source database.
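
As a rough sketch, and assuming the same style of grants shown earlier in the Preparing the source database for GoldenGate setup recipe, the setup of the mining user on the downstream mining database might look like the following; the user name is a placeholder only, and the exact privilege-granting procedure varies by version as described in that recipe:

sqlplus sys/**** as sysdba
CREATE USER OGGMIN IDENTIFIED BY OGGMIN;
GRANT CREATE SESSION, ALTER SESSION TO OGGMIN;
GRANT SELECT ANY DICTIONARY TO OGGMIN;
EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE('OGGMIN');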

Tip

If you want to use Integrated Capture mode with a source database which is running on Oracle database Version 11.2.0.2 or earlier, you must configure the Integrated Capture process in the downstream mining database mode and the downstream database must be on Version 11.2.0.3 or higher.

There's more…

Some additional parameters that should be added to the extract configuration are as follows:

  • TRANLOGOPTIONS INTEGRATEDPARAMS: Use this parameter to control how much memory you want to allocate to the log miner dictionary. This memory is allocated out of the Streams pool in the SGA:

    TRANLOGOPTIONS INTEGRATEDPARAMS (MAX_SGA_SIZE 164)
    
  • MEGABYTES <N>: This parameter controls the size of the extract trail file.

  • DYNAMICRESOLUTION: Use this parameter to enable extract to build the metadata for each table when the extract encounters its changes for the first time.


Setting up a Datapump process


Datapumps are secondary Extract processes which exist only in GoldenGate source environments. These are optional processes. When the Datapump process is not configured, the Extract process does the job of extracting and transferring the data to the target environment. When the Datapump process is configured, it relieves the main Extract process from the task of transferring the data to the target environment. The Extract process can then focus solely on extracting the changes from the source database redo and writing them to local trail files.

Getting ready

Before adding the Datapump extract, you must have a manager instance running. You should have added the main extract and a local trail location to the instance configuration. You will also need the target environment details, for example, hostname, manager port no., and the remote trail file location.

How to do it…

Just like the other GoldenGate processes, the Datapump process requires a parameter file. The following are the steps to configure a Datapump process in a GoldenGate source environment:

  1. From the GoldenGate Home, run the GoldenGate Software Command Line Interface (GGSCI) as follows:

    ./ggsci
    
  2. Edit the Datapump process configuration as follows:

    EDIT PARAMS PGGTEST1
    
  3. This command will open an editor window. You need to add the Datapump configuration parameters in this window as follows:

    EXTRACT <DATAPUMP_NAME>
    USERID <SOURCE_GG_USER>@SOURCEDB, PASSWORD ******
    RMTHOST <HOSTNAME_IP_TARGET_SYSTEM>, MGRPORT <TARGET_MGRPORT>
    RMTTRAIL <specification>
    TABLE <replicated_table_specification>;
    

    For example:

    EXTRACT PGGTEST1
    USERID GGATE_ADMIN@DBORATEST, PASSWORD ******
    RMTHOST stdby1-ol6-112.localdomain, MGRPORT 7809
    RMTTRAIL /u01/app/ggate/dirdat/rt
    TABLE scott.*;
    
  4. Save the file and exit the editor window.

  5. Add the Datapump extract to the GoldenGate instance as follows:

    ADD EXTRACT PGGTEST1, EXTTRAILSOURCE /u01/app/ggate/dirdat/tt
    
  6. Add the remote trail to the Datapump configuration as follows:

    ADD RMTTRAIL /u01/app/ggate/dirdat/rt, EXTRACT PGGTEST1
    
  7. Start the Datapump process as follows:

    GGSCI> START EXTRACT PGGTEST1
    

How it works…

Once you have added the parameters to the Datapump parameter file and saved it, you need to add the process to the GoldenGate instance. This is done using the ADD EXTRACT command in step 5. In step 6, we associate a remote trail with the Datapump process, and in step 7 we start the Datapump process. When you start the Datapump process you will see the following output:

GGSCI (prim1-ol6-112.localdomain) 10> start extract PGGTEST1
Sending START request to MANAGER ...
EXTRACT PGGTEST1 starting

You can check the status of the Datapump process using the following command:

GGSCI (prim1-ol6-112.localdomain) 10> status extract PGGTEST1
EXTRACT PGGTEST1: RUNNING

Tip

If you are using virtual IPs in your environment for the target host, always configure the virtual IP in the Datapump RMTHOST configuration. This virtual IP should also be resolvable through DNS. This will ensure automatic discovery while configuring monitoring for GoldenGate configurations.
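
For example, the RMTHOST entry would then reference the DNS name of the virtual IP rather than a physical host name; the name shown below is a placeholder only:

RMTHOST ggate-target-vip.localdomain, MGRPORT 7809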

There's more…

The following are some additional parameters/options that can be specified in the datapump configuration (a sample passthrough configuration follows the list):

  • RMTHOSTOPTIONS: Using this option for the RMTHOST parameter, you can configure additional features such as encryption and compression for trail file transfers.

  • EOFDELAY secs: This parameter controls how often GoldenGate should check the local trail file for new data.

  • MEGABYTES <N>: This parameter controls the size of a remote trail file.

  • PASSTHRU: This parameter is used to avoid lookups in the database or in definition files when the Datapump is not performing any filtering or transformations.

  • DYNAMICRESOLUTION: Use this parameter to enable extract to build the metadata for each table when the extract encounters its changes for the first time.
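
A minimal sketch of a passthrough Datapump parameter file, assuming that no filtering or transformation is performed, might look like the following; the host, paths, and group name are the ones used in the earlier example and are placeholders only:

EXTRACT PGGTEST1
PASSTHRU
RMTHOST stdby1-ol6-112.localdomain, MGRPORT 7809
RMTTRAIL /u01/app/ggate/dirdat/rt
TABLE scott.*;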

See also

  • Refer to the recipes, Encrypting database user passwords and Encrypting the trail files, in Chapter 2, Setting up GoldenGate Replication

Setting up a Replicat process


The Replicat processes are the delivery processes which are configured in the target environment. These processes read the changes from the trail files on the target system and apply them to the target database objects. If there are any transformations defined in the replicat configuration, the Replicat process takes care of those transformations as well. You can define the mapping information in the replicat configuration. The Replicat process will then apply the changes to the target database based on the mappings.

Getting ready

Before setting up replicat in the target system, you must have configured and started the Manager process.

How to do it…

Follow these steps to configure a Replicat process in the target environment:

  1. From the GoldenGate Home directory, run the GoldenGate software command line interface (GGSCI) as follows:

    ./ggsci
    
  2. Log in to the target database through GGSCI as shown in the following code:

    GGSCI> DBLOGIN USERID <USER> PASSWORD <PW>
    
  3. Add the Checkpoint table as shown in the following code:

    GGSCI> ADD CHECKPOINTTABLE <SCHEMA.TABLE>
    
  4. Edit the Replicat process configuration as shown in the following code:

    GGSCI> EDIT PARAMS RGGTEST1
    
  5. This command will open an editor window. You need to add the replicat configuration parameters in this window as shown in the following code:

    REPLICAT <REPLICAT_NAME>
    USERID <TARGET_GG_USER>@TARGETDB, PASSWORD ******
    DISCARDFILE <DISCARDFILE_SPEC>
    MAP <mapping_specification>;
    

    For example:

    REPLICAT RGGTEST1
    USERID GGATE_ADMIN@TGORTEST, PASSWORD ******
    DISCARDFILE /u01/app/ggate/dirrpt/RGGTEST1.dsc, APPEND, MEGABYTES 500
    MAP SCOTT.*, TARGET SCOTT.*;
    
  6. Save the file and exit the editor.

  7. Add the replicat to the GoldenGate instance as shown in the following code:

    GGSCI> ADD REPLICAT <REPLICAT_NAME>, EXTTRAIL <PATH>
    

    For example:

    ADD REPLICAT RGGTEST1, EXTTRAIL /u01/app/ggate/dirdat/rt
    
  8. Start the Replicat process as shown in the following code:

    GGSCI> START REPLICAT <REPLICAT>
    

How it works…

In the preceding procedure we first create a Checkpoint table in the target database. As the name suggests, the Replicat process uses this table to maintain its checkpoints. In case the Replicat process crashes and is restarted, it can read this Checkpoint table and start applying the changes from the point where it left off.

Once you have added a Checkpoint table, you need to create a parameter file for the Replicat process. After the parameter file is created, the Replicat is added to the GoldenGate instance. At this point, we are ready to start the Replicat process and apply the changes to the target database. You should see an output similar to the following:

GGSCI (stdby1-ol6-112.localdomain) 10> start replicat RGGTEST1
Sending START request to MANAGER ...
REPLICAT RGGTEST1 starting

You can check the status of the Replicat process using the following command:

GGSCI (stdby1-ol6-112.localdomain) 10> status replicat RGGTEST1
REPLICAT RGGTEST1: RUNNING

There's more…

The following are the common parameters that are specified in the replicat configuration (a sample parameter file combining some of them follows the list):

  • DISCARDFILE: This parameter is used to specify the name of the discard file. If the Replicat process is unable to apply any changes to the target database due to any errors, it writes the record to the discard file.

  • EOFDELAY secs: This parameter controls how often GoldenGate should check the local trail file for new data.

  • REPORTCOUNT: This parameter controls how often the Replicat process writes its progress to the report file.

  • BATCHSQL: This parameter is used to enable the BATCHSQL mode, in which the Replicat process groups similar SQL statements into batches to apply them more efficiently.

  • ASSUMETARGETDEFS: This parameter tells the Replicat process to assume that the source and target database object structures are the same.
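
For illustration, a minimal sketch combining these parameters with the earlier example follows; the values are placeholders only:

REPLICAT RGGTEST1
USERID GGATE_ADMIN@TGORTEST, PASSWORD ******
ASSUMETARGETDEFS
BATCHSQL
REPORTCOUNT EVERY 10 MINUTES, RATE
DISCARDFILE /u01/app/ggate/dirrpt/RGGTEST1.dsc, APPEND, MEGABYTES 500
MAP SCOTT.*, TARGET SCOTT.*;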

See also

  • Read the recipe, Setting up GoldenGate replication between tables with different structures using defgen, in Chapter 2, Setting Up GoldenGate Replication

  • Refer to the recipe, Steps to configure a BATCHSQL mode, in Chapter 6, Monitoring, Tuning, and Troubleshooting GoldenGate, for further information


Key benefits

  • Various recipes that will help you to set up Goldengate in various environments
  • Practical examples of Goldengate management tasks
  • Step by step instructions with various examples and scripts

Description

Oracle Goldengate 11g Complete Cookbook is your complete guide to all aspects of Goldengate administration. The recipes in this book will teach you how to set up Goldengate configurations for simple and complex environments requiring various filtering and transformations. It also covers various aspects of tuning and troubleshooting the replication setups using exception handling, custom fields, and the logdump utility. The book begins by explaining some basic tasks like installation and process group setup. You will then be introduced to some further topics including DDL replication and various options to perform initial loads. You will then learn some advanced administration tasks such as multi-master replication setup and conflict resolution. Further recipes cover cross-platform replication and high availability options for Goldengate.

Who is this book for?

Oracle Goldengate 11g Complete Cookbook is aimed at Database Administrators, Architects, and Middleware Administrators who are keen to know more about Oracle Goldengate. Whether you are handling Goldengate environments on a day-to-day basis, or using it just for migration, this book provides the necessary information required to successfully complete your administration tasks. The reader is expected to have some knowledge of Oracle databases.

What you will learn

  • Configure DML and DDL Goldengate replication
  • Tune and Troubleshoot Goldengate replication
  • Cross Platform replication using Goldengate
  • Monitor Goldengate Replication using OEM 12c
  • What to do when GoldenGate replication breaks
  • Reverse changes applied by Goldengate
  • High Availability Setup for Goldengate
  • Learn about GoldenGate Veridata and GoldenGate Director
  • Migrate Oracle Streams environment to Goldengate

Product Details

Publication date : Sep 26, 2013
Length: 362 pages
Edition : 1st
Language : English
ISBN-13 : 9781849686143
Vendor : Oracle


Table of Contents

9 Chapters:

  1. Installation and Initial Setup
  2. Setting up GoldenGate Replication
  3. DDL Replication and Initial Load
  4. Mapping and Manipulating Data
  5. Oracle GoldenGate High Availability
  6. Monitoring, Tuning, and Troubleshooting GoldenGate
  7. Advanced Administration Tasks – I
  8. Advanced Administration Tasks – Part II
  9. GoldenGate Veridata, Director, and Monitor

Customer reviews

Rating: 3.7 out of 5 (9 ratings)
5 star: 33.3%, 4 star: 33.3%, 3 star: 11.1%, 2 star: 11.1%, 1 star: 11.1%

Amazon Customer, Apr 19, 2016 (5 stars):
Fabulous book- very user friendly! Love how well concepts are broken down for easy understanding :)
BIKRAMJIT BHULLAR, Apr 04, 2014 (5 stars):
Majority of the cookbooks cater to seasoned programmers so they can quickly look up a solution without having to reinvent the wheel. However, this particular cookbook can also be used by a less experienced administrator thanks to its "How it works" and "How to do it" methodology. After learning Oracle GoldenGate, I would read the problem statement and then try to implement the solution by myself. I would compare my solution to that of the author's, or in certain cases, cheat by copying the given solution. I found it to be a complete guide to the concepts, architecture, configuration, maintenance, troubleshooting, and performance considerations for Oracle GoldenGate. Instead of navigating through links and multiple documents in the Oracle documentation, I was able to get the necessary operational details in this one book. A very helpful guide for anyone who wants to set up replication using Oracle Goldengate or perform maintenance, monitoring, and troubleshooting of an Oracle GoldenGate environment. I would recommend this book to anybody who is interested in mastering Oracle GoldenGate.
Amazon Customer, May 09, 2016 (5 stars):
As an Oracle Consultant, finding books that are clear and concise and not based on entry-level descriptions is rare, so it is nice to come across a book that pays homage to the basics but is not afraid to delve into subject matter that is a bit beyond the obvious/initial concepts. Ankur's book delivers stage-by-stage guides to building a strong working knowledge of Oracle Golden Gate [OGG], showing the depth and flexibility that can be achieved using OGG as part of your replication strategy [or by extension, your DR strategy]. The book is well laid out; each chapter builds on the previous whilst also allowing you to jump directly to areas of interest that go beyond the initial stumbling blocks encountered whilst getting to grips with OGG. A strong recommendation for anyone who has been aware of OGG but not yet gotten truly involved with large-scale, complex replication projects. Anyone upgrading from a Streams environment to OGG should definitely look at the advanced administration chapters for some well-worked examples of how to approach an upgrade.
Amazon Customer, Apr 10, 2016 (4 stars):
The best thing I like about the book is its organization and easy-to-find sections. The section I first used in the book was the GoldenGate HA chapter. I had referred to a couple of other reference documents but couldn't get it right at the first go. I found the instructions by Ankur pretty close to what I wanted to do. Ever since, I have used the book as a ready reckoner for anything GoldenGate related. Thanks Ankur.
Tomas Frastia Dec 21, 2013
Full star icon Full star icon Full star icon Full star icon Empty star icon 4
I really like books that have a simple and straightforward description of what and how i.e. exactly how to install, how to set it and what it means, etc.and in my personal opinion, this book fully meets that definition. The Oracle GoldenGate 11g Complete Cookbook is intended primarily for people who have very good administration and development experience with Oracle databases. The book is very easy to read with well written, examples are very well written and explained exactly how to proceed. I thing with this book you get very greate introduction to Oracle Gate, how to setup datapums, how to use the data filtering and mapping, how to store the transaction history etc.. Personally, the most interesting chapters for me was: “5. Oracle GoldenGate High Availability” – where are described how to create a highly available GoldenGate configuration with different file systems e.g. ocfs2, dbfx, acfs, etc. “7.Advanced Administration Task-1″ – where are described how to use the revese utility, how to a downstream database wth Integrated Capture etc. I was surprised (but praise for it), that the authors of this book, do not forget, how to monitor the GolgenGate with OEM12c and described it in the last chapter. So my conclusion is that if you want to use Oracle Golden Gate, or if you already use this tool, then you must have this book.
Amazon Verified review

FAQs

What is the delivery time and cost of print books?

Shipping Details

USA:


Economy: Delivery to most addresses in the US within 10-15 business days

Premium: Trackable Delivery to most addresses in the US within 3-8 business days

UK:

Economy: Delivery to most addresses in the U.K. within 7-9 business days.
Shipments are not trackable

Premium: Trackable delivery to most addresses in the U.K. within 3-4 business days!
Add one extra business day for deliveries to Northern Ireland and Scottish Highlands and islands

EU:

Premium: Trackable delivery to most EU destinations within 4-9 business days.

Australia:

Economy: Can deliver to P. O. Boxes and private residences.
Trackable service with delivery to addresses in Australia only.
Delivery time ranges from 7-9 business days for VIC and 8-10 business days for Interstate metro
Delivery time is up to 15 business days for remote areas of WA, NT & QLD.

Premium: Delivery to addresses in Australia only
Trackable delivery to most P. O. Boxes and private residences in Australia within 4-5 days based on the distance to a destination following dispatch.

India:

Premium: Delivery to most Indian addresses within 5-6 business days

Rest of the World:

Premium: Countries in the American continent: Trackable delivery to most countries within 4-7 business days

Asia:

Premium: Delivery to most Asian addresses within 5-9 business days

Disclaimer:
All orders received before 5 PM U.K. time will start printing from the next business day, so the estimated delivery times start from the next day as well. Orders received after 5 PM U.K. time (in our internal systems) on a business day, or at any time on the weekend, will begin printing on the second business day after the order. For example, an order placed at 11 AM today will begin printing tomorrow, whereas an order placed at 9 PM tonight will begin printing the day after tomorrow.


Unfortunately, due to several restrictions, we are unable to ship to the following countries:

  1. Afghanistan
  2. American Samoa
  3. Belarus
  4. Brunei Darussalam
  5. Central African Republic
  6. The Democratic Republic of Congo
  7. Eritrea
  8. Guinea-Bissau
  9. Iran
  10. Lebanon
  11. Libyan Arab Jamahiriya
  12. Somalia
  13. Sudan
  14. Russian Federation
  15. Syrian Arab Republic
  16. Ukraine
  17. Venezuela
What is a customs duty/charge?

Customs duties are charges levied on goods when they cross international borders. They are taxes imposed on imported goods. These duties are charged by special authorities and bodies created by local governments and are meant to protect local industries, economies, and businesses.

Do I have to pay customs charges for the print book order?

Orders shipped to countries listed under the EU27 will not bear customs charges; these charges are paid by Packt as part of the order.

For the list of EU27 countries, see www.gov.uk/eu-eea.

For shipments to recipient countries outside the EU27, a customs duty or localized tax may be applicable. It is charged by the recipient country, must be paid by the customer, and is not included in the shipping charges applied to the order.

How do I know my customs duty charges?

The amount of duty payable varies greatly depending on the imported goods, the country of origin, and several other factors, such as the total invoice amount, dimensions such as weight, and other criteria applicable in your country.

For example:

  • If you live in Mexico, and the declared value of your ordered items is over $50, you will have to pay an additional import tax of 19%, which will be $9.50, to the courier service in order to receive your package.
  • Whereas if you live in Turkey, and the declared value of your ordered items is over €22, you will have to pay an additional import tax of 18%, which will be €3.96, to the courier service in order to receive your package.
How can I cancel my order?

Cancellation Policy for Published Printed Books:

You can cancel any order within 1 hour of placing it. Simply contact customercare@packt.com with your order details or payment transaction ID. If your order has already started the shipment process, we will do our best to stop it. However, if it is already on its way to you, then when you receive it you can contact us at customercare@packt.com using the returns and refunds process.

Please understand that Packt Publishing cannot provide refunds or cancel any order except for the cases described in our Return Policy (i.e. where Packt Publishing agrees to replace your printed book because it arrives damaged or with a material defect); otherwise, Packt Publishing will not accept returns.

What is your returns and refunds policy?

Return Policy:

We want you to be happy with your purchase from Packtpub.com. We will not hassle you with returning print books to us. If the print book you receive from us is incorrect, damaged, doesn't work, or is unacceptably late, please contact our Customer Relations Team at customercare@packt.com with the order number and issue details as explained below:

  1. If you ordered an eBook, Video, or Print Book incorrectly or accidentally, please contact our Customer Relations Team at customercare@packt.com within one hour of placing the order and we will replace or refund you the item cost.
  2. Sadly, if your eBook or Video file is faulty, or a fault occurs while the eBook or Video is being made available to you (i.e. during download), you should contact our Customer Relations Team at customercare@packt.com within 14 days of purchase, and they will be able to resolve this issue for you.
  3. You will have a choice of a replacement or a refund for the problem items (damaged, defective, or incorrect).
  4. Once the Customer Care Team confirms that you will be refunded, you should receive the refund within 10 to 12 working days.
  5. If you are only requesting a refund for one book from a multiple-item order, then we will refund you for the appropriate single item.
  6. Where the items were shipped under a free shipping offer, there will be no shipping costs to refund.

On the off chance your printed book arrives damaged or with a material defect, contact our Customer Relations Team at customercare@packt.com within 14 days of receipt of the book with appropriate evidence of the damage, and we will work with you to secure a replacement copy, if necessary. Please note that each printed book you order from us is individually made by Packt's professional book-printing partner on a print-on-demand basis.

What tax is charged?

Currently, no tax is charged on the purchase of any print book (subject to change based on the laws and regulations). A localized VAT fee is charged only to our European and UK customers on eBooks, Video and subscriptions that they buy. GST is charged to Indian customers for eBooks and video purchases.

What payment methods can I use?

You can pay with the following card types:

  1. Visa Debit
  2. Visa Credit
  3. MasterCard
  4. PayPal