Splitting large tables into row ranges across process groups
What if you have some large tables with a high data change rate within a source schema, and you cannot logically separate them from the remaining tables because of referential constraints? GoldenGate solves this problem by "splitting" the data within the same schema via the @RANGE function, which can be used in the Data Pump and Replicat configuration to distribute the transaction data across a number of parallel processes.
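As a sketch of what such a configuration might look like, the fragments below split one table across three Replicat groups using a FILTER clause with @RANGE. The schema, table, and group names (SRC.ORDERS, TGT.ORDERS, REP1 to REP3) are hypothetical; each Replicat's parameter file names its own range out of the total of three.

```
-- Data Pump (optional): pass all rows through; the split is done on the target.
TABLE SRC.ORDERS;

-- Replicat REP1 parameter file: applies rows hashed into range 1 of 3.
MAP SRC.ORDERS, TARGET TGT.ORDERS, FILTER (@RANGE (1, 3));

-- Replicat REP2 parameter file: range 2 of 3.
MAP SRC.ORDERS, TARGET TGT.ORDERS, FILTER (@RANGE (2, 3));

-- Replicat REP3 parameter file: range 3 of 3.
MAP SRC.ORDERS, TARGET TGT.ORDERS, FILTER (@RANGE (3, 3));
```

Together the three filters cover every row exactly once, so the table's workload is shared across the three Replicat processes.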
The Replicat process is typically the source of performance bottlenecks because, in its normal mode of operation, it is a single-threaded process that applies operations one at a time using regular DML. To leverage parallelism and enhance throughput, the more Replicats the better, subject to the number of CPUs and the memory available on the target system.
The RANGE function
The @RANGE function works by computing a hash value over the columns specified in its input (by default, the table's primary key) and using that value to assign each row to one of the configured ranges. Because the hash of a given key always yields the same range, all operations against a given row are processed by the same Replicat, preserving the order of operations for that row.