The ARCHIVELOG mode
Oracle lets you save filled redo log files to one or more offline destinations, improving the recoverability of your data by preserving a record of all transactions in case of a crash and reducing the possibility of data loss. The copy of a redo log file, containing the transactions made against your database and written to a different location, is known as an archived redo log (ARCHIVELOG) file, and the process of turning online redo log files into archived redo log files is called archiving.
Understanding the ARCHIVELOG mode
An archived redo log file is a physical copy of one of the filled members of a redo log group. Remember that redo log files are cyclical files that are overwritten by the Oracle database, and are only archived (a backup copy is made of the file before it is overwritten) when the database is in the ARCHIVELOG mode. Each archived redo log file includes all redo entries and the unique log sequence number of the redo log group it was copied from. To make this clearer: if you are multiplexing your redo log files (a minimum of two members per group is recommended), and your redo log group 1 contains two identical member files such as redolog_1a.rdo and redolog_1b.rdo, then the archiver process (ARCn) will only archive one of these member files, not both. If the redo log file redolog_1a.rdo becomes corrupted, the ARCn process will still be able to archive the identical surviving member redolog_1b.rdo. Taken together, the archived redo logs generated by the ARCn process contain a copy of every redo log group filled since you enabled archiving in your database.
When the database is running in the ARCHIVELOG mode, the LGWR process cannot reuse, and hence overwrite, a given redo log group until it has been archived. This ensures the recoverability of your data. The background ARCn processes automate the archiving operation, and the database will start multiple archiver processes as necessary (the default number of processes is four) to ensure that the archiving of filled redo log files does not fall behind.
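You can watch the archiver processes at work by querying the V$ARCHIVE_PROCESSES view; a sketch (exact column availability can vary slightly between Oracle versions):

```sql
-- List the state of the configured archiver (ARCn) processes
SELECT process, status, log_sequence, state
  FROM v$archive_processes
 WHERE status <> 'STOPPED';

-- Raise the number of archiver processes if archiving falls behind
ALTER SYSTEM SET log_archive_max_processes = 8 SCOPE=both;
```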
You can use archived redo logs to:
Recover a database
Update and keep a standby database in sync with a primary database
Get information about the history of a database using the LogMiner utility
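As a taste of the last point, here is a hedged sketch of mining one archived log with the DBMS_LOGMNR package (the file path is purely illustrative):

```sql
-- Point LogMiner at an archived log file (path is illustrative)
BEGIN
  DBMS_LOGMNR.ADD_LOGFILE(
    logfilename => '/DB/u02/backups/archivelog/orcl_10_1_123456789.arc',
    options     => DBMS_LOGMNR.NEW);
  DBMS_LOGMNR.START_LOGMNR(
    options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
END;
/

-- Review the mined redo: who changed what, and the SQL to redo or undo it
SELECT username, operation, sql_redo, sql_undo
  FROM v$logmnr_contents;

BEGIN
  DBMS_LOGMNR.END_LOGMNR;
END;
/
```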
In the ARCHIVELOG mode, the Oracle Database engine will make copies of all online redo log files via an internal process called ARCn. This process generates archive copies of your redo log files in one or more archive log destination directories. The number and location of the destination directories depend on your database initialization parameters.
To use the ARCHIVELOG mode, you first need to set up some configuration parameters. Once your database is in the ARCHIVELOG mode, all database activity regarding your transactions will be archived to guarantee the recoverability of your data, and you will need to ensure that your archival destination area always has enough space available. If space runs out, your database will suspend all activity until it is once again able to back up your redo log files to the archival destination. This behavior exists to ensure the recoverability of your database.
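A quick way to keep an eye on the archival destination when it is the FRA is to query the V$RECOVERY_FILE_DEST and V$RECOVERY_AREA_USAGE views:

```sql
-- Overall FRA quota, how much is used, and how much is reclaimable
SELECT name, space_limit, space_used, space_reclaimable
  FROM v$recovery_file_dest;

-- Breakdown of FRA usage by file type (archived logs, backups, and so on)
SELECT file_type, percent_space_used, percent_space_reclaimable
  FROM v$recovery_area_usage;
```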
Tip
Never use the extension .log for redo log files. As mentioned earlier, use a different extension such as, for example, .rdo. This is because anyone, including you, can delete .log files by mistake when running out of space.
Preparing for the ARCHIVELOG mode
When setting your database to work in the ARCHIVELOG mode, please never forget to:
Configure your database in a proper way. Some examples of what to do when configuring a database are:
Read the Oracle documentation: It's always important to follow the Oracle recommendations in the documentation.
Have a minimum of three control files: This will reduce the risk of losing a control file.
Set the CONTROL_FILE_RECORD_KEEP_TIME initialization parameter to an acceptable value: This sets the number of days before a reusable record in the control file can be reused, and therefore controls how long your backup information is stored in the control file.
Configure the size of redo log files and groups appropriately: If not configured properly, the Oracle Database engine will generate constant checkpoints that create a high load on the buffer cache and I/O system, affecting the performance of your database. Also, having too few redo log groups will force the LGWR process to wait for the ARCn process to finish before it can overwrite a redo log file.
Multiplex online redo log files: Do this to reduce the risk of losing an online redo log file.
Enable block checksums: This allows the Oracle Database engine to detect corrupted blocks.
Enable database block checking: This allows Oracle to perform block checking for corruption, but be aware that it can cause overhead in most applications, depending on the workload and the parameter value.
Log checkpoints to the alert log: Doing so helps you determine whether checkpoints are occurring at the desired frequency.
Use the fast-start fault recovery feature: This is used to reduce the time required for cache recovery; the FAST_START_MTTR_TARGET parameter is the one to look at here.
Use Oracle Restart: This is used to enhance the availability of a single instance (non-RAC) and its components.
Never use the extension .log for redo log files: As mentioned earlier, anyone, including you, can delete .log files by mistake when running out of space.
Use block change tracking: This is used to allow incremental backups to run to completion more quickly than otherwise.
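Several of the recommendations above map directly to initialization parameters and simple commands. A sketch follows; the values and the change tracking file path are illustrative, so tune them for your own system:

```sql
-- Keep control file backup records for 30 days
ALTER SYSTEM SET control_file_record_keep_time = 30 SCOPE=both;

-- Checksums on all blocks; TYPICAL is the default and has low overhead
ALTER SYSTEM SET db_block_checksum = TYPICAL SCOPE=both;

-- Logical block checking; can add overhead depending on the workload
ALTER SYSTEM SET db_block_checking = MEDIUM SCOPE=both;

-- Record checkpoint activity in the alert log
ALTER SYSTEM SET log_checkpoints_to_alert = TRUE SCOPE=both;

-- Fast-start fault recovery: target 5 minutes of cache recovery
ALTER SYSTEM SET fast_start_mttr_target = 300 SCOPE=both;

-- Block change tracking to speed up incremental backups
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
  USING FILE '/u01/app/oracle/oradata/orcl/bct.chg';
```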
Always be sure to have enough available space in the archival destination.
Always make sure that everything is working as it is supposed to. Never forget to implement a proactive monitoring strategy using scripts or Oracle Enterprise Manager (OEM). Some important areas to check are:
Database structure integrity
Data block integrity
Redo integrity
Undo segment integrity
Transaction integrity
Dictionary integrity
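These areas correspond to the checks run by the Oracle Health Monitor. A hedged sketch using the DBMS_HM package follows; check names can vary by release, so query V$HM_CHECK for the list available on your version:

```sql
-- See which health checks can be run manually
SELECT name FROM v$hm_check WHERE internal_check = 'N';

-- Run the dictionary integrity check and read its report
BEGIN
  DBMS_HM.RUN_CHECK(check_name => 'Dictionary Integrity Check',
                    run_name   => 'dict_check_01');
END;
/

SELECT DBMS_HM.GET_RUN_REPORT('dict_check_01') FROM dual;
```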
Checking the status of the ARCHIVELOG mode
You can determine whether archiving is being used in your instance by querying the log_mode column of the v$database view (ARCHIVELOG indicates that archiving is enabled and NOARCHIVELOG indicates that it is not), or by issuing the SQL*Plus archive log list command:
SQL> SELECT log_mode FROM v$database;

LOG_MODE
-------------------
ARCHIVELOG

SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     8
Next log sequence to archive   10
Current log sequence           10
Specifying parameters
When in the ARCHIVELOG mode, you can choose between generating archived redo logs in a single location or multiplexing them. The most important parameters you need to be familiar with when setting your database to work in this mode are:
LOG_ARCHIVE_DEST_n: Use this parameter to specify from one to ten different archival locations (n can be a number between 1 and 10).
LOG_ARCHIVE_FORMAT: This parameter specifies the default filename format for the archived redo log files. The following variables can be used to format the filename:
%s: log sequence number
%S: log sequence number, zero filled
%t: thread number
%T: thread number, zero filled
%a: activation ID
%d: database ID
%r: resetlogs ID
One example of how to make use of these parameters could be something like this: alter system set log_archive_format="orcl_%s_%t_%r.arc" scope=spfile. This command will create archive log files whose names contain the word orcl (the database name in this example), followed by the log sequence number, the thread number, and the resetlogs ID.
LOG_ARCHIVE_MIN_SUCCEED_DEST: This defines the minimum number of archival destinations that must succeed in order to allow a redo log file to be overwritten.
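Putting these parameters together, a multiplexed archiving setup might look like this (the paths are illustrative):

```sql
-- Archive every filled redo log to two locations
ALTER SYSTEM SET log_archive_dest_1 = 'LOCATION=/DB/u02/arch1' SCOPE=both;
ALTER SYSTEM SET log_archive_dest_2 = 'LOCATION=/DB/u03/arch2' SCOPE=both;

-- A redo log group may be reused once at least one destination succeeds
ALTER SYSTEM SET log_archive_min_succeed_dest = 1 SCOPE=both;

-- Name the files with the sequence, thread, and resetlogs ID
ALTER SYSTEM SET log_archive_format = 'orcl_%s_%t_%r.arc' SCOPE=spfile;
```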
Viewing the status of archival destinations
You can also check the status of your archival destinations by querying the V$ARCHIVE_DEST view, where the following value pairs describe the state of each destination:
Valid/Invalid: This indicates whether the disk location or service name specified is valid or not
Enabled/Disabled: This indicates the availability state of the location and whether the database can use it
Active/Inactive: This indicates whether there was a problem accessing the destination
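For example, to see only the destinations that are actually configured, together with any error reported against them:

```sql
-- Status of each configured archival destination
SELECT dest_name, status, destination, error
  FROM v$archive_dest
 WHERE destination IS NOT NULL;
```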
The FRA (called the Flash Recovery Area before Oracle 11g R2, and now called the Fast Recovery Area) is a disk location in which the database can store and manage all files related to backup and recovery operations. Flashback Database provides a very efficient mechanism to roll back any unwanted database change. We will talk in more depth about the FRA and Flashback Database in Chapter 4, User Managed Backup and Recovery.
Placing a database into the ARCHIVELOG mode
Now let's take a look at a very popular example that you can use to place your database in the ARCHIVELOG
mode, and use the FRA as a secondary location for the archive log files. To achieve all this you will need to:
Set up the size of your FRA to be used by your database. You can do this by using the command:
SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE=<M/G> SCOPE=both;
Specify the location of the FRA using the command:
SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST= '/u01/app/oracle/fast_recovery_area' scope=both;
Define your archive log destination area using the command:
SQL> ALTER SYSTEM SET log_archive_dest_1= 'LOCATION=/DB/u02/backups/archivelog' scope=both;
Define your secondary archive log area to use the FRA with the command:
SQL> ALTER SYSTEM SET log_archive_dest_10= 'LOCATION=USE_DB_RECOVERY_FILE_DEST';
Shutdown your database using the command:
SQL> SHUTDOWN IMMEDIATE
Start your database in mount mode using the command:
SQL> STARTUP MOUNT
Switch your database to the ARCHIVELOG mode using the command:
SQL> ALTER DATABASE ARCHIVELOG;
Then finally open your database using the command:
SQL> ALTER DATABASE OPEN;
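Once the database is open, you can confirm that archiving is actually working by forcing a log switch and checking that an archived log file was produced:

```sql
-- Confirm the new mode
SELECT log_mode FROM v$database;

-- Force a log switch so an archived log is generated immediately
ALTER SYSTEM SWITCH LOGFILE;

-- The most recent archived logs and where they were written
SELECT sequence#, name, completion_time
  FROM v$archived_log
 ORDER BY completion_time DESC;
```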
When in the ARCHIVELOG mode, you are able to make hot backups using RMAN. You can also perform user-managed backups using the alter database begin backup command, which places all datafiles in the backup mode so that you can copy them while the database remains open (note that the resulting copies are not consistent, and will need recovery when restored). You may also use the alter tablespace <Tablespace_Name> begin backup command to back up only the datafiles associated with a specific tablespace.
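A minimal user-managed hot backup of a single tablespace might look like this (the tablespace name and the copy destination are illustrative):

```sql
-- Put the tablespace into the backup mode
ALTER TABLESPACE users BEGIN BACKUP;

-- Copy its datafiles with an OS utility while the database stays open, e.g.:
-- host cp /u01/app/oracle/oradata/orcl/users01.dbf /DB/u02/backups/

-- Take the tablespace out of the backup mode as soon as the copy finishes
ALTER TABLESPACE users END BACKUP;
```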
Now that you know everything you are supposed to know about the ARCHIVELOG mode, let's take a deeper look at what redo is and why it is so important to the recoverability of our database.
Differences between redo and undo
Another common question relates to the difference between redo log entries and undo information saved as part of transaction management. While redo and undo data sound almost like they could be used for the same purpose, such is not the case. The following table spells out the differences:
| | undo | redo |
|---|---|---|
| Record of | how to undo a change | how to reproduce a change |
| Used for | rollback, read-consistency | rolling forward database changes |
| Stored in | undo segments | redo log files |
| Protects against | inconsistent reads in multiuser systems | data loss |
In the end, an undo segment is just a segment like any other (such as a table, an index, a hash cluster, or a materialized view). The important point here is in the name, and the main rule you need to understand is this: if you modify part of a segment (any segment, regardless of its type), you must generate redo so that the change can be recovered in the event of a media or instance failure. Therefore, if you modify the table EMPLOYEE, the changes made to the EMPLOYEE blocks are recorded in the redo log buffer, and consequently in the redo log files (and the archive log files if running in the ARCHIVELOG mode). The changes made to EMPLOYEE also have to be recorded in undo, because you might change your mind and want to roll back the transaction before issuing a commit to confirm the changes. The modification to the table EMPLOYEE therefore causes entries to be made in an undo segment, but this is a modification to a segment as well, so the changes made to the undo segment also have to be recorded in the redo log buffer to protect your data integrity in case of a disaster.
If your database crashes and you need to restore a set of datafiles from five days ago, including those for the UNDO tablespace, Oracle will start reading from your archived redo, rolling the five-day-old files forward in time until they are four days old, then three, then two, then one, and finally to the point where the only record of changes to segments (any segment) is contained in the current online redo log file. Once all the changes to all segments that were ever recorded in the redo have been applied, your undo segments have been repopulated. The database will then roll back those transactions that were recorded in the redo log but were not committed at the time of the database failure.
I can't emphasize enough, really, that undo segments are just slightly special tables. They're fundamentally not very different from any other table in the database, such as EMPLOYEE or DEPARTMENT, except that new inserts into an undo segment can overwrite a previous record, which never happens to a table like EMPLOYEE, of course. If you generate undo when making an update to EMPLOYEE, you will consequently generate redo. This means that every time undo is generated, redo will also be generated (this is the key point to understand here).
Although redo contains both the before and after images of a change, Oracle does not use it to roll back your transaction. Redo is written and generated sequentially and isn't cached for a long period of time in memory (as mentioned in the What is redo section in this chapter), so using redo to roll back a mere mistake, or even a change of mind, while theoretically possible, would involve wading through huge amounts of redo sequentially, looking for the one before image in a sea of changes made by different transactions, and all of this would be done by reading data off disk into memory, just like a normal recovery process. Undo, on the other hand, is stored in the buffer cache (just as the table EMPLOYEE is stored in the buffer cache), so there's a good chance that reading the information needed will require only logical I/O and not physical I/O. Your transaction will also be pointed dynamically to where its undo is written, so you and your transaction can jump straight to your undo without having to navigate through a sea of undo generated by all the other transactions.
In summary, you need redo for recovery operations, and undo for consistency in multiuser environments and for rolling back any changes of mind. This, in my personal opinion, is one of the key points that makes Oracle superior to any other database on the market. Other databases merely have transaction logs that serve both purposes, and they suffer accordingly in performance and flexibility.
Facing excessive redo generation during an online backup?
One of the most common questions I see on the Oracle Technology Network (OTN) forums is why so much redo is generated during an online backup operation. When a tablespace is put into the backup mode, the redo generation behavior changes, but there is no excessive redo generated; there is only additional information logged into the online redo log file the first time a block is modified in a tablespace that is in the hot backup mode. In other words, while the tablespace is in the backup mode, Oracle logs the entire image of a block the first time that block is changed, and from then on it generates the same redo as usual. This is done because Oracle cannot guarantee that a block was not being updated at the very moment it was copied as part of the backup.
In the hot backup mode, only two things are different:
The first time a block is changed in a datafile that is in the hot backup mode, the entire block is written to the redo log file, and not just the changed bytes. This is because you can get into a situation in which the process copying the datafile and the database writer (DBWR) are working on the same block simultaneously. Hence, the entire block image is logged so that during recovery, the block is totally rewritten from redo and is consistent with itself.
The datafile headers which contain the System Change Number (SCN) of the last completed checkpoint are not updated while a file is in the hot backup mode. The DBWR process constantly writes to the datafiles during the hot backup. The SCN recorded in the header tells us how far back in the redo stream one needs to go to recover the file.
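You can check which datafiles are currently in the hot backup mode by querying the V$BACKUP view:

```sql
-- Files with STATUS = 'ACTIVE' are currently in the backup mode
SELECT b.file#, f.name, b.status, b.time
  FROM v$backup b
  JOIN v$datafile f ON b.file# = f.file#
 WHERE b.status = 'ACTIVE';
```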
Tip
To limit the effect of this additional logging, place only one tablespace at a time in the backup mode, and take the tablespace out of the backup mode as soon as you have finished backing it up. This reduces the number of blocks that may have to be logged to the minimum possible.