Acquiring data using a data dictionary and dynamic performance views


In the Oracle database there are many views that can be queried to acquire data about the database state. They are divided into data dictionary views, with names like DBA_*, and dynamic performance views, with names like V$*.

Getting ready

When we use a standard template in Oracle DBCA to create a database, both data dictionary views and dynamic performance views are in place after database creation. If we prefer to use our own scripts to create the database, we need to launch at least the catalog.sql and catproc.sql scripts to populate the data dictionary with the views we need. These scripts are located in the rdbms/admin subdirectory of the Oracle Home directory.
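For example, assuming we are connected to the database server with SQL*Plus, the scripts can be run as SYSDBA (in SQL*Plus the ? character expands to the Oracle Home directory):

CONNECT / AS SYSDBA
@?/rdbms/admin/catalog.sql
@?/rdbms/admin/catproc.sql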

To collect timing information in the dynamic performance views, we have to set the parameter TIMED_STATISTICS=TRUE in the init.ora file of our database instance. We can also set it dynamically with the following SQL statement:

ALTER SYSTEM SET TIMED_STATISTICS = TRUE SCOPE = BOTH;
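We can check the current value of the parameter, for example, by querying the V$PARAMETER dynamic performance view (or with the SHOW PARAMETER command in SQL*Plus):

select name, value from V$PARAMETER
  where name = 'timed_statistics';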

Tip

Please note that the default value for the TIMED_STATISTICS parameter is already TRUE and that there isn't any perceptible performance gain in changing this default value to FALSE.

How to do it...

We can query the data dictionary views and the dynamic performance views like any other view in the database, using SQL statements.

We can also query DBA_VIEWS, a data dictionary view that lists all the views in the database:

select view_name from dba_views
  where view_name like 'DBA%' order by 1;

We can query the V$FIXED_TABLE view to get a list of all the V$ dynamic performance views and X$ tables:

select name from V$FIXED_TABLE order by 1;
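To see how a dynamic performance view is built on the underlying X$ tables, we can also query V$FIXED_VIEW_DEFINITION; for example, the following statement shows the definition of V$SESSION:

select view_definition from V$FIXED_VIEW_DEFINITION
  where view_name = 'V$SESSION';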

Tip

You can find the definition of each view we will use in the book in Appendix A, Dynamic Performance Views.

How it works...

Data dictionary views are owned by the user SYS and there is a public synonym for each of them. They expose data about database objects, for example, tables and indexes.
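For example, we can verify the public synonym behind one of these views by querying the DBA_SYNONYMS data dictionary view:

select owner, table_owner, table_name from DBA_SYNONYMS
  where synonym_name = 'DBA_TABLES';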

In Oracle Database 11gR2 Enterprise Edition, the database installed from the DBCA template will have more than 800 data dictionary views available. We will present the data dictionary views that we need in our recipes when we have to query them.

Dynamic performance views are also owned by the user SYS; their V$* names are public synonyms for the underlying V_$* views. Those views are based on X$ tables, which are undocumented structures populated at instance start-up. Data dictionary views contain two kinds of data: fields that store information on the characteristics of the object, and fields that collect information dynamically from object usage.

For example, in DBA_TABLES there are fields describing the physical structure of the table (such as TABLESPACE_NAME, PCT_FREE, and INITIAL_EXTENT) and other fields that expose statistics on the table contents (such as NUM_ROWS, AVG_SPACE, and AVG_ROW_LEN).
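As a simple illustration, the following query retrieves both kinds of fields for the tables of the HR schema:

select table_name, tablespace_name, pct_free, num_rows, avg_row_len
  from DBA_TABLES where owner = 'HR';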

To collect these statistics, we have to execute the ANALYZE statement. For a table, we will execute the following statement:

ANALYZE TABLE hr.employees COMPUTE STATISTICS;

To speed up and automate the analysis of many objects, we can use DBMS_UTILITY.analyze_schema or DBMS_UTILITY.analyze_database to analyze all the objects in a schema or in the entire database, respectively. To analyze the objects of the HR schema, we will execute the following statement:

EXEC DBMS_UTILITY.analyze_schema('HR','COMPUTE');
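Likewise, to analyze all the objects in the database (a potentially long operation on large databases), we could execute:

EXEC DBMS_UTILITY.analyze_database('COMPUTE');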

Tip

For both the ANALYZE command and the DBMS_UTILITY procedures, we have two choices: to compute the statistics exactly, or to estimate them based on the analysis of a restricted set of data. When ESTIMATE is chosen, we have to specify either the number of rows or the percentage of rows to use as the sample.
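For example, with the ANALYZE command we could estimate the statistics on a 20 percent sample of the rows, or on a fixed number of rows, as follows:

ANALYZE TABLE hr.employees ESTIMATE STATISTICS SAMPLE 20 PERCENT;
ANALYZE TABLE hr.employees ESTIMATE STATISTICS SAMPLE 1000 ROWS;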

Oracle recommends another method to compute statistics, namely the DBMS_STATS package, which allows deleting, exporting, and importing statistics, as well as gathering statistics in parallel. The following statement analyzes the HR schema:

EXEC DBMS_STATS.gather_schema_stats('HR');

Note

ANALYZE and the use of DBMS_UTILITY illustrated earlier are supported for backward compatibility only; use the package DBMS_STATS to collect statistics.

Similarly, we can gather statistics on tables, indexes, or the entire database. With DBMS_STATS we can also estimate the statistics on a sample of the data, as in the first of the following examples:

EXEC DBMS_STATS.gather_database_stats(estimate_percent => 20);
EXEC DBMS_STATS.gather_table_stats('HR', 'EMPLOYEES');
EXEC DBMS_STATS.gather_index_stats('HR', 'EMP_JOB_IX');

Using the DBMS_STATS package we can also delete statistics, as shown:

EXEC DBMS_STATS.delete_table_stats('HR', 'EMPLOYEES');
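Similar procedures exist at other levels; for example, to delete the statistics of the whole HR schema we could execute:

EXEC DBMS_STATS.delete_schema_stats('HR');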

To transfer statistics between different databases, we have to use a statistics table, as shown in the following steps:

  1. Create the statistics table on the source database.

  2. Export the statistics from the data dictionary to the statistics table.

  3. Move the statistics table (Export/Import, Datapump, Copy) to the target database.

  4. Import the statistics from the statistics table to the data dictionary.

  5. Drop the statistics table.

The corresponding statements to execute on the source database are as follows:

EXEC DBMS_STATS.create_stat_table('DBA_SCHEMA', 'MY_STAT_TABLE');
EXEC DBMS_STATS.export_schema_stats('APP_SCHEMA', 'MY_STAT_TABLE', NULL, 'DBA_SCHEMA');

With these statements we have created the statistics table MY_STAT_TABLE in the DBA_SCHEMA schema and populated it with the statistics of the APP_SCHEMA schema (for example, HR).

We then transfer MY_STAT_TABLE to the target database: using the export/import command-line utilities, we export the table from the source database and import it into the target database, where we execute the following statements:

EXEC DBMS_STATS.import_schema_stats('APP_SCHEMA', 'MY_STAT_TABLE', NULL, 'DBA_SCHEMA');
EXEC DBMS_STATS.drop_stat_table('DBA_SCHEMA', 'MY_STAT_TABLE');

In the example, we have transferred statistics about the entire schema APP_SCHEMA. We can choose to transfer statistics for the entire database, a table, an index, or a column, using the corresponding import_* and export_* procedures of the DBMS_STATS package.
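For example, to export the statistics of a single table (and, by default, of its indexes and columns) to the same statistics table, we could use the following call, with named parameters for clarity:

EXEC DBMS_STATS.export_table_stats(ownname => 'HR', tabname => 'EMPLOYEES', stattab => 'MY_STAT_TABLE', statown => 'DBA_SCHEMA');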

There's more...

The COMPUTE STATISTICS and ESTIMATE STATISTICS clauses of the ANALYZE command are supported by Oracle only for backward compatibility. However, other functionalities of the command are still useful, allowing us to validate the structure of a table, index, cluster, or materialized view, or to list chained or migrated rows:

ANALYZE TABLE employees VALIDATE STRUCTURE;
ANALYZE TABLE employees LIST CHAINED ROWS INTO CHAINED_ROWS;

The first statement validates the structure of the EMPLOYEES table, while the second command lists the chained rows of the same table into the CHAINED_ROWS table (created with the script utlchain.sql or utlchn1.sql).
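For example, assuming the EMPLOYEES table belongs to the current schema, we could create the CHAINED_ROWS table and then inspect the listed rows as follows:

@?/rdbms/admin/utlchain.sql
select owner_name, table_name, head_rowid from CHAINED_ROWS;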

See also

  • Avoiding row chaining in Chapter 3, Optimizing Storage Structures
