Oracle Database 11gR2 Performance Tuning Cookbook

Chapter 1. Starting with Performance Tuning

Performance tuning is a complex process, which requires a deep knowledge of both physical and logical database structures. In this chapter, we will introduce the process and methodology to adopt in performance tuning an Oracle Database, covering the following recipes:

  • Reviewing the performance tuning process

  • Exploring the example database

  • Acquiring data using a data dictionary and dynamic performance views

  • Analyzing data using Statspack report

  • Diagnosing performance issues using the alert log

  • Analyzing data using Automatic Workload Repository (AWR)

  • Analyzing data using Automatic Database Diagnostic Monitor (ADDM)

  • A working example

Introduction


There is a wide range of issues that can lead to poor performance. Performance problems in our Oracle database can be related to different areas of the system:

  • Application design

  • Application code

  • Memory

  • I/O

  • Resource contention

  • Operating System

  • CPU

When we want to tune a database in a proactive way, we can follow the previous list from the top to the bottom.

Issues in the first two areas generally lead to very poor performance and to scalability problems. The most common performance problems in an Oracle database related to application design and coding are as follows:

  • Incorrect session management

  • Poorly designed cursor management

    • Binding variables

    • Cursor sharing

    • Non-set operations

  • Inadequate relational design

  • Improper use of storage structures

Let's explain each performance problem listed above. Problems related to memory, input/output, contention, and the operating system will be explored in the following chapters. A well-tuned application can deliver a significant performance improvement, so it's natural to concentrate the first performance tuning efforts on application design and coding.

Incorrect session management

Poor session management can lead to scalability problems. For example, if a web page logs on to the database, gets some data, and logs off, the time spent on the logon procedure can be an order of magnitude greater than the time required to execute the queries that retrieve the data the user requested.
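A rough way to spot this pattern is to compare the number of logons with the amount of work the instance has performed; the following query is only a sketch, assuming access to the dynamic performance views:

-- A high 'logons cumulative' value relative to 'user calls' suggests a
-- connect/query/disconnect pattern instead of connection reuse
SELECT name, value FROM v$sysstat
 WHERE name IN ('logons cumulative', 'user calls');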

Poorly designed cursor management

There are different problems related to cursor management.

The first rule in writing applications that connect to an Oracle database is to always use bind variables, which means not embedding parameter values in SQL statements as literals.

For example, we could code something like the following (using SQL*Plus, connected as user HR):

SQL>SELECT * FROM hr.jobs WHERE job_id = 'SA_MAN';

This is equivalent to the following:

SQL>VARIABLE JOBID VARCHAR2(10)
SQL>EXEC :JOBID := 'SA_MAN'
SQL>SELECT * FROM hr.jobs WHERE job_id = :JOBID;

The big difference between the two examples is in the way the database parses the statements when they are called more than once with different values. Executing the statement a second time requires another hard parse in the first case, whereas in the second case Oracle reuses the execution plan prepared at the first execution, resulting in a huge performance gain.

Note

This behavior is due to the way Oracle checks whether a SQL statement is already in memory or needs to be parsed. A hash value of the SQL string is calculated and compared to the hash values already in memory. If we supply a different literal value each time, a new hash value is generated for the SQL statement, and hence Oracle has to parse the statement every time.

Using bind variables, the SQL string does not change, so Oracle has to parse the statement only once; from then on it finds the hash value in memory (as long as it doesn't age out) and reuses the execution plan already there.
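The effect is visible in the shared pool; as a sketch, after executing the literal-based statement with many different values, we could count the nearly identical cursors with a query like the following:

-- Each distinct literal produces a separate parent cursor;
-- the bind variable version produces a single cursor with many executions
SELECT sql_text, executions
  FROM v$sql
 WHERE sql_text LIKE 'SELECT * FROM hr.jobs WHERE job_id =%';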

Cursor sharing is another problem related to the parse process. We can set the database parameter CURSOR_SHARING to the value SIMILAR or FORCE to mitigate the drawbacks of not using bind variables. In this situation, the database will parse two queries with different SQL text into a single cursor; for example:

SQL>SELECT * FROM hr.jobs WHERE job_id = 'SA_MAN';
SQL>SELECT * FROM hr.jobs WHERE job_id = 'AC_ACCOUNT';

Both of these statements will be parsed to a single cursor if the parameter CURSOR_SHARING is set to one of the values mentioned.
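For example, the parameter can be changed without restarting the instance (a sketch; it can also be set at session level with ALTER SESSION):

ALTER SYSTEM SET CURSOR_SHARING = FORCE SCOPE = MEMORY;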

When a query is dynamically built by the application—for example, to reflect different types of user-defined filters or sorting options—it's important that the statement is always built in the same way—using bind variables, of course—to facilitate the reuse of cursors, especially if the CURSOR_SHARING parameter is set to the value EXACT.

Another common problem related to cursor management is the use of non-set operations. While for the human mind it is simpler to think of an algorithm as an iterative sequence of steps, relational databases are optimized for set operations. Developers often code something like the following example:

CREATE OR REPLACE PROCEDURE example1 (
  JOBID IN hr.jobs.job_id%TYPE) IS
BEGIN
  DECLARE 
  l_empid hr.employees.employee_id%TYPE;
  l_sal hr.employees.salary%TYPE;
  CURSOR jc IS SELECT e.employee_id, e.salary
    FROM hr.employees e 
      INNER JOIN hr.jobs j ON j.job_id = e.job_id
    WHERE e.job_id = JOBID 
    AND e.salary > (j.max_salary - j.min_salary) / 2;
 BEGIN
  OPEN jc;
  LOOP
    FETCH jc INTO l_empid, l_sal;
    EXIT WHEN jc%NOTFOUND;
    DBMS_OUTPUT.PUT_LINE(TO_CHAR(l_empid) || ' ' ||
     TO_CHAR(l_sal));
    UPDATE hr.employees SET salary = l_sal * 0.9 
      WHERE employee_id = l_empid;
  END LOOP;
  CLOSE jc;
 END;
END;

This example is trivial, but it's good enough to explain the concept. In the procedure, there is a loop over the employees with a certain job, which decreases the salaries that are greater than half the salary range for that job. The stored procedure compiles and executes well, but there is a better way to code this example, shown as follows:

CREATE OR REPLACE PROCEDURE example2 (
  JOBID IN hr.jobs.job_id%TYPE) IS
BEGIN
  UPDATE hr.employees e SET
    e.salary = e.salary * 0.9
  WHERE e.job_id = JOBID 
  AND e.salary > (SELECT (j.max_salary - j.min_salary) / 2 FROM hr.jobs j
     WHERE j.job_id = e.job_id);
END;

In the latter version we have used only one statement to achieve the same results. Besides the code length, the important thing here is that we thought in terms of set operations rather than in an iterative way. Relational databases perform better when we use this type of operation. We will see how much and why in Chapter 4, Optimizing SQL Code and Chapter 6, Optimizing PL/SQL Code, in the Introducing arrays and bulk operations and Array processing and bulk-collect recipes, respectively.
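To get a first feeling for the difference on the example database, a quick test in SQL*Plus might look like the following sketch (timings will vary, and we roll back after each call to leave the data unchanged):

SET SERVEROUTPUT ON
SET TIMING ON
EXEC example1('SA_MAN');
ROLLBACK;
EXEC example2('SA_MAN');
ROLLBACK;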

Inadequate relational design

A big issue could be the relational design of the database. Here we are not discussing academic ways to design a database system, because in the real world a relational design is sometimes deliberately less than perfect in terms of normalization, for example, to provide better performance for the way the data is used.

When we speak about bad relational design, we mean problems like over-normalization, which often leads to an overabundance of table joins to obtain the desired results.

Often, over-normalization is a problem which arises when we try to map an object-oriented model to a relational database: a good analysis of data volumes and operations can help in designing the logical model of the database. For example, introducing a redundant column in a table can lead to better performance, because the redundant data would otherwise have to be calculated by scanning (in most cases) a big table.
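As an illustration only (the table and column names here are hypothetical), a value that would otherwise be recomputed by scanning a large detail table can be stored redundantly and maintained by the application or by a trigger:

-- Hypothetical denormalization: keep each customer's order total on the
-- CUSTOMERS table so reports don't need to scan the big ORDERS table
ALTER TABLE customers ADD (total_ordered NUMBER);

UPDATE customers c
   SET c.total_ordered = (SELECT NVL(SUM(o.amount), 0)
                            FROM orders o
                           WHERE o.customer_id = c.customer_id);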

Another big issue in relational design is related to the use of incorrect indexes on a table. Based on the data selection approach an application is going to take, correct indexes should be set on the table, and this is one of the design considerations while creating a relational database model.

Improper use of storage structures

The Oracle database logical structure is determined by the tablespace(s) and by the schema objects. Wrong choices about these structures often lead to bad performance.

While designing an Oracle database, we have a rich set of schema objects, and we have to answer questions like "Which is better, a bitmap index or a reverse key index?", looking at both the application and data.

In the latest releases of Oracle Database, many operations that alter storage structures can be performed with the database online, with minimal performance decay and without a service outage.

We will examine the problems just presented in depth in later chapters, namely, session management and relational design in Chapter 2, cursor management in Chapter 4, and storage structures in Chapter 3.

OK, let's begin!

Reviewing the performance tuning process


Tuning the performance of an Oracle database is a complex task, which requires in-depth knowledge of different areas. There are a lot of forums, documents, and tutorials online responding to many performance tuning issues related to Oracle Database; often, however, the information gathered from these sources is not enough to solve the particular problem we are experiencing, because of different database versions, different server architectures, and a large number of variables which make it difficult to find the correct recipe to resolve the symptoms we are facing.

Many would-be DBAs approach a performance problem with a bad attitude; that is, they attempt to solve performance issues without investigating the problem, or with little knowledge of what happens under the hood. Often this approach leads to solutions which don't work or—in the worst case—seem to work temporarily, presenting the same problem or another one after a while.

In the following section, we will see the performance tuning process adopted in this book, which can help us in finding the correct way to diagnose, solve, and prevent performance issues on Oracle Databases.

How to do it...

To solve a performance problem on the database, we need to follow these steps:

  1. Elaborate a baseline.

  2. Investigate the problem.

  3. Assume a solution, a test case, and a rollback strategy.

  4. Implement the solution.

  5. Test the solution.

  6. Compare the results.

  7. If the results are not as good as expected, iterate the process.

How it works...

In the first step, we have to elaborate a baseline, because without a comparison element we will not be able to know if the adopted solution really solves the problems we are facing.

The kind of baseline to elaborate depends heavily on the performance issue. There are some performance indicators that should always be checked, and other, more detailed ones that need to be verified only if a previous indicator points to a particular area of the database. After the baseline is decided for the particular problem we are investigating, it is time to automate the process of gathering data, so that it is repeatable.

While investigating the problem, the process is iterative, so we can return to the previous step and add other elements to the baseline for the final testing of our solution.

When the investigation drives us to assume a particular solution, before we start implementing it on the database we have to list all the changes we are going to make and elaborate a "rollback solution" for these changes. This is especially the case if we don't have the chance to test our solution on a test database similar to the production one which is suffering the problem. If we think, for example, that adding an index IX1 on table T1 could solve our performance problem, we have to prepare a SQL script to create the index, and another SQL script to drop it, in case we want to go back if something goes wrong. In Oracle 11g, we have the opportunity to create an invisible index and check the execution plan of the query, with minimal impact on other sessions.
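For example, the invisible index approach just mentioned could be sketched as follows (IX1 and T1 are the hypothetical names used above, and COL1 is an assumed column):

-- Create the index without making it available to the optimizer
CREATE INDEX ix1 ON t1 (col1) INVISIBLE;

-- Let only the current session consider invisible indexes
ALTER SESSION SET OPTIMIZER_USE_INVISIBLE_INDEXES = TRUE;

-- ...check the execution plan of the problematic query here...

ALTER INDEX ix1 VISIBLE;  -- keep the index
-- or: DROP INDEX ix1;    -- roll back the change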

We might want to prepare a test case to verify the solution we will implement. This task is simpler if we have isolated the problem very well, so that we are able to reproduce the issue. If the problem is random, it might be a nightmare to isolate the steps that lead to poor performance. In the latter case, we could evaluate the frequency of the problem, so that we can test our solution by measuring the number of occurrences and comparing the results.

After the solution has been implemented, it must be tested with the same process that created the baseline. Check the results of the measurement process and decide if the solution has solved the issue. If the results are not acceptable, iterate the whole process until there is a satisfactory outcome.

There's more…

The performance tuning process is a never-ending cycle; even when we solve our performance issue, there will be another aspect of the system we can tune in order to obtain better performance, or more stringent requirements to satisfy.

Due to these considerations, the iterative process of performance tuning that will be used throughout the book is represented in the following diagram:

To elaborate a baseline, keep track of how the system—and not only the database—is performing. We need unbiased data to compare before and after different solutions are implemented in the system.

Tip

Performance of the system here means performance of the server, I/O, network, database, the application, and other factors.

If there is a generic "slow response-time problem", and new hardware resources (CPU, RAM) are added to the database server, this may lead to a situation where it performs worse than before. With a good baseline, before adding more resources, we could evaluate if the problem we are experiencing is related to the lack of enough hardware power—for example RAM—or something else.

To describe a good baseline we need as much data as possible; most of it is acquired directly from the database itself, as we will see in the next section. There is also information from other sources: operating system logs, performance counters, application logs, trace files, network statistics, and the like.

In today's multi-layered applications, it's easy to say "the database is slow" when an application is suffering poor performance, but there will be many cases when the database is performing very well while the application responsiveness is very poor.

With a solid baseline, we can isolate the layer in which the problem first occurred and concentrate our efforts on that application layer. After a baseline is established, start investigating the problem.

In the rest of the book, we will learn how to interpret the results of the baseline to correctly identify the problem. Sorry, there isn't a bullet list or a magic wand; this phase is based on knowledge and previous experience. If a simple cause-effect relationship were in place, it would already have been coded into an automatic solution or specific diagnostic advice implemented in the database itself. There are several automatic diagnostic and tuning features in the latest releases of Oracle Database: SQL Tuning Advisor, SQL Access Advisor, and Automatic Database Diagnostic Monitor. These database-centric tools help solve common performance problems, which tend to be easily identified. The real tuning process starts when the magic doesn't work, or doesn't work as well as we need it to.

We have seen the most common database performance issues in the previous recipe, divided into several categories to help us in the investigation phase. During this stage, we decide which database area is the bottleneck; for instance, memory, I/O, or SQL code.

Once we have identified and delimited the database area involved in the performance problem, we can assume a solution to the issue. As previously stated, both a test case and a rollback strategy are necessary—the former to check the proposed solution, the latter to revert back if the proposed solution wasn't satisfactory.

Once we have the solution, implementing it is often a trivial task, such as writing a small SQL script to alter a database object or an initialization parameter. Be sure that the solution is implemented using reproducible steps, especially when the task is quite complex or we have to test the solution in a staging database before production.

At the end of the implementation, we have to test the solution to verify its correctness—probably in a test environment—and to know if the expected performance gain has been reached.

To test the solution there are various scenarios, depending on the work done in previous steps and by the development team. A test case will verify the results; if there are application test sets, they can be used to verify the correctness of the solution, especially if the application logic has changed.

After we have assured ourselves about the correctness of the solution implemented, compare the performance of the database (and of the application) to the baseline gathered in the first step of the process.

If the comparison shows that we have not solved the puzzle, let's revert the applied solution and start again from the first step, investigating the problem further or assuming another solution. Alternatively, if the result is satisfactory, very well, let's start again from the first step to solve another problem. Always remember that the tuning process is something which starts with the application design and lasts throughout the application life cycle.

In describing the performance tuning process, we have stressed the need for a baseline. The Oracle database helps us even in this task, with different tools that we can use to monitor the database itself and to take measurements of various performance indicators.

In the following recipes, we will introduce different tools to acquire performance data from the database, illustrating the guidelines to use them. The diagnostic tools presented are:

  • Data Dictionary and Dynamic Performance Views

    • Analyze command

    • Analyze schema and database with DBMS_UTILITY package

  • DBMS_STATS package

  • Statspack report

  • Alert log and trace files

  • Automatic Workload Repository (AWR)

  • Automatic Database Diagnostic Monitor (ADDM)

The tools specific for tuning SQL code will be presented in Chapter 4, Optimizing SQL Code.

Let's spend some time on Oracle Enterprise Manager (OEM). It is a graphical web-based application, and it is the main tool the Oracle DBA uses to configure and monitor the database in non-console mode.

In OEM, there is a Performance page which presents a dashboard with many graphs and indicators, all updated live. At the bottom of the page, there are additional links to the most common tasks related to performance tuning.

Note

OEM itself is not a performance tuning tool; it's just a front-end to the tools and functions in the previously mentioned list. It's a good idea to familiarize yourself with OEM and its user interface. However, a DBA who knows what happens backstage will be able to do the right thing with any tool, and will not feel lost if their favorite tool or GUI isn't up and running (and sometimes this is a real scenario at a customer site).

See also

  • The Acquiring data using a data dictionary and dynamic performance views recipe in this chapter

  • Appendix B, Tools and Packages

Exploring the example database


In this recipe, we will prepare a database to use for our examples.

Getting ready

We need an Oracle Database 11gR2 system up and running to create our database. The host system could be a UNIX/Linux or Windows physical or virtual machine. If you want to use a virtual machine, be sure to follow the minimum CPU and memory requirements for the Oracle installation.

If you have installed the database software along with the Create Database option, then you have already set up a database with the necessary schema installed.

How to do it...

We will use the default demo database installed by the default OLTP template of Oracle Database Configuration Assistant (DBCA) for all our examples.

Note

You can find the official Oracle Database Installation Guide 11gR2 for Linux at http://download.oracle.com/docs/cd/E11882_01/install.112/e16763/toc.htm.

  1. Log on to the Operating System as a member of the administrative group, authorized to install Oracle software and to create and run database instances.

  2. Launch DBCA (for Windows users: Start | Programs | Oracle – home_name | Configuration and Migration Tools | Database Configuration Assistant); on *nix systems, enter the following command at the system prompt:

    $ dbca
    

    Please note that the dbca executable is by default in the $ORACLE_HOME/bin directory.

  3. A welcome screen is shown. Click Next.

  4. You are presented with some options. Select the first, namely Create a database, and click Next.

  5. You are presented a list of database templates. Choose the first, namely General purpose / OLTP, and click Next.

  6. You are asked for the global database name and SID; enter TESTDB in the global database name (the SID should be set accordingly) and click Next.

  7. In the next screen—shown in the following screenshot—leave the default options selected (OEM configuration). If you wish, you can enable e-mail notifications by checking the corresponding flag and entering the SMTP server to use (something like smtp.yourdomain.com or smtp.yourISP.com) and the e-mail address where the alerts will be delivered. Click Next to go to the next screen.

  8. Choose to use the same password for all administrative accounts, enter the password you want to use twice, and click Next. If you are advised that the password you entered is weak (not meeting the minimum complexity requirements), you can ignore the message and go on. Please note that for a production database these are very bad choices, but we are installing a demo database for testing purposes only and don't want to bother with security issues.

  9. In the next screen, leave the default option for the files position (Use Database File Locations from Template) and click Next.

  10. Leave the default options for the flash recovery area and click Next.

  11. In the next screen, check the Sample Schemas flag and click Finish.

  12. You are presented with the operations summary. Click OK and wait until the database creation process is finished.

  13. At the end of the creation process, we have to unlock the accounts created. In the summary form, there is a Password Manager button; click on it, and you will be presented with the list of accounts created.

  14. Find the following accounts: BI, HR, IX, OC, OE, PM, SH and uncheck the second column (unlocking them). Enter the password for each account in the last two columns, setting it the same as the account name.

    Tip

    You can click on the username column to sort accordingly.

    Don't use sample schemas or passwords the same as the username in production databases!

Now our TESTDB database is ready for experimenting.

How it works...

Oracle DBCA lets us create a database using predefined templates. For our examples, we will use the default example schemas provided by Oracle (which are installed in the EXAMPLE tablespace).

The sample schemas are HR (Human Resources), OE (Order Entry), OC (Order Catalog), PM (Product Media), IX (Information eXchange), SH (Sales History), and BI (Business Intelligence). We will use mostly HR and SH schemas.

There's more...

If we want to reset the sample schemas to the initial state, we can use the script mksample.sql located in the $ORACLE_HOME/demo/schema/ directory. This script requires eleven parameters, with the following syntax:

SQL>@?/demo/schema/mksample systempwd syspwd hrpwd oepwd pmpwd ixpwd shpwd bipwd default_tablespace temp_tablespace log_file_directory/

Note

Please note that log_file_directory must be an already existing folder, and the path must be terminated by a slash.

Our database—assuming test as the SYSTEM and SYS passwords—will be reset with the following statement:

SQL>@?/demo/schema/mksample test test hr oe pm ix sh bi EXAMPLE TEMP testlog/

Note

Please note that in the default installation of Oracle Database 11gR2 Enterprise Edition the mksample.sql script is not present.

You can find it in the Companion CD.

Acquiring data using a data dictionary and dynamic performance views


In the Oracle database, there are many views which can be queried to acquire data about the database state. They are divided into data dictionary views, with names like DBA_*, and dynamic performance views, with names like V$*.

Getting ready

When we use a standard template in Oracle DBCA to create a database, both data dictionary views and dynamic performance views are in place after database creation. If we prefer to use our own scripts to create the database, we need to launch at least the catalog.sql and catproc.sql scripts to populate the data dictionary with the views we need. These scripts are located in the rdbms/admin subdirectory of the Oracle Home directory.

To collect timing information in the dynamic performance views, we have to set the parameter TIMED_STATISTICS=TRUE in the init.ora file of our database instance. We can also accomplish this requirement with the following SQL statement:

ALTER SYSTEM SET TIMED_STATISTICS = TRUE SCOPE = BOTH;

Tip

Please note that the default value for the TIMED_STATISTICS parameter is already TRUE and that there isn't any perceptible performance gain in changing this default value to FALSE.

How to do it...

We can query the data dictionary views and the dynamic performance views like any other view in the database, using SQL statements.

We can also query DBA_VIEWS, which is a data dictionary view showing other views in the database:

select view_name from dba_views
  where view_name like 'DBA%' order by 1

We can query the V$FIXED_TABLE view to get a list of all the V$ dynamic performance views and X$ tables:

select name from V$FIXED_TABLE order by 1;

Tip

You can find the definition of each view we will use in the book in Appendix A, Dynamic Performance Views

How it works...

Data dictionary views are owned by the user SYS and there is a public synonym for each of them. They expose data about database objects, for example, tables and indexes.

In Oracle Database 11gR2 Enterprise Edition, the database installed from the DBCA template will have more than 800 data dictionary views available. We will present the data dictionary views that we need in our recipes when we have to query them.

Dynamic performance views are also owned by the user SYS; they are synonyms for the V_$* views. Those views are based on X$ tables, which are undocumented structures populated at instance start-up. Data dictionary views contain two kinds of data, namely, fields that store information on the characteristics of the object, and other fields that collect information dynamically from object usage.

For example, in the DBA_TABLES there are fields about the physical structure of the table (such as TABLESPACE_NAME, PCT_FREE, INITIAL_EXTENT) and other fields which expose statistics on the table contents (such as NUM_ROWS, AVG_SPACE, AVG_ROW_LEN).
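For example, once statistics have been gathered, we could take a quick look at these fields with a query like the following (a sketch, assuming a user with access to the DBA views):

SELECT table_name, tablespace_name, pct_free, num_rows, avg_row_len
  FROM dba_tables
 WHERE owner = 'HR'
 ORDER BY table_name;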

To collect this statistical data, we have to execute the ANALYZE statement. For a table, we will execute the following statement:

ANALYZE TABLE hr.employees COMPUTE STATISTICS;

To speed up and automate the analysis of many objects, we can use DBMS_UTILITY.analyze_schema or DBMS_UTILITY.analyze_database to analyze all the objects in a schema in the first case, or in the database in the latter. To analyze the objects of the HR schema, we will execute the following statement:

EXEC DBMS_UTILITY.analyze_schema('HR','COMPUTE');

Tip

For both the ANALYZE command and the DBMS_UTILITY functions, we have two choices, which are either to compute the statistics or to estimate these values based on the analysis of a restricted set of data. When ESTIMATE is chosen, we have to specify the number of rows to use for the sample or a percentage.
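As a sketch of the ESTIMATE variant (the sample size used here is arbitrary):

ANALYZE TABLE hr.employees ESTIMATE STATISTICS SAMPLE 20 PERCENT;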

Oracle advises us to use another method to compute statistics, namely the DBMS_STATS package, which allows deleting statistics, exporting, importing, and gathering statistics in parallel. The following statement analyzes the HR schema:

EXEC DBMS_STATS.gather_schema_stats('HR');

Note

ANALYZE and the use of DBMS_UTILITY illustrated earlier are supported for backward compatibility only; use the package DBMS_STATS to collect statistics.

Similarly, we can gather statistics on a table, an index, or the whole database. With DBMS_STATS, too, we can use the ESTIMATE method, as in the first of the following examples:

EXEC DBMS_STATS.gather_database_stats(estimate_percent => 20);
EXEC DBMS_STATS.gather_table_stats('HR', 'EMPLOYEES');
EXEC DBMS_STATS.gather_index_stats('HR', 'EMP_JOB_IX');

Using the DBMS_STATS package we can also delete statistics, as shown:

EXEC DBMS_STATS.delete_table_stats('HR', 'EMPLOYEES');

To transfer statistics between different databases, we have to use a statistics table, as shown in the following steps:

  1. Create the statistics table on the source database.

  2. Export the statistics from the data dictionary to the statistics table.

  3. Move the statistics table (Export/Import, Datapump, Copy) to the target database.

  4. Import the statistics from the statistics table to the data dictionary.

  5. Drop the statistics table.

The corresponding statements to execute on the source database are as follows:

EXEC DBMS_STATS.create_stat_table('DBA_SCHEMA', 'MY_STAT_TABLE');
EXEC DBMS_STATS.export_schema_stats('APP_SCHEMA', 'MY_STAT_TABLE', NULL, 'DBA_SCHEMA');

With these statements we have created the statistics table MY_STAT_TABLE in the DBA_SCHEMA and populated it with data from the APP_SCHEMA (for example, HR).

Then we transfer the MY_STAT_TABLE to the target database; using the export/import command line utilities, we export the table from the source database and then import it into the target database, where we execute the following statements:

EXEC DBMS_STATS.import_schema_stats('APP_SCHEMA', 'MY_STAT_TABLE', NULL, 'DBA_SCHEMA');
EXEC DBMS_STATS.drop_stat_table('DBA_SCHEMA', 'MY_STAT_TABLE');

In the example, we have transferred statistics about the entire schema APP_SCHEMA. We can choose to transfer statistics for the entire database, a table, an index, or a column, using the corresponding import_* and export_* procedures of the DBMS_STATS package.
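Step 3 can be performed, for example, with Data Pump (expdp on the source database, impdp on the target); the following is only a sketch, and the directory, dump file name, and credentials are assumptions:

$ expdp system/password tables=DBA_SCHEMA.MY_STAT_TABLE directory=DATA_PUMP_DIR dumpfile=stats.dmp
$ impdp system/password tables=DBA_SCHEMA.MY_STAT_TABLE directory=DATA_PUMP_DIR dumpfile=stats.dmp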

There's more...

The COMPUTE STATISTICS and ESTIMATE STATISTICS clauses of the ANALYZE command are supported by Oracle only for backward compatibility. However, there are other functionalities of the command that allow us to validate the structure of a table, index, cluster, or materialized view, or to list the chained or migrated rows:

ANALYZE TABLE employees VALIDATE STRUCTURE;
ANALYZE TABLE employees LIST CHAINED ROWS INTO CHAINED_ROWS;

The first statement validates the structure of the EMPLOYEES table, while the second command lists the chained rows of the same table into the CHAINED_ROWS table (created with the script utlchain.sql or utlchn1.sql).

See also

  • Avoiding row chaining in Chapter 3, Optimizing Storage Structures

Analyzing data using Statspack reports


Statspack was first introduced in Oracle Database 8i R8.1.6. We shall now look at how to use this tool.

Getting ready

To use Statspack, we have to set up a tablespace to store its structures; if we don't, during the installation process we have to choose an already existing tablespace—SYSAUX is proposed by default. To create the tablespace, we will use the following command (with the necessary change in the datafile parameter, according to the platform used and the database location):

CREATE TABLESPACE statspack
DATAFILE '/u01/oracle/db/STATSPACK.DBF' SIZE 200 M REUSE
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 512K
SEGMENT SPACE MANAGEMENT AUTO PERMANENT ONLINE;

To collect timing information in the dynamic performance views, we have to set the parameter TIMED_STATISTICS=TRUE, as shown in the recipe about dynamic performance views.

How to do it...

Follow these steps to make use of the Statspack tool:

  1. Connect to the database as a user with the SYSDBA privilege and run the spcreate.sql script from the $ORACLE_HOME/rdbms/admin directory. This script will ask for a password to assign to the PERFSTAT user.

  2. We will then be asked for the tablespace to use. Select the previously created tablespace by entering its name (STATSPACK). When the script asks for the temporary tablespace just press Enter to use the default temporary tablespace defined in the system.

  3. The script will create the user PERFSTAT, identified by the password provided, and all the objects needed to run the tool.

    After the tool is created, we can collect statistics by executing the following procedure:

    EXEC STATSPACK.snap;
    

    With this simple command, we have created a snapshot in the Statspack table.

  4. When we have at least two snapshots, we can create a report using a pair of them. To do so, we will execute the spreport.sql script.

    The script will show us the completed snapshots and we will be asked for the ID of the two which we want to compare.

  5. The script will ask for the name to give to the report—the default will be sp_id1_id2, where id1 and id2 are the beginning and ending snapshots chosen in the previous step.

At the end of the process, we will find our Statspack report.

How it works...

The spcreate.sql script internally launches the spcusr.sql, spctab.sql, and spcpkg.sql scripts. For each script, after execution, we will find a corresponding file with the extension changed to .lis containing the spool of the actions performed. In case anything goes wrong, we can launch the spdrop.sql script to roll back the actions performed by spcreate.sql.

A Statspack snapshot contains information from the dynamic performance views. As these views are emptied at database start-up, it makes no sense to produce Statspack performance reports using snapshots taken before and after a database shutdown.

The tables used to collect the data have names starting with STATS$, and are based on the corresponding V$ dynamic performance views. For example, the table STATS$DB_CACHE_ADVICE has the same columns as the view V$DB_CACHE_ADVICE, with three columns added in front of them (SNAP_ID, DBID, and INSTANCE_NUMBER), which identify the snapshot, the database, and the instance respectively.
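To see which snapshots are available (for example, before running spreport.sql), we can query the snapshot table directly; a minimal sketch, connected as PERFSTAT:

SELECT snap_id, snap_time
  FROM perfstat.stats$snapshot
 ORDER BY snap_id;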

Tip

If you want to use Statspack in an Oracle Real Application Cluster (RAC) environment, you have to launch STATSPACK.snap connecting to every instance you want to gather data from.

The report is divided into several sections:

  • General information about the database instance and the snapshots used

  • Cache sizes (buffer cache, shared pool, and log buffer)

  • Load profile (instance events per second and per transaction)

  • Instance efficiency indicators (buffer cache and shared pool statistics)

  • Top five timed events, showing the first five events sorted by total wait time in seconds

  • Host CPU and Instance CPU, showing the load on the CPU

  • Virtual Memory Paging and Memory Statistics

  • Wait events, foreground, background, and both foreground and background grouped together

  • SQL ordered by different criteria, by CPU, by elapsed time for DB, by gets, by executions, by parse calls, by sharable memory, by version count

  • Instance activity statistics

  • Tablespace and file I/O

  • Memory, buffer pool, and PGA statistics

  • Latch activity

  • Dictionary cache statistics

  • Library cache activity

  • SGA activity

  • init.ora parameters

There's more...

We can configure Statspack to collect different amounts of data and to produce a report on a specific SQL statement; we will also see how to automate snapshot collection.

Collecting different amounts of data

We can configure Statspack to collect more or less data. The LEVEL parameter can be used to instruct the tool about the kind of information we want to store in the snapshot. The following table summarizes the available levels (the default level is 5):

Level  Description
-----  -----------
0      General performance statistics
5      Additional data: High resource usage SQL statements
6      Additional data: SQL Plans and SQL Plan usage information for high resource usage SQL statements
7      Additional data: Segment level statistics including logical and physical reads, row locks, and so on
10     Additional statistics: Parent and Child latches

We can use a different level parameter for a single snapshot, passing the corresponding level to the STATSPACK.snap procedure:

EXEC STATSPACK.snap(i_snap_level=>10);

If we want our selection made permanent for subsequent snapshots, we add another parameter to the procedure:

EXEC STATSPACK.snap(i_snap_level=>6, i_modify_parameter=>'true');

If we want to change the level of the snapshots without taking one, we will use the following statement:

EXECUTE STATSPACK.modify_statspack_parameter(i_snap_level=>6);

Producing a report on a specific SQL

Statspack provides another script, sprepsql.sql, which allows us to elaborate a more detailed report on a specific SQL statement.

If we find a statement in the Statspack report that we want to investigate more deeply, we can launch this script, indicating the beginning and ending snapshots, and the "Old Hash Value" (a legacy of pre-10g releases) of the SQL statement on which we want to elaborate the report.

Suppose that in our Statspack report (produced between the snapshots identified by 2 and 3) there is a row in the SQL ordered by CPU section similar to the one shown in the following screenshot:

If we want to investigate the related statement, we can launch the sprepsql.sql script and indicate ID 2 as the beginning snapshot, ID 3 as the ending snapshot, and 3787177051 as the Old Hash Value.

The script will ask for the filename and will then produce a detailed report for the statement analyzed.

Automating snapshot generation

We can automate snapshot generation in various ways. Besides using a Unix cron job or a Windows Scheduled Task, we can instruct the database to capture the snapshots with a simple job. There is the spauto.sql script in the $ORACLE_HOME/rdbms/admin directory to set up an hourly snapshot. The script uses DBMS_JOB to schedule the snapshots.
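The following is a rough sketch of what such a job looks like (spauto.sql does essentially this, with additional instance handling):

VARIABLE jobno NUMBER;
BEGIN
  -- Take a Statspack snapshot at the top of every hour
  DBMS_JOB.submit(:jobno,
                  'statspack.snap;',
                  TRUNC(SYSDATE + 1/24, 'HH'),
                  'TRUNC(SYSDATE + 1/24, ''HH'')');
  COMMIT;
END;
/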

Statspack maintenance

We can purge snapshots that are no longer needed with the spurge.sql script, indicating the IDs of the first and last snapshots to delete. Before deleting the data, we may want to export the PERFSTAT schema.

The sptrunc.sql script, instead, deletes all the data collected. All the scripts are in the $ORACLE_HOME/rdbms/admin directory.

To completely uninstall Statspack, there is the already mentioned spdrop.sql script, which has to be executed with SYSDBA privileges.

Diagnosing performance issues using the alert log


To diagnose certain performance issues, the alert log, too, can be used successfully.

Getting ready

There are some parameters to look at in the init.ora file of our database instance.

The parameter BACKGROUND_DUMP_DEST indicates the directory in which the alert log is located. If the parameter LOG_CHECKPOINTS_TO_ALERT is set to TRUE, we will also find checkpoint information in the alert log. By default, this parameter is set to FALSE.

Before starting, we can issue the following commands:

ALTER SYSTEM SET LOG_CHECKPOINTS_TO_ALERT = TRUE;
SHOW PARAMETER BACKGROUND_DUMP_DEST

This writes checkpoint information to the alert log and shows the directory in which we will find the alert log file (named alert_<instance_name>.log).
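On 11g, where the alert log is stored in the Automatic Diagnostic Repository, the same locations can also be obtained with a query like this (a sketch):

-- 'Diag Trace' holds the text alert log, 'Diag Alert' the XML version
SELECT name, value FROM v$diag_info
 WHERE name IN ('Diag Trace', 'Diag Alert');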

How to do it...

The following steps will demonstrate how to use the alert log:

  1. In the alert log, we can find information like the following:

    Sun Sep 19 12:25:26 2010
    Thread 1 advanced to log sequence 5 (LGWR switch)
    Current log# 2 seq# 5 mem# 0: D:\APP\ORACLE\ORADATA\TESTDB\REDO02.LOG

    This informs us of a log-switch.

  2. We can then verify the time between log switches.

    If we have set the parameter LOG_CHECKPOINTS_TO_ALERT to TRUE, we will also see lines like these in the alert log:

    Sat Sep 25 20:18:01 2010
    Beginning global checkpoint up to RBA [0x16.fd.10], SCN: 1296271
    Completed checkpoint up to RBA [0x16.fd.10], SCN: 1296271

    Then we can calculate checkpoint performance.

How it works...

The database writes information on the alert log about log switches and checkpoints. We can inspect the alert log to diagnose a possible problem with log files.

There's more...

We can force a log switch by using the following command:

ALTER SYSTEM SWITCH LOGFILE;

A checkpoint can be forced by using the following statement:

ALTER SYSTEM CHECKPOINT;

See also

  • We will see the use of SQL_TRACE and TKPROF to generate trace files and the corresponding report over SQL activity of a particular session in Tracing SQL activity with SQL Trace and TKPROF in Chapter 4, Optimizing SQL Code

Analyzing data using Automatic Workload Repository (AWR)


With Oracle Database 10g, Automatic Workload Repository (AWR) was introduced. It is a tool that extends the key concepts of Statspack.

In this recipe, we will create a manual snapshot, a baseline, and some reports.

Getting ready

To use AWR, the STATISTICS_LEVEL parameter of the init.ora file must be set to the value TYPICAL or ALL.

Note

With the default setting of TYPICAL, all the statistics needed for the self-management functionalities are collected, providing the best overall performance. With the value ALL, the database collects all the statistics included in the TYPICAL setting, plus timed operating system statistics and row source execution statistics.

We can change the parameter online, without shutting down the database, with the following statement:

ALTER SYSTEM SET STATISTICS_LEVEL = TYPICAL;

How to do it...

The following steps demonstrate use of AWR:

  1. To make a manual snapshot using AWR, we use the following stored procedure:

    EXEC DBMS_WORKLOAD_REPOSITORY.create_snapshot();
    

    With the default settings in place, AWR creates a snapshot every hour, and the data collected are stored for seven days.

  2. To modify the interval or the grace period of the snapshots, we can use the modify_snapshot_settings procedure, as shown:

    EXEC DBMS_WORKLOAD_REPOSITORY.modify_snapshot_settings(interval => 30);
    EXEC DBMS_WORKLOAD_REPOSITORY.modify_snapshot_settings(retention => 21600);
    
  3. In AWR, we can also create a baseline to compare performance. A baseline is a set of snapshots which is retained so it can be compared with the same kind of data collected in the future.

    We could have, for example, a baseline for the daily transactional work and a baseline for a batch job or a peak (quarter end). We can define a baseline indicating the start and end snapshots to be used, and we can name it:

    EXEC DBMS_WORKLOAD_REPOSITORY.create_baseline(Start_snap_id => 1, end_snap_id => 11, baseline_name => 'Friday off-peak');
    
  4. To generate a report, we will use the awrrpt.sql script, located in the $ORACLE_HOME/rdbms/admin folder. The script will ask to choose the output format (text or HTML) and the number of days to use to filter the snapshots.

  5. Then we will be presented with the list of snapshots, according to the parameters chosen in the previous step, and asked for the first and the last snapshot to be used (the available snapshots can also be listed with a query, as sketched below). The last question is about the name of the file to write the output to. The report generated is very similar to the Statspack report.
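    A minimal sketch of such a query against the AWR repository:

    SELECT snap_id, begin_interval_time, end_interval_time
      FROM dba_hist_snapshot
     ORDER BY snap_id;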

How it works...

As with Statspack, AWR collects data and statistics from the database and stores them in tables. AWR also introduces the concept of a baseline.

The baselines can be fixed, moving window, or templates. The baseline we defined in the previous example is fixed, because it corresponds to a specific time period in the past. The moving window baseline corresponds to the AWR data within the entire retention period, and it's useful when used with adaptive thresholds. Baseline templates, instead, are created for a future time period, and can be single or repeating.

In the first statement of step 2, we have set the interval between snapshots to 30 minutes; in the second statement the retention period of the snapshots collected is set to 21600 minutes, which corresponds to 15 days.

The adaptive thresholds just mentioned allow the thresholds of a performance metric to be adapted according to the workload of the system, eliminating false alerts. From Oracle 11g, adaptive thresholds are adjusted automatically based on different workload patterns (for example, a system used for OLTP in the daytime and for batch jobs at night) recognized by the database.

We have created a report in the previous example by using the awrrpt.sql script. There are other reports available, generated by a corresponding script in the same folder; for example, awrrpti.sql is the same as awrrpt.sql, but for a specific database instance. awrsqrpt.sql generates a report for a particular SQL statement, like the script sprepsql.sql for Statspack. The corresponding script awrsqrpti.sql prepares the same report for a specific database instance.

There are also compare period reports, which allow us to compare not two snapshots but two AWR reports. If we have a database which performs well in a certain period, and we experience poor performance in another period, we can produce two reports for the first and the second period, and then compare them to point out the differences and try to identify the issue.

For example, in step 3, we created a baseline based on the snapshots with IDs from 1 to 11, and named it "Friday off-peak".

The timespan of the two reports we are comparing isn't important, because AWR normalizes the data according to the different timeframes.

Compare period reports can be launched from Oracle Enterprise Manager or using the script awrddrpt.sql (the script awrddrpti.sql to concentrate the result on a single instance).

There's more...

We can specify the adaptive thresholds as a percentage of the maximum value observed in the moving window baseline, or as a statistical percentile, ranging from 0.95 to 0.9999—from five observations expected to exceed the value in 100 to 1 observation in 10,000.

Analyzing data using Automatic Database Diagnostic Monitor (ADDM)


In this recipe, we present the Automatic Database Diagnostic Monitor, a tool which analyzes the data collected by AWR to diagnose the cause of a performance problem, providing advice on how to solve the issue.

Getting ready

ADDM is enabled by default in Oracle Database 11g; it depends upon two configuration parameters of the init.ora file, STATISTICS_LEVEL and CONTROL_MANAGEMENT_PACK_ACCESS. The value should be TYPICAL or ALL for the former and DIAGNOSTIC or DIAGNOSTIC+TUNING for the latter. To show the current parameter values, we can use the following statements:

SHOW PARAMETER STATISTICS_LEVEL
SHOW PARAMETER CONTROL_MANAGEMENT_PACK_ACCESS

To set the parameters, we can use the following commands:

ALTER SYSTEM SET STATISTICS_LEVEL = TYPICAL;
ALTER SYSTEM SET CONTROL_MANAGEMENT_PACK_ACCESS = 'DIAGNOSTIC+TUNING';

We are now ready to diagnose a problem using ADDM.

How to do it...

The following steps will demonstrate how to use ADDM:

  1. To run ADDM in Database mode (all instances of the database will be analyzed), we will use the following statement, where the parameters 3 and 5 are the IDs of the beginning and ending snapshots to be used:

    VAR task_name VARCHAR2(30);
    BEGIN
      :task_name := 'Report for 3 to 5';
      DBMS_ADDM.ANALYZE_DB (:task_name, 3, 5);
    END;
    
  2. To run the ADDM in Instance mode (a single instance of the database will be analyzed), we will use the following statement:

    VAR task_name VARCHAR2(30);
    BEGIN
      :task_name := 'Report for 3 to 5 inst. 1';
      DBMS_ADDM.ANALYZE_INST (:task_name, 3, 5, 1);
    END;
    
  3. To run the ADDM in Partial mode (a subset of all database instances will be analyzed), we will use the following statement:

    VAR task_name VARCHAR2(30);
    BEGIN
      :task_name := 'Custom for 3 to 5 inst. 1,2';
      DBMS_ADDM.ANALYZE_PARTIAL (:task_name, '1,2', 3, 5);
    END;
    
  4. To view the results, we will call the DBMS_ADDM.GET_REPORT function, passing the name of the task used to generate the report:

    SELECT DBMS_ADDM.get_report('Report for 3 to 5') FROM DUAL;
    SELECT DBMS_ADDM.get_report('Report for 3 to 5 inst. 1') FROM DUAL;
    SELECT DBMS_ADDM.get_report('Custom for 3 to 5 inst. 1,2') FROM DUAL;
    

    Each line in the previous code will display the corresponding ADDM report.
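    Since GET_REPORT returns a CLOB, it may be necessary to enlarge the SQL*Plus display limits before running the queries above; a minimal sketch:

    SET LONG 1000000
    SET PAGESIZE 0
    SELECT DBMS_ADDM.get_report('Report for 3 to 5') FROM DUAL;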

How it works...

Automatic Database Diagnostic Monitor runs automatically every time a new snapshot is taken by AWR (by default every hour), and the corresponding report is built comparing the last two snapshots available, so we have an ADDM report every hour.

With the statements presented, we can run a report between any two snapshots to identify possible problems. For a Real Application Clusters configuration, the reports can be built with three analysis modes: database, instance, and partial. In non-RAC databases, only instance analysis is possible because there is only one instance of the database.

We can see the reports in SQL*Plus using the DBMS_ADDM.GET_REPORT function, which returns a CLOB containing the report (formatted to 80 columns), or we can use Oracle Enterprise Manager to view the reports generated in either automatic or manual mode. In OEM, we can view ADDM findings on the home page in the Diagnostic Summary information. We can choose Advisor Central at the bottom of the page to see a list of the available ADDM reports, as shown in the following screenshot:

Clicking on the name link in the previous list we can view the corresponding report; in the following screenshot, we can see an example of an ADDM report viewed through OEM:

There's more...

The parameter DBIO_EXPECTED influences the ADDM analysis of I/O performance, because it describes the expected I/O subsystem performance, measuring the average time needed to read a single database block. The default value of the parameter is 10 milliseconds, corresponding to the average time of common hard disks. Please note that this measure includes the seek time.

If our I/O subsystem is significantly slower or faster, we may end up with false alerts or no alerts at all. We can adjust the parameter by issuing the following statement:

EXEC DBMS_ADVISOR.SET_DEFAULT_TASK_PARAMETER('ADDM', 'DBIO_EXPECTED', 12000);

The numeric value is the time expressed in microseconds.
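To check the value currently in use, a sketch of a query against the advisor dictionary views:

SELECT parameter_value
  FROM dba_advisor_def_parameters
 WHERE advisor_name = 'ADDM'
   AND parameter_name = 'DBIO_EXPECTED';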

See also

  • Analyzing data using Automatic Workload Repository (AWR) in this chapter

A working example


In this recipe we will present a simple example of a performance tuning session, applying the recipes seen earlier.

Getting ready

The example is based on the SH schema. Be sure Statspack is installed, as presented in an earlier recipe.

How to do it...

The following steps demonstrate a simple example using the SH schema:

  1. We assume the user PERFSTAT with the password PERFSTAT and the user SH with the password SH. The TESTDB database is the default instance.

  2. Launch SQL*Plus and connect to the SH schema:

    $ sqlplus SH/SH
    
  3. Create the package Chapter1:

    CREATE OR REPLACE PACKAGE Chapter1 AS
      PROCEDURE Workload;
      PROCEDURE Foo(CUSTID IN sh.sales.cust_id%TYPE); 
    END;
    /
    
    CREATE OR REPLACE PACKAGE BODY Chapter1 AS
      PROCEDURE Workload IS
      BEGIN
       FOR i in 1 .. 50000
       LOOP
        Foo(i);
       END LOOP;
      END Workload;
      PROCEDURE Foo(CUSTID IN sh.sales.cust_id%TYPE) IS
      BEGIN
       DECLARE
        l_stmt VARCHAR2(2000);
       BEGIN
        l_stmt := 'SELECT * FROM sh.sales s WHERE s.cust_id = ' || TO_CHAR(CUSTID);
        EXECUTE IMMEDIATE l_stmt;
       END;
      END Foo;
    END;
    /
    
  4. Now we create the initial snapshot:

    CONNECT PERFSTAT/PERFSTAT
    EXEC statspack.snap;
    
  5. Execute the test workload:

    CONNECT SH/SH
    EXEC Chapter1.Workload;
    
  6. Now we can take the final snapshot:

    CONNECT PERFSTAT/PERFSTAT
    EXEC statspack.snap;
    
  7. Finally we can launch the report creation:

    SQL>@?/RDBMS/ADMIN/SPREPORT.SQL
    
  8. When asked, select the last two snapshots created to produce the Chapter1.lst report (naming the report accordingly).

How it works...

In this simple example, the stored procedure Foo inside the package Chapter1 is executed 50,000 times to query the SALES table. We have not used bind variables, and the Statspack report reflects this performance issue:

In the highlighted section of the Statspack report, we can see that only 2.92 percent of parses have been "soft", because the cursor_sharing parameter is set to EXACT and we are not using bind variables.

There's more...

To solve this issue, we can:

  • Change the CURSOR_SHARING parameter to SIMILAR

  • Recode the Foo procedure, introducing bind variables

In the first case, we have to execute the following statement:

ALTER SYSTEM SET CURSOR_SHARING = SIMILAR SCOPE=MEMORY;

Now we can recreate the snapshots:

CONNECT PERFSTAT/PERFSTAT
EXEC statspack.snap;
CONNECT SH/SH
EXEC Chapter1.Workload;
CONNECT PERFSTAT/PERFSTAT
EXEC statspack.snap;

And finally, we launch the report creation:

SQL>@?/RDBMS/ADMIN/SPREPORT.SQL

The newly created report presents a significant change:

Now the Soft Parse is 97.84 percent.

We can recode the procedure as well; let's roll back the change to CURSOR_SHARING:

ALTER SYSTEM SET CURSOR_SHARING=EXACT SCOPE = MEMORY;

And let's alter the Foo procedure:

CREATE OR REPLACE PACKAGE BODY Chapter1 AS
  PROCEDURE Workload IS
  BEGIN
   FOR i in 1 .. 50000
   LOOP
    Foo(i);
   END LOOP;
  END Workload;
 
  PROCEDURE Foo(CUSTID IN sh.sales.cust_id%TYPE) IS
  BEGIN
   DECLARE
    l_stmt VARCHAR2(2000);
   BEGIN
    l_stmt := 'SELECT * FROM sh.sales s WHERE s.cust_id = :p_cust_id';
    EXECUTE IMMEDIATE l_stmt USING CUSTID;
   END;
  END Foo;
END;
/

Let's launch the snapshots and the report:

CONNECT PERFSTAT/PERFSTAT
EXEC statspack.snap;
CONNECT SH/SH
EXEC Chapter1.Workload;
CONNECT PERFSTAT/PERFSTAT
EXEC statspack.snap;
SQL>@?/RDBMS/ADMIN/SPREPORT.SQL

The newly created report presents a result similar to the previous execution:

There is now a Soft Parse of 99.20 percent.

In this simple example, we have seen how to diagnose a simple problem using Statspack; as an exercise, try to use the other tools presented using the same test case.

Tip

To use AWR and ADDM take a manual snapshot before and after running the Workload procedure.
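For AWR, for example, the manual snapshots could wrap the workload as in the following sketch (privileges permitting; the connections mirror the Statspack example above):

CONNECT / AS SYSDBA
EXEC DBMS_WORKLOAD_REPOSITORY.create_snapshot();
CONNECT SH/SH
EXEC Chapter1.Workload;
CONNECT / AS SYSDBA
EXEC DBMS_WORKLOAD_REPOSITORY.create_snapshot();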

See also

  • Using bind variables in Chapter 4, Optimizing SQL Code

  • Minimizing latches using bind variables and Tuning resources to minimize latch contention in Chapter 11, Tuning Contention

Key benefits

  • Learn the right techniques to achieve best performance from the Oracle Database
  • Avoid common myths and pitfalls that slow down the database
  • Diagnose problems when they arise and employ tricks to prevent them
  • Explore various aspects that affect performance, from application design to system tuning

Description

Oracle's Database offers great performance, scalability, and many features for DBAs and developers. Due to a wide choice of technologies, successful applications are good candidates to run into performance issues and when a problem arises it's very difficult to identify the cause and the right solution to the problem. The Oracle Database 11g R2 Performance Tuning Cookbook helps DBAs and developers to understand every aspect of Oracle Database that can affect performance. You will be guided through implementing the correct solution in a proactive way before problems arise, and how to diagnose issues on your Oracle database-based solutions. This fast-paced book offers solutions starting from application design and development, through the implementation of well-performing applications, to the details of deployment and delivering best-performance databases. With this book you will quickly learn to apply the right methodology to tune the performance of an Oracle Database, and to optimize application design and SQL and PL/SQL code. By following the real-world examples you will see how to store your data in correct structures and access and manipulate them at a lightning speed. You will learn to speed up sort operations, hack the optimizer and the data loading process, and diagnose and tune memory, I/O, and contention issues. The purpose of this cookbook is to provide concise recipes, which will help you to build and maintain a very high-speed Oracle Database environment.

Who is this book for?

This book is aimed at software developers, software and data architects, and DBAs who are beginning to use Oracle Database and want to solve performance problems faster and in a rigorous way. If you are an architect who wants to design fast-performing applications, a DBA who is keen to dig into the causes of performance issues, or a developer who wants to learn why and where the application is running slowly, this book will provide a good start for your career in performance tuning.

What you will learn

  • Design applications that run at lightning speed
  • Implement fast and scalable SQL and PL/SQL code
  • Choose the correct structures to store the data and access them
  • Optimize sort operations, such as order-by, Top-N queries, ranking, and set operators
  • Help the optimizer to choose the right access plan to retrieve data at the best available speed
  • Load data into the database faster by using the correct tools and options
  • Tune the database memory to obtain maximum performance using available resources
  • Tune I/O operations by designing the database around the I/O system
  • Tune and reduce contention issues on data and structures by using an optimal design

Product Details

Publication date: Jan 20, 2012
Length: 542 pages
Edition: 1st
Language: English
ISBN-13: 9781849682619
Vendor: Oracle




Table of Contents

11 Chapters

  1. Starting with Performance Tuning
  2. Optimizing Application Design
  3. Optimizing Storage Structures
  4. Optimizing SQL Code
  5. Optimizing Sort Operations
  6. Optimizing PL/SQL Code
  7. Improving the Oracle Optimizer
  8. Other Optimizations
  9. Tuning Memory
  10. Tuning I/O
  11. Tuning Contention

Customer reviews

Rating distribution
4 out of 5 stars (4 ratings)
5 star 50%
4 star 25%
3 star 0%
2 star 25%
1 star 0%
Joseph Opoku, Jun 15, 2013 (5 out of 5 stars)
This is one of the best Oracle Performance book I have read. The recipes are simple and easy to understand.
Amazon Verified review
Bugs Bunny, Feb 26, 2019 (5 out of 5 stars)
Good read
Amazon Verified review
S. Sutton, Mar 14, 2012 (4 out of 5 stars)
This will help you with the "what", as in "what should we do?", and it does a pretty good job at providing examples for tuning queries, indexes, and the database as a whole. Unfortunately for me, it's completely lacking the "why", as in "why did whatever they had me do just speed things up?". It's great if you just want to bang out some performance tuning, it's not so great if you want to learn from that experience and pro-actively apply it to future applications.
Amazon Verified review
Charles Hooper, Mar 04, 2012 (2 out of 5 stars)
I ordered the "Oracle Database 11gR2 Performance Cookbook" book shortly after it became available for purchase. I was very curious to see how the book compared with the similarly titled "Oracle Database 11g Performance Tuning Recipes" book, as well as some of the other Oracle Database performance books that are on the market. Packt is a fairly new book publisher, and this book marks the first Packt book in my collection.The author of this book does not appear to be widely known in the international Oracle Database community, although it does appear that the author is an active reviewer of SQL Server and programming books on an Italian programming focused website. The author's LinkedIn page indicates that he obtained OCA and OCP certification in 2002 and 2003, respectively, has a variety of programming experience, and currently is an IT Manager.One important characteristic of this book that is missing from some of the other Oracle Database performance focused books on the market is the extensive use of test case scripts throughout most of the book that allow the reader to reproduce the performance changes mentioned in the book, in the reader's Oracle Database environments. The test case scripts, related screen captures, and author's explanations of the results are both a blessing and a curse for this book. It appears that the author used a single Amazon Elastic Compute Cloud hosted database instance with only one set of instance parameters and system statistics for the various test case results and the author's descriptions of the expected outcome when the inputs in the test case script are provided. Had the author re-executed the test case scripts in another Oracle Database environment, the author probably would have written quite differently the explanations that follow the test case scripts. It is not uncommon for 80% of some of the book pages to be consumed by one or two SQL*Plus screen captures; combined with the slightly larger font sizes, double-spacing between paragraphs, and apparent one and a half spacing between lines in code sections, the technical content in the book is a bit more limited than the page count might suggest.So, how well did the book's contents meet the level of expectations provided by the book's front cover and the publisher's description of the book? One of the bullet pointed descriptions of the book reads, "Avoid common myths and pitfalls that slow down the database." Unfortunately, the book reintroduces several myths and inaccurate conclusions about Oracle Database that have diminished in frequency during the last 10+ years. Some of the information in the book is of good quality. However, the significant number of inaccurate, vague, misleading, and/or over-generalized facts in this book suggests that the author of this book may have not received sufficient guidance from Packt and the four technical reviewers of the book. The book publisher's site currently lists no errata for the book, even though I personally submitted 21 errata items to the publisher's errata reporting site.The author's native language is obviously not English, so it is probably to be expected that some of the sentences in the book are incomprehensible. Yet, there are also sentences in the book that use completely different phrasing, close to that of a person who double-majored in English and computer science with a focus on Oracle Database. 
The consistent usage of the term "fields" in some sections of the book, with the consistent usage of the term "columns" in other sections of the book is but one example of the style shift that is present in the book. Some of the sentences found in the book are oddly familiar, and although I was not able to identify the original sources of all of the oddly familiar sentences, I did manage to locate a few. What constitutes plagiarism in an Oracle Database book, and how much change is required to the original material to avoid the plagiarism label? Would slightly reformatting a section of text to replace dashes with colons be sufficient to avoid the label? Would changing the order of some sentences and eliminating other sentences be sufficient to avoid the label? Would performing simple word substitutions here and there, or shortening sentences be sufficient to avoid the label? I am not suggesting that there is rampant plagiarism in the book, but one does need to question when that plateau is reached in a book about Oracle Database.While in some respects this book is more useful to the reader than the "Oracle Database 11g Performance Tuning Recipes" book due to the inclusion of test cases, both books seem to omit the reasoning behind why and when someone might consider performing the 80 or so tasks/recipes mentioned in the books. Vague, inaccurate, over-generalized, and out of date descriptions of Oracle Database behavior are limiting factors of both books. This review is quite long, and likely will not appear in full on Amazon - see my blog for the full review.Data Dictionary Views:* DBA_VIEWS (page 20)* V$FIXED_TABLE (page 21)* V$LIBRARYCACHE (page 52)* V$STATNAME, V$MYSTAT (page 53)* SYS.SEQ$ (page 65)* DBA_MVIEWS, USER_MVIEWS, ALL_MVIEWS (page 69)* INDEX_STATS (pages 127, 128)* V$SYSSTAT (page 160)* V$SESSION (page 205)Parameters:* CURSOR_SHARING (pages 9, 38)* TIMED_STATISTICS (pages 20, 201)* LOG_CHECKPOINTS_TO_ALERT, BACKGROUND_DUMP_DEST (page 28)* STATISTICS_LEVEL (pages 29, 32)* CONTROL_MANAGEMENT_PACK_ACCESS (page 32)* QUERY_REWRITE_ENABLED, QUERY_REWRITE_INTEGRITY (page 70)* DB_16K_CACHE_SIZE (page 84)* MAX_DUMP_FILE_SIZE, TRACEFILE_IDENTIFIER (page 201)* SQL_TRACE (page 202)Hints:* APPEND (page 72)* INDEX (page 121)Comments, Corrections, and Problems:* The book states, "The first rule in writing applications which connect to an Oracle Database is to always use bind variables, which means not to include parameters in SQL statements as literals." The statement should be clarified that this is a general recommendation. There are times when literals should be used rather than bind variables, for instance if there are very popular and unpopular values in a column, it might be wise to prevent the sharing of execution plans when a very popular or very unpopular value is used in the WHERE clause. A correction/clarification is provided on page 51 (page 8).* Steps for creating a database with the Oracle Database Configuration Assistant seem to be out of place in a performance tuning book (pages 17-19)* Uses the term "fields" where the term "columns" should be used (page 21).* The book demonstrates the use of ANALYZE TABLE ... COMPUTE STATISTICS, and DBMS_UTILITY.ANALYZE_SCHEMA to collect object statistics. 
The book states that ANALYZE is retained for backward compatibility, but the book provides no warning that using ANALYZE to collect statistics could be problematic since the release of Oracle Database 8.1 (reference page 21).* The book uses the word "elaborate" rather than "create" or "generate" (pages 24, 26, 27, 31, 37)* The book demonstrates the use of AWR without first mentioning the licensing requirements of that feature (pages 30-31).* Word substitution error: "... and we experiment a lack of performance in another period, we can elaborate two reports..." (page 31)* The book demonstrates the use of ADDM without first mentioning the licensing requirements of that feature. The book also states, "ADDM is enabled by default in Oracle Database 11g; it depends on two configuration parameters..." Unlike with Oracle Database 10.1 and 10.2, ADDM is not enabled by default in the Standard Edition of Oracle Database 11.1 or 11.2, nor can it be legally enabled on the Standard Edition. While ADDM is enabled by default in the Enterprise Edition 11.1 and 11.2, it cannot be legally used without a Diagnostic Pack license (pages 32-35).* The book suggests the system-wide use of the deprecated SIMILAR value for the CURSOR_SHARING parameter as one of two solutions to address a hard parsing problem in a test case script (page 38).* The book states, "Now the Soft Parse is 97.84 percent." The output shown in the book actually indicates a Soft Parse percent of 99.20. The instance efficiency numbers in the output are identical to those found on page 40, so this might be an indication of a copy-paste error (page 39).* The book states, "If the PreparedStatement is not closed, it can be executed multiple times - changing the value assigned to bind variables - and only a `light' soft-parse will occur, with no syntax and semantic check." If the SQL statement is held open - there will NOT be a "light" soft-parse (session cached cursors are not discussed in this section of the book, which would allow a "light" soft-parse if the cursor is NOT held open) (page 52).* The elapsed time comparison between the directly executed SELECT statement, and the REFCURSOR that is returned by the SH.SALES_BY_PRODUCT procedure is not valid for a couple of reasons: 1) The script is executed by the internal user rather than a normal user, which can lead to unexpected performance differences; 2) The SELECT statement method displays its rows to the screen, so it is subject to delays caused by formatting the output for the SQL*Plus window (SET AUTOTRACE TRACEONLY STATISTICS may be used to reduce the impact of the formatting delays, but that change had little effect); 3) The REFCURSOR method, because it involves PL/SQL, will be subject to a context switch while the normal SELECT will not be subject to the context switch - the associated delay is operating system dependent and the timing should suggest that something is wrong with the test result; 4) While the normal SELECT statement test actually fetches the rows, the REFCURSOR method does not, as can be seen within an enabled 10046 trace (the normal SELECT will show a FETCH line that is preceded by WAIT lines, while the REFCURSOR method will not show a FETCH line in the trace file) (pages 54-55).* The output of the Java version of the SQL*Plus test script found on pages 54-55 conflicts with the author's intended result. Directly executing the SQL statement required 1.438 seconds, while using the REFCURSOR in the Java code required 1.722 seconds. 
The performance difference may be more significant than shown, because the direct execution of the SQL statement test was performed first, and the timing results include the time to flush the shared pool and the buffer cache (the first call will almost certainly take longer than the second call) (pages 56-58).* The book uses a test case script to demonstrate the negative effects of using a "COUNTER" table rather than using a sequence to provide the same counter value. The test case script uses a trigger on the table to populate the counter column in the table, and the test case script does show that performance improves with the use of the Oracle sequence. The test case script, however, should have also included a test that completely eliminates the trigger on the table, populating the TRAVELID column by including TRAVEL_SEQ.NEXTVAL directly in the SQL statement that populates the table. My timing results show that the counter trigger-table method completes in 0.45 seconds, the trigger-sequence method completes in 0.14 seconds, and the select-sequence method completes in 0.03 seconds (reference pages 60-62).* Accidental word substitution, "... and if the high watermark is reached, it caches other X numbers in the same manner." "other" should be "another" (page 65).* The author incorrectly read the AUTOTRACE generated execution plan. The book states "We can see that in the execution plan, there is full table access to the SALES table examining 918K rows and reading 8075 KB." An AUTOTRACE generated execution plan shows an estimated execution plan that may differ from the actual execution plan in some situations, such as cases where bind variables are involved. Additionally, an AUTOTRACE generated execution plan shows the predicted number of rows that will be returned (not examined), and the predicted volume of data that will be returned (not read) based on the existing statistics for the objects (page 67).* The book states, "However, from the execution plan, the number of rows processed is 72, and each row is 648 bytes long." Once again it is important to stress that the execution plan is a predicted execution plan generated by AUTOTRACE. The estimated 72 rows returned by the operation in the execution plan does agree with the "72 rows processed" displayed in the actual statistics for the execution, but that will not always be the case for an AUTOTRACE generated execution plan (it happens to be the case because statistics were collected for the materialized view with a 100% sample rate). The statement that each row is 648 bytes long appears to be the result of misreading the previous execution plan, which estimated that 72 rows consuming 648 bytes total would be returned from operation 0 in the execution plan. The AUTOTRACE generated execution plan for the materialized view predicts that 72 rows consuming 1872 bytes will be returned from operation 0 in the execution plan, which shows a predicted row length of 1872/72 = 26 bytes per row (pages 67-68).* The book states, "In the latter case [after flushing the buffer cache], we have 4047 consistent gets and 240 physical reads..." There are a couple of issues with this test case, found in the source code library file 2602_02_Materialized Views.sql. 
First, the script in the source code library uses "ANALYZE TABLE SH.MV_SALES_BY_PRODUCT COMPUTE STATISTICS" to collect the statistics on the materialized view, while the book shows the use of "EXEC DBMS_STATS.GATHER_TABLE_STATS" to collect the statistics - the collected statistics from the ANALYZE table command could very easily be different from the collected statistics from the DBMS_STATS.GATHER_TABLE_STATS command. The screen capture shown after flushing the buffer cache and re-executing the select from the materialized view does show 4,047 consistent gets and 240 physical block reads, as stated in the book, but it also shows 20,544 recursive calls where 0 recursive calls were shown prior to flushing the buffer cache - this recursive call count figure indicates that something else happened beyond the author flushing the buffer cache. My test results with just flushing the buffer cache show 8 consistent gets, 6 physical reads, and 0 recursive calls. The author also apparently flushed the shared pool, which triggered the recursive calls and the majority of the consistent gets and physical block reads (15,296, 2,978, and 177 respectively). The author probably should mention that the test case and advice will not work in a Standard Edition database, and should also state that the decision whether or not the materialized view is used is a cost-based optimizer decision (page 68).* The book lists "QUERY REWRITE" as a required privilege to create materialized views. The Oracle Database 11.2 (and 10.1) documentation state that the QUERY REWRITE privilege is deprecated, and thus not needed (reference page 69).* The book states, "The same parameters [QUERY_REWRITE_ENABLED, and QUERY_REWRITE_INTEGRITY] have to be enabled to use another functionality, function-based indexes." QUERY_REWRITE_ENABLED must be set to TRUE in Oracle Database 9.2 to use function-based indexes, but that requirement disappeared in Oracle Database 10.1 (page 70).* The book states, "We encounter row chaining when the size of the row data is larger than the size of the database block used to store it." While this statement is correct, the book omits a secondary cause of chained rows - Oracle database supports a maximum of 255 columns in a row piece, so tables with more than 255 columns will necessarily have chained rows (page 84).* The book casually demonstrates setting up a 16KB block size tablespace in a database that has a default 8KB block size. The book provides a list of several advantages for including smaller or larger than default block sizes in a single database including, "Faster scans: tables and indexes that require full scans can see faster performance when placed in a large block size." This justification is incorrect for several reasons including the fact that the DB_FILE_MULTIBLOCK_READ_COUNT parameter is scaled up for tablespaces that use a smaller than database default block size, and scales the parameter down for tablespaces that use a larger than database default block size. All of the justifications found on page 88 appear to be copied verbatim from a commercial website page. 
The book does not discuss the bugs and unexpected optimizer cost changes that might result from using multiple block sizes in a single database (reference reference2 pages 84-88).* Step 5 contains two typos: using angle brackets (less than and greater than signs) rather than single quotes, and a spurious 3 after the semicolon (page 89).* Step 7 and 9 contain typos: using angle brackets (less than and greater than signs) rather than single quotes (page 90).* Steps 4 and 5 contain typos: using angle brackets (less than and greater than signs) rather than single quotes (page 97).* Step 14 contains a corrupted SQL statement: "CREATE.5* FROM HR.BIG_ROWS WHERE 1=0;". Steps 15, 16, and 19 contain typos: using angle brackets (less than and greater than signs) rather than single quotes. The author should have mentioned at least one of the possible problems with this approach, which might include triggers on the table, foreign keys that point to the table, and the potential statistics problems caused by the use of the ANALYZE TABLE command (page 92).* The book states about the DBMS_SPACE.CREATE_TABLE_COST example, "In this procedure we have set the tablespace to use the average row size and the row count..." The purpose of this function is to estimate space usage, not to make changes to a tablespace (page 95).* Step 1 contains an extraneous ".5" in the command.* Pages 96-112 are present in the book, but omitted from this review.* Steps 11 and 13 use angle brackets (less than and greater than signs) rather than single quotes (pages 116-117)* The book states, "We can also create a function-based descending index." This is a strange statement - all descending indexes in Oracle Database are function-based indexes (page 119).* The book states, "... this test allows us to dispel a myth. Oracle uses the indexes even if the leading columns are not referenced in the WHERE predicate of the query. We can see that in such a case, the operation will be an INDEX FAST FULL SCAN." In this case, the author is incorrectly attempting to generalize a special case into a general rule. Firstly, there is no myth to dispel - Oracle's query optimizer has had the ability to use INDEX SKIP SCAN operations when the leading column of an index is not specified in the WHERE clause, since the release of Oracle Database 9.0.1 a decade ago - but that access path is usually only advisable when there are few distinct values in the leading column of the index. The author's test case is a special case because all of the columns selected from the table are present in the index structure (page 119).* The book states, "If we use a regular index to access the data, Oracle is unable to do the sort in a mixed way, in a query like this." The author then shows a SQL statement with the first column in the ORDER BY clause sorted in descending order and the second column in the ORDER BY clause sorted in ascending order. At this point in the book, the author has not yet stated that Oracle Database is able to read index entries in an ascending or descending order through a normal (ascending sorted) b*tree index, so this sentence in the book is confusing - almost to say that Oracle Database is not able to sort one column in ascending sequence and a second column in descending sequence - that concept is obviously false. 
It would have been more accurate for the book to state that, "Oracle Database is unable to _avoid_ a sort operation when accessing the rows through a concatenated index if both of the columns in the index are sorted in ascending sequence, the ORDER BY clause of the SQL statement specifies that one and only one column contained in the index should be ordered in descending sequence, and the second column in the concatenated index is included in the WHERE clause." (page 120)* A self-contradicting sentence, "In the first case, we have a full table scan, because we cannot retrieve all of the data from the index, so we have to do a TABLE ACCESS BY ROWID operation for each row, which satisfies the predicate." Full table scan probably does not belong in that sentence (page 121).* The book states, "In the next screenshot, we can see that Oracle knows (from the table statistics) that only 43 rows satisfy the where condition." It is important to stress that the autotrace generated execution plan only shows the estimated number of rows that will be returned by an operation - the author's query, in fact, retrieves a single row. The index that the author specified in the index hint was created on the columns CUST_LAST_NAME and CUST_YEAR_OF_BIRTH (in descending order), yet the author's query only included the CUST_FIRST_NAME column in the WHERE clause - it is ridiculous to force the optimizer to use this index with a hint (page 121).* The index's clustering factor was not mentioned in the discussion of what determines the point at which it is more efficient to access a table through an index access path, rather than a full table scan - only the average row length was described as a consideration and the percentage of the rows that need to be retrieved. It could very well be the case that with a very poor clustering factor, that it is more efficient to retrieve less than 1% of the table's rows through a full table scan, rather than an index lookup (page 122).* The book should define "intra-block fragmentation" which is the benefit that the book lists as resulting from rebuilding indexes (page 123).* The two session example of one session rebuilding an index while a second session executes a SELECT and INSERT seems to be pointless. The second session does not use the index that the first session attempts to rebuild, instead a full table scan is performed on the BIG_CUSTOMERS table, followed by an index unique scan of the CUSTOMERS_PK index. An index named IX1_BIG_CUSTOMERS was created in the script, yet the script attempts to rebuild a non-existent index named IX1_MYCUSTOMERS. The test case only shows an example of efficiency gains due to blocks being buffered in the buffer cache. The book should have mentioned that an online rebuild and parallel rebuild are only possible in the Enterprise Edition of Oracle Database (pages 123-125).* Step 10 uses angle brackets (less than and greater than signs) rather than single quotes (page 126).* The book states, "We have used the PARALLEL option too, to speed up the rebuild process." While specifying PARALLEL during an index rebuild may speed up the rebuild, it is important to note that this results in an index with a parallel degree that should be manually reset to the original value, once the rebuild completed (page 127).* The book states, "However, when we have a table on which there are many INSERTs and DELETEs, we could schedule an index rebuild, because when deleting an index entry, the space is not freed in the index leaf, but just marked as deleted. 
If we have massive DELETE and INSERT operations, we could have a skewed index structure, which could slow performance due to intra-block fragmentation." The book should have defined what is meant by "skewed index structure" - does the book mean, for instance, that one portion of the index could have a BLEVEL of 2 while another portion of the index could have a BLEVEL of 3 - if that is the case, the book's statement is incorrect. If the book's definition of "skewed index structure" is that some leaf blocks of the index will be more densely packed than other leaf blocks in the same index structure, then that should be considered normal behavior for Oracle indexes - an occasional coalesce might be used to combine index entries in logically adjacent leaf blocks, but scheduling index rebuilds is neither required, nor recommended. Depending on the order of the inserted values in relation to the order of the entries in the index leaf blocks, an index leaf block split operation could evenly divide the existing index entries between two leaf blocks (a 50-50 split, resulting in both index blocks being 50% utilized, if the inserted value is not the highest value that would be inserted into the leaf block), or all of the existing entries will remain in the existing leaf block and the new entry will be placed by itself into a new leaf block (a 90-10 split). A deleted index entry will remain in the block at least until that transaction is committed, but any post-transaction insert into the block will clear out all deleted index entries in the block. Deleting all table rows with index entries at the low end of the index (the values were populated by a sequence, for example, and are deleted in the same sequential order) could leave many blocks in the index structure with nothing but deleted index entries, but that situation should only result in a performance problem if SQL statements attempt to determine the minimum value for the indexed column, or to some extent, fast full index scans and full index scans (reference reference2 page 127).* The book states, "If the value for DEL_LF_ROWS/LF_ROWS is greater than 2, or LF_ROWS is lower than LF_BLKS, or HEIGHT is 4 then the index should be rebuilt." Some of the advice found on the Internet suggests that if DEL_LF_ROWS is 20% of LF_ROWS, then the index should be rebuilt - did the author of this book intend to write "If the value for DEL_LF_ROWS/LF_ROWS is greater than 0.2"? Why should the result of DEL_LF_ROWS/LF_ROWS be a consideration of whether or not an index should be rebuilt - is it supposed to measure the amount of wasted/unused space in the index leaf blocks? The next INSERT/UPDATE DML operation in a given leaf block will clear out the index rows that are flagged as deleted, but then does that imply that the space is not wasted (or is the space wasted)? What if there are many index blocks that are roughly 50% utilized due to a large number of 50-50 leaf block splits, is that space not wasted (or is the space wasted)? Since the formula DEL_LF_ROWS/LF_ROWS really does not describe the percent of used space in the index, it is probably best to just ignore the result of that formula. DEL_LF_ROWS/LF_ROWS can never be greater than 1 because the statistic found in the LF_ROWS column includes the DEL_LF_ROWS statistic. 
The second criteria suggests comparing LF_ROWS to LF_BLKS, such that if on average there is less than one index entry per leaf block, that the index should be rebuilt - there can never be less than one index entry per leaf block, because the leaf block will be detached from the index structure when all rows are removed from that leaf block. The final criteria suggests rebuilding the index when the height is exactly 4 - does that mean that an index with a height of 5, 6, 7, etc. does not need to be rebuilt? What if after rebuilding the index it still has a height of 4 - will it help to rebuild a second time? (page 127)* The book states, "When we rebuild an index, we can add the COMPUTE STATISTICS option to that statement." Since the release of Oracle Database 10.1, statistics are automatically collected when indexes are created and/or rebuilt, so the COMPUTE STATISTICS clause is unnecessary (page 127).* Steps 6 and 9 uses angle brackets (less than and greater than signs) rather than single quotes (page 128-129).* Steps 8 and 15 uses angle brackets (less than and greater than signs) rather than single quotes (page 131-132).* The book should mention that bitmap indexes are not available in the Standard Edition of Oracle Database (page 136).* Step 3 uses angle brackets (less than and greater than signs) rather than single quotes (page 137).* The author created a composite bitmap index with three columns to demonstrate the use of bitmap indexes. Composite bitmap indexes are rare - one of the strengths in using bitmap indexes is the ability to create multiple single column bitmap indexes, and as needed the optimizer will select to bitmap join two or more bitmap indexes in an attempt to significantly reduce the number of rows visited in the table (page 138).* The book states, "This time the execution plan uses the newly created bitmap index, ... using the INDEX RANGE SCAN or INDEX FAST FULL SCAN operation, depending on whether we are filtering on the first key column of the index - CUST_GENDER - or not. This result is obtained thanks to the structure of bitmap indexes." With the index definition found in the book, the operations that should be present in the execution plan are BITMAP INDEX RANGE SCAN and BITMAP INDEX FAST FULL SCAN, while you might expect to find INDEX RANGE SCAN or INDEX FAST FULL SCAN operations associated with normal b*tree indexes. However, it is a cost-based decision for the optimizer to use or not use an index, so there is no guarantee that index will be used as indicated in the book if the leading column in the index is either specified or not specified. Additionally, it is not the structure of bitmap indexes that permits INDEX RANGE SCAN or INDEX FAST FULL SCAN operation, depending on whether we are filtering on the first key column of the index - creating a normal b*tree index in the script rather than a composite bitmap index could (will) actually allow the optimizer to take advantage of INDEX RANGE SCAN or INDEX FAST FULL SCAN operations (page 139).* The book states, "Bitmap indexes offer very fast performance when we have a low cardinality field indexed on a table containing many rows." This statement could have several different interpretations, but I believe that the author's intended meaning is "Bitmap indexes offer significantly faster performance than b*tree indexes when columns with few distinct values are indexed in tables containing a significant number of rows." 
This fixed statement still requires additional clarification - if the bitmap index does not help to further reduce the number of table rows that are accessed through the index, the end result may be performance that is roughly the same as that of an equivalent b*tree index. One way to accomplish the task of further reducing the number of table rows accessed is through the utilization of multiple bitmap indexes with bitmap combine operations to significantly reduce the number of rowids that are used to fetch table rows (page 139).* The book states, "When rows are frequently inserted, deleted, and updated, there is a performance bottleneck if we use a bitmap index. When the index is updated, all the bitmap segments are locked." This statement requires a bit of clarification. I do not believe that the author is stating that updating an entry in a bitmap index will lock all of the bitmap indexes in the database (a segment could be a table, table partition, index, etc.). Instead, I think that the author is intending to state that updating an entry in a bitmap index will lock all of the index entries in that index, effectively preventing any other session from inserting, updating (the column covered by the index), or deleting rows in the table. For very small bitmap indexes, this statement could very well be true. However, for larger bitmap indexes, built for tables with many rows, the number of index rows that will be locked during an update is determined by the number of rows covered by the index block(s) that update changed, possibly 20,000 to 50,000 rows per index block. (page 139 reference slide 46, reference2 page 2, reference3 comments section).* The book states, "This [bitmap join index] is a bitmap index which represents the join between two tables, and can be used instead of a materialized view in certain conditions." The book did not offer any suggestions or describe any conditions that permit a bitmap join index to take the place of a materialized view. The statement in the book needs additional clarification (reference reference2 page 140).* The book states about index organized tables, "If the row size exceeds the size indicated by this parameter [PCTTHR
Amazon Verified review