Christian Antognini, Troubleshooting Oracle Performance, 2nd Edition (Review)

This is an excellent book with an encyclopedia-like approach to Oracle performance tuning, concentrated on the optimizer. It doesn't cover architectural changes introduced in the newer database releases, like the new redo mechanism with private strands or the new implementation of latches and pins, and in my opinion Oracle performance is more than just the optimizer.
Also, the book is very long and has no dedicated sections for the new 12c features, which makes it rather hard to use for readers interested primarily in what is new in 12c. It works very well as an encyclopedia, but it is hard to read from cover to cover, which is what I did.
Having said that, it is also the most complete description of the Oracle optimizer available; an alternative title could be "All about the Oracle optimizer in one place". The book is very useful and a must-have for any Oracle DBA interested in performance.


12c and export/import from a lower version

There is a serious problem with the Data Pump export/import utilities (expdp/impdp) between versions.

The first command, executed on Oracle 11.2.0.4 on Linux x86_64, was:


[oracle@oradb tmp]$ expdp system directory=tmp dumpfile=oe.dmp schemas=oe logfile=oe.log

The directory object tmp was created as /tmp.
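For reference, the directory object behind directory=tmp was presumably created along these lines (a minimal sketch; the explicit grant may be redundant for a DBA account such as SYSTEM):

-- Hypothetical setup on the source database, run as a privileged user;
-- the directory object simply maps the name TMP to the /tmp path used above.
CREATE OR REPLACE DIRECTORY tmp AS '/tmp';
GRANT READ, WRITE ON DIRECTORY tmp TO system;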

The result was the following:

Export: Release 11.2.0.4.0 - Production on Mon Jul 28 22:57:03 2014

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Password:

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01":  system/******** directory=tmp dumpfile=oe.dmp schemas=oe logfile=oe.log
.....
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "OE"."PRODUCT_DESCRIPTIONS"                 2.379 MB    8640 rows
ORA-39181: Only partial table data may be exported due to fine grain access control on "OE"."PURCHASEORDER"
. . exported "OE"."PURCHASEORDER"                        243.9 KB     132 rows
. . exported "OE"."WAREHOUSES"                           12.46 KB       9 rows
. . exported "OE"."CUSTOMERS"                            77.98 KB     319 rows
. . exported "OE"."PRODUCT_INFORMATION"                  72.77 KB     288 rows
. . exported "OE"."SUBCATEGORY_REF_LIST_NESTEDTAB"       6.585 KB      21 rows
. . exported "OE"."PRODUCT_REF_LIST_NESTEDTAB"           12.51 KB     288 rows
. . exported "OE"."CATEGORIES_TAB"                       14.15 KB      22 rows
. . exported "OE"."INVENTORIES"                          21.67 KB    1112 rows
. . exported "OE"."ORDERS"                               12.39 KB     105 rows
. . exported "OE"."ORDER_ITEMS"                          20.88 KB     665 rows
. . exported "OE"."PROMOTIONS"                           5.507 KB       2 rows
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
  /tmp/oe.dmp
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" completed with 1 error(s) at Mon Jul 28 22:58:16 2014 elapsed 0 00:01:08

After that, the dump file is copied to the machine running the 12c instance:


[oracle@oradb tmp]$ ls
oe.dmp  oe.log
[oracle@oradb tmp]$ scp oe.dmp medo:/tmp/
oracle@medo's password:
oe.dmp                                       100% 3700KB   3.6MB/s
[oracle@oradb tmp]$

The next thing to try is importing the file into the 12c instance:

[oracle@medo ~]$ impdp userid=pdbadmin/admin@local dumpfile=oe.dmp schemas=oe directory=tmp log=oe.log sqlfile=oe_schema.sql

Import: Release 12.1.0.1.0 - Production on Mon Jul 28 23:06:43 2014

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-31640: unable to open dump file "/tmp/oe.dmp" for read
ORA-27041: unable to open file
Linux-x86_64 Error: 22: Invalid argument
Additional information: 2

The errors are "invalid argument value" and "bad dump file specification", which means that Oracle 12c cannot interpret the Oracle 11g dump file. That is vewy, vewy bad. However, the network link import works like a charm:


[oracle@medo ~]$ impdp userid=system@local network_link=ora11 schemas=oe directory=tmp log=oe.log sqlfile=oe_schema.sql

Import: Release 12.1.0.1.0 - Production on Mon Jul 28 23:10:42 2014

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

Password:

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Legacy Mode Active due to the following parameters:
Legacy Mode Parameter: "log=oe.log" Location: Command Line, Replaced with: "logfile=oe.log"
Starting "SYSTEM"."SYS_SQL_FILE_SCHEMA_01":  userid=system/********@local network_link=ora11 schemas=oe directory=tmp logfile=oe.log sqlfile=oe_schema.sql
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/SYNONYM/SYNONYM
Processing object type SCHEMA_EXPORT/TYPE/INC_TYPE
Processing object type SCHEMA_EXPORT/TYPE/TYPE_SPEC
Processing object type SCHEMA_EXPORT/TYPE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/XMLSCHEMA/XMLSCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/FUNCTION/FUNCTION
Processing object type SCHEMA_EXPORT/FUNCTION/ALTER_FUNCTION
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/INDEX/FUNCTIONAL_INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/FUNCTIONAL_INDEX/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Processing object type SCHEMA_EXPORT/VIEW/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/TYPE/TYPE_BODY
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/TRIGGER
Processing object type SCHEMA_EXPORT/VIEW/TRIGGER
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Job "SYSTEM"."SYS_SQL_FILE_SCHEMA_01" successfully completed at Mon Jul 28 23:11:50 2014 elapsed 0 00:01:04

I tried the same thing with the SCOTT schema and got the same result.
I guess that means I will have to create more database links. Odd.
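For anyone repeating the workaround: the ora11 link used above is just an ordinary database link in the target database, pointing back at the 11.2.0.4 source. A minimal sketch (user, password and TNS alias are of course placeholders):

-- Hypothetical link from the 12c database back to the 11.2.0.4 source;
-- ORA11G is assumed to be a TNS alias resolving to that database.
CREATE DATABASE LINK ora11
  CONNECT TO system IDENTIFIED BY secret
  USING 'ORA11G';

-- Quick sanity check before running impdp with network_link=ora11:
SELECT * FROM dual@ora11;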


Gimmick with offset and fetch first N rows in 12c

I was looking into some new features of 12c and I didn't need to dig deep to figure out how the "fetch first" feature works:

SQL> set autotrace on
SQL> select ename from emp offset 5 rows fetch next 5 rows only;

ENAME
----------
BLAKE
CLARK
SCOTT
KING
TURNER

Elapsed: 00:00:00.03

Execution Plan
----------------------------------------------------------
Plan hash value: 3611411408

-------------------------------------------------------------------------------
| Id  | Operation               | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |      |    14 |   280 |     2   (0)| 00:00:01 |
|*  1 |  VIEW                   |      |    14 |   280 |     2   (0)| 00:00:01 |
|*  2 |   WINDOW NOSORT STOPKEY |      |    14 |    84 |     2   (0)| 00:00:01 |
|   3 |    TABLE ACCESS FULL    | EMP  |    14 |    84 |     2   (0)| 00:00:01 |
-------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("from$_subquery$_002"."rowlimit_$$_rownumber"<=CASE WHEN
              (5>=0) THEN 5 ELSE 0 END +5 AND
              "from$_subquery$_002"."rowlimit_$$_rownumber">5)
   2 - filter(ROW_NUMBER() OVER ( ORDER BY NULL )<=CASE WHEN (5>=0)
              THEN 5 ELSE 0 END +5)

So, there is the ROW_NUMBER() analytic function that I have used before to do the same thing. In other words, the long-awaited new feature is just a slightly simplified version of the old ROW_NUMBER() trick. And "VIEW" in the plan tells me that my query was rewritten, as there were no views in my query. Nevertheless, this new feature will come in very handy for everybody writing a web application and in need of an efficient pagination method. One disappointing thing is that "FIRST" and "NEXT" are synonymous and return the same thing; in other words, statements will not remember their offset.
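In a real pagination query there would, of course, be an ORDER BY (the ORDER BY NULL in the plan above only shows up because my test query had none), so a typical page query would look something like the sketch below, with hypothetical bind variables for the page boundaries:

-- :row_offset = rows to skip, :page_size = rows per page;
-- the ORDER BY is essential, otherwise consecutive pages may overlap.
SELECT ename
FROM   emp
ORDER  BY empno
OFFSET :row_offset ROWS FETCH NEXT :page_size ROWS ONLY;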
Unfortunately, the associated PL/SQL FETCH statement doesn't have an OFFSET clause, only the well-known LIMIT clause:

declare
  cursor csr is select ename from emp;
  type tab_emp is table of emp.ename%type;
  var_temp tab_emp := tab_emp();
begin
  open csr;
  -- LIMIT caps the number of rows per fetch, but there is no way
  -- to tell the FETCH statement to skip rows first.
  fetch csr bulk collect into var_temp limit 5;
  close csr;
  for i in 1 .. var_temp.count loop
    dbms_output.put_line(var_temp(i));
  end loop;
end;
/

If you try to put an OFFSET clause anywhere in the FETCH statement, a syntax error follows. That means it is not possible to skip the first 3 pages and fetch just the fourth one with the FETCH statement itself; with this approach the application has to fetch the first 3 pages in order to display the 4th one.
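To hand PL/SQL just the fourth page, the row limiting has to happen in the query itself rather than in the FETCH statement. A minimal sketch using a ref cursor over dynamic SQL, with the page boundaries hard-coded for brevity:

declare
  type tab_emp is table of emp.ename%type;
  var_temp tab_emp := tab_emp();
  csr sys_refcursor;
begin
  -- The OFFSET/FETCH clause lives in the query, so only the
  -- requested page is ever fetched into the collection.
  open csr for
    'select ename from emp order by empno
     offset 15 rows fetch next 5 rows only';
  fetch csr bulk collect into var_temp;
  close csr;
  for i in 1 .. var_temp.count loop
    dbms_output.put_line(var_temp(i));
  end loop;
end;
/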



Still the same cluster limitations

I am slightly disappointed to report that the new extended data types cannot be used in clustered tables. CLOB columns have never been allowed in clusters, and are still not allowed in 12c, but I expected the new extended types to pass seamlessly. That is not the case:

SQL> create cluster testclu(intkey number(30,0));

Cluster created.

Elapsed: 00:00:00.03
SQL> create index testclu_ind on cluster testclu;

Index created.

Elapsed: 00:00:00.03
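Extended data types were already enabled on this test database. For reference, that is the documented MAX_STRING_SIZE procedure, roughly sketched below; in a multitenant database it has to be repeated for every PDB.

-- Rough sketch of the documented procedure for enabling extended data types;
-- without it, varchar2(32767) columns are rejected at create time.
SHUTDOWN IMMEDIATE
STARTUP UPGRADE
ALTER SYSTEM SET max_string_size = EXTENDED;
@?/rdbms/admin/utl32k.sql
SHUTDOWN IMMEDIATE
STARTUP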

Now, let’s create a normal table with the extended character type:

create table testtab1(
intkey number(30,0),
big_text varchar2(32767))
/
Table created.

Elapsed: 00:00:00.02
SQL>
SQL> desc testtab1
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 INTKEY                                             NUMBER(30)
 BIG_TEXT                                           VARCHAR2(32767)

SQL>

Now, let's try to put a table with an extended data type into the cluster:

SQL> create table testtab2
2  (intkey number(30),
3   big_text varchar2(32767))
4  cluster testclu(intkey);
create table testtab2
*
ERROR at line 1:
ORA-14691: Extended character types are not allowed in this table.

If the size of the big_text column is reduced to 4000, the problem is gone:

1  create table testtab2
2  (intkey number(30),
3   big_text varchar2(4000))
4* cluster testclu(intkey)
SQL> /

Table created.

Elapsed: 00:00:00.03
SQL>

I was unable to find any documentation on this restriction.


SYSBACKUP travails, part II

The SYSBACKUP connectivity problems described in the previous post are caused by unpublished bug 15828768.

There is a way to connect using the SYSBACKUP privilege:


RMAN> connect target 'system@local as sysbackup'

target database Password:
connected to target database: ORA12C (DBID=214280212, not open)

Please note the quotes around the connection string. Without the quotes, this will not work. It works with both single and double quotes:

RMAN> connect target "system@local as sysbackup"

target database Password:
connected to target database: ORA12C (DBID=214280212, not open)

This would require me to change quite a few scripts in order to use SYSBACKUP connections with a commercial backup utility. Fortunately, Oracle is aware of the problem and is actively working on it.
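Until then, the workaround in scripts is to make sure the inner quotes survive the shell, along these lines (a sketch; the connect string and the backup command are just examples):

# The whole connect string is wrapped in single quotes for the shell,
# so the inner double quotes reach RMAN intact.
rman target '"system@local as sysbackup"' <<EOF
backup tablespace users;
exit
EOF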



Quirks with the SYSBACKUP privilege in 12c

Oracle 12c has a great new privilege, tailor-made for paranoid Oracle DBAs like me. However, there are still some quirks: the SYSBACKUP privilege doesn't work over Oracle*Net. If the authentication method is SID-based local authentication, all is well:
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.
Total System Global Area 5027385344 bytes
Fixed Size    2298736 bytes
Variable Size    1040190608 bytes
Database Buffers    3976200192 bytes
Redo Buffers    8695808 bytes
Database mounted.
SQL>

Now, let’s do a cold backup of tablespace USERS:

[oracle@medo admin]$ rman target system
Recovery Manager: Release 12.1.0.1.0 - Production on Sat Feb 22 17:20:06 2014
Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.
target database Password:
connected to target database: ORA12C (DBID=214280212, not open)
RMAN> backup tablespace users;
Starting backup at 22-FEB-14
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=5 device type=DISK
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00006 name=/data/oradata/ora12c/ORA12C/datafile/o1_mf_users_9f093j2d_.dbf
channel ORA_DISK_1: starting piece 1 at 22-FEB-14
channel ORA_DISK_1: finished piece 1 at 22-FEB-14
piece handle=/data/oradata/fra/ORA12C/backupset/2014_02_22/o1_mf_nnndf_TAG20140222T172029_9jl8of7v_.bkp tag=TAG20140222T172029 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 22-FEB-14
Starting Control File and SPFILE Autobackup at 22-FEB-14
piece handle=/data/oradata/fra/ORA12C/autobackup/2014_02_22/o1_mf_s_840215852_9jl8ohd8_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 22-FEB-14
RMAN>
User SYSTEM does not have the SYSDBA privilege. However, V$PWFILE_USERS doesn't show the SYSTEM entry at all while the instance is in the MOUNTED state:
SQL> select * from v$pwfile_users;

USERNAME                       SYSDB SYSOP SYSAS SYSBA SYSDG SYSKM     CON_ID
------------------------------ ----- ----- ----- ----- ----- ----- ----------
SYS                            TRUE  TRUE  FALSE FALSE FALSE FALSE          1
When the database is open, the user “SYSTEM” is clearly visible:
SQL> alter database open;
Database altered.
SQL> select * from v$pwfile_users;

USERNAME                       SYSDB SYSOP SYSAS SYSBA SYSDG SYSKM     CON_ID
------------------------------ ----- ----- ----- ----- ----- ----- ----------
SYS                            TRUE  TRUE  FALSE FALSE FALSE FALSE          0
SYSTEM                         FALSE FALSE FALSE TRUE  FALSE FALSE          3
This shows that SYSTEM is not a SYSDBA user. However, the connection as SYSDBA will succeed:

SQL*Plus: Release 12.1.0.1.0 Production on Sat Feb 22 17:33:53 2014
Copyright (c) 1982, 2013, Oracle.  All rights reserved.
SQL> connect system as sysdba
Enter password:
Connected.

While the database is not open, a SYSBACKUP connection is practically equivalent to a SYSDBA connection, because the user data cannot be read anyway. Once the database is opened, the first attempt is surprising:

SQL> select * from scott.emp;
select * from scott.emp
                    *
ERROR at line 1:
ORA-00942: table or view does not exist

This is because we are connected to the container database, which doesn't contain the table SCOTT.EMP. Let's try with the pluggable database, which does contain that table:

SQL> connect system@local as sysdba
Enter password:
ERROR:
ORA-01031: insufficient privileges

User SYSTEM cannot connect as SYSDBA, because that privilege was never granted to it. There is, however, a new connection syntax which allows connecting AS SYSBACKUP, a privilege which was granted to the user SYSTEM:

SQL> connect system@local as sysbackup
Enter password:
Connected.
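For reference, this works because SYSBACKUP had been granted to SYSTEM beforehand, inside the pluggable database, roughly like this; that grant is what produces the SYSTEM row with SYSBA = TRUE in V$PWFILE_USERS.

-- Hypothetical grant issued earlier while connected to the PDB as a privileged user:
GRANT SYSBACKUP TO system;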
Let’s now open the pluggable database:
SQL> alter pluggable database open;
Pluggable database altered.
SQL> select * from scott.emp;
select * from scott.emp
               
ERROR at line 1:
ORA-01031: insufficient privileges

That's the main advantage of the SYSBACKUP privilege: it prevents the backup operator from seeing the user data. Backup also works, which was the intention. The problem arises when connecting through the listener. It works for the user SYS or any other SYSDBA user:

RMAN> connect target sys@local
target database Password:
connected to target database: ORA12C (DBID=214280212)

It doesn’t work with the SYSBACKUP users:

[oracle@medo admin]$ rman
Recovery Manager: Release 12.1.0.1.0 - Production on Sat Feb 22 18:09:20 2014
Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.
RMAN> connect target system@local
target database Password:
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
ORA-01031: insufficient privileges
RMAN>

This is a big problem, because various network backup utilities use an Oracle*Net connection. Unfortunately, RMAN attempts a SYSDBA connection by default, and that doesn't work for a user that only has SYSBACKUP. The problem is in the RMAN utility itself, because SQL*Plus allows a network connection AS SYSBACKUP:

SQL*Plus: Release 12.1.0.1.0 Production on Sat Feb 22 18:14:15 2014
Copyright (c) 1982, 2013, Oracle.  All rights reserved.
SQL> connect system@local as sysbackup
Enter password:
Connected.


Missing Information

Since version 11g it has been possible to exclude a specific tablespace from flashback logging, so that its changed blocks are not written to the flashback logs in the fast recovery area. The RVWR process writes before-images of changed blocks to the flashback logs and is quite I/O intensive. There are, among other things, flashback checkpoints as well; they occur when the flashback logs fill up and start anew. It is, therefore, desirable to exclude tablespaces to which users don't have access from flashback logging. Here is an example of how to do that for the trace analyzer (TRCA) tablespace:

SQL> select tablespace_name from dba_tablespaces;

TABLESPACE_NAME
------------------------------
SYSTEM
SYSAUX
TEMP
USERS
TRCANLZR

Elapsed: 00:00:00.12
SQL> alter tablespace TRCANLZR flashback off;

Tablespace altered.

Elapsed: 00:00:00.45

This is a fairly typical case for turning flashback off. Flashback is primarily a protection against human error, so I advocate turning it off for tablespaces containing no user content, or content that humans are extremely unlikely to modify, like the data dictionaries of various applications.
There is, however, a problem:
the DBA_TABLESPACES view contains no information about this setting and, even more problematic, DBMS_METADATA does not return it either:

SQL> set long 1024
SQL> select dbms_metadata.get_ddl('TABLESPACE',tablespace_name) SQL
  2  from dba_tablespaces
  3  where tablespace_name='TRCANLZR'
  4  /

SQL
--------------------------------------------------------------------------------

CREATE TABLESPACE “TRCANLZR” DATAFILE
SIZE 134217728
AUTOEXTEND ON NEXT 134217728 MAXSIZE 1073741824
LOGGING ONLINE PERMANENT BLOCKSIZE 8192
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1048576 DEFAULT
NOCOMPRESS SEGMENT SPACE MANAGEMENT AUTO

The database-level flashback status is considered important enough to be shown in the V$DATABASE view; in my modest opinion, the tablespace-level setting should likewise be included in DBA_TABLESPACES and returned by DBMS_METADATA.
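The tablespace-level setting is at least visible in V$TABLESPACE, which has a FLASHBACK_ON column, so a quick check looks like this:

-- Shows YES/NO per tablespace; this is the only place I found the setting exposed.
SELECT name, flashback_on
FROM   v$tablespace
WHERE  name = 'TRCANLZR';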
By the way, in Oracle 12c flashback logging, like the redo logs and the undo tablespace, is common to the whole container database and shared by all pluggable databases.
