OPatch Integration

Oracle 12c brings one nice new feature: OPatch, the tool known to every database administrator, is now integrated into the database in the form of the DBMS_QOPATCH package. If the database contains pluggable databases, the package must be executed from the root (CDB$ROOT), as user SYS.
A minor inconvenience is that there are no tabular results; everything comes back as XML. That is cumbersome to work with, and I still prefer the traditional OPatch for now. Let's look at an example:

[Screenshot - 11/10/2014 05:40:22 PM]

The query is “select dbms_qopatch.get_opatch_list() from dual” and it returns the result as one monstrous line. There is no nice way to look at the output XML in SQL Developer; one must cut and paste the entire monstrosity into a third-party XML editor:
[Screenshot - 11/10/2014 05:51:11 PM]
This looks like a story similar to V$DIAG_ALERT_EXT, which existed in the last 10g releases as an X$ table and only became both available and useful in 11g. As it stands, the XML output is nearly useless and very hard to read. The good old “opatch lsinventory” is still preferable, but patch information is clearly going to be fully integrated with the database in the not-so-distant future. For now, DBA_REGISTRY_HISTORY will have to do.
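That said, DBMS_QOPATCH also ships an XSLT stylesheet that can turn the XML into something much closer to the familiar text report. A minimal sketch, assuming the package is callable from your session (run from CDB$ROOT as SYS, as above):

set long 1000000 pagesize 0
select xmltransform(dbms_qopatch.get_opatch_lsinventory, dbms_qopatch.get_opatch_xslt)
from dual;

The report is still generated by the database rather than by the opatch binary, but at least it is readable without a third-party XML editor.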


Poor Man’s In-Memory Caching

Introduction

It is common knowledge, or at least it should be, that Oracle
released version 12.1.0.2 in June 2014, which has a feature called
“Oracle In-Memory”. The name is a tad confusing, since many
features have been called “in-memory” before, from “Very Large
Memory” to the TimesTen product.

This “In-Memory” option is a separately licensed product featuring
columnar storage in addition to the normal buffer storage, and is
very similar to IBM’s “BLU Acceleration” feature, released for DB2
in 2013. Essentially, it is like having an in-memory bitmap index on
a low-selectivity column, but without the adverse locking effects.
Obviously, since the In-Memory option is a licensed extra, it
cannot be what the title of this post refers to.

There is another new caching feature built into 12.1.0.2, which is
much less spectacular but still useful if properly understood. It is
not unlike the KEEP buffer pool known from previous versions,
although it is somewhat less granular. The feature is controlled by
the init parameter DB_BIG_TABLE_CACHE_PERCENT_TARGET, with a very
suggestive and self-explanatory name, and can be monitored through two
V$ views, V$BT_SCAN_CACHE and V$BT_SCAN_OBJ_TEMPS. Both the
parameter and the views are described in the reference manual.
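For anyone who wants to try it, a minimal sketch of switching the feature on and then looking at the cache (the 40 below is an arbitrary test value, not a recommendation, and the instance is assumed to use an spfile):

alter system set db_big_table_cache_percent_target = 40 scope=both;
select bt_cache_target, object_count from v$bt_scan_cache;

The parameter is dynamic, so no restart is needed.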

Essentially, by setting this parameter on a single-instance
database, the instance allocates the given percentage of the buffer
cache for caching large tables. What counts as a large table?
Apparently the same criterion as for direct path reads: if the table
is larger than roughly 2% of the buffer cache, it is considered a
large table. This is explained in detail by Tanel Poder in his
excellent article.
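As a rough sanity check, one can compare a table's segment size with 2% of the buffer cache. This is only a sketch; the real cutoff is an internal heuristic and the exact formula is not documented:

select s.blocks table_blocks,
       round(0.02 * (select bytes from v$sgainfo where name = 'Buffer Cache Size')
                  / (select value from v$parameter where name = 'db_block_size')) two_pct_of_cache_blocks
from   dba_segments s
where  s.owner = 'SCOTT' and s.segment_name = 'T1';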

Details

It is not possible to control this feature on a per-table basis by
specifying something like BUFFER_POOL KEEP in the table's STORAGE
clause. All large tables are cached when a scan is executed, and it
does not necessarily have to be a full table scan:

SQL> startup
ORACLE instance started.

Total System Global Area  838860800 bytes
Fixed Size                  2929936 bytes
Variable Size             335547120 bytes
Database Buffers          494927872 bytes
Redo Buffers                5455872 bytes
Database mounted.
Database opened.
SQL> alter session set container=pdbtest;

Session altered.

SQL> select object_count from V$BT_SCAN_CACHE;

OBJECT_COUNT
------------
           0

SQL> set autotrace on
SQL> select count(*) from scott.t1;

  COUNT(*)
----------
   2000000


Execution Plan
----------------------------------------------------------
Plan hash value: 227768458

------------------------------------------------------------------------------
| Id  | Operation             | Name         | Rows  | Cost (%CPU)| Time     |
------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |              |     1 |   211   (6)| 00:00:01 |
|   1 |  SORT AGGREGATE       |              |     1 |            |          |
|   2 |   INDEX FAST FULL SCAN| SYS_C0011355 |  2565K|   211   (6)| 00:00:01 |
------------------------------------------------------------------------------

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)


Statistics
----------------------------------------------------------
        947  recursive calls
          0  db block gets
      11489  consistent gets
       4191  physical reads
          0  redo size
        542  bytes sent via SQL*Net to client
        552  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
        698  sorts (memory)
          0  sorts (disk)
          1  rows processed

SQL> select object_count from V$BT_SCAN_CACHE;

OBJECT_COUNT
------------
           1

So, the cache was populated by the fast full index scan. The same
caching kicks in if a range scan is used: caching takes place
whenever the big table is accessed, not only on a full table scan.
Also, the optimizer is unaware that the table is cached; this claim
will be proven later. First, let's check that the object is the right
one.

SQL> select object_count from v$bt_scan_cache;

OBJECT_COUNT
------------
           1

SQL> select ts#,dataobj#,policy from V$BT_SCAN_OBJ_TEMPS;

       TS#   DATAOBJ# POLICY
---------- ---------- ----------
    196612      96464 MEM_ONLY

SQL> select owner,object_name,object_type
  2  from dba_objects
  3  where data_object_id=96464;

OWNER
--------------------------------------------------------------------------------
OBJECT_NAME
--------------------------------------------------------------------------------
OBJECT_TYPE
-----------------------
SCOTT
T1
TABLE

Sure enough, it's our table SCOTT.T1, which has 2M generated rows
and takes around 64M of disk space. The TS# column is supposed to be
the tablespace number, but the value is obviously wrong, since I have
only 7 tablespaces in PDBTEST and 5 in CDB$ROOT. The CON_ID (container
ID) column in V$BT_SCAN_OBJ_TEMPS is also consistently 0, despite the
fact that the object resides in CON_ID 4. Now it is time to prove that
the optimizer doesn't know about the caching. The first thing to
execute is the count without any hints:

SQL> set timing on
SQL> select ts#,dataobj#,con_id from V$BT_SCAN_OBJ_TEMPS;

       TS#   DATAOBJ#     CON_ID
---------- ---------- ----------
    196612      96464          0

Elapsed: 00:00:00.00
SQL> set autotrace on
SQL> select count(*) from scott.t1;

  COUNT(*)
----------
   2000000

Elapsed: 00:00:00.07

Execution Plan
----------------------------------------------------------
Plan hash value: 227768458

------------------------------------------------------------------------------
| Id  | Operation             | Name         | Rows  | Cost (%CPU)| Time     |
------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |              |     1 |   211   (6)| 00:00:01 |
|   1 |  SORT AGGREGATE       |              |     1 |            |          |
|   2 |   INDEX FAST FULL SCAN| SYS_C0011355 |  2565K|   211   (6)| 00:00:01 |
------------------------------------------------------------------------------

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
       3911  consistent gets
          0  physical reads
          0  redo size
        542  bytes sent via SQL*Net to client
        552  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

Elapsed: 00:00:00.08

The plan still reflects the normal situation, without any caching:
the chosen path is the fast full scan of the primary key index. Now,
let's try forcing a full table scan with a hint:

SQL> select /*+ full(t1) */ count(*) from scott.t1;

  COUNT(*)
----------
   2000000

Elapsed: 00:00:00.03

Execution Plan
----------------------------------------------------------
Plan hash value: 3724264953

-------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
-------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 |   596  (33)| 00:00:01 |
|   1 |  SORT AGGREGATE    |      |     1 |            |          |
|   2 |   TABLE ACCESS FULL| T1   |  2565K|   596  (33)| 00:00:01 |
-------------------------------------------------------------------

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
       7883  consistent gets
          0  physical reads
          0  redo size
        542  bytes sent via SQL*Net to client
        552  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

From the statistics it is visible that the cache was used, since
there were no physical reads. The full table scan used the cache, so
it should have been the plan selected by the optimizer. Yet the index
scan is still chosen, even though the table cache is populated by the
full table scan while the index blocks are not cached, which means
that the optimizer does not take big table caching into account.
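How much of a segment actually sits in the big table cache can be seen from the same view; a sketch, with column names as documented for 12.1.0.2:

select dataobj#, size_in_blks, cached_in_mem, temperature, policy
from   v$bt_scan_obj_temps;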

Now, the next interesting thing happens when the cache is cleaned.
Flushing the buffer cache also empties the big table cache, but the
V$ views do not register that:

SQL> alter system flush buffer_pool all;

System altered.

Elapsed: 00:00:00.01
SQL> select ts#,dataobj#,con_id from V$BT_SCAN_OBJ_TEMPS;

       TS#   DATAOBJ#     CON_ID
---------- ---------- ----------
    196612      96464          0

Elapsed: 00:00:00.00
SQL> set autotrace on
SQL> select /*+ full(t1) */ count(*) from scott.t1;

  COUNT(*)
----------
   2000000

Elapsed: 00:00:01.97

Execution Plan
----------------------------------------------------------
Plan hash value: 3724264953

-------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
-------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 |   596  (33)| 00:00:01 |
|   1 |  SORT AGGREGATE    |      |     1 |            |          |
|   2 |   TABLE ACCESS FULL| T1   |  2565K|   596  (33)| 00:00:01 |
-------------------------------------------------------------------

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)


Statistics
----------------------------------------------------------
         79  recursive calls
          0  db block gets
       8058  consistent gets
       8194  physical reads
          0  redo size
        542  bytes sent via SQL*Net to client
        552  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          6  sorts (memory)
          0  sorts (disk)
          1  rows processed

SQL>

V$BT_SCAN_OBJ_TEMPS still reports that SCOTT.T1 is cached, yet the
query causes a pile of physical reads, which could not have happened
if the table were really cached.

Conclusion

Big table caching is less granular than using the KEEP buffer
pool, which can be controlled per table, so in that sense it is the
blunter instrument of the two. On the other hand, it is much simpler
to use: just allocate a percentage of the buffer cache and forget
about it. The thing that bothers me most is that the feature is also
triggered by a range scan and that the decision is made solely on the
basis of table size. That guarantees bringing into the cache some
tables that I would not normally want cached, thereby making the
cache less useful. Other than that, this looks like an interesting
new feature.
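For comparison, the per-table alternative mentioned above would look roughly like this (a sketch, assuming a KEEP pool has been sized; the big table cache needs no per-table DDL at all):

alter system set db_keep_cache_size = 256M scope=both;
alter table scott.t1 storage (buffer_pool keep);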


Christian Antognini, Troubleshooting Oracle Performance, 2nd Edition (Review)

This is an excellent book with an encyclopedia-like approach to Oracle performance tuning, concentrated around the optimizer. It doesn't cover architectural changes in the newer releases, like the new redo mechanism with private strands or the new implementation of latches and pins. In my opinion, Oracle performance is more than just the optimizer.
Also, the book is very long and the 12c new features are not collected into sections of their own, which makes it rather hard to read for those interested primarily in what is new in 12c. It works very well as an encyclopedia, but it is very hard to read from cover to cover, which is what I did.
Having said that, it is also the most complete description of the Oracle optimizer available; an alternative title could be “All About the Oracle Optimizer in One Place”. The book is very useful and a must-have for any Oracle DBA interested in performance.


12c and export/import from a lower version

There is a serious problem with export/import utilities between versions.

The first command, executed on Oracle 11.2.0.4, Linux x86_64 was:


[oracle@oradb tmp]$ expdp system directory=tmp dumpfile=oe.dmp schemas=oe logfile=oe.log

Directory tmp was created as /tmp

Result was the following:

Export: Release 11.2.0.4.0 - Production on Mon Jul 28 22:57:03 2014

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Password:

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01":  system/******** directory=tmp dumpfile=oe.dmp schemas=oe logfile=oe.log
.....
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "OE"."PRODUCT_DESCRIPTIONS"             2.379 MB    8640 rows
ORA-39181: Only partial table data may be exported due to fine grain access control on "OE"."PURCHASEORDER"
. . exported "OE"."PURCHASEORDER"                    243.9 KB     132 rows
. . exported "OE"."WAREHOUSES"                       12.46 KB       9 rows
. . exported "OE"."CUSTOMERS"                        77.98 KB     319 rows
. . exported "OE"."PRODUCT_INFORMATION"              72.77 KB     288 rows
. . exported "OE"."SUBCATEGORY_REF_LIST_NESTEDTAB"   6.585 KB      21 rows
. . exported "OE"."PRODUCT_REF_LIST_NESTEDTAB"       12.51 KB     288 rows
. . exported "OE"."CATEGORIES_TAB"                   14.15 KB      22 rows
. . exported "OE"."INVENTORIES"                      21.67 KB    1112 rows
. . exported "OE"."ORDERS"                           12.39 KB     105 rows
. . exported "OE"."ORDER_ITEMS"                      20.88 KB     665 rows
. . exported "OE"."PROMOTIONS"                       5.507 KB       2 rows
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
  /tmp/oe.dmp
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" completed with 1 error(s) at Mon Jul 28 22:58:16 2014 elapsed 0 00:01:08

After that, the dump file is copied to the machine running the 12c instance:


[oracle@oradb tmp]$ ls

oe.dmp oe.log

[oracle@oradb tmp]$ scp oe.dmp medo:/tmp/

oracle@medo’s password:

oe.dmp 100% 3700KB 3.6MB/s

[oracle@oradb tmp]$

The next thing to try is importing the file into the 12c instance:

[oracle@medo ~]$ impdp userid=pdbadmin/admin@local dumpfile=oe.dmp schemas=oe directory=tmp log=oe.log sqlfile=oe_schema.sql

Import: Release 12.1.0.1.0 - Production on Mon Jul 28 23:06:43 2014

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-31640: unable to open dump file "/tmp/oe.dmp" for read
ORA-27041: unable to open file
Linux-x86_64 Error: 22: Invalid argument
Additional information: 2

The errors are “invalid argument value” and “bad dump file specification”, which means that Oracle 12c cannot interpret an Oracle 11g dump file. That is vewy, vewy bad. However, the network link import works like a charm:


[oracle@medo ~]$ impdp userid=system@local network_link=ora11 schemas=oe directory=tmp log=oe.log sqlfile=oe_schema.sql

Import: Release 12.1.0.1.0 - Production on Mon Jul 28 23:10:42 2014

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

Password:

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Legacy Mode Active due to the following parameters:
Legacy Mode Parameter: "log=oe.log" Location: Command Line, Replaced with: "logfile=oe.log"
Starting "SYSTEM"."SYS_SQL_FILE_SCHEMA_01":  userid=system/********@local network_link=ora11 schemas=oe directory=tmp logfile=oe.log sqlfile=oe_schema.sql
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/SYNONYM/SYNONYM
Processing object type SCHEMA_EXPORT/TYPE/INC_TYPE
Processing object type SCHEMA_EXPORT/TYPE/TYPE_SPEC
Processing object type SCHEMA_EXPORT/TYPE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/XMLSCHEMA/XMLSCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/FUNCTION/FUNCTION
Processing object type SCHEMA_EXPORT/FUNCTION/ALTER_FUNCTION
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/INDEX/FUNCTIONAL_INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/FUNCTIONAL_INDEX/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Processing object type SCHEMA_EXPORT/VIEW/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/TYPE/TYPE_BODY
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/TRIGGER
Processing object type SCHEMA_EXPORT/VIEW/TRIGGER
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Job "SYSTEM"."SYS_SQL_FILE_SCHEMA_01" successfully completed at Mon Jul 28 23:11:50 2014 elapsed 0 00:01:04

I tried the same thing with the SCOTT schema and got the same result.
I guess that means that I will have to create more database links. Odd.
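For reference, the network_link run above presumes that the 12c side already has a directory object and a database link pointing back at the 11g instance, along the lines of the following sketch (names taken from the commands above, password redacted):

create or replace directory tmp as '/tmp';
create database link ora11 connect to system identified by "********" using 'ORA11';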


Gimmick with offset and fetch N rows first in 12c

I was looking into some new features of 12c, and I didn't need to dig deep to figure out how the “fetch first” feature works:

SQL> set autotrace on
SQL> select ename from emp offset 5 rows fetch next 5 rows only;

ENAME
———-
BLAKE
CLARK
SCOTT
KING
TURNER

Elapsed: 00:00:00.03

Execution Plan
———————————————————-
Plan hash value: 3611411408

——————————————————————————-
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
——————————————————————————-
| 0 | SELECT STATEMENT | | 14 | 280 | 2 (0)| 00:00:01 |
|* 1 | VIEW | | 14 | 280 | 2 (0)| 00:00:01 |
|* 2 | WINDOW NOSORT STOPKEY| | 14 | 84 | 2 (0)| 00:00:01 |
| 3 | TABLE ACCESS FULL | EMP | 14 | 84 | 2 (0)| 00:00:01 |
——————————————————————————-

Predicate Information (identified by operation id):
—————————————————

   1 - filter("from$_subquery$_002"."rowlimit_$$_rownumber"<=CASE WHEN
              (5>=0) THEN 5 ELSE 0 END +5 AND
              "from$_subquery$_002"."rowlimit_$$_rownumber">5)
   2 - filter(ROW_NUMBER() OVER ( ORDER BY NULL )<=CASE WHEN (5>=0)
              THEN 5 ELSE 0 END +5)

So, under the covers there is the ROW_NUMBER analytic function that I have used before to do the same thing. In other words, the long-awaited new feature is just a slightly simplified version of the old ROW_NUMBER() trick. The “VIEW” step in the plan tells me that my query was rewritten, as there were no views in my query. Nevertheless, this new feature will come in very handy for everybody writing a web application and in need of an efficient pagination method. One disappointing thing is that “FIRST” and “NEXT” are synonyms and return the same thing; in other words, statements will not remember their offset.
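For comparison, the pre-12c formulation of the same page would be roughly the following (ordering by EMPNO here, since any real pagination needs a deterministic order anyway):

select ename
from  (select ename, row_number() over (order by empno) rn from emp)
where rn > 5 and rn <= 10;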
Unfortunately, the associated PL/SQL FETCH statement doesn't have an OFFSET clause, only the well-known LIMIT clause:

declare
-- plain cursor over EMP; the paging has to happen at FETCH time
cursor csr is select ename from emp;
type tab_emp is table of emp.ename%type;
var_temp tab_emp:=tab_emp();
begin
open csr;
-- LIMIT caps the batch size, but there is no way to say "skip the first N rows"
fetch csr bulk collect into var_temp limit 5;
close csr;
for i in var_temp.first..var_temp.last
loop
dbms_output.put_line(var_temp(i));
end loop;
end;
/

If you try to put an OFFSET clause anywhere in the FETCH command, a syntax error follows. That means it is not possible to skip the first 3 pages and fetch just the fourth one in a single FETCH: the application has to fetch the first 3 pages in order to be able to display the 4th.
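A possible workaround, if every page is opened as its own cursor, is to push the paging into the query itself. A sketch using dynamic SQL, which sidesteps any PL/SQL parser limitations on the row limiting clause:

declare
  csr      sys_refcursor;
  type tab_emp is table of emp.ename%type;
  var_temp tab_emp := tab_emp();
begin
  -- page 4 at 5 rows per page: skip 15 rows, return the next 5
  open csr for 'select ename from emp order by empno offset 15 rows fetch next 5 rows only';
  fetch csr bulk collect into var_temp;
  close csr;
  for i in 1 .. var_temp.count loop
    dbms_output.put_line(var_temp(i));
  end loop;
end;
/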

 


Still the same cluster limitations

I am slightly disappointed to report that the new extended data types cannot be used in clustered tables. CLOB columns have never been allowed in clusters, and still are not in 12c, but I expected the new data types to pass seamlessly. That is not the case:
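As a side note, extended data types are only available once MAX_STRING_SIZE has been switched to EXTENDED, a one-way change performed in UPGRADE mode. The rough sequence is sketched below; consult the documentation before doing this to a database you care about:

shutdown immediate
startup upgrade
alter system set max_string_size = extended;
@?/rdbms/admin/utl32k.sql
shutdown immediate
startup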

SQL> create cluster testclu(intkey number(30,0));

Cluster created.

Elapsed: 00:00:00.03
SQL> create index testclu_ind on cluster testclu;

Index created.

Elapsed: 00:00:00.03

Now, let’s create a normal table with the extended character type:

create table testtab1(
intkey number(30,0),
big_text varchar2(32767))
/
Table created.

Elapsed: 00:00:00.02
SQL>
SQL> desc testtab1
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 INTKEY                                             NUMBER(30)
 BIG_TEXT                                           VARCHAR2(32767)

SQL>

Now, let’s try and put the table with an extended data type into a cluster.

SQL> create table testtab2
2  (intkey number(30),
3   big_text varchar2(32767))
4  cluster testclu(intkey);
create table testtab2
*
ERROR at line 1:
ORA-14691: Extended character types are not allowed in this table.

If the size of the big_text column is reduced to 4000, the problem is gone:

1  create table testtab2
2  (intkey number(30),
3   big_text varchar2(4000))
4* cluster testclu(intkey)
SQL> /

Table created.

Elapsed: 00:00:00.03
SQL>

I was unable to find any documentation on this restriction. Presumably it follows from the fact that extended VARCHAR2 values longer than 4000 bytes are stored out of line, much like LOBs, and LOB columns have never been allowed in clustered tables.


SYSBACKUP travails, part II

The SYSBACKUP connectivity problems described in the previous post are caused by unpublished bug 15828768.

There is a way to connect using the SYSBACKUP privilege:

 

RMAN> connect target 'system@local as sysbackup'

target database Password:
connected to target database: ORA12C (DBID=214280212, not open)

Please note the quotes around the connection string; without them, this will not work. Both single and double quotes are accepted:

RMAN> connect target "system@local as sysbackup"

target database Password:
connected to target database: ORA12C (DBID=214280212, not open)

This would require me to change quite a few scripts in order to use SYSBACKUP connections with a commercial backup utility. Fortunately, Oracle is aware of the problem and is actively working on it.
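In the meantime, it is worth double-checking that the user actually holds the administrative privilege in the password file; a quick sketch, using the SYSBACKUP column that V$PWFILE_USERS gained in 12c:

select username, sysdba, sysbackup from v$pwfile_users where username = 'SYSTEM';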

 

 
