Rename Pluggable Database

I stumbled upon a convenient way to rename a pluggable database:

SQL> alter system enable restricted session;

System altered.

SQL> alter database rename global_name to test12.home.com;

Database altered.

SQL> alter session set container=cdb$root;

Session altered.

SQL> select name from v$pdbs;

NAME
------------------------------
PDB$SEED
TEST12

In other words, to rename a pluggable database, one only needs to rename its global name. Cute!
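For completeness, newer 12c releases also document a PDB-level variant of the same trick. Here is a sketch with example names (the PDB must be open in restricted mode for the rename itself, and the session must be inside the PDB):

```sql
alter pluggable database test12 close immediate;
alter pluggable database test12 open restricted;
alter session set container = test12;
alter pluggable database rename global_name to prod12.home.com;
```

Either way, the global name is the handle by which the PDB is renamed.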


12c: OPTIMIZER_DYNAMIC_SAMPLING=11

Oracle 12c has a new feature related to optimizer dynamic sampling. The new level 11 is supposed to put dynamic sampling on autopilot and make the gathered statistics available to other sessions. However, as soon as this was set, I started noticing processes spending significant time waiting for the row cache latch. This was strange, especially having in mind that ASMM was configured. I checked V$ROWCACHE using the following, fairly standard, query:

select parameter, getmisses, gets from v$rowcache order by getmisses;

Here is what I got:

PARAMETER                          GETMISSES        GETS
-------------------------------- ----------- -----------
dc_free_extents                            0           0
dc_used_extents                            0           0
dc_sql_prs_errors                          0           0
kqlsubheap_object                          0           0
dc_outlines                                0           0
realm cache                                0           0
Command rule cache                         0           0
Realm Object cache                         0           0
Rule Set Cache                             0           0
DV Auth Cache                              0           0
extensible security principal in           0           0
Unused                                     0           0
Unused                                     0           0
Unused                                     0           0
XS security class privilege                0           0
extensible security midtier cach           0           0
AV row cache 1                             0           0
AV row cache 2                             0           0
AV row cache 3                             0           0
triton security name to ID                 0           0
rule_or_piece                              0           0
dc_qmc_ldap_cache_entries                  0           0
qmc_app_cache_entries                      0           0
qmc_app_cache_entries                      0           0
qmtmrcin_cache_entries                     0           0
qmtmrctn_cache_entries                     0           0
qmtmrcip_cache_entries                     0           0
qmtmrctp_cache_entries                     0           0
qmtmrciq_cache_entries                     0           0
qmtmrctq_cache_entries                     0           0
qmrc_cache_entries                         0           0
qmemod_cache_entries                       0           0
SMO rowcache                               0           0
dc_cdbfiles                                0           0
dc_cdbservices                             0           0
dc_pdbdba                                  0           0
dc_pdbstates                               0           0
dc_users                                   0           0
dc_partition_scns                          0           0
dc_users                                   0           0
realm auth                                 0           0
realm auth                                 0           0
Realm Subordinate Cache                    0           0
dc_users                                   0           0
dc_users                                   0           0
rule_fast_operators                        0           0
dc_users                                   0           0
dc_awr_control                             2       45531
dc_profiles                                6      381825
rule_info                                  8           8
dc_tablespace_quotas                      18          44
dc_rollback_segments                      60     1000560
dc_tablespaces                            77   485307021
outstanding_alerts                       146        2551
dc_props                                 180     6559399
dc_users                                 630   255650332
dc_constraints                           989        2855
dc_files                                1176     1782669
dc_sequences                            2152     1999677
dc_global_oids                          2314      996894
dc_table_scns                           2795        2795
dc_object_grants                       15522    28781047
dc_segments                           145653    99182525
sch_lj_objs                           195178      195278
dc_users                              203715   855595322
sch_lj_oids                           213295      435827
dc_histogram_defs                     459886    74035168
dc_objects                            613787   202217522
dc_histogram_data                    1084138    76433495
dc_histogram_data                    3409318  2919298230

So, the row cache misses are overwhelmingly related to the histograms. The only conceivable culprit is the new OPTIMIZER_DYNAMIC_SAMPLING=11 setting, and as soon as it was reset to 2 (the 11g default), the problem was gone. What are the experiences of other DBAs with ODS in version 12c? I asked the almighty and omniscient Google and came across a very good article by Pythian:

https://www.pythian.com/blog/performance-problems-with-dynamic-statistics-in-oracle-12c/

Apparently, I wasn’t the first to encounter the problem. Contrary to Pythian, I didn’t notice inefficient plans, probably because the database I was monitoring did not use the In-Memory option. However, a third-party application which does a fairly large number of parse calls started running between 10% and 15% slower than before.
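For the record, the workaround was a one-liner, and a small variant of the V$ROWCACHE query makes the offender easier to spot by computing a miss percentage. A sketch, using only standard V$ROWCACHE columns:

```sql
-- back to the 11g default
alter system set optimizer_dynamic_sampling = 2 scope=both;

-- row cache activity with a computed miss percentage
select parameter, getmisses, gets,
       round(100 * getmisses / nullif(gets, 0), 2) as miss_pct
  from v$rowcache
 where gets > 0
 order by getmisses desc;
```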


Oracle 12c, big table caching

In Oracle 11g, there was an annoying change which frequently had a seriously detrimental effect on application performance. All tables larger than roughly 2% of the buffer cache were considered big tables and were read using serial direct path reads into the PGA, instead of through the SGA. That had two undesired effects:

  • Increased memory consumption and paging, as the PGA had to be expanded in order to receive all those blocks
  • No sharing, since one process’s PGA is not visible to other processes (or threads).

The good news about it was that it prevented a full table scan of a big table (see the definition of big tables above) from flooding the buffer cache and throwing potentially useful blocks out of it. This mechanism is described here:

http://afatkulin.blogspot.co.uk/2012/07/serial-direct-path-reads-in-11gr2-and.html

However, this story has a sequel in Oracle 12c. Direct reads were detrimental to performance, so Oracle decided to allow creating an area within the buffer cache dedicated to large tables. That is the so-called “big table cache” which, when activated, houses blocks from big tables. Those blocks can no longer flood the rest of the cache and throw the useful blocks out of it, as they are now stored in a separate pool. Also, the detrimental direct reads are greatly reduced by using the big table cache. I wrote about the mechanism before, here:

https://dbwhisperer.wordpress.com/2014/10/06/poor-mans-in-memory-caching/

However, now I got to test it, and the results were very beneficial across the board. The bad news is that Oracle 12c requires more memory than Oracle 11g. That should have been expected: each new version has ended up using significantly more memory than the previous one, so this is not really surprising. However, memory prices are falling so rapidly that they more than offset the new versions’ increased hunger for memory, so that shouldn’t be a problem.
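For reference, the knob that turns the feature on is a single parameter. A sketch (40 is an arbitrary example value; on RAC the feature additionally requires PARALLEL_DEGREE_POLICY set to AUTO or ADAPTIVE, if I remember correctly):

```sql
-- reserve up to 40% of the buffer cache for big-table scans;
-- 0 (the default) disables the big table cache
alter system set db_big_table_cache_percent_target = 40;

-- observe the big table cache at work
select bt_cache_target, object_count, memory_buf_alloc
  from v$bt_scan_cache;
```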

PS:

Also worth reading is the following article by Tanel Poder:

Optimizer statistics-driven direct path read decision for full table scans (_direct_read_decision_statistics_driven)



Smarter DG Broker in Oracle 12c

In Oracle 11g, DG Broker needed to be told whether the standby database was physical or logical. The command to add a database in 11g was something like this:

ADD DATABASE <DB Unique Name> AS CONNECT IDENTIFIER IS <TNS Descriptor>
MAINTAINED AS [PHYSICAL|LOGICAL];

Oracle 12c is definitely smarter and can figure it out on its own. The “ADD DATABASE” syntax is simpler:

DGMGRL for 64-bit Windows: Version 12.1.0.2.0 - 64bit Production

Copyright (c) 2000, 2013, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected as SYSDG.
DGMGRL> help add database

Adds a member to the broker configuration

Syntax:

ADD { BACKUP_APPLIANCE | DATABASE | FAR_SYNC } <object name>
[AS CONNECT IDENTIFIER IS <connect identifier>];

DGMGRL>

DG Broker will connect to the database and figure out what kind of standby it is. The new version of DG Broker can also add Oracle’s “Zero Data Loss Recovery Appliance”, an engineered system, or a “Far Sync Instance” for geographically distant standby databases. FAR SYNC looks like a very useful concept to me, but I haven’t yet had an opportunity to set one up. Here is a good article about it:
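So, with 12c the whole exercise boils down to something like this (a sketch; the names sby and sby_tns are examples, not from any real configuration):

```
DGMGRL> ADD DATABASE sby AS CONNECT IDENTIFIER IS sby_tns;
DGMGRL> ENABLE DATABASE sby;
```

No MAINTAINED AS clause needed; the broker works out physical vs. logical on its own.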

http://karandba.blogspot.com/2014/10/far-sync-in-oracle-12c-database.html



Synthetic Full Backup for Oracle

Synthetic full backup is a method of combining incremental backups with an earlier full backup to produce a new full backup. It is not a new thing; many backup utilities can do that with file system backups. It is also well described:

http://www.techworld.com/storage/what-are-synthetic-backups-1149/

http://www.webopedia.com/TERM/S/synthetic_backup.html

However, there is only one backup suite that can do that with Oracle RDBMS: Commvault 11.

How is it done? Well, Commvault is a very well documented tool; the documentation is here:

http://documentation.commvault.com/commvault/v11/article?p=products/oracle/snap/backup_adv.htm

To perform a synthetic full backup, Commvault takes a storage snapshot of the LUN(s) on which the database lives, uses its proprietary block-level backup to back the snapshot up to the designated backup storage, and then combines the incremental backup with the previous backup to produce a “synthetic full backup”. Why is that a big deal deserving a blog post? I am notoriously lazy when it comes to blogging and do not post very often. However, this is a big deal. The reason is that this backup method eliminates regular full backups. A full backup needs to be taken only once, as the baseline. All subsequent backups are incremental. And that means fast.

This helps with multi-TB data warehouses which contain huge amounts of data but are loaded only every few days, with a few GB of information. The incremental backup will be a few GB in size and will be combined with the previous backups to form a “synthetic full”. A synthetic full backup contains only the data blocks, not the archived logs; once it is restored, the log files will still need to be applied. It can be used to restore either the entire database or just a part of it. I was lucky enough to test it and it works really well.

The backups are NOT combined on the database server, so the database users will not suffer. Backups are combined on another computer, known as a “media agent” in Commvault vernacular. For all the gory details, please contact your friendly neighbourhood Commvault sales office.


A New Version of an Old Trick

A long, long time ago, people started using the following SQL idiom:

with t as (select /*+ materialize */ …..)

select …. from t

If my memory serves me right, Jonathan Lewis was the first person I saw using this idiom. Its purpose was to materialize the query results into a global temporary table and reuse them, without having to re-execute the query. Here is an example:

SQL> set autotrace on
SQL> set timing on
SQL> with t as (select /*+ materialize */ * from dba_objects)
  2  select count(*) from t;

  COUNT(*)
----------
     87661

Elapsed: 00:00:02.67

Execution Plan
----------------------------------------------------------
Plan hash value: 3274171655

-------------------------------------------------------------------------------------------------------------
| Id  | Operation                         | Name                      | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                  |                           |     1 |       |   570  (15)| 00:00:01 |
|   1 |  TEMP TABLE TRANSFORMATION        |                           |       |       |            |          |
|   2 |   LOAD AS SELECT                  | SYS_TEMP_0FD9D6608_4A7EB2 |       |       |            |          |
|   3 |    VIEW                           | DBA_OBJECTS               | 87374 |    13M|   239  (24)| 00:00:01 |
|   4 |     UNION-ALL                     |                           |       |       |            |          |
|*  5 |      TABLE ACCESS BY INDEX ROWID  | SUM$                      |     1 |     9 |     1   (0)| 00:00:01 |
|*  6 |       INDEX UNIQUE SCAN           | I_SUM$_1                  |     1 |       |     0   (0)| 00:00:01 |
|   7 |      TABLE ACCESS BY INDEX ROWID  | OBJ$                      |     1 |    30 |     3   (0)| 00:00:01 |
|*  8 |       INDEX RANGE SCAN            | I_OBJ1                    |     1 |       |     2   (0)| 00:00:01 |
|*  9 |      FILTER                       |                           |       |       |            |          |
|* 10 |       HASH JOIN                   |                           | 87373 |    10M|   237  (24)| 00:00:01 |
|  11 |        TABLE ACCESS FULL          | USER$                     |    99 |  1683 |     2   (0)| 00:00:01 |
|* 12 |        HASH JOIN                  |                           | 87373 |  9044K|   230  (23)| 00:00:01 |
|  13 |         INDEX FULL SCAN           | I_USER2                   |    99 |  2178 |     1   (0)| 00:00:01 |
|* 14 |         TABLE ACCESS FULL         | OBJ$                      | 87373 |  7167K|   224  (21)| 00:00:01 |
|  15 |      NESTED LOOPS                 |                           |     1 |    29 |     2   (0)| 00:00:01 |
|* 16 |       INDEX SKIP SCAN             | I_USER2                   |     1 |    20 |     1   (0)| 00:00:01 |
|* 17 |       INDEX RANGE SCAN            | I_OBJ4                    |     1 |     9 |     1   (0)| 00:00:01 |
|  18 |      NESTED LOOPS                 |                           |     1 |   105 |     2   (0)| 00:00:01 |
|  19 |       NESTED LOOPS                |                           |    99 |   105 |     2   (0)| 00:00:01 |
|  20 |        TABLE ACCESS FULL          | USER$                     |    99 |  1683 |     2   (0)| 00:00:01 |
|* 21 |        INDEX RANGE SCAN           | I_LINK1                   |     1 |       |     0   (0)| 00:00:01 |
|  22 |       TABLE ACCESS BY INDEX ROWID | LINK$                     |     1 |    88 |     0   (0)| 00:00:01 |
|  23 |  SORT AGGREGATE                   |                           |     1 |       |            |          |
|  24 |   VIEW                            |                           | 87374 |       |   331   (8)| 00:00:01 |
|  25 |    TABLE ACCESS FULL              | SYS_TEMP_0FD9D6608_4A7EB2 | 87374 |    13M|   331   (8)| 00:00:01 |
-------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   5 - filter(BITAND("S"."XPFLAGS",8388608)=8388608)
   6 - access("S"."OBJ#"=:B1)
   8 - access("EO"."OBJ#"=:B1)
   9 - filter("O"."TYPE#"<>4 AND "O"."TYPE#"<>5 AND "O"."TYPE#"<>7 AND "O"."TYPE#"<>8 AND
       "O"."TYPE#"<>9 AND "O"."TYPE#"<>11 AND "O"."TYPE#"<>12 AND "O"."TYPE#"<>13 AND
       "O"."TYPE#"<>14 AND "O"."TYPE#"<>22 AND "O"."TYPE#"<>87 AND "O"."TYPE#"<>88 OR
       BITAND("U"."SPARE1",16)=0 OR ("O"."TYPE#"=4 OR "O"."TYPE#"=5 OR "O"."TYPE#"=7 OR
       "O"."TYPE#"=8 OR "O"."TYPE#"=9 OR "O"."TYPE#"=10 OR "O"."TYPE#"=11 OR "O"."TYPE#"=12 OR
       "O"."TYPE#"=13 OR "O"."TYPE#"=14 OR "O"."TYPE#"=22 OR "O"."TYPE#"=87) AND
       (SYS_CONTEXT('userenv','current_edition_name')='ORA$BASE' AND "U"."TYPE#"<>2 OR
       "U"."TYPE#"=2 AND "U"."SPARE2"=TO_NUMBER(SYS_CONTEXT('userenv','current_edition_id')) OR
       EXISTS (SELECT 0 FROM SYS."USER$" "U2",SYS."OBJ$" "O2" WHERE
       "O2"."OWNER#"="U2"."USER#" AND "O2"."TYPE#"=88 AND "O2"."DATAOBJ#"=:B1 AND
       "U2"."TYPE#"=2 AND "U2"."SPARE2"=TO_NUMBER(SYS_CONTEXT('userenv','current_edition_id')))))
  10 - access("O"."SPARE3"="U"."USER#")
  12 - access("O"."OWNER#"="U"."USER#")
  14 - filter("O"."TYPE#"<>10 AND "O"."NAME"<>'_NEXT_OBJECT' AND
       "O"."NAME"<>'_default_auditing_options_' AND "O"."LINKNAME" IS NULL AND
       BITAND("O"."FLAGS",128)=0)
  16 - access("U2"."TYPE#"=2 AND "U2"."SPARE2"=TO_NUMBER(SYS_CONTEXT('userenv','current_edition_id')))
       filter("U2"."TYPE#"=2 AND "U2"."SPARE2"=TO_NUMBER(SYS_CONTEXT('userenv','current_edition_id')))
  17 - access("O2"."DATAOBJ#"=:B1 AND "O2"."TYPE#"=88 AND "O2"."OWNER#"="U2"."USER#")
  21 - access("L"."OWNER#"="U"."USER#")

Statistics
----------------------------------------------------------
         11  recursive calls
       1279  db block gets
       2384  consistent gets
       1251  physical reads
        928  redo size
        528  bytes sent via SQL*Net to client
        500  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

SQL>

Operations 1 and 2 (TEMP TABLE TRANSFORMATION and LOAD AS SELECT) show the temp table transformation. If the alias “t” is used repeatedly in the script, “select * from dba_objects” will not be re-executed; the results will be loaded from the temp table instead. Now, there is a new version of the same trick:

  1  with t as (select /*+ result_cache */ * from dba_objects)
  2* select count(*) from t
SQL> /

  COUNT(*)
----------
     87661

Elapsed: 00:00:01.90

Execution Plan
----------------------------------------------------------
Plan hash value: 1250653294

--------------------------------------------------------------------------------------
| Id  | Operation               | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |             |     1 |       |   237  (24)| 00:00:01 |
|   1 |  SORT AGGREGATE         |             |     1 |       |            |          |
|   2 |   VIEW                  | DBA_OBJECTS | 87374 |       |   237  (24)| 00:00:01 |
|   3 |    UNION-ALL            |             |       |       |            |          |
|*  4 |     FILTER              |             |       |       |            |          |
|*  5 |      HASH JOIN          |             | 87373 |  9385K|   236  (24)| 00:00:01 |
|   6 |       INDEX FULL SCAN   | I_USER2     |    99 |   396 |     1   (0)| 00:00:01 |
|*  7 |       HASH JOIN         |             | 87373 |  9044K|   230  (23)| 00:00:01 |
|   8 |        INDEX FULL SCAN  | I_USER2     |    99 |  2178 |     1   (0)| 00:00:01 |
|*  9 |        TABLE ACCESS FULL| OBJ$        | 87373 |  7167K|   224  (21)| 00:00:01 |
|  10 |     NESTED LOOPS        |             |     1 |    29 |     2   (0)| 00:00:01 |
|* 11 |      INDEX SKIP SCAN    | I_USER2     |     1 |    20 |     1   (0)| 00:00:01 |
|* 12 |      INDEX RANGE SCAN   | I_OBJ4      |     1 |     9 |     1   (0)| 00:00:01 |
|  13 |     NESTED LOOPS        |             |     1 |    17 |     1   (0)| 00:00:01 |
|  14 |      INDEX FULL SCAN    | I_LINK1     |     1 |    13 |     0   (0)| 00:00:01 |
|* 15 |      INDEX RANGE SCAN   | I_USER2     |     1 |     4 |     1   (0)| 00:00:01 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   4 - filter("O"."TYPE#"<>4 AND "O"."TYPE#"<>5 AND "O"."TYPE#"<>7 AND "O"."TYPE#"<>8 AND
       "O"."TYPE#"<>9 AND "O"."TYPE#"<>11 AND "O"."TYPE#"<>12 AND "O"."TYPE#"<>13 AND
       "O"."TYPE#"<>14 AND "O"."TYPE#"<>22 AND "O"."TYPE#"<>87 AND "O"."TYPE#"<>88 OR
       BITAND("U"."SPARE1",16)=0 OR ("O"."TYPE#"=4 OR "O"."TYPE#"=5 OR "O"."TYPE#"=7 OR
       "O"."TYPE#"=8 OR "O"."TYPE#"=9 OR "O"."TYPE#"=10 OR "O"."TYPE#"=11 OR "O"."TYPE#"=12 OR
       "O"."TYPE#"=13 OR "O"."TYPE#"=14 OR "O"."TYPE#"=22 OR "O"."TYPE#"=87) AND
       (SYS_CONTEXT('userenv','current_edition_name')='ORA$BASE' AND "U"."TYPE#"<>2 OR
       "U"."TYPE#"=2 AND "U"."SPARE2"=TO_NUMBER(SYS_CONTEXT('userenv','current_edition_id')) OR
       EXISTS (SELECT 0 FROM SYS."USER$" "U2",SYS."OBJ$" "O2" WHERE
       "O2"."OWNER#"="U2"."USER#" AND "O2"."TYPE#"=88 AND "O2"."DATAOBJ#"=:B1 AND
       "U2"."TYPE#"=2 AND "U2"."SPARE2"=TO_NUMBER(SYS_CONTEXT('userenv','current_edition_id')))))
   5 - access("O"."SPARE3"="U"."USER#")
   7 - access("O"."OWNER#"="U"."USER#")
   9 - filter("O"."TYPE#"<>10 AND "O"."NAME"<>'_NEXT_OBJECT' AND
       "O"."NAME"<>'_default_auditing_options_' AND "O"."LINKNAME" IS NULL AND
       BITAND("O"."FLAGS",128)=0)
  11 - access("U2"."TYPE#"=2 AND "U2"."SPARE2"=TO_NUMBER(SYS_CONTEXT('userenv','current_edition_id')))
       filter("U2"."TYPE#"=2 AND "U2"."SPARE2"=TO_NUMBER(SYS_CONTEXT('userenv','current_edition_id')))
  12 - access("O2"."DATAOBJ#"=:B1 AND "O2"."TYPE#"=88 AND "O2"."OWNER#"="U2"."USER#")
  15 - access("L"."OWNER#"="U"."USER#")

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
       1115  consistent gets
          0  physical reads
          0  redo size
        528  bytes sent via SQL*Net to client
        500  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

SQL>

The difference is the /*+ result_cache */ hint instead of /*+ materialize */. The purpose is exactly the same; the difference should in general be marginal, coming from not having to create a temporary object in the dictionary. However, it is my general experience that scripts with the /*+ result_cache */ hint behave a little better and perform a little faster than with the old material girl version.

The numbers in the script above prove nothing, because the table blocks had already been read by the previous version, so the fact that the /*+ materialize */ version did some physical reads and the /*+ result_cache */ version didn’t do any was to be expected. Again, it is my impression that the result cache behaves better than the “material girl” version.
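One way to convince yourself that the second variant really hit the result cache is to look at V$RESULT_CACHE_OBJECTS. A sketch (assumes the server result cache is enabled, i.e. RESULT_CACHE_MAX_SIZE > 0):

```sql
-- cached result sets, newest first
select type, status, creation_timestamp, name
  from v$result_cache_objects
 where type = 'Result'
 order by creation_timestamp desc;
```

A published row whose NAME contains the hinted query text confirms the cache hit.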


After startup trigger on RAC

Recently, a colleague of mine had some problems with pinning application procedures into the shared pool. His database was upgraded to RAC and he wanted to pin the application procedures and packages into the shared pool upon database startup.
The non-RAC version of the database used a shell script, something like $ORACLE_HOME/local/scripts/pin_procs.sh, invoked from the script that started the DB when the system was booted.
However, RAC starts the database automatically and no user scripts are invoked during startup. The alternative mechanism is a startup database trigger. The only question that remains is whether the trigger will fire once for each instance or once for the database. The answer: once for each instance. Let’s demonstrate the statement:

mgogala@umajor:~$ ssh oracle@rac1
Last login: Sat Aug 8 19:36:48 2015
[oracle@rac1 ~]$ srvctl start db -d orcl
[oracle@rac1 ~]$ srvctl status db -d orcl
Instance orcl2 is running on node rac1
Instance orcl1 is running on node rac2
[oracle@rac1 ~]$ srvctl config db -d orcl
Database unique name: orcl
Database name: orcl
Oracle home: /home/oracle/app/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/orcl/spfileorcl.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: orcl
Database instances: orcl2,orcl1
Disk Groups: DATA,REDO
Mount point paths:
Services:
Type: RAC
Database is administrator managed
[oracle@rac1 ~]$

This is a little 11.2.0.4 RAC database, running on two nodes. The proof will be achieved by creating a trigger which inserts the instance number and the host name into a table. If the table ends up containing a single record, the trigger has fired only once. If there are two records, with two different instance numbers and machine names, the trigger has fired twice, just as stated. So, let’s first create a table in the SCOTT schema:
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> create table startup(
2 inst_num integer,
3 inst_host varchar2(128));

Table created.

Elapsed: 00:00:00.11
SQL> grant insert on startup to system;

Grant succeeded.

Elapsed: 00:00:00.01

Next comes the trigger in the SYSTEM schema, because database event triggers can only be created by a user with the ADMINISTER DATABASE TRIGGER privilege, which comes with the DBA role.

SQL> connect system@rac11
Enter password:
Connected.
CREATE OR REPLACE TRIGGER system.start_me_up AFTER STARTUP ON DATABASE
DECLARE
    instance_num INTEGER;
    host         VARCHAR2(128);
BEGIN
    SELECT instance_number, host_name
      INTO instance_num, host
      FROM v$instance;
    INSERT INTO scott.startup VALUES (instance_num, host);
END;
/

Trigger created.

Elapsed: 00:00:00.26

Now, the moment of truth. Let’s restart the database and see what is in the SCOTT.STARTUP table. First, the restart:

[oracle@rac1 ~]$ srvctl stop db -d orcl -o immediate
[oracle@rac1 ~]$ srvctl status db -d orcl
Instance orcl2 is not running on node rac1
Instance orcl1 is not running on node rac2
[oracle@rac1 ~]$ srvctl start db -d orcl
[oracle@rac1 ~]$ srvctl status db -d orcl
Instance orcl2 is running on node rac1
Instance orcl1 is running on node rac2
[oracle@rac1 ~]$

Now, let’s check SCOTT.STARTUP:
mgogala@umajor:~$ sqlplus scott/tiger@rac11

SQL*Plus: Release 12.1.0.2.0 Production on Wed Aug 12 18:32:01 2015

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> column inst_host format a10;
SQL> select * from startup;

  INST_NUM INST_HOST
---------- ----------
         2 rac1
         1 rac2

Elapsed: 00:00:00.01
SQL>

The table contains two records, one for each instance. That proves that the startup trigger fires once for each instance. This also provides a mechanism for doing different things after starting different instances, because it would be trivial to add an IF … THEN clause based on the instance number. However, in this particular case the task was to pin the same set of application procedures in both instances, and that was accomplished by the database trigger.
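To close the loop on the original task, the same trigger skeleton can carry the pinning itself. A sketch (APP_OWNER.APP_PKG is a made-up name; DBMS_SHARED_POOL may need to be installed from dbmspool.sql and granted to the trigger owner):

```sql
CREATE OR REPLACE TRIGGER system.pin_procs AFTER STARTUP ON DATABASE
DECLARE
    inst INTEGER;
BEGIN
    SELECT instance_number INTO inst FROM v$instance;
    -- the IF makes per-instance behaviour possible;
    -- here both instances pin the same package
    IF inst IN (1, 2) THEN
        sys.dbms_shared_pool.keep('APP_OWNER.APP_PKG', 'P');
    END IF;
END;
/
```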
