October 29, 2012

BLOGS I FOLLOW

SCRIPT
http://oracommands.blogspot.in/
https://sites.google.com/site/myscriptsora/
http://tamimdba.wordpress.com/category/database-script/
http://ahmedbaraka.com/computer/docs/oracle_dba_code_examples.pdf
http://faruqueahmed.wordpress.com

ASM
http://nadvi.blogspot.in/2010/06/how-to-check-free-space-in-asm.html
http://ashok-linux-tips.blogspot.in/2012/11/oracle-asm-commands.html

Exhausted space of FRA
http://muhammetucar.blogspot.in/2012/06/exhausted-space-of-fra.html

DATA GUARD

http://chenguangblog.wordpress.com/2011/02/20/data-guard-scripts/
http://arup.blogspot.in/2009/12/resolving-gaps-in-data-guard-apply.html

Oracle® Data Guard Concepts and Administration 10g Release 2 (10.2) Part Number B14239-05
http://docs.oracle.com/cd/B19306_01/server.102/b14239/scenarios.htm

Troubleshooting Data Guard
http://www.datadisk.co.uk/html_docs/oracle_dg/troubleshooting.htm

Data Guard Cheatsheet
http://www.datadisk.co.uk/html_docs/oracle_dg/cheatsheet.htm

Resolving Gaps in Data Guard Apply Using Incremental RMAN Backup
http://arup.blogspot.in/2009/12/resolving-gaps-in-data-guard-apply.html

Oracle 10g Dataguard – Sync Standby database with Incremental SCN based RMAN backup
http://cheatsheet4oracledba.blogspot.in/2012/01/oracle-10g-dataguard-sync-standby.html

Steps to perform for Rolling forward a standby database using RMAN incremental backup where ASM filesystem is involved.
http://www.quick-dba.com/?pg=articles&actid=176

Standby roll forward using RMAN incremental backup
http://www.dbapundits.com/blog/step-by-step/standby-roll-forward-using-rman-incremental-backup/

RMAN Incremental Backups to Roll Forward a Physical Standby Database
http://www.oracle-ckpt.com/rman-incremental-backups-to-roll-forward-a-physical-standby-database-2/

Flashing Back a Failed Primary Database into a Physical Standby Database
http://www.quick-dba.com/?pg=articles&actid=190

Password file maintenance in a Data Guard environment
http://prutser.wordpress.com/2011/06/13/password-file-maintenance-in-a-data-guard-environment/


DATA GUARD BROKER

Oracle® Data Guard Broker 11g Release 1 (11.1) Part Number B28295-03
http://docs.oracle.com/cd/B28359_01/server.111/b28295/cli.htm

Data Guard Manager (Observer) Utility
http://satya-dba.blogspot.in/2010/09/dgmgrl-utility-tool-executable.html

DATAPUMP

https://sites.google.com/site/oracledbnote/datapump
http://docs.oracle.com/cd/B28359_01/server.111/b28319/dp_export.htm
http://www.morganslibrary.org/reference/datapump.html
http://www.morganslibrary.org/reference/pkgs/dbms_datapump.html
http://psoug.org/reference/datapump.html

DATAPUMP STATUS
https://sites.google.com/site/oracledbnote/dpstatus.sql
http://basklaassen.blogspot.in/2006/07/import-status.html

Schema backup and Restore using Expdp and Impdp / Restore schema backups using impdp
http://www.dbas-oracle.com/2011/05/how-to-take-schema-backup-using-expdp.html

TABLE LOCKS

http://raghuook.blogspot.in/2010/08/apps-dba-lock-realated-scripts.html
http://www.myoracleguide.com/s/locks.htm


AWR
http://alexzeng.wordpress.com/2012/09/25/valuate-oracle-awr-automatic-workload-repository/

Prepare 4 Interview
Oracle Advanced Performance Tuning
http://gavinsoorma.com/2009/06/temp-tablespace-usage/

Oracle Advanced Performance Tuning
http://prep-interview.blogspot.in/2012/05/oracle-advanced-performance-tuning.html

Overview of Oracle Tuning
http://alexzeng.wordpress.com/2012/06/13/how-to-find-the-sql-that-using-lots-of-temp-tablespace-in-oracle/


What You Can Do When Your Database Runs out of Temp Space
http://www.dbspecialists.com/files/presentations/temp_space.html


http://shahiddba.blogspot.in/2012/04/oracle-dba-daily-checklist.html
http://www.fadalti.com/oracle/database/rman_effective_use.htm
http://www.oracle-base.com/articles/11g/oracle-db-11gr2-rac-installation-on-oracle-linux-6-using-virtualbox.php
http://oraclemamukutti.blogspot.in/2013/01/applied-column-on-varchivedlog-when.html
http://arup.blogspot.in/2009/12/resolving-gaps-in-data-guard-apply.html
http://docs.oracle.com/cd/E11882_01/backup.112/e10642/rcmcatdb.htm#BRADV89687
http://www.puthranv.com/2013/01/goldengate-with-csn.html


http://www.downloadmela.com/resumes/32/Oracle-DBA-resumes-sample-example-experience-resumes.html

http://pagaldba.wordpress.com/2010/08/06/oracle-dba-interview-qs/

Explain plan for an Old Query

Explain plan for an Old Query :

set pages 9999 lines 300
select SESSION_ID,SESSION_SERIAL#,SQL_ID,SQL_PLAN_HASH_VALUE from DBA_HIST_ACTIVE_SESS_HISTORY where SESSION_ID=<session_id> and SESSION_SERIAL#=<session_serial>;

SELECT * FROM TABLE(dbms_xplan.display_awr('&SQL_ID'));

SELECT * FROM TABLE(dbms_xplan.display_awr('&SQL_ID','&plan_hash_value'));

select * from table(dbms_xplan.display_awr('&sql_id',null,null,'advanced +peeked_binds'));

SELECT * FROM TABLE(dbms_xplan.display_cursor('&SQL_ID'));

SELECT * FROM TABLE(dbms_xplan.display_cursor('&SQL_ID', '&child_number'));
 
Explain plan for Query :

alter session set current_schema=<schema_name>;

select * from table(dbms_xplan.display);

select * from table(dbms_xplan.display_cursor);

select /*+ gather_plan_statistics */ count(*) from hist_test where id1>1000000;

select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST'));

select distinct sql_id, PLAN_HASH_VALUE,TIMESTAMP from dba_hist_sql_plan where sql_id='&sql_id';

October 28, 2012

Recover Standby DB from a missing archivelog

A physical standby database relies on continuous application of archivelogs from the primary database to stay in sync with it. In Oracle Database versions prior to 10g, if an archivelog went missing or was corrupted, you had to rebuild the standby database from scratch.

In 10g you can instead take an incremental backup from the standby's current SCN and recover the standby with it to compensate for the missing archivelogs, as shown below.

Step 1: On the standby database check the current scn.
STDBY> set numwidth 30;
STDBY> select current_scn from v$database;

CURRENT_SCN
-----------
123456789

Step 2: On the primary database create the needed incremental backup from the above SCN
rman target /
RMAN> run {
allocate channel c1 type disk;
BACKUP INCREMENTAL FROM SCN 123456789 DATABASE;
}
Step 3: SCP the backup files to the standby server, into the /tmp/incr_bkp folder.

Step 4: Catalog the Incremental Backup Files at the Standby Database
/tmp/incr_bkp > rman target /

RMAN> CATALOG START WITH '/tmp/incr_bkp/';

searching for all files that match the pattern /tmp/incr_bkp/

List of Files Unknown to the Database
=====================================
...

Do you really want to catalog the above files (enter YES or NO)? YES
cataloging files...
cataloging done

Step 5: Apply the Incremental Backup to the Standby Database
RMAN> RECOVER DATABASE NOREDO;
Media recovery complete.

Step 6: Put the standby database back to managed recovery mode.
STDBY>> recover managed standby database disconnect;


From the alert.log you will notice that the standby database is still looking for the old log files

*************************************************
FAL[client]: Failed to request gap sequence
GAP - thread 1 sequence ....
**************************************************
This is because the controlfile has not been updated.
Hence the standby controlfile has to be recreated

Step 7: On the primary create new standby controlfile
PRIM> alter database create standby controlfile as '/tmp/incr_bkp/standby01.ctl';
Database altered.

Step 8: At the standby, replace the standby controlfile at all locations shown by the control_files parameter.
Copy the standby control file to the standby site. Shut down the standby database, replace the standby controlfiles, and restart the standby database in managed recovery mode.


Note: - FOR STANDBY DATABASES ON ASM, additional steps are required after replacing the standby control file, such as renaming the datafiles.
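The RMAN pieces of the procedure above can be scripted. The sketch below only generates the command files (the SCN, paths, and backup format are the illustrative values from this post, not real ones); it does not invoke RMAN itself:

```shell
#!/bin/sh
# Generate RMAN command files for the roll-forward steps above.
# SCN and paths are the example values from this post, not real ones.
dir=$(mktemp -d)
scn=123456789

# Step 2 (run on the primary): incremental backup from the standby's SCN.
cat > "$dir/incr_from_scn.rman" <<EOF
run {
  allocate channel c1 type disk;
  backup incremental from scn $scn database format '/tmp/incr_bkp/forstandby_%U';
}
EOF

# Steps 4-5 (run on the standby): catalog the copied pieces, then recover.
cat > "$dir/apply_on_standby.rman" <<EOF
catalog start with '/tmp/incr_bkp/' noprompt;
recover database noredo;
EOF

echo "Generated command files in $dir:"
ls "$dir"
```

Each file would then be fed to RMAN on the appropriate host, e.g. `rman target / cmdfile=$dir/incr_from_scn.rman`.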

GRANT USER ACCESS ON SCHEMA

user needs read only access on existing schema
----------------------------------------------
select 'grant '||decode(object_type,'TABLE','select','VIEW','select','SEQUENCE','SELECT ','PROCEDURE','EXECUTE ','PACKAGE','EXECUTE ','TRIGGER','EXECUTE ','FUNCTION','EXECUTE ')|| ' on '||owner||'.'||object_name||' to target user' from dba_objects where OWNER='source user' and object_type not in ('INDEX','PACKAGE BODY','DATABASE LINK','LOB');

user needs full access on existing schema
-----------------------------------------
select 'grant '||decode(object_type,'TABLE','select,insert,delete,update ','VIEW','select,insert,delete,update ','SEQUENCE','SELECT ','PROCEDURE','EXECUTE ','PACKAGE','EXECUTE ','TRIGGER','EXECUTE ','FUNCTION','EXECUTE ')|| ' on '||owner||'.'||object_name||' to targetuser' from dba_objects where OWNER='source user' and object_type not in
('INDEX','PACKAGE BODY','DATABASE LINK','LOB');

Related:
-------
Running the SQL*Plus script below (substituting &Owner and &NewUser) will produce a listing of all the permissions to allow the New User to access all the objects owned by OWNER. Review the output of the script and then run it to Grant the new permissions to NewUser.

Set pagesize 0
define OWNER=
define NEWUSER=

Spool new_grants.txt

Select
decode(OBJECT_TYPE,
'TABLE','GRANT SELECT, INSERT, UPDATE, DELETE , REFERENCES ON '||'&OWNER'||'.',
'VIEW','GRANT SELECT ON '||'&OWNER'||'.',
'SEQUENCE','GRANT SELECT ON '||'&OWNER'||'.',
'PROCEDURE','GRANT EXECUTE ON '||'&OWNER'||'.',
'PACKAGE','GRANT EXECUTE ON '||'&OWNER'||'.',
'FUNCTION','GRANT EXECUTE ON '||'&OWNER'||'.' )||object_name||' TO &NewUser;'
From USER_OBJECTS where OBJECT_TYPE IN ( 'TABLE', 'VIEW', 'SEQUENCE', 'PROCEDURE', 'PACKAGE', 'FUNCTION')
Order By OBJECT_TYPE;

Spool Off;
exit
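The DECODE mapping used by the grant-generator scripts above boils down to a small type-to-privilege table. A shell sketch of the same logic (object and user names below are made up for illustration):

```shell
#!/bin/sh
# Map an object type to the privilege the scripts above would grant
# (read-only variant): SELECT for tables/views/sequences, EXECUTE for code.
priv_for_type() {
  case "$1" in
    TABLE|VIEW|SEQUENCE) echo "SELECT" ;;
    PROCEDURE|PACKAGE|FUNCTION|TRIGGER) echo "EXECUTE" ;;
    *) echo "" ;;   # INDEX, PACKAGE BODY, LOB etc. get no direct grant
  esac
}

# Emit a GRANT statement for one object; $1=type $2=owner.object $3=grantee
grant_stmt() {
  p=$(priv_for_type "$1")
  if [ -n "$p" ]; then
    echo "GRANT $p ON $2 TO $3;"
  fi
}

# Example objects (hypothetical names):
grant_stmt TABLE   SCOTT.EMP     READONLY_USER
grant_stmt PACKAGE SCOTT.EMP_PKG READONLY_USER
grant_stmt INDEX   SCOTT.EMP_IDX READONLY_USER   # prints nothing
```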

DGMGRL SWITCHOVER SWITCHBACK

SELECT PROCESS, CLIENT_PROCESS, SEQUENCE#, STATUS,THREAD#, DELAY_MINS FROM V$MANAGED_STANDBY;

PROCESS CLIENT_P  SEQUENCE# STATUS
------- -------- ---------- ------
MRP0      N/A            1000 APPLIED_LOG

select THREAD#,SEQUENCE#,STATUS,ARCHIVED from v$log;  ------ the status of redo log files
LGWR RFS 1000 CURRENT

Both of the above sequence# values must be the same.


$ps -ef|grep dmon|grep -v grep

show parameter broker;

$dgmgrl

DGMGRL> connect /

Broker config pre switchover:
============================

DGMGRL> show configuration;

Databases:
    TESTPRI - Primary database
    TESTDG  - Physical standby database

Switchover Now:
--------------
DGMGRL> switchover to 'TESTDG';


Broker config post switchover:
=============================

DGMGRL> show configuration;

Databases:

    TESTPRI - Physical standby database

    TESTDG  - Primary database

Switchback Now:
---------------

DGMGRL> connect sys/sys@TESTDG

Connected.

DGMGRL> switchover to 'TESTPRI';

DGMGRL> exit


Troubleshooting Tips:
---------------------
1. Remember to create TEMP files on the DG (new primary) database after the switchover.

2. Give the SYS password explicitly at the time of connecting with DGMGRL for a switchover, else the shutdown/startup steps fail. Consider an example where the SYS password was not given.

DGMGRL> connect /

Connected.

DGMGRL> switchover to 'TESTDG';

Performing switchover NOW. Please wait...

Operation requires shutdown of instance "TESTPRIR1" on database "TESTPRI".

Shutting down instance "TESTPRIR1"...

ORA-01017: invalid username/password; logon denied

You are no longer connected to ORACLE

Please connect again.

Unable to shut down instance "TESTPRIR1".

You must shut down instance "TESTPRIR1" manually.

Operation requires shutdown of instance "TESTPRI" on database "TESTDG".

You must shut down instance "TESTPRI" manually.

Operation requires startup of instance "TESTPRIR1" on database "TESTPRI".

You must start instance "TESTPRIR1" manually.

Operation requires startup of instance "TESTPRI" on database "TESTDG".

You must start instance "TESTPRI" manually.

Switchover succeeded. New primary is "TESTDG"


Fix:
----

Just stop and start the instances manually; the role reversal itself has already completed.

3. Before executing the switchover, you may reduce the number of ARCH processes to the minimum needed for both remote and local archiving; additional ARCH processes can take extra time to shut down, increasing overall switchover time. Once the switchover has completed, you can re-enable the additional ARCH processes. log_archive_max_processes is the parameter in question here.

This is it. So we saw how easy it is to switch over and switch back using the Oracle Data Guard broker. If the configuration is set up correctly, it is very easy to manage.

DATAGUARD SWITCHOVER SWITCHBACK

Pre-Switchover Steps:-
---------------------
1.Blackout Databases
2.Freeze Jobs
3.Stop Streams

------------------------------------------------------------------------------------------------------------------
PRIMARY NODE 1 (n1pr1)
----------------------
connect / as sysdba

set pages 999 lines 300
col OPEN_MODE for a10
col HOST_NAME for a10
select name,INSTANCE_NAME,HOST_NAME,OPEN_MODE,DATABASE_STATUS,DATABASE_ROLE,PROTECTION_LEVEL,CONTROLFILE_TYPE,LOGINS,to_char(STARTUP_TIME,'DD-MON-YYYY HH24:MI:SS') "UP TIME" from v$database,gv$instance;

column role format a7 tru
column name format a10 wrap
select name,database_role,log_mode,protection_mode,protection_level from v$database;

select dest_id,database_mode,recovery_mode,protection_mode,archived_seq#,applied_seq# from v$archive_dest_status where dest_id in (1,2);

SELECT DATABASE_ROLE,SWITCHOVER_STATUS FROM V$DATABASE;

show parameter log_archive_dest_2;

select flashback_on from v$database;
YES
if not, enable using,
alter database flashback ON;

create restore point <before_test_ddmmyy> guarantee flashback database;

set linesize 121
col scn format 99999999
col time format a32

SELECT scn, database_incarnation#, guarantee_flashback_database, storage_size, time, name from gv$restore_point;

STANDBY NODE 1 (n1pr1)
----------------------
connect / as sysdba

set pages 999 lines 300
col OPEN_MODE for a10
col HOST_NAME for a10
select name,INSTANCE_NAME,HOST_NAME,OPEN_MODE,DATABASE_STATUS,DATABASE_ROLE,PROTECTION_LEVEL,CONTROLFILE_TYPE,LOGINS,to_char(STARTUP_TIME,'DD-MON-YYYY HH24:MI:SS') "UP TIME" from v$database,gv$instance;

column role format a7 tru
column name format a10 wrap
select name,database_role,log_mode,protection_mode,protection_level from v$database;

select dest_id,database_mode,recovery_mode,protection_mode,archived_seq#,applied_seq# from v$archive_dest_status where dest_id in (1,2);

SELECT DATABASE_ROLE,SWITCHOVER_STATUS FROM V$DATABASE;

create restore point <before_test_ddmmyy> guarantee flashback database;

set linesize 121
col scn format 99999999
col time format a32

SELECT scn, database_incarnation#, guarantee_flashback_database, storage_size, time, name from gv$restore_point;

------------------------------------------------------------------------------------------------------------------
check Sync:
----------
DR SYNC STATUS
------------------------------------------------------------------------------------------------------------------

SWITCH OVER TO STANDBY
========================
@PRIMARY
--------
PRIMARY NODE 1 (p1nr1)
----------------------

SELECT * FROM v$block_change_tracking;
STATUS     FILENAME BYTES
---------- -------- -----

if ENABLED,
ALTER DATABASE DISABLE BLOCK CHANGE TRACKING;

srvctl stop instance -d <database> -i <instance_n2pr2> -o immediate

SELECT DATABASE_ROLE,SWITCHOVER_STATUS FROM V$DATABASE;
DATABASE_ROLE     SWITCHOVER_STATUS
----------------  --------------------
PRIMARY           TO STANDBY

alter database commit to switchover to physical standby with session shutdown;
shutdown immediate;
startup mount;

SELECT DATABASE_ROLE,SWITCHOVER_STATUS FROM V$DATABASE;
DATABASE_ROLE     SWITCHOVER_STATUS
----------------  --------------------
PHYSICAL STANDBY  TO PRIMARY          ----------------> NEW STANDBY

@STANDBY
--------
PHYSICAL STANDBY 1 (n1dr1)
--------------------------

SELECT * FROM v$block_change_tracking;
STATUS     FILENAME BYTES
---------- -------- -----

if ENABLED,
ALTER DATABASE DISABLE BLOCK CHANGE TRACKING;

srvctl stop instance -d <database> -i <instance_n2dr2> -o immediate

SELECT DATABASE_ROLE,SWITCHOVER_STATUS FROM V$DATABASE;
DATABASE_ROLE     SWITCHOVER_STATUS
----------------  --------------------
PHYSICAL STANDBY  TO PRIMARY         

alter database commit to switchover to primary with session shutdown;
shutdown immediate;
startup;

SELECT DATABASE_ROLE,SWITCHOVER_STATUS FROM V$DATABASE;
DATABASE_ROLE     SWITCHOVER_STATUS
----------------  --------------------
PRIMARY           TO STANDBY          ----------------> NEW PRIMARY          
------------------------------------------------------------------------------------------------------------------
Switchback to the old Primary 
======================
@NEW STANDBY
------------
PRIMARY NODE 1 (n1pr1)
----------------------
srvctl start instance -d <database> -i <instance_n2pr2> -o mount
connect / as sysdba
SELECT * FROM v$block_change_tracking;
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING;

alter database commit to switchover to primary with session shutdown;
shutdown immediate;
startup;

SELECT DATABASE_ROLE,SWITCHOVER_STATUS FROM V$DATABASE;
DATABASE_ROLE     SWITCHOVER_STATUS
----------------  --------------------
PRIMARY           TO STANDBY          ---------------->BACK TO PRIMARY

@NEW PRIMARY
------------
PHYSICAL STANDBY NODE 1 (n1dr1)
-------------------------------
srvctl start instance -d <database> -i <instance_n2dr2> -o mount
connect / as sysdba
SELECT * FROM v$block_change_tracking;
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING;

alter database commit to switchover to physical standby with session shutdown;
shutdown immediate;
startup mount;

SELECT DATABASE_ROLE,SWITCHOVER_STATUS FROM V$DATABASE;
DATABASE_ROLE     SWITCHOVER_STATUS
----------------  --------------------
PHYSICAL STANDBY  TO PRIMARY          ---------------->BACK TO STANDBY            

alter database recover managed standby database disconnect from session;
        
------------------------------------------------------------------------------------------------------------------
check Sync:
----------
DR SYNC STATUS
------------------------------------------------------------------------------------------------------------------

Post-Switchback Steps:-
---------------------
1.Un-Blackout Databases
2.Un-Freeze Jobs
3.Start Streams
4.Drop restore points created at both Primary and Standby after confirmation from application team.
drop restore point <before_test_ddmmyy>;

set linesize 200
col scn format 99999999
col time format a35

SELECT scn, database_incarnation#, guarantee_flashback_database, storage_size, time, name from gv$restore_point;

DATAGUARD SYNC STATUS

@PRIMARY

connect / as sysdba
set pages 999 lines 300
col OPEN_MODE for a10
col HOST_NAME for a10
col DATABASE_ROLE for a7

select NAME,DB_UNIQUE_NAME,INSTANCE_NAME,HOST_NAME,OPEN_MODE,LOG_MODE,DATABASE_STATUS,DATABASE_ROLE,SWITCHOVER_STATUS,PROTECTION_MODE,PROTECTION_LEVEL,CONTROLFILE_TYPE,STATUS,LOGINS,to_char(STARTUP_TIME,'DD-MON-YYYY HH24:MI:SS') "DB UP TIME" from v$database,gv$instance;

SELECT NAME,OPEN_MODE,DATABASE_ROLE,PROTECTION_LEVEL,SWITCHOVER_STATUS,STATUS,LOGINS FROM V$DATABASE,GV$INSTANCE;

select THREAD#,SEQUENCE#,STATUS,ARCHIVED from v$log;

select dest_id,database_mode,recovery_mode,protection_mode,archived_seq#,applied_seq# from v$archive_dest_Status where dest_id in (1,2);

select * from v$archive_gap;
select * from v$dataguard_stats;

show parameter dump;
show parameter listener;
show parameter service;
show parameter log_archive_dest_2;
show parameter log_archive_dest_state_2;
show parameter dg_broker_start;

Set lines 1000
Set pages 1000
Set scan off
Set feed off
BREAK ON ROW SKIP 1
COL "THREAD" for a10
COL "PR-ARCHIVED" for a15
COL "STBY-ARCHIVED" for a15
COL "STBY-APPLIED" for a15
COL "SHIPPING GAP(PR -> STBY)" for a20
COL "APPLIED GAP(STBY -> STBY)" for a20
set head off
select 'sysdate: '|| to_char(sysdate,'DD-Mon-YYYY Hh24:Mi:ss') from dual;
select '****************Standby Log ship and Log Apply Status*****************' from dual;
set head on
select DEST_ID,DESTINATION,TARGET,STATUS from V$ARCHIVE_DEST where DESTINATION is not null;
select * from (
select
lpad(t1,4,' ') "Thread",
lpad(pricre,9,' ') "PR - Archived",
lpad(stdcre,10,' ') "STBY - Archived",
lpad(stdnapp,9,' ') "STBY - Applied",
lpad(pricre-stdcre,13,' ') "Shipping GAP (PR -> STBY)",
lpad(stdcre-stdnapp,15,' ') "Applied GAP (STBY -> STBY)"
from
(select max(sequence#) stdcre, thread# t1 from v$archived_log where standby_dest='YES'  and resetlogs_id in (select max(RESETLOGS_ID) from v$archived_log) and thread# in (1,2,3,4) group by thread#) a,
(select max(sequence#) stdnapp, thread# t2 from v$archived_log where standby_dest='YES'  and resetlogs_id in (select max(RESETLOGS_ID) from v$archived_log) and thread# in (1,2,3,4) and applied='YES' group by thread#) b,
(select max(sequence#) pricre, thread# t3 from v$archived_log where standby_dest='NO' and resetlogs_id in (select max(RESETLOGS_ID) from v$archived_log) and  thread# in (1,2,3,4) group by thread#) c
where a.t1=b.t2 and b.t2=c.t3 and c.t3=a.t1) order by 1
/

THREAD PR-ARCHIVED STBY-ARCHIVED STBY-APPLIED SHIPPING GAP(PR -> STBY) APPLIED GAP(STBY -> STBY)

alter system switch logfile;        or
alter system switch all logfile;    or
alter system archive log current;

/
/

Set lines 1000
Set pages 1000
Set scan off
Set feed off
BREAK ON ROW SKIP 1
COL "THREAD" for a10
COL "PR-ARCHIVED" for a15
COL "STBY-ARCHIVED" for a15
COL "STBY-APPLIED" for a15
COL "SHIPPING GAP(PR -> STBY)" for a20
COL "APPLIED GAP(STBY -> STBY)" for a20
set head off
select 'sysdate: '|| to_char(sysdate,'DD-Mon-YYYY Hh24:Mi:ss') from dual;
select '****************Standby Log ship and Log Apply Status*****************' from dual;
set head on
select DEST_ID,DESTINATION,TARGET,STATUS from V$ARCHIVE_DEST where DESTINATION is not null;
select * from (
select
lpad(t1,4,' ') "Thread",
lpad(pricre,9,' ') "PR - Archived",
lpad(stdcre,10,' ') "STBY - Archived",
lpad(stdnapp,9,' ') "STBY - Applied",
lpad(pricre-stdcre,13,' ') "Shipping GAP (PR -> STBY)",
lpad(stdcre-stdnapp,15,' ') "Applied GAP (STBY -> STBY)"
from
(select max(sequence#) stdcre, thread# t1 from v$archived_log where standby_dest='YES'  and resetlogs_id in (select max(RESETLOGS_ID) from v$archived_log) and thread# in (1,2,3,4) group by thread#) a,
(select max(sequence#) stdnapp, thread# t2 from v$archived_log where standby_dest='YES'  and resetlogs_id in (select max(RESETLOGS_ID) from v$archived_log) and thread# in (1,2,3,4) and applied='YES' group by thread#) b,
(select max(sequence#) pricre, thread# t3 from v$archived_log where standby_dest='NO' and resetlogs_id in (select max(RESETLOGS_ID) from v$archived_log) and  thread# in (1,2,3,4) group by thread#) c
where a.t1=b.t2 and b.t2=c.t3 and c.t3=a.t1) order by 1
/

THREAD PR-ARCHIVED STBY-ARCHIVED STBY-APPLIED SHIPPING GAP(PR -> STBY) APPLIED GAP(STBY -> STBY)


@STANDBY
--------
connect / as sysdba
set pages 999 lines 300
col OPEN_MODE for a10
col HOST_NAME for a10
col DATABASE_ROLE for a7

select NAME,DB_UNIQUE_NAME,INSTANCE_NAME,HOST_NAME,OPEN_MODE,LOG_MODE,DATABASE_STATUS,DATABASE_ROLE,SWITCHOVER_STATUS,PROTECTION_MODE,PROTECTION_LEVEL,CONTROLFILE_TYPE,STATUS,LOGINS,to_char(STARTUP_TIME,'DD-MON-YYYY HH24:MI:SS') "DB UP TIME" from v$database,gv$instance;

SELECT NAME,OPEN_MODE,DATABASE_ROLE,PROTECTION_LEVEL,SWITCHOVER_STATUS,STATUS,LOGINS FROM V$DATABASE,GV$INSTANCE;

select THREAD#,SEQUENCE#,STATUS,ARCHIVED from v$log;

SELECT PROCESS, CLIENT_PROCESS, SEQUENCE#, STATUS,THREAD#, DELAY_MINS FROM V$MANAGED_STANDBY;

select dest_id,database_mode,recovery_mode,protection_mode,archived_seq#,applied_seq# from v$archive_dest_Status where dest_id in (1,2);

select * from v$archive_gap;
select * from v$dataguard_stats;

show parameter dump;
show parameter listener;
show parameter service;
show parameter log_archive_dest_2;
show parameter log_archive_dest_state_2;
show parameter dg_broker_start;

PROMPT
PROMPT
PROMPT Run on Standby Database. This script checks last log applied and last log received time
PROMPT
PROMPT

col time format a40
select 'Last applied  : ' Logs, to_char(next_time,'DD-MON-YY:HH24:MI:SS') Time
    from v$archived_log
    where sequence# = (select max(sequence#) from v$archived_log where applied='YES')
    union
    select 'Last received : ' Logs, to_char(next_time,'DD-MON-YY:HH24:MI:SS') Time
    from v$archived_log
   where sequence# = (select max(sequence#) from v$archived_log);
PROMPT last sequence# received and the last sequence# applied to standby database.
PROMPT
select al.thrd "Thread", almax "Last Seq Received", lhmax "Last Seq Applied"
from (select thread# thrd, max(sequence#) almax
from v$archived_log
where resetlogs_change#=(select resetlogs_change# from v$database)
group by thread#) al,
(select thread# thrd, max(sequence#) lhmax
from v$log_history
where first_time=(select max(first_time) from v$log_history)
group by thread#) lh
where al.thrd = lh.thrd;

-------------------------------------------OR----------------------------------------------

DR SYNC STATUS
--------------------

@PRIMARY

SELECT ARCH.THREAD# "Thread", ARCH.SEQUENCE# "Last Sequence Received", APPL.SEQUENCE# "Last Sequence Applied",
       (ARCH.SEQUENCE# - APPL.SEQUENCE#) "Difference"
FROM (SELECT THREAD#, SEQUENCE# FROM V$ARCHIVED_LOG
      WHERE (THREAD#, FIRST_TIME) IN (SELECT THREAD#, MAX(FIRST_TIME) FROM V$ARCHIVED_LOG GROUP BY THREAD#)) ARCH,
     (SELECT THREAD#, SEQUENCE# FROM V$LOG_HISTORY
      WHERE (THREAD#, FIRST_TIME) IN (SELECT THREAD#, MAX(FIRST_TIME) FROM V$LOG_HISTORY GROUP BY THREAD#)) APPL
WHERE ARCH.THREAD# = APPL.THREAD# ORDER BY 1;

    Thread Last Sequence Received Last Sequence Applied Difference
---------- ---------------------- --------------------- ----------

alter system switch logfile;
/
/

SELECT ARCH.THREAD# "Thread", ARCH.SEQUENCE# "Last Sequence Received", APPL.SEQUENCE# "Last Sequence Applied",
       (ARCH.SEQUENCE# - APPL.SEQUENCE#) "Difference"
FROM (SELECT THREAD#, SEQUENCE# FROM V$ARCHIVED_LOG
      WHERE (THREAD#, FIRST_TIME) IN (SELECT THREAD#, MAX(FIRST_TIME) FROM V$ARCHIVED_LOG GROUP BY THREAD#)) ARCH,
     (SELECT THREAD#, SEQUENCE# FROM V$LOG_HISTORY
      WHERE (THREAD#, FIRST_TIME) IN (SELECT THREAD#, MAX(FIRST_TIME) FROM V$LOG_HISTORY GROUP BY THREAD#)) APPL
WHERE ARCH.THREAD# = APPL.THREAD# ORDER BY 1;

Thread Last Sequence Received Last Sequence Applied Difference
---------- ---------------------- --------------------- ----------
(the Difference should be 0, or minimal)

-------------------------------------------OR----------------------------------------------

DR SYNC STATUS
--------------------

@PRIMARY

alter system switch logfile;
alter system switch logfile;
alter system switch logfile;

select thread#,max(sequence#) from v$archived_log where archived='YES' group by thread# order by 1;

@STANDBY

select thread#,max(sequence#) from v$archived_log where applied='YES' group by thread# order by 1;

-------------------------------------------OR----------------------------------------------
DR SYNC STATUS
--------------------
 @STANDBY

SELECT PROCESS, CLIENT_PROCESS, SEQUENCE#, STATUS,THREAD#, DELAY_MINS FROM V$MANAGED_STANDBY;

PROCESS   CLIENT_P  SEQUENCE# STATUS
--------- -------- ---------- ------
MRP0      N/A            1000 APPLIED_LOG

select THREAD#,SEQUENCE#,STATUS,ARCHIVED from v$log;  ------ the status of redo log files
LGWR RFS 1000 CURRENT

Both of the above sequence# values must be the same.

-------------------------------------------OR----------------------------------------------


DELETE COMPRESS FILES OS LEVEL

To list files dated Apr 24
--------------------------
ls -lrt | grep 'Apr 24' | awk -F' ' '{print$9}'

To delete files dated Apr 24
----------------------------
rm -rf `ls -lrt | grep 'Apr 24' | awk -F' ' '{print$9}'`

To list files of Month Apr
--------------------------
ls -lrt | grep 'Apr' | awk -F' ' '{print$9}'

To delete files of Month Apr
----------------------------
rm -rf `ls -lrt | grep 'Apr' | awk -F' ' '{print$9}'`
ls -lrt *.trc | grep 'Apr' | awk '{print $9}' | xargs -r rm -f

To list files older than 60 days
--------------------------------
find . -name "*.trc" -depth -mtime +60 -exec ls -l {} \;
find /path/to/files* -mtime +60 -print

To delete files older than 60 days
----------------------------------
find . -name "*.trc" -depth -mtime +60 -exec rm {} \;
find /path/to/files* -mtime +60 -exec rm {} \;
find /path/to/files* -type f -mtime +60 -print0 | xargs -r rm -rf
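The same find-based cleanup can be rehearsed safely in a scratch directory before pointing it at real trace or archive destinations (GNU touch/find assumed, for the `-d` relative-date option):

```shell
#!/bin/sh
# Rehearse the age-based cleanup in a throwaway directory (GNU coreutils
# assumed for `touch -d`). Nothing outside the sandbox is touched.
dir=$(mktemp -d)
touch -d "90 days ago" "$dir/old.trc"   # well past the 60-day cutoff
touch "$dir/new.trc"                    # current file, must survive

# Dry run first: list what would be deleted.
find "$dir" -name "*.trc" -type f -mtime +60 -print

# Actual delete: -print0 | xargs -0 handles spaces in names safely.
find "$dir" -name "*.trc" -type f -mtime +60 -print0 | xargs -0 -r rm -f

ls "$dir"       # only new.trc should remain
rm -rf "$dir"
```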

COMPRESS
--------
nohup compress *.arc &
ls -lrt *.arc|awk '{print "compress "$9}' >ARC_LIST.txt
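Parsing ls output (as in the ARC_LIST.txt approach above) breaks on unusual file names and very long listings; find can drive the compression directly. gzip is used below as a stand-in, since the classic compress utility is absent on many modern systems:

```shell
#!/bin/sh
# Bulk-compress archive logs via find instead of parsing ls output.
# gzip stands in for `compress`; sandboxed so nothing real is modified.
dir=$(mktemp -d)
printf 'redo data\n' > "$dir/log_1.arc"
printf 'redo data\n' > "$dir/log_2.arc"

# -exec runs gzip once per file; no "argument list too long" risk.
find "$dir" -name "*.arc" -type f -exec gzip {} \;

ls "$dir"       # log_1.arc.gz  log_2.arc.gz
rm -rf "$dir"
```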

SOURCE : Internet

References:
http://eisabainyo.net/weblog/useful-linux-commands/

SERVER REBOOT

Whether it is a Data Guard setup or a normal DB, the following steps are mandatory: take the output of the commands below and store it in a separate notepad before the reboot. Once the server reboot activity is completed, cross-check against the output taken before the reboot.
####################################################################################################

ps -ef|grep pmon 

ps -ef|grep pmon|wc -l

ps -ef|grep inh (OR) ps -ef|grep tns

ps -ef|grep inh|wc -l

ps -ef |grep d.bin

hostname

date

uname -a

cat /etc/oratab  (OR)  cat /var/opt/oracle/oratab

uptime

who -b

df -h  (OR)  df -gt
####################################################################################################
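The checklist above can be captured into a single timestamped file before the reboot, so the post-reboot output can be diffed against it (the /tmp output path below is just an example):

```shell
#!/bin/sh
# Snapshot the pre-reboot checklist into one timestamped file.
# The output location is an example; adjust to a persistent path.
out="/tmp/prereboot_$(hostname)_$(date +%Y%m%d_%H%M%S).txt"
{
  echo "=== $(date) on $(hostname) ==="
  echo "--- uname -a ---";            uname -a
  echo "--- uptime / last boot ---";  uptime; who -b
  echo "--- instances (pmon) ---";    ps -ef | grep pmon  | grep -v grep
  echo "--- listeners (tns) ---";     ps -ef | grep tns   | grep -v grep
  echo "--- clusterware (d.bin) ---"; ps -ef | grep d.bin | grep -v grep
  echo "--- oratab ---";  cat /etc/oratab 2>/dev/null || cat /var/opt/oracle/oratab 2>/dev/null
  echo "--- filesystems ---"; df -h
} > "$out"
echo "Saved: $out"
```

After the reboot, run the same script again and diff the two files.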

At server reboot time, check the following on the database side
(before the server reboot and after the server boots, if it is a DG database):
-------------------------------------------------------------------
SQL> select name,db_unique_name,database_role,controlfile_type,CREATED from v$database;

SQL> select sequence#,first_time,next_time,completion_time,applied from v$archived_log where applied <> 'YES' and DEST_ID != 0 and status != 'D' and completion_time < (sysdate-1/48) order by 1;

no rows selected -----> in sync

SQL> select process, status, sequence# from v$managed_standby;

MRP0 ----> this process should be running

===============================
Data Guard Db Startup Procedure
===============================

sqlplus "/ as sysdba"

startup nomount;

alter database mount standby database;

recover managed standby database disconnect from session; ----------->To put it in managed recovery mode

exit;

=================================
Data Guard Db  Shutdown Procedure
=================================

Login as oracle

source the environment

sqlplus "/ as sysdba"

alter database recover managed standby database cancel; ------> to cancel managed recovery mode

shutdown immediate;

exit

DATAFILE RESIZE


set pages 999 lines 300

select file_id,FILE_NAME,(BYTES/1024/1024) "Size MB",tablespace_name from dba_data_files where tablespace_name like '%<tablespace_name>%';

alter database datafile <no.> resize 1024m;

or

SET LONG 90000

select DBMS_METADATA.GET_DDL('TABLESPACE','TABLESPACENAME') FROM DUAL;

CREATE TABLESPACE "tablespacename" DATAFILE '/path/filename' SIZE 1024M AUTOEXTEND ON NEXT 512M MAXSIZE 3000M;

ALTER DATABASE DATAFILE <no.> AUTOEXTEND ON NEXT 512M MAXSIZE 3000M;

DATAGUARD FAILOVER WITH FLASHBACK DATABASE


On Primary
=======
Defer the log shipping to standby.

SQL> Alter system set log_archive_dest_state_n=defer;

Check the last log sequence:
SQL> select max(sequence#) from v$log_history;

On Standby
========
Enable flashback recovery.

Setup flashback destination and file size.

SQL> Alter system set db_recovery_file_dest='/u01/flash' scope=spfile;
SQL> Alter system set db_recovery_file_dest_size=10G scope=spfile;

Cancel the recovery.

SQL> Alter database recover managed standby database cancel;

SQL> Shutdown immediate

Startup database in mount mode.
SQL> Startup mount

Put database in the flashback mode
SQL> alter database flashback on;

Create the Restore Point on the standby database
SQL> Create restore point before_dr guarantee flashback database;

Activate Standby database
==================
SQL> Alter database activate standby database;

Open the DR database

SQL> Alter database open;

Shutdown and restart the database

--- Perform DR tests ---

Steps to rollback and convert to standby.

On Standby

Shutdown after DR test

SQL>Shutdown immediate

Startup database with mount and force option.

SQL>startup mount force

Flash the database back to the restore point.

SQL> Flashback database to restore point before_dr;

Drop the restore point.

SQL> Drop restore point before_dr;

Turn off flashback

alter database flashback off;

Convert database to physical standby

SQL> Alter database convert to physical standby;

shutdown and startup in standby mode

SQL> Shutdown immediate
SQL> startup nomount
SQL> Alter database mount standby database;
SQL> Alter database recover managed standby database disconnect from session;

On Production
==========
Enable the log shipping to standby.

SQL> Alter system set log_archive_dest_state_4='enable';

Switch the log and verify the apply.

SQL> alter system switch logfile;
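Once log shipping is re-enabled, it helps to confirm the standby has caught up. A sketch of the two checks, written as a generated SQL*Plus script (run the first query on the primary, the second on the standby):

```shell
#!/bin/sh
# Sketch: queries to confirm the standby is applying again after the DR test.

cat > /tmp/check_apply.sql <<'EOF'
-- On primary: last sequence generated per thread
select thread#, max(sequence#) last_generated
from   v$archived_log
group  by thread#;

-- On standby: last sequence applied per thread
select thread#, max(sequence#) last_applied
from   v$archived_log
where  applied = 'YES'
group  by thread#;
EOF

# Run only where sqlplus is available.
if command -v sqlplus >/dev/null 2>&1; then
  sqlplus -S "/ as sysdba" @/tmp/check_apply.sql
fi
```

When last_applied trails last_generated by more than a couple of sequences, check for a gap as per the "Resolving Gaps" links above.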

RMAN Backup Details

RMAN Backup Details
-------------------

set linesize 150
col INSTANCE for a9
col ELAPSED for a30

        SELECT (  SELECT   instance_name FROM v$instance)
              || ' '
              || (  SELECT   instance_number FROM v$instance)
                 instance,
              TO_CHAR (start_time, 'YYYY-MM-DD HH24:MI') start_time,
              TO_CHAR (output_bytes / 1048576, '999,999,999.9') output_mb,
              TO_CHAR (output_bytes_per_sec / 1048576, '999,999.9') mb_S,
              time_taken_display elapsed,input_type,status
         FROM v$rman_backup_job_details
         where start_time >= sysdate - &Days
         ORDER BY start_time;

RMAN Backup Details
-------------------

select ctime "Date",
       decode(backup_type, 'L', 'Archive Log', 'D', 'Full', 'Incremental') backup_type,
        bsize "Size MB"
 from (select trunc(bp.completion_time) ctime
              , backup_type
              , round(sum(bp.bytes/1024/1024),2) bsize
       from v$backup_set bs, v$backup_piece bp
       where bs.set_stamp = bp.set_stamp
       and bs.set_count  = bp.set_count
       and bp.status = 'A'
       group by trunc(bp.completion_time), backup_type)
order by 1, 2;

TAG Details
-----------

select bp.tag, bp.handle, bp.stamp,
       decode(bs.backup_type, 'L', 'Archive Log', 'D', 'Full', 'Incremental') backup_type,
       bp.set_stamp,
       to_char(bp.completion_time + 30/1440, 'yyyy/mm/dd HH24:MI:SS') end_time
from   v$backup_set bs, v$backup_piece bp
where  bs.set_stamp = bp.set_stamp
and    bs.set_count = bp.set_count
and    tag = '&TAG';

RMAN Backup Details
-------------------

set pages 9999 lines 300
col status for a10
col object_type for a10

alter session set nls_date_format = 'DD-MM-YYYY HH24:MI:SS';

select SID,STATUS,START_TIME,END_TIME,OBJECT_TYPE,OUTPUT_DEVICE_TYPE from v$rman_status order by START_TIME desc;

select max(START_TIME) "START_TIME" ,END_TIME,ELAPSED_SECONDS/60 "ELAPSED_MINS", OUTPUT_BYTES/1024/1024/1024 "OUTPUT_BYTES_GB",STATUS from v$rman_backup_job_details group by start_time,END_TIME,ELAPSED_SECONDS,OUTPUT_BYTES, STATUS having max(START_TIME) > sysdate-1;

level 0 backup in the last 7 days
---------------------------------

select /*+ rule */ NVL(min(r.status),'NO BACKUP') as status from V$RMAN_BACKUP_JOB_DETAILS r inner join
(select distinct session_stamp, incremental_level from v$backup_set_details) b on r.session_stamp = b.session_stamp where incremental_level is not null and r.start_time > sysdate - 7 and b.incremental_level = 0;

Longops - RMAN
--------------

SELECT SID, SERIAL#, CONTEXT, SOFAR, TOTALWORK,ROUND(SOFAR/TOTALWORK*100,2) "%_COMPLETE", TIME_REMAINING/60
Remaining_mins FROM V$SESSION_LONGOPS WHERE OPNAME LIKE 'RMAN%' AND OPNAME NOT LIKE '%aggregate%' AND TOTALWORK != 0 AND SOFAR <> TOTALWORK;

More
----
List the most recent Level 0 backups

Here, we have incremental level 0 backups which run on Sunday using
"backup as compressed backupset incremental level 0 database ....", and
incremental level 1 backups which run every day other than Sunday using
"backup as compressed backupset incremental level 1 database ....".

select BACKUP_TYPE,INCREMENTAL_LEVEL,START_TIME,COMPLETION_TIME from v$backup_set
where START_TIME > sysdate - 4;

B INCREMENTAL_LEVEL START_TIME        COMPLETION_TIME
- ----------------- ----------------- -----------------
D                 0 28-AUG-2012 04:30 28-AUG-2012 04:33
D                 0 28-AUG-2012 04:30 28-AUG-2012 04:33
I                 1 29-AUG-2012 04:30 29-AUG-2012 04:33
I                 1 29-AUG-2012 04:30 29-AUG-2012 04:35
I                 1 30-AUG-2012 04:30 30-AUG-2012 04:33
I                 1 30-AUG-2012 04:30 30-AUG-2012 04:35
I                 1 31-AUG-2012 04:30 31-AUG-2012 04:33
I                 1 31-AUG-2012 04:30 31-AUG-2012 04:35
D                   31-AUG-2012 04:35 31-AUG-2012 04:35
D                   31-AUG-2012 06:30 31-AUG-2012 06:30
D                   31-AUG-2012 08:30 31-AUG-2012 08:30

As per the Oracle documentation, in V$BACKUP_SET the BACKUP_TYPE column holds:
"BACKUP_TYPE VARCHAR2(1) Type of files that are in this backup. If the
backup contains archived redo logs, the value is L. If this is a datafile
full backup, the value is D. If this is an incremental backup, the value is I."
"INCREMENTAL_LEVEL NUMBER Location where this backup set fits into
the database's backup strategy. Set to zero for full datafile backups,
nonzero for incremental datafile backups, and NULL for archivelog
backups."
So the rows marked "D" above suggest a full datafile backup ran today, but
their INCREMENTAL_LEVEL is NULL, which means they are not full datafile
backups (most likely controlfile/spfile autobackups rather than level 0
database backups).

SELECT DISTINCT TO_CHAR((b.CHECKPOINT_TIME), 'YYYY-MM-DD HH:MI.SS') t
FROM v$backup_datafile b, v$tablespace ts, v$datafile f
WHERE b.incremental_level = 0
  AND INCLUDED_IN_DATABASE_BACKUP='YES'
  AND f.file#=b.file#
  AND f.ts#=ts.ts#
GROUP BY b.checkpoint_time
ORDER BY 1;

RMAN> list backup summary;

using target database control file instead of recovery catalog

List of Backups
===============
Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
------- -- -- - ----------- -------------------- ------- ------- ---------- ---

If you look at LV column, A=archivelog, 0=incremental level #, F=file backup.

RMAN> list backup by file tag <tag_name>;

Managing and Monitoring RMAN Backups

LIST

RMAN> list backup; // backup sets
RMAN> list copy; // image copies
RMAN> list backup of database; // whole database backup sets, full or incremental
RMAN> list backup of datafile 1;
RMAN> list backup of tablespace users;
RMAN> list backup of archivelog all;
RMAN> list copy of archivelog from time 'sysdate-2';
RMAN> list backup of archivelog from sequence 1000 until sequence 1050;
REPORT

RMAN> report schema; // "schema" is confusing. this just shows datafiles and has nothing to do with user objects.
RMAN> report need backup;
RMAN> report need backup days 3;
RMAN> report need backup redundancy 3;
RMAN> report obsolete;
RMAN> report obsolete redundancy 2; // shows backups that are older than two others
DELETE

RMAN> delete obsolete;
RMAN> delete obsolete redundancy 2; // delete backups that are older than two others
Views & Tables

These views and data are contained in the control file only (not recovery catalog).
V$BLOCK_CHANGE_TRACKING – to monitor block change tracking
V$BACKUP_FILES
V$BACKUP_SET
V$BACKUP_PIECE
V$BACKUP_REDOLOG – each archived log that has been backed up
V$BACKUP_SPFILE – each spfile that has been backed up
V$BACKUP_DEVICE – names of sbt devices
V$RMAN_CONFIGURATION
Parameters

DB_CREATE_FILE_DEST – default location for the change tracking file.

Level 0 incremental backup - the base for subsequent incremental backups; copies all blocks containing data.
You can create a level 0 database backup as backup sets or image copies.

Level 1 cumulative incremental backup - backs up all blocks changed after the most recent level 0 incremental backup.

Level 1 differential incremental backup - backs up all blocks changed after the most recent incremental backup at level 1 or 0. Incremental backups are differential by default.

Block Change Tracking - improves incremental backup performance by recording changed blocks for each datafile in a small bitmap file.
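Block change tracking is switched on at the database level. A minimal sketch, assuming a file-system path for the tracking file (with DB_CREATE_FILE_DEST set, the USING FILE clause can be omitted):

```shell
#!/bin/sh
# Sketch: enable and verify block change tracking.
# The tracking file path below is an example only.

cat > /tmp/enable_bct.sql <<'EOF'
alter database enable block change tracking
  using file '/u01/oradata/bct/change_tracking.f';

-- Verify: STATUS should report ENABLED
select status, filename from v$block_change_tracking;
EOF

# Run only where sqlplus exists.
if command -v sqlplus >/dev/null 2>&1; then
  sqlplus -S "/ as sysdba" @/tmp/enable_bct.sql
fi
```

`alter database disable block change tracking;` turns it back off; the bitmap file is then removed.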

An example is shown below where we check the validity of a database backup for a database which has a weekly level 0 backup. Note that the RESTORE DATABASE VALIDATE command will cause RMAN to check for the last level 0 backup, which in this case was performed on the 15th of November.

RMAN> restore database validate;

Starting restore at 20-NOV-09
using channel ORA_SBT_TAPE_1
using channel ORA_DISK_1

channel ORA_SBT_TAPE_1: starting validation of datafile backup set
channel ORA_SBT_TAPE_1: reading from backup piece vnkud0k5_1_1
channel ORA_SBT_TAPE_1: piece handle=vnkud0k5_1_1 tag=TAG20091115T060908
channel ORA_SBT_TAPE_1: restored backup piece 1
channel ORA_SBT_TAPE_1: validation complete, elapsed time: 00:14:15
Finished restore at 20-NOV-09

RMAN> list backup of database tag TAG20091115T060908;

List of Backup Sets
===================

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
3054 Incr 0 15.59G SBT_TAPE 00:39:00 15-NOV-09
BP Key: 3054 Status: AVAILABLE Compressed: NO Tag: TAG20091115T060908
Handle: vnkud0k5_1_1 Media:
List of Datafiles in backup set 3054
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
1 0 Incr 23674579905 15-NOV-09 /u04/oradata/emrep/system01.dbf
2 0 Incr 23674579905 15-NOV-09 /u04/oradata/emrep/undotbs01.dbf
3 0 Incr 23674579905 15-NOV-09 /u04/oradata/emrep/sysaux01.dbf
4 0 Incr 23674579905 15-NOV-09 /u04/oradata/emrep/users01.dbf
5 0 Incr 23674579905 15-NOV-09 /u04/oradata/emrep/mgmt.dbf
6 0 Incr 23674579905 15-NOV-09 /u04/oradata/emrep/mgmt_ecm_depot1.dbf
7 0 Incr 23674579905 15-NOV-09 /u04/oradata/emrep/patrol01.dbf

http://www.freelists.org/post/oracle-l/RMAN-Query-on-VBACKUP-SET-inconsistency,1
http://gavinsoorma.com/2009/11/rman-restore-validate-examples/
http://myoracleworld.hobby-electronics.net/RMAN-basic-excersizes.html
http://www.pafumi.net/rman.htm
http://orainstance.wordpress.com/2013/04/18/interview-questions-answer-on-rman/

 

ORA-00257: archiver error. Connect internal only, until freed.

ISSUE:
ORA-00257: archiver error. Connect internal only, until freed
ARCHIVE DESTINATION FULL ORA-00257

ps -ef|grep pmon

ORACLE_SID=`ps -ef | grep asm_smon | grep -v 'grep' | grep -v 'sed' | awk '{printf $8}' | awk 'BEGIN{FS="_"} {printf $3}'`
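The pipeline above pulls the instance SID out of the `ps -ef` listing: the first awk prints the 8th field (the process name, e.g. asm_smon_+ASM1), and the second splits it on underscores and keeps the third piece. The same extraction can be exercised against a canned ps line:

```shell
#!/bin/sh
# Demonstrates the field extraction used above against a canned ps line.
# A real smon process name looks like "asm_smon_+ASM1" (or "ora_smon_<SID>").
sample="oracle   4321     1  0 10:00 ?        00:00:01 asm_smon_+ASM1"

SID=$(printf '%s\n' "$sample" \
      | awk '{printf $8}' \
      | awk 'BEGIN{FS="_"} {printf $3}')

echo "$SID"   # prints +ASM1
```

Note the pipeline breaks if the SID itself contains an underscore; field $3 would then hold only the first piece of the SID.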


ASM Tablespace Utilization Scripts

show parameter recovery;

1. DB_RECOVERY_FILE_DEST_SIZE (Specifies max space to use for FRA)
2. DB_RECOVERY_FILE_DEST (Location of FRA)
The DB_RECOVERY_FILE_DEST_SIZE must be set before DB_RECOVERY_FILE_DEST.

col name format a40
select
   name,
   to_char(space_limit, '999,999,999,999') as space_limit,
   to_char(space_limit - space_used + space_reclaimable,
   '999,999,999,999') as space_available,
   round((space_used - space_reclaimable)/space_limit * 100, 1) as pct_full
from
   v$recovery_file_dest;

select * from V$FLASH_RECOVERY_AREA_USAGE;(see what kind of files are available in the Flash Recovery Area)
select * from V$RECOVERY_FILE_DEST; (determine actual values)

ALTER SYSTEM SET db_recovery_file_dest_size=10G scope=both;
ALTER SYSTEM SET db_recovery_file_dest='/oradata/FRA';

For example, If FRA is in an Automatic Storage Management (ASM) disk group
ALTER SYSTEM SET db_recovery_file_dest='+FLASH' scope=both;

RAC
ALTER SYSTEM set db_recovery_file_dest_size=60G scope=both sid='*' ;
ALTER SYSTEM SET db_recovery_file_dest='+FLASH' sid='*';
In a RAC database, all instances must have the same values for these parameters. Even though there are multiple nodes they all share the same controlfiles.

Here, if there is no space available, the DBA can back up the archive logs to free the space.

ls -ltr *.cmd

nohup rman target / cmdfile=archivebackup.cmd log=archivebackup_dbname_DDMONYY.log &
nohup: appending output to `nohup.out'
tail -f archivebackup_dbname_DDMONYY.log

archivebackup.cmd
-----------------
run {
DELETE NOPROMPT ARCHIVELOG UNTIL TIME 'SYSDATE-1' BACKED UP 1 TIMES TO SBT_TAPE;
allocate channel ch1 type 'sbt_tape';
allocate channel ch2 type 'sbt_tape';
allocate channel ch3 type 'sbt_tape';
allocate channel ch4 type 'sbt_tape';
BACKUP ARCHIVELOG ALL FILESPERSET 10 DELETE INPUT;
}

ps -ef| grep rman

col name format a40
select
   name,
   to_char(space_limit, '999,999,999,999') as space_limit,
   to_char(space_limit - space_used + space_reclaimable,
   '999,999,999,999') as space_available,
   round((space_used - space_reclaimable)/space_limit * 100, 1) as pct_full
from
   v$recovery_file_dest;

select * from V$FLASH_RECOVERY_AREA_USAGE;(see what kind of files are available in the Flash Recovery Area)
select * from V$RECOVERY_FILE_DEST; (determine actual values)

NOTE:

In order to solve the above error the solutions are

1) Increase the free space at the location where the archiver writes the archive logs. That location is set in the pfile/spfile and can be checked by logging into sqlplus and issuing:

SQL> show parameter log_archive_dest

2) If it is not possible to increase free space at the current location but free space is available elsewhere, change the parameter log_archive_dest (or log_archive_dest_1 in some cases) so that new archives are written to the new location.

This can be done by modifying the init.ora file, or with ALTER SYSTEM if an spfile is present:

SQL> alter system set log_archive_dest_1='LOCATION=<new_location_with_free_space>';

3) We can use the following steps:
   1. Find the archive destination:
      show parameter archive_dest

      Let's say it shows LOCATION=/u10/oradata/mydb/arch

   2. Move some files to another location using OS commands:
      cd /u10/oradata/mydb/arch
      mv /u10/oradata/mydb/arch/* /u11/oradata/mydb/arch-bkp/

4) The option most often used is to back up the archives from the existing location and then delete them, so that new archives can be generated there. The backup and deletion can be done at OS level, but the recommended method (and the compulsory one when ASM is in place) is to back up and delete through RMAN, as shown:

rman target sys/sys

RMAN> backup archivelog all device type disk format '/oracle/arch_%U';

RMAN> delete archivelog until time 'trunc(sysdate)';

This will delete all archive logs generated before today, space will be freed, and the archiver can resume archiving the redo logs.

------------------------------------------------------------------------------------------------------------

The views v$recovery_file_dest and v$flash_recovery_area_usage do not always give a true picture of the exact space used, due to Bug 4911954 in Oracle 10g; the version confirmed as affected is 10.2.0.2 (reference: Metalink Doc Id 4911954.8).

V$FLASH_RECOVERY_AREA_USAGE provides information about flash recovery area disk space usage. Its main columns are:

FILE_TYPE - the type of the file; can have any of the following values:
controlfile, onlinelog, archivelog, backuppiece, imagecopy, flashbacklog
PERCENT_SPACE_USED - the disk space used by that file type, as a percentage.
PERCENT_SPACE_RECLAIMABLE - the percentage of disk space reclaimable from that file type after deleting any obsolete or redundant files, and files already backed up to a tertiary device.

OTN notes that V$FLASH_RECOVERY_AREA_USAGE shows what is in the flash recovery area, and that you can see the actual space used by joining against v$recovery_file_dest.
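That join can be sketched as follows. Since the percentages in V$FLASH_RECOVERY_AREA_USAGE are percentages of SPACE_LIMIT, multiplying them out gives the space in MB per file type; the cross join is safe because V$RECOVERY_FILE_DEST returns one row per destination (the /tmp file name is illustrative):

```shell
#!/bin/sh
# Sketch: translate the FRA usage percentages into megabytes per file type.

cat > /tmp/fra_usage.sql <<'EOF'
select u.file_type,
       round(d.space_limit * u.percent_space_used / 100 / 1024 / 1024)        used_mb,
       round(d.space_limit * u.percent_space_reclaimable / 100 / 1024 / 1024) reclaimable_mb
from   v$flash_recovery_area_usage u, v$recovery_file_dest d;
EOF

# Run only where sqlplus exists.
if command -v sqlplus >/dev/null 2>&1; then
  sqlplus -S "/ as sysdba" @/tmp/fra_usage.sql
fi
```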

ARCHIVE DESTINATION FULL BACKUP TO TAPE

ARCHIVE LOG BACKUP TO TAPE
--------------------------
ps -ef|grep pmon

ORACLE_SID=`ps -ef | grep asm_smon | grep -v 'grep' | grep -v 'sed' | awk '{printf $8}' | awk 'BEGIN{FS="_"} {printf $3}'`

export ORACLE_SID


sqlplus "/as sysdba"
set pages 9999 lines 300
col OPEN_MODE for a10
col HOST_NAME for a10

select name,INSTANCE_NAME,HOST_NAME,OPEN_MODE,DATABASE_STATUS,DATABASE_ROLE, PROTECTION_LEVEL,CONTROLFILE_TYPE,LOGINS,to_char(STARTUP_TIME,'DD-MON-YYYY HH24:MI:SS') "UP TIME" from v$database, gv$instance;

SET LINESIZE  145
SET PAGESIZE  9999
SET VERIFY    off
COLUMN group_name             FORMAT a20           HEAD 'Disk Group|Name'
COLUMN sector_size            FORMAT 99,999        HEAD 'Sector|Size'
COLUMN block_size             FORMAT 99,999        HEAD 'Block|Size'
COLUMN allocation_unit_size   FORMAT 999,999,999   HEAD 'Allocation|Unit Size'
COLUMN state                  FORMAT a11           HEAD 'State'
COLUMN type                   FORMAT a6            HEAD 'Type'
COLUMN total_mb               FORMAT 999,999,999   HEAD 'Total Size (MB)'
COLUMN used_mb                FORMAT 999,999,999   HEAD 'Used Size (MB)'
COLUMN pct_used               FORMAT 999.99        HEAD 'Pct. Used'

break on report
compute sum label "Grand Total: " of total_mb used_mb on report

SELECT
    name                                     group_name
  , sector_size                              sector_size
  , block_size                               block_size
  , allocation_unit_size                     allocation_unit_size
  , state                                    state
  , type                                     type
  , total_mb                                 total_mb
  , (total_mb - free_mb)                     used_mb
  , ROUND((1- (free_mb / total_mb))*100, 2)  pct_used
FROM
    v$asm_diskgroup
WHERE
    state = 'MOUNTED'
ORDER BY
    name
/

Sample output header:

Disk Group            Sector   Block   Allocation
Name                    Size    Size    Unit Size State       Type   Total Size (MB) Used Size (MB) Pct. Used
-------------------- ------- ------- ---------- ----------- ------ --------------- -------------- ---------

nohup rman target / cmdfile=archivebackup.cmd log=archivebackup_dbname_DDMONYY.log &


archivebackup.cmd
-----------------
run
{
delete archivelog until time 'sysdate-3' backed up 1 times to sbt_tape;
allocate channel ch1 type 'sbt_tape';
allocate channel ch2 type 'sbt_tape';
allocate channel ch3 type 'sbt_tape';
allocate channel ch4 type 'sbt_tape';
backup archivelog all filesperset 10 delete input;
}

nohup: appending output to `nohup.out'
tail -f archivebackup_dbname_DDMONYY.log
------------------------------------------------------------------------------------------------------------------

ls -lrth
rm <old_archive_files>   (remove the oldest archives only if they are no longer required)

Database health check

ps -ef|grep pmon

ps -ef|grep tns

ORACLE_SID=`ps -ef | grep asm_smon | grep -v 'grep' | grep -v 'sed' | awk '{printf $8}' | awk 'BEGIN{FS="_"} {printf $3}'`

export ORACLE_SID

sqlplus "/as sysdba"
set pages 9999 lines 300
col OPEN_MODE for a10
col HOST_NAME for a10

select name,INSTANCE_NAME,HOST_NAME,OPEN_MODE,DATABASE_STATUS,DATABASE_ROLE, PROTECTION_LEVEL,CONTROLFILE_TYPE,LOGINS,to_char(STARTUP_TIME,'DD-MON-YYYY HH24:MI:SS') "UP TIME" from v$database, gv$instance;

alter system checkpoint;

alter system switch logfile;

October 25, 2012

DBA VIEWS

DBA Views Queries
---------------------------         
select * from dict;
select * from cat;

TABLE COLUMNS
---------------------------
select * from dba_tab_columns where table_name = 'table_name';

CONSTRAINTS
-----------------------
select table_name, constraint_name, constraint_type, status from dba_constraints where owner in ('', '');
select constraint_name, r_constraint_name, constraint_type from dba_constraints where constraint_name = '';
select constraint_name, constraint_type from dba_constraints where r_constraint_name = '';

DATAFILES
-----------------
select substr(file_name,1,70) from dba_data_files order by file_name;

DEPENDENCIES
------------------------
select * from user_dependencies where referenced_name = '';

INDEXES
--------------
select index_name, column_name from dba_ind_columns where table_name = '' order by index_name, column_position;
select index_name from dba_indexes where table_name = '';

JOBS
--------
select job, this_date, this_sec, next_date,  next_sec, interval, what,  failures, broken from dba_jobs;

OBJECTS
--------------
select object_name from dba_objects where status <> 'VALID';

TABLE PRIVILEGES
------------------------------
desc dba_tab_privs
select * from dba_tab_privs where grantee = '';
select * from dba_tab_privs where table_name = '';

ROLE PRIVILEGES
-----------------------------
select * from dba_role_privs where grantee='';
select * from dba_tables where table_name = '';
select segment_name, segment_type, bytes from dba_segments where segment_name in ('','','','','');
select sum(bytes) from dba_segments where segment_name in ('','','','','');
SELECT substr(segment_name, 1, 30) "OBJECT", max(extents) FROM dba_segments group by segment_name having max(extents) > &max_extents_greater_than;
select segment_name, count(*), sum(bytes) from dba_extents where segment_name = '' group by segment_name ;
select * from dba_segments where segment_name = '';
select * from dba_segments@p1aqcis where segment_name = '';
select segment_name, count(*), sum(bytes) from dba_extents group by segment_name having count(*) > 1;
 
SOURCE CODE
-----------------------
select owner,name,type,text from dba_source where upper(text) like '%%';
select name,type,text from dba_source where upper(text) like '%%';

------------
space allocation for tables using computed statistics

select table_name, num_rows, avg_row_len, (num_rows * avg_row_len) from dba_tables where owner in ('','');

TABLESPACES
----------------------
select tablespace_name, bytes, file_name from dba_data_files order by tablespace_name;

USERS
----------
set pagesize 0
set echo off
select 'ROLE '||GRANTED_ROLE from dba_role_privs where grantee='';
select 'SYSTEM PRIVILEGE '||privilege from dba_sys_privs where grantee='';
select privilege||' ON '||owner||'.'||table_name from dba_tab_privs where grantee='';
set echo on
select * from dba_users where username = '';

VIEWS
----------
select view_name from dba_views where view_name like '%%';

PACKAGE BACKUP in ORACLE


PACKAGE BACKUP
--------------
set heading off
set echo off
set flush off
set pagesize 50000
set linesize 32767
set long 99999999
spool PACKAGE_NAME.pks
select dbms_metadata.get_ddl('PACKAGE','<PACKAGE_NAME>','<OWNER_NAME>') from dual ;
spool off;

OR

set linesize 500
set pagesize 100000
set head off
set verify off
spool PACKAGE_NAME.pks
select text from dba_source where owner='<OWNER_NAME>' and name='<PACKAGE_NAME>' order by line;
spool off

OR

select text from dba_source where owner='<OWNER_NAME>' and name='<PACKAGE_NAME>' and type='PACKAGE' order by line;

PACKAGE BODY BACKUP
-------------------
select dbms_metadata.get_ddl('PACKAGE_BODY','<PACKAGE_BODY_NAME>','<OWNER_NAME>') from dual ;

OR

set pagesize 0
set linesize 800
set trimspool on
set heading off
set feedback off
spool PACKAGE_BODY_NAME.pkb
select text from dba_source where owner='<OWNER_NAME>' and name='<PACKAGE_BODY_NAME>' order by line;
spool off

OR

select text from dba_source where owner='<OWNER_NAME>' and name='<PACKAGE_BODY_NAME>' and type='PACKAGE BODY' order by line;

DROP PACKAGE
------------
set pagesize 0
set linesize 800
col object_name for a30
col object_type for a25
col owner for a25
spool package_drop.sql
select owner,object_name,object_type from dba_objects where object_name='<object_name>';
drop package <owner>.<package_name>;
spool Off;

BACKUP ORACLE HOME AND INVENTORY

Oracle Home and Inventory Backup
-----------------------------------------
tar -cvf - $ORACLE_HOME $ORACLE_HOME/oraInventory | gzip > Backup_Software_Version.tar.gz

Note:
tar -cvf <archive_file> <source_location>   (here "-" as the archive file writes the archive to stdout so it can be piped through gzip)
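It is worth listing an archive back before relying on it. A self-contained illustration of the tar-through-gzip-then-verify pattern, using a throwaway directory (all names below are made up for the demo):

```shell
#!/bin/sh
# Self-contained demo of the tar-then-verify pattern on a throwaway directory.
demo=$(mktemp -d)
mkdir -p "$demo/fake_home/dbs"
echo "dummy spfile" > "$demo/fake_home/dbs/spfiledemo.ora"

# "-" sends the archive to stdout so it can be piped through gzip.
( cd "$demo" && tar -cf - fake_home | gzip > backup_demo.tar.gz )

# Verify: list the archive contents without extracting anything.
tar -tzf "$demo/backup_demo.tar.gz"
```

For the real ORACLE_HOME backup the same check is `tar -tzf Backup_Software_Version.tar.gz | head`.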

ORACLE_HOME & ORACLE_CRS_HOME backups (for all nodes in RAC)

cd /u01/before_patch_backup

tar -cvf oracle_base_jun30.tar $ORACLE_BASE
tar -cvf oracle_home_jun30.tar $ORACLE_HOME
tar -cvf oracle_crs_home_jun30.tar $ORACLE_CRS_HOME

Controlfile Backup
---------------------
alter database backup controlfile to trace;

show parameter user_dump_dest
(go to the udump destination and note the controlfile trace file name)
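Instead of hunting through udump, the trace can also be written straight to a named file with the AS clause. A sketch; the output path below is only an example:

```shell
#!/bin/sh
# Sketch: dump the controlfile trace to a known file name instead of udump.
# The output path is illustrative.

cat > /tmp/ctl_trace.sql <<'EOF'
alter database backup controlfile to trace as '/tmp/controlfile_recreate.sql' reuse;
EOF

# Run only where sqlplus exists.
if command -v sqlplus >/dev/null 2>&1; then
  sqlplus -S "/ as sysdba" @/tmp/ctl_trace.sql
fi
```

REUSE lets repeated runs overwrite the same file, handy when this is scheduled nightly.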

DBA Audit Trail

col USERNAME for a10
col OS_USERNAME for a10
col USERHOST for a15
col ACTION_NAME for a11
set pages 200 lines 100
alter session set nls_date_format='DD-MON-YY HH24:MI:SS';

select USERNAME,OS_USERNAME,USERHOST,TIMESTAMP,ACTION_NAME, LOGOFF_TIME from dba_audit_trail where username='&USERNAME' order by USERHOST;

USER

Schema DDL
-------------
 set pagesize 0 
SET LONG 9999999
select dbms_metadata.get_ddl('USER','&USERNAME') FROM DUAL;

Schema Status
-------------
select username,account_status,default_tablespace,temporary_tablespace from dba_users where username='&username';

select username,tablespace_name,decode(max_bytes,-1,'unlimited',ceil(max_bytes / 1024 / 1024) || 'M' ) "QUOTA in MB" from dba_ts_quotas where  username='&username';

select grantee,granted_role || ' ' || decode(admin_option, 'NO', '', 'YES', 'with admin option') "GRANTED ROLE" from dba_role_privs where grantee='&username';

select grantee,privilege || ' ' || decode(admin_option, 'NO', '', 'YES', 'with admin option') "PRIVILEGE" from dba_sys_privs where grantee='&username';

select owner,sum(bytes)/1024/1024/1024 "SIZE in GB" from dba_segments where owner='&owner' group by owner;

select owner,object_type,count(*) from dba_objects where owner='&owner' group by owner, object_type;


All Users passwords backup
--------------------------------
spool ./users_pwd_backup.log
select 'alter user '||username||' identified by values '||password||';' from dba_users;  ( For 10g database)
select 'alter user '||name||' identified by values '||password||';' from SYS.USER$; ( For 11g database)
spool off
All tables owned by a user
--------------------------
set pages 9999 lines 300
col owner format a15
col tablespace_name format a15
col segment_name format a40
col segment_type format a20
col PARTITION_NAME format a20
col mb format 999,999,999
select    owner
,       tablespace_name
,       segment_name
,       segment_type
,       PARTITION_NAME
,    ceil(sum(bytes) / 1024 / 1024) "Size in MB"
from    dba_segments
where    owner like '&user'
and    segment_type = 'TABLE'
group    by owner, tablespace_name, segment_name, segment_type, partition_name
order     by ceil(sum(bytes) / 1024 / 1024) desc
/

All schema object details in a tablespace
-----------------------------------------
set pages 9999 lines 300
col owner format a15
col tablespace_name format a15
col segment_name format a40
col segment_type format a20
col PARTITION_NAME format a20
col mb format 999,999,999
select    owner
,       tablespace_name
,       segment_name
,       segment_type
,       PARTITION_NAME
,    ceil(sum(bytes) / 1024 / 1024) "Size in MB"
from    dba_segments
where    tablespace_name like '&tablespace_name'
group    by owner, tablespace_name, segment_name, segment_type, partition_name
order     by ceil(sum(bytes) / 1024 / 1024) desc
/

Total space allocated by Owner:
-------------------------------
set pages 9999 lines 300
col    owner format a15
col    segment_name format a30
col    segment_type format a15
col     tablespace_name format a20
col    mb format 999,999,999
select  owner
,    segment_name
,    segment_type
,       tablespace_name
,    mb
from    (
    select    owner
    ,    segment_name
    ,    segment_type
        ,       tablespace_name
	,    bytes / 1024 / 1024 mb
    from    dba_segments
    order    by bytes desc
    )
/

schemas in a tablespace
-----------------------
set pages 999
col "size MB" format 999,999,999
col "Objects" format 999,999,999
select    obj.owner "Owner"
,    obj_cnt "Objects"
,    decode(seg_size, NULL, 0, seg_size) "size MB"
from     (select owner, count(*) obj_cnt from dba_objects group by owner) obj
,    (select owner, ceil(sum(bytes)/1024/1024) seg_size
    from dba_segments group by owner) seg
where     obj.owner  = seg.owner(+)
order    by 3 desc ,2 desc, 1
/