
November 06, 2013

How to find high CPU using sessions in Oracle Database

If the problem relates to CPU-bound applications, the CPU information for each session can be examined to determine the culprits. The v$sesstat view can be queried to find the sessions using the most CPU, and their SQL can then be listed.

1. Find the statistic# of the 'CPU used by this session' statistic.

SQL>SELECT name ,statistic# FROM v$statname WHERE  name LIKE '%CPU%session';

    NAME                                STATISTIC#
    ----------------------------------- ----------
    CPU used by this session                    14

2. Then determine which session is using most of the CPU.

SQL>SELECT * FROM v$sesstat WHERE statistic# = 14;

           SID STATISTIC#      VALUE
    ---------- ---------- ----------
             1         14          0
             2         14          0
             3         14          0
             4         14          0
             5         14          0
             6         14          0
             7         14          0
             8         14          0
             9         14          0
            10         14          0
            11         14          0
            12         14          0
            16         14       1930
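
The statistic# of 'CPU used by this session' differs between Oracle versions, so hard-coding 14 is fragile. A minimal alternative sketch that joins v$statname and v$session instead (standard columns only), listing the heaviest CPU consumers first:

SQL>SELECT ss.sid, se.username, ss.value cpu_used
    FROM   v$sesstat ss, v$statname sn, v$session se
    WHERE  ss.statistic# = sn.statistic#
    AND    ss.sid        = se.sid
    AND    sn.name       = 'CPU used by this session'
    ORDER BY ss.value DESC;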

3. Look up the SQL details for the session that is using most of the CPU.

SQL>SELECT address ,SUBSTR(sql_text,1,20) Text, buffer_gets, executions,buffer_gets/executions AVG
    FROM   v$sqlarea a, v$session s
    WHERE  sid = 16
    AND    s.sql_address = a.address
    AND    executions > 0
    ORDER BY 5;

4. Use v$sqltext to extract the whole SQL text.

set pages 50000 lines 32767
col SPID for a10
col PROGRAM for a15
col OSUSER for a10
col ACTION for a10
col EVENT for a25
col SQL_TEXT for a25
col MACHINE for a10
col P1TEXT for a10
col P2TEXT for a10
col P3TEXT for a10
SELECT b.sid, b.serial#, a.spid, b.sql_id, b.program, b.osuser, b.machine, b.type,
b.event, b.action, b.p1text, b.p2text, b.p3text, b.state, c.sql_text,b.logon_time
FROM v$process a, v$session b, v$sqltext c WHERE a.addr=b.paddr
AND b.sql_hash_value = c.hash_value
AND b.sid = '&sid' ORDER BY a.spid, c.piece
/

5. Once the whole SQL statement has been identified it can be tuned.

Explain the queries and examine their access paths.
Autotrace is a useful tool for examining access paths.

Syntax:-

EXPLAIN PLAN FOR <sql statement>;
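
For example, a minimal sketch of explaining and displaying the plan of one of the identified statements (the table and predicate here are only placeholders, not from the original post):

SQL> EXPLAIN PLAN FOR SELECT * FROM scott.emp WHERE deptno = 10;
SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

SQL> set autotrace traceonly explain
SQL> SELECT * FROM scott.emp WHERE deptno = 10;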


September 13, 2013

ORA-01502 index or partition of such index is in unusable state

ERROR:-
-----

ORA-01502: index "owner.index_name" or partition of such index is in unusable state

SOLUTION:-
--------

REBUILD UNUSABLE INDEXES in Oracle Database

$sqlplus "/as sysdba"

select owner,index_name,table_name,status from dba_indexes where index_name='&index_name';
OWNER    INDEX_NAME    TABLE_NAME    STATUS
-----    ----------    ----------    ------
owner    index_name     table_name    UNUSABLE

select owner,segment_name,sum(bytes)/1024/1024 "SIZE in MB" from dba_segments where segment_name='&segment_name' group by owner,segment_name;
OWNER    SEGMENT_NAME    SIZE IN MB   
-----    ------------    ----------   
owner    index_name           3000   

alter session set current_schema=&schema_name;
Session altered.

alter index <index_name> rebuild;
Index altered.

select owner,index_name,table_name,status from dba_indexes where index_name='&index_name';
OWNER    INDEX_NAME    TABLE_NAME    STATUS
-----    ----------    ----------    ------
owner    index_name     table_name    VALID

select owner,segment_name,sum(bytes)/1024/1024 "SIZE in MB" from dba_segments where segment_name='&segment_name' group by owner,segment_name;
OWNER    SEGMENT_NAME    SIZE IN MB   
-----    ------------    ----------   
owner    index_name            800

------------------------------------  OR  ------------------------------------------------------

SELECT 'ALTER INDEX '||OWNER||'.'||INDEX_NAME||' REBUILD;' FROM DBA_INDEXES WHERE STATUS = 'UNUSABLE';

This outputs ALTER INDEX ... REBUILD statements for all unusable indexes. Run them so that the indexes become usable again.
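
Since the error can also refer to a partition of an index, a similar sketch (assuming partitioned indexes may be involved) can be generated from DBA_IND_PARTITIONS:

SELECT 'ALTER INDEX '||INDEX_OWNER||'.'||INDEX_NAME||' REBUILD PARTITION '||PARTITION_NAME||';'
FROM DBA_IND_PARTITIONS WHERE STATUS = 'UNUSABLE';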

August 21, 2013

ORA-02297: cannot disable constraint (SCHEMANAME.PK_TABLENAME) - dependencies exist

ISSUE:
ORA-02297: cannot disable constraint (SCHEMANAME.PK_TABLENAME) - dependencies exist

Restore TABLE backup

Schema Status
-------------
select username from dba_users where username=upper('&username');

Table Status
------------
select count(*) from SCHEMANAME.TABLENAME;

TRUNCATE TABLE
---------------------------
spool TARGET_TABLENAME_TRUNCATE.log
set echo on term on feed on timing on

truncate table SCHEMANAME.TABLENAME;
truncate table SCHEMANAME.TABLENAME
                         *
ERROR at line 1:
ORA-02266: unique/primary keys in table referenced by enabled foreign keys

select CONSTRAINT_NAME,CONSTRAINT_TYPE,TABLE_NAME,status from dba_constraints where TABLE_NAME='&TABLE_NAME';

CONSTRAINT_NAME                C TABLE_NAME                     STATUS
------------------------------ - ------------------------------ -------
SYS_CONSTRAINT1                C TABLENAME                      ENABLED
PK_TABLENAME                   P TABLENAME                      ENABLED

alter table SCHEMANAME.TABLENAME DISABLE constraint SYS_CONSTRAINT1;

alter table SCHEMANAME.TABLENAME DISABLE constraint PK_TABLENAME;
ERROR at line 1:
ORA-02297: cannot disable constraint (SCHEMANAME.PK_TABLENAME) -
dependencies exist

SOLUTION:

alter table SCHEMANAME.TABLENAME DISABLE constraint PK_TABLENAME cascade;

Table altered.

select CONSTRAINT_NAME,CONSTRAINT_TYPE,TABLE_NAME,status from dba_constraints where TABLE_NAME='&TABLE_NAME';
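
Once the table data has been restored, the disabled constraints have to be enabled again. ENABLE has no CASCADE option, so the foreign keys that the cascade disabled on the child tables must be re-enabled one by one; a sketch that generates those statements from dba_constraints:

alter table SCHEMANAME.TABLENAME ENABLE constraint SYS_CONSTRAINT1;
alter table SCHEMANAME.TABLENAME ENABLE constraint PK_TABLENAME;

-- generate ENABLE statements for the foreign keys that reference the primary key
SELECT 'ALTER TABLE '||owner||'.'||table_name||' ENABLE CONSTRAINT '||constraint_name||';'
FROM   dba_constraints
WHERE  constraint_type   = 'R'
AND    r_constraint_name = 'PK_TABLENAME'
AND    status            = 'DISABLED';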


July 28, 2013

ISSUE: Error is related to timezone file during Upgrade.


Started database in upgrade mode and fired catupgrd.sql :

SQL> startup upgrade
ORACLE instance started.
Total System Global Area 6413680640 bytes
Fixed Size                  2160112 bytes
Variable Size            1946159632 bytes
Database Buffers         4429185024 bytes
Redo Buffers               36175872 bytes
Database mounted.
Database opened.
SQL> @catupgrd.sql
DOC>#######################################################################
DOC>#######################################################################
DOC>
DOC>   The first time this script is run, there should be no error messages
DOC>   generated; all normal upgrade error messages are suppressed.
DOC>
DOC>   If this script is being re-run after correcting some problem, then
DOC>   expect the following error which is not automatically suppressed:
DOC>
DOC>   ORA-00001: unique constraint () violated
DOC>#
   FROM registry$database
        *
ERROR at line 2:
ORA-00942: table or view does not exist
This error is related to the timezone file, which must be version 4 for Oracle version 11g. If the timezone file is not version 4, then a patch needs to be applied.
Query to check the timezone file version:
SQL> select * from v$timezone_file;
FILENAME        VERSION
------------    -------
timezlrg.dat          4
So I had the correct version. I remember applying the patch before the upgrade. I got lucky because a patch existed for version 10.2.0.3.
If there is no patch for your Oracle version, then the patch for a similar version can be downloaded and applied manually.
Instructions are below:
1.  Download the identified patch.
2. Unzip the patch, and locate the 2 files timezone.dat and timezlrg.dat in the “files/oracore/zoneinfo” directory of the uncompressed patch (or from the relevant .jar file of a   patchset). If there is also a readme.txt in this location then make a note of this as well.
3. Backup your existing files in $ORACLE_HOME/oracore/zoneinfo – THIS CAN BE VITAL, DO NOT SKIP.
Note:
Before going on with step 4, make sure the current files are not in use.
On Windows the files will simply refuse to be removed while they are in use.
On Unix replacing the files whilst they are in use can cause the files to become corrupt. Use the fuser command before replacing the files to make sure they are not in use.
4. Copy the 2 .dat files and possibly the readme.txt file that were found in step 2 into the $ORACLE_HOME/oracore/zoneinfo directory.
5. Restart the database (in case of installation on a database), or restart the client applications (in case of client install). Note that the database did not need to be down before the time zone files were applied, but it does need to be restarted afterwards.

http://indiandba.blogspot.in/2012/01/error-is-related-to-timezone-file.html

Opatch rollback failed because files under patch_storage are missing

Issue : opatch rollback failed because files under patch_storage are missing
Error :
Archive Action: Source file "/u01/oracle/product/102/.patch_storage/4966417_Mar_12_2007_03_46_16/files/lib/libgeneric10.a/kgl.o" does not exist.

'oracle.rdbms, 10.2.0.3.0': Cannot update file '/u01/oracle/product/102/lib/libgeneric10.a' with '/kgl.o'

RollbackSession failed during prerequisite checks: Prerequisite check
"CheckRollbackable" failed.
System intact, OPatch will not attempt to restore the system

OPatch failed with error code 74

Fix:
Try rolling back the patch as below:
opatch rollback -id <sub-set patch#> -no_sysmod

The -no_sysmod option just removes the patch from the inventory without updating the files in the file system.


June 21, 2013

RMAN Backup and Recovery Scenario Point-In-Time Recovery

Backup and Restore Scenario from existing Level 0 backup
-------------------------------------------------------
steps to recover the database:-

1. Take Level 0 backup.
2. Lost SYSTEM datafile.
3. Start the instance without mounting the database.
4. Restore Controlfile.
5. Mount the database.
6. Restore database.
7. Recover the database.
8. Open the database and reset logs.

1. Take Level 0 backup:

run
{
allocate channel channel1 type disk;
allocate channel channel2 type disk;
allocate channel channel3 type disk;
backup incremental level=0 database;
backup archivelog all;
backup current controlfile;
}

2. Lost SYSTEM datafile.

Note:
----
The controlfile, backup and parameter file are needed for recovering the database.

3. Start the instance without mounting the database.

$sqlplus "/as sysdba"

SQL> startup nomount;
SQL> exit

4. Restore Controlfile.

$rman target /
RMAN> restore controlfile;
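
If no recovery catalog is in use, RMAN has to be told where to find the controlfile backup. A sketch assuming controlfile autobackup was configured (the DBID value is a placeholder):

RMAN> set dbid 1234567890;
RMAN> restore controlfile from autobackup;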

5. Mount the database.
RMAN> alter database mount;

6. Restore and recover the database, using point-in-time recovery to bring all data back to a particular date/time in the past.
run
{
set until time "to_date('2013-03-20:14:40:00','YYYY-MM-DD:hh24:mi:ss')";
restore database;
recover database;
}

7. Open the database and reset logs.

SQL> alter database open resetlogs;
This will update all current datafiles and online redo logs and all subsequent archived redo logs with a new RESETLOGS SCN and time stamp.

NOTE:
----
As soon as you have done a resetlogs, run a full backup. This is important because, should you suffer a second failure, you will not be able to perform a second recovery: after resetting the logs the SCN numbers will no longer match any older backup files.
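
A minimal sketch of that post-resetlogs backup, reusing the channel layout from the level 0 backup above:

run
{
allocate channel channel1 type disk;
allocate channel channel2 type disk;
backup incremental level 0 database plus archivelog;
backup current controlfile;
}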

http://ss64.com/ora/rman_pitr_example.html

March 05, 2013

Error 1031 received logging on to the standby

ISSUE

Error 1031 received logging on to the standby
ORA-01031: insufficient privileges
PING[ARC0]: Heartbeat failed to connect to standby 'dgp'. Error is 1031.

PRIMARY
=======

Primary database Alert log error:
--------------------------------

Error 1031 received logging on to the standby
ORA-01031: insufficient privileges
PING[ARC0]: Heartbeat failed to connect to standby 'dgp'. Error is 1031.

Check DR sync and see the difference.

THREAD PR-ARCHIVED STBY-ARCHIVED STBY-APPLIED SHIPPING GAP(PR -> STBY) APPLIED GAP(STBY -> STBY)
------ ----------- ------------- ------------ ------------------------ -------------------------
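
The query behind that layout is not shown in the post; a rough equivalent sketch, run on the primary and assuming the standby destination is LOG_ARCHIVE_DEST_2 (dest_id = 2):

select thread#,
       max(sequence#)                                           "PR-ARCHIVED",
       max(decode(dest_id, 2, sequence#))                       "STBY-ARCHIVED",
       max(decode(applied, 'YES', sequence#))                   "STBY-APPLIED",
       max(sequence#) - max(decode(dest_id, 2, sequence#))      "SHIPPING GAP",
       max(sequence#) - max(decode(applied, 'YES', sequence#))  "APPLIED GAP"
from   v$archived_log
group by thread#;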

set pages 999 lines 999

col MESSAGE for a100
select to_char(timestamp,'YYYY-MON-DD HH24:MI:SS')||' '||message||' '||severity from gv$dataguard_status where severity in ('Error','Fatal') order by timestamp;

show parameter log_archive_dest_state_2;

LISTENER VERIFICATION FROM PRIMARY DB
-------------------------------------
select dest_id,status,error from v$archive_dest where dest_name='LOG_ARCHIVE_DEST_2';

FIND GAP
--------
select thread#,low_sequence#,high_sequence# from gv$archive_gap;

ps -ef |grep tns
lsnrctl status

DR
==

DR database Alert log error:
----------------------------
Error 1031 received logging on to the standby

set pages 999 lines 999

col MESSAGE for a100
select to_char(timestamp,'YYYY-MON-DD HH24:MI:SS')||' '||message||' '||severity from gv$dataguard_status where severity in ('Error','Fatal') order by timestamp;

select inst_id,process,status,thread#,sequence#,block#,blocks from gv$managed_standby;

PROCESS STATUS
------- ------------
RFS IDLE
RFS IDLE
RFS IDLE
MRP0 WAIT_FOR_LOG

checking log transfer and apply
-------------------------------
SELECT SEQUENCE#,FIRST_TIME,NEXT_TIME,APPLIED FROM gV$ARCHIVED_LOG ORDER BY SEQUENCE#;
select count(*) from GV$ARCHIVED_LOG where applied='NO';

Redo transfer was not happening. When we checked the
v$managed_standby data dictionary view, we could see that RFS was not starting.

Here,
ORA-01031 usually appears when a SYSDBA session fails to authenticate.
Check that the primary and standby are using a password file,
that remote_login_passwordfile is set to SHARED or EXCLUSIVE,
and that the SYS password is the same in both password files.
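
A quick sketch of those checks (run on both the primary and the standby):

SQL> show parameter remote_login_passwordfile;
SQL> select username, sysdba, sysoper from v$pwfile_users;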

1.The RFS process was not started in the standby which indicates that standby is not receiving any redo information from the primary.

2.In the primary alert log file we could see the errors indicating that the primary is not able to communicate with the standby instance. The error observed was "Error 1031 received logging on to the standby".

3.The time stamp of the password file on the primary and the standby was different. This indicated the possibility of having the wrong password file in the standby.

SOLUTION:
=========

For redo transfer to take place, the password file on the standby should be a copy of the one on the primary, renamed for the standby instance. The v$pwfile_users data dictionary view can be used to check whether a password file is in use.

1. Copy the password file from the primary to the standby and rename it in the format orapw<sid>.

2. Restart the media recovery process on the standby.

Once the above steps were completed, we could see that redo shipping and redo apply were taking place.

Copy the latest Password file from available PRIMARY Node to rest of PRIMARY and STANDBY nodes:
----------------------------------------------------------------------------------------------

Primary (Node 1)
----------------

cd $ORACLE_HOME/dbs
ls -lrt
scp orapw<sid> oracle@PROD_NODE_2_hostname:/oracle/home/dbs

scp orapw<sid> oracle@DR_NODE_1_hostname:/oracle/home/dbs
scp orapw<sid> oracle@DR_NODE_2_hostname:/oracle/home/dbs

or

DR
==

select * from gv$pwfile_users;

We have to check the sec_case_sensitive_logon parameter on the primary and standby.

SQL> show parameter sec_case_sensitive_logon;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
sec_case_sensitive_logon             boolean     FALSE
SQL>

We have to recreate the password file or copy it from the primary server to the standby server.

In case the sec_case_sensitive_logon parameter value is TRUE, we have to use the below orapwd command.
orapwd file=$ORACLE_HOME/dbs/orapwPROD password=password123 entries=10 ignorecase=y

In case the sec_case_sensitive_logon parameter value is FALSE, we have to use the below orapwd command.
orapwd file=$ORACLE_HOME/dbs/orapwPROD password=password123 entries=10

DR
==

cancelling mrp process:
alter database recover managed standby database cancel;

starting mrp process:
alter database recover managed standby database disconnect from session;

select inst_id,process,status,thread#,sequence#,block#,blocks from gv$managed_standby;

Check DR sync and see the difference.

http://troubleurheart.blogspot.in/2011/01/monitor-dataguard-status.html
http://alazydbasblog.blogspot.in/2012/04/steps-to-resolve-error-1031-received.html
http://bitbach.wordpress.com/2010/08/27/pingarc1-heartbeat-failed-to-connect-to-standby-dgp-error-is-1031/
www.dbaglobe.com/2011/01/monitor-dataguard-status.html
http://muthuappsdba.blogspot.in/2012/12/error-16191-pingarc2-heartbeat-failed.html

Source : Internet

February 11, 2013

crsd fails to startup on 2nd node

ISSUE : 
crsd fails to start on 2nd node after server reboot in 11gR2 RAC

NODE 1:

# crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
However, after both servers rebooted, Cluster Ready Services on the second node does not start, while on the first node it works fine.

NODE 2:

# crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

# crs_stat -t
CRS-0184: Cannot communicate with the CRS daemon.


December 01, 2012

How to Find Locks on Objects

ISSUE:
-----
ORA-00054: resource busy and acquire with NOWAIT specified

Locked sessions on Oracle Database Objects :-
-----------------------------------------------
Query 1:
-------
set pages 50000 lines 32767

SELECT s.inst_id,s.sid || ',' || s.serial# sess_id,
oracle_username || ' (' || s.osuser || ')' os_username,
owner || '.' || object_name,object_type,d.object_id,
DECODE (l.block, 0, 'Not Blocking', 1, 'Blocking', 2, 'Global') STATUS,
DECODE (v.locked_mode,
    0, 'None',
    1, 'Null',
    2, 'Row-S (SS)',
    3, 'Row-X (SX)',
    4, 'Share',
    5, 'S/Row-X (SSX)',
    6, 'Exclusive',
    TO_CHAR (v.locked_mode))  LOCK_MODE,
a.sql_text
FROM gv$locked_object v,
     dba_objects d,
     gv$lock l,
     gv$session s,
     gv$sqlarea a
WHERE v.object_id = d.object_id
      AND v.object_id = l.id1
      AND v.session_id = s.sid
      AND s.sql_address = a.address
ORDER BY oracle_username, v.session_id;

Query 2: (More Information)
-------
set pages 50000 lines 32767

select
vlo.object_id, vlo.session_id, vlo.oracle_username, vlo.process
, DECODE(vlo.LOCKED_MODE,
         0,'NONE',
         1,'NULL',
         2,'ROW SHARE',
         3,'ROW EXCLUSIVE',
         4,'SHARE',
         5,'SHARE ROW EXCLUSIVE',
         6,'EXCLUSIVE', NULL) LOCK_MODE,
do.owner, do.object_name, do.object_type
, vs.saddr, vs.serial#, vs.paddr, vs.username, vs.ownerid, vs.status, vs.server, vs.schemaname, vs.osuser, vs.machine, vs.program, vs.type, vs.logon_time, vs.last_call_et, vs.blocking_session_status,
vs.event#, vs.event, vs.wait_class#, vs.wait_class, vs.wait_time, vs.seconds_in_wait, vs.state
from gv$locked_object vlo
inner join dba_objects do on (vlo.object_id = do.object_id)
left outer join gv$session vs on (vlo.session_id = vs.sid);

Query 3:
-------
set pages 50000 lines 32767

SELECT
b.inst_id,a.session_id,b.serial#,b.STATUS,b.machine,a.ORACLE_USERNAME,a.OS_USER_NAME,a.LOCKED_MODE,
c.owner,c.object_name,c.object_type,c.object_id,d.sql_text
FROM
gv$locked_object a,
gv$session b,
dba_objects c,
gv$sqlarea d
WHERE b.sid     = a.session_id
AND a.object_id = c.object_id
AND b.sql_address = d.address
ORDER BY a.oracle_username, a.session_id;

Query 4:
-------
set pages 50000 lines 32767

SELECT s.inst_id,OS_USER_NAME, ORACLE_USERNAME, s.sid, o.object_name,o.object_type, s.serial#, a.sql_text
FROM gv$locked_object l, dba_objects o, gv$session s, gv$sqlarea a
WHERE l.object_id = o.object_id
AND s.SQL_ADDRESS = a.address
AND l.SESSION_ID = s.sid;


Command to kill the session:
---------------------------
ALTER SYSTEM KILL SESSION 'sid, serial#';

ALTER SYSTEM KILL SESSION 'sid, serial#,@<instance_id>';  (RAC)

Script to Kill all the locks
----------------------------
set pages 50000 lines 32767

SELECT 'ALTER SYSTEM KILL SESSION '''||TO_CHAR(s.sid)||','||TO_CHAR(s.serial#)||''';'
AS "Statement to kill"
FROM gv$locked_object l, dba_objects o, gv$session s
WHERE l.object_id = o.object_id
AND l.SESSION_ID = s.sid;
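
Before killing sessions, it is worth identifying which session is actually doing the blocking. A sketch using the blocking_session columns of gv$session (10g and later):

set pages 50000 lines 32767

select inst_id, sid, serial#, username, blocking_instance, blocking_session, event, seconds_in_wait
from   gv$session
where  blocking_session is not null;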



November 24, 2012

OEM:Daily / Weekly Backups not happening

Issue : 
Daily incremental / weekly full backups scheduled through OEM are not happening as scheduled.

Troubleshooting:

1) First, check whether the EM agent is running on that database node:
    a) ps -efa|grep agent

        - confirms the agent is running.

    b) cd /home/oracle/OEM_GRID/agent10g/bin/
       ./emctl status agent

       - The output says the agent is up and running properly.

2) Check whether the agent upload is happening:
    ./emctl upload agent

    - ERROR.
    This may be what is preventing the backups from running as scheduled.

Fix :
    cd /home/oracle/OEM_GRID/agent10g/bin/
    ./emctl stop agent
    ./emctl clearstate agent
    ./emctl start agent
    ./emctl upload agent

After this, scheduled OEM jobs will run as expected.

October 28, 2012

ORA-00257: archiver error. Connect internal only, until freed.

ISSUE:
ORA-00257: archiver error. Connect internal only, until freed
ARCHIVE DESTINATION FULL ORA-00257

ps -ef|grep pmon

ORACLE_SID=`ps -ef | grep asm_smon | grep -v 'grep' | grep -v 'sed' | awk '{printf $8}' | awk 'BEGIN{FS="_"} {printf $3}'`


ASM Tablespace Utilization Scripts

show parameter recovery;

1. DB_RECOVERY_FILE_DEST_SIZE (Specifies max space to use for FRA)
2. DB_RECOVERY_FILE_DEST (Location of FRA)
The DB_RECOVERY_FILE_DEST_SIZE must be set before DB_RECOVERY_FILE_DEST.

col name format a40
select
   name,
   to_char(space_limit, '999,999,999,999') as space_limit,
   to_char(space_limit - space_used + space_reclaimable,
   '999,999,999,999') as space_available,
   round((space_used - space_reclaimable)/space_limit * 100, 1) as pct_full
from
   v$recovery_file_dest;

select * from V$FLASH_RECOVERY_AREA_USAGE;(see what kind of files are available in the Flash Recovery Area)
select * from V$RECOVERY_FILE_DEST; (determine actual values)

ALTER SYSTEM SET db_recovery_file_dest_size=10G scope=both;
ALTER SYSTEM SET db_recovery_file_dest='/oradata/FRA';

For example, If FRA is in an Automatic Storage Management (ASM) disk group
ALTER SYSTEM SET db_recovery_file_dest='+FLASH' scope=both;

RAC
ALTER SYSTEM set db_recovery_file_dest_size=60G scope=both sid='*' ;
ALTER SYSTEM SET db_recovery_file_dest='+FLASH' sid='*';
In a RAC database, all instances must have the same values for these parameters. Even though there are multiple nodes they all share the same controlfiles.

Here, if there is no space available, the DBA can take an archivelog backup to free the space.

ls -ltr *.cmd

nohup rman target / cmdfile=archivebackup.cmd log=archivebackup_dbname_DDMONYY.log &
nohup: appending output to `nohup.out'
tail -f archivebackup_dbname_DDMONYY.log

archivebackup.cmd
-----------------
run {
DELETE NOPROMPT ARCHIVELOG UNTIL TIME 'SYSDATE-1' BACKED UP 1 TIMES TO SBT_TAPE;
allocate channel ch1 type 'sbt_tape';
allocate channel ch2 type 'sbt_tape';
allocate channel ch3 type 'sbt_tape';
allocate channel ch4 type 'sbt_tape';
BACKUP ARCHIVELOG ALL FILESPERSET 10 DELETE INPUT;
}

ps -ef| grep rman

col name format a40
select
   name,
   to_char(space_limit, '999,999,999,999') as space_limit,
   to_char(space_limit - space_used + space_reclaimable,
   '999,999,999,999') as space_available,
   round((space_used - space_reclaimable)/space_limit * 100, 1) as pct_full
from
   v$recovery_file_dest;

select * from V$FLASH_RECOVERY_AREA_USAGE;(see what kind of files are available in the Flash Recovery Area)
select * from V$RECOVERY_FILE_DEST; (determine actual values)

NOTE:

In order to solve the above error, the solutions are:

1) Increase the free space where the archiver writes the archivelogs. The archive location is determined by the parameter file (pfile or spfile).
This can be checked by logging into sqlplus and issuing

SQL> show parameter log_archive_dest

2) If it is not possible to increase free space at the same location, but free space is available at another location, then the parameter log_archive_dest (or log_archive_dest_1 in some cases) can be changed so that new archives are produced at the new location which has free space.

This can be done by modifying the init.ora file, or by using ALTER SYSTEM if an spfile is present.

SQL> alter system set log_archive_dest_1='
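
The command above is truncated in the original post; a complete sketch, assuming the new archive location is /u11/oradata/mydb/arch (a placeholder path):

SQL> alter system set log_archive_dest_1='LOCATION=/u11/oradata/mydb/arch' scope=both;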

3) We can use the following steps for this:
   1. Find the location of the archive destination:
   show parameter archive_dest

   Let's say it gives LOCATION=/u10/oradata/mydb/arch

   2. Move some files to another location using OS commands:
   cd /u10/oradata/mydb/arch
   mv /u10/oradata/mydb/arch/* /u11/oradata/mydb/arch-bkp/

4) The option which is often used is to take a backup of the archives from the existing location and delete them from there, so that new archives can be generated in that place.
The backup and deletion can be done at OS level, but the recommended method, which is compulsory when ASM is in place, is to take an RMAN backup and delete using RMAN, as shown:

rman target sys/sys

RMAN> backup archivelog all device type disk format '/oracle/arch_%U';

RMAN> delete archivelog until time 'trunc(sysdate)';

This will delete all the archive logs generated before today, the space will be freed, and the archiver will resume archiving redo logs.

------------------------------------------------------------------------------------------------------------

The views v$recovery_file_dest and v$flash_recovery_area_usage do not always give a true picture of the exact space used, due to Bug 4911954 in Oracle 10g; the version confirmed to be affected is 10.2.0.2 (reference: Metalink Doc Id 4911954.8).
V$FLASH_RECOVERY_AREA_USAGE provides information about flash recovery area disk space usage. The following are its main columns:
FILE_TYPE - the type of the file and can have any of the following values:
controlfile, onlinelog, archivelog, backuppiece, imagecopy, flashbacklog
PERCENT_SPACE_USED - This represents the disk space used by the file type, in percentage.
PERCENT_SPACE_RECLAIMABLE - this represents the percentage of disk space reclaimable from the file type after deleting any obsolete or redundant files, and files backed up to a tertiary device.
OTN notes that you can see the actual space used by joining it to v$recovery_file_dest:
A new view, V$FLASH_RECOVERY_AREA_USAGE, shows what's available in the flashback area.
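
A sketch of that join, converting the percentage columns into actual bytes per file type (column names are from the two standard views):

select u.file_type,
       round(u.percent_space_used/100 * d.space_limit)        bytes_used,
       round(u.percent_space_reclaimable/100 * d.space_limit) bytes_reclaimable,
       u.number_of_files
from   v$flash_recovery_area_usage u,
       v$recovery_file_dest d;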