Saturday, February 27, 2016

ORA-31640: unable to open dump file for read and ORA-19505: failed to identify file and ORA-27037: unable to obtain file status Linux-x86_64 Error: 2: No such file or directory

Error:


Import: Release 11.2.0.4.0 - Production on Fri Feb 26 14:19:32 2016
Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
;;;
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_FULL_05" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_FULL_05":  system/******** directory=IMP_DMP logfile=IMP_log.log dumpfile=expdp.dmp_%U remap_schema=SCOTT:USER parallel=32 exclude=GRANT,STATISTICs table_exists_action=replace
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
ORA-31693: Table data object "USER"."EMP" failed to load/unload and is being skipped due to error:
ORA-31640: unable to open dump file "/u01/IMP/expdp.dmp_10" for read
ORA-19505: failed to identify file "/u01/IMP/expdp.dmp_10"
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Solution:
Remove the PARALLEL clause from the impdp command, add CLUSTER=N so the import runs only on the local instance, or place the dump files on storage shared by all RAC instances.
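For example, the failing command above could be rerun with CLUSTER=N (a sketch only; the credentials, directory, and dump file names are taken from the log above):

```sh
impdp system/******** directory=IMP_DMP logfile=IMP_log.log \
  dumpfile=expdp.dmp_%U remap_schema=SCOTT:USER \
  exclude=GRANT,STATISTICS table_exists_action=replace \
  parallel=32 cluster=N
```

With CLUSTER=N, all Data Pump worker processes run on the instance you are connected to, so only that instance needs to be able to read the dump files.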

Saturday, September 13, 2014

Find All Sundays In a Month of Input Date

WITH t AS (
  SELECT TRUNC(SYSDATE, 'MM') + LEVEL - 1 AS dt
    FROM dual
  CONNECT BY LEVEL <= ADD_MONTHS(TRUNC(SYSDATE, 'MM'), 1) - TRUNC(SYSDATE, 'MM')
)
SELECT dt FROM t
 WHERE TRIM(TO_CHAR(dt, 'DAY')) = 'SUNDAY';
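Note that TO_CHAR(dt, 'DAY') depends on the session's NLS_DATE_LANGUAGE, so the filter matches nothing when the session language is not English. A variant that pins the day-name language explicitly (same month logic, just NLS-safe):

```sql
WITH t AS (
  SELECT TRUNC(SYSDATE, 'MM') + LEVEL - 1 AS dt
    FROM dual
  CONNECT BY LEVEL <= ADD_MONTHS(TRUNC(SYSDATE, 'MM'), 1) - TRUNC(SYSDATE, 'MM')
)
SELECT dt
  FROM t
 WHERE TO_CHAR(dt, 'FMDAY', 'NLS_DATE_LANGUAGE=ENGLISH') = 'SUNDAY';
```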

Monday, January 20, 2014

IMP-00032: SQL statement exceeded buffer length



imp Joy@temp file=emp.dmp log=IMP_emp.log fromuser=scott touser=joy ignore=y

Import: Release 10.2.0.5.0 - Production on Sat Aug 17 20:14:56 2013
Copyright (c) 1982, 2007, Oracle.  All rights reserved.

 Password: 

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options 

Export file created by EXPORT:V09.02.00 via conventional path 

Warning: the objects were exported by SCOTT, not by you

 

import done in US7ASCII character set and AL16UTF16 NCHAR character set

import server uses AL32UTF8 character set (possible charset conversion)

IMP-00032: SQL statement exceeded buffer length

IMP-00008: unrecognized statement in the export file:

  OUPS 1) TABLESPACE "USERS" LOGGING NOCOMPRESS, PARTITION "MAX_PARTITION" VALUES LESS THAN (MAXVALUE)  PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 STORAGE(INITIAL 40960 FREELISTS 1 FREELIST GROUPS 1) TABLESPACE "USERS" LOGGING NOCOMPR...Killed

Solution:
In the source database, run:
SQL> select dbms_lob.getlength(dbms_metadata.get_ddl('TABLE','EMP','SCOTT')) "SQL_LENGTH" from dual;


SQL_LENGTH

----------

 38782

The output above is the minimum BUFFER value needed for the import to succeed, so I used:

imp <USER>/<PASS> file=tabs.dmp log=tabs.log fromuser=<USER> touser=<USER> commit=y ignore=Y buffer=38782



This solved the problem.

Thursday, October 11, 2012



Problem Description
Whenever you try to disable a constraint of a table, it fails with the error ORA-02297: cannot disable constraint - dependencies exist, as shown below.

SQL> alter table transaction disable constraint TRANSACTION_PK;
alter table transaction disable constraint TRANSACTION_PK
*
ERROR at line 1:
ORA-02297: cannot disable constraint (OMS.TRANSACTION_PK) - dependencies exist

Cause of the Problem
The DISABLE CONSTRAINT command fails because the table is a parent table: foreign key constraints in child tables depend on this constraint.

Solution of the Problem
Two solutions exist for this problem.
1) Find the foreign key constraints that reference the table, disable them, and then disable the table's own constraint.

The following query lists the dependent (child) tables and constraint names. Disable the child constraints first, then the parent constraint.


SQL> SELECT p.table_name "Parent Table", c.table_name "Child Table",
       p.constraint_name "Parent Constraint", c.constraint_name "Child Constraint"
       FROM user_constraints p
       JOIN user_constraints c ON(p.constraint_name=c.r_constraint_name)
       WHERE (p.constraint_type = 'P' OR p.constraint_type = 'U')
       AND c.constraint_type = 'R'
       AND p.table_name = UPPER('&table_name');

Enter value for table_name: transaction
old   7:      AND p.table_name = UPPER('&table_name')
new   7:      AND p.table_name = UPPER('transaction')

Parent Table                   Child Table                    Parent Constraint              Child Constraint
------------------------------ ------------------------------ ------------------------------ ------------------------------
TRANSACTION                    USER_SALARY_RECORD             TRANSACTION_PK                 SYS_C005564
TRANSACTION                    TRANSACTION_DETAIL             TRANSACTION_PK                 TRNSCTN_DTL_TRNSCTN_FK

SQL> alter table USER_SALARY_RECORD disable constraint SYS_C005564;
Table altered.

SQL> alter table TRANSACTION_DETAIL  disable constraint TRNSCTN_DTL_TRNSCTN_FK;
Table altered.

SQL> alter table transaction disable constraint TRANSACTION_PK;
Table altered.


2) Disable the constraint with the CASCADE option, which disables the dependent foreign keys as well.
SQL> alter table transaction disable constraint TRANSACTION_PK cascade;
Table altered.
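Note that DISABLE ... CASCADE has no ENABLE ... CASCADE counterpart: whichever approach you use, re-enable the parent constraint first and then each child foreign key individually. For the tables above:

```sql
ALTER TABLE transaction ENABLE CONSTRAINT TRANSACTION_PK;
ALTER TABLE USER_SALARY_RECORD ENABLE CONSTRAINT SYS_C005564;
ALTER TABLE TRANSACTION_DETAIL ENABLE CONSTRAINT TRNSCTN_DTL_TRNSCTN_FK;
```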

 

Thursday, September 27, 2012

High Executions Of Statement "delete from smon_scn_time..."

FOR:

Oracle Server - Enterprise Edition - Version: 10.1.0.2 to 10.2.0.4 - Release: 10.1 to 10.2

Information in this document applies to any platform.

ISSUE:

A delete from smon_scn_time is performing excessive buffer gets and executions, as seen in an AWR report:

Buffer Gets   Executions  Gets per Exec  %Total  CPU Time (s)  Elapsed Time (s)
205,501,888   30,508      6,736.00       54.05   9,733.97      61,180.96

The AWR report shows the following SQL with an excessive number of executions:

delete from smon_scn_time where thread=0
and time_mp = (select min(time_mp) from smon_scn_time where thread=0);


CAUSE:

There is an inconsistency between table smon_scn_time and its indexes.

The delete statement removes the oldest rows from smon_scn_time to make room for new rows. SMON wakes up every 5 minutes and checks how many on-disk mappings exist; the maximum is 144,000. New mappings are then added for the period since SMON last updated, and if the total exceeds 144,000, SMON issues the delete statement:

delete from smon_scn_time where thread=0
and time_mp = (select min(time_mp) from smon_scn_time where thread=0);

This statement runs each time SMON wakes to update smon_scn_time, and if one deletion does not free enough mappings, it runs multiple times.

Because of the inconsistency between the table and its indexes, the delete returns zero rows, so the statement executes continuously in an attempt to bring smon_scn_time below the 144,000-mapping maximum.

Analyzing table smon_scn_time reveals the inconsistency:

SQL> analyze table smon_scn_time validate structure cascade;
analyze table smon_scn_time validate structure cascade
*
ERROR at line 1:
ORA-01499: table/index cross reference failure - see trace file

SOLUTION:

To implement the solution, execute the following steps:

1. Ensure you have a usable backup in case of failures.
2. Drop and recreate the indexes on table smon_scn_time:

connect / as sysdba
drop index smon_scn_time_scn_idx;
drop index smon_scn_time_tim_idx;
create unique index smon_scn_time_scn_idx on smon_scn_time(scn);
create unique index smon_scn_time_tim_idx on smon_scn_time(time_mp);
analyze table smon_scn_time validate structure cascade;
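After the rebuild, a quick sanity check (not part of the original note) is to confirm that the number of mappings stays at or below the limit and that old rows are again being purged:

```sql
SELECT COUNT(*), MIN(time_mp), MAX(time_mp) FROM smon_scn_time;
```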

Friday, August 10, 2012

ERROR at line 1: ORA-00600: internal error code, arguments: [kdsgrp1], [], [], [], [], [], [], []

To Resolve This Error:

Solution 1: Rebuild the table's indexes and validate the structure:
ANALYZE TABLE table_name VALIDATE STRUCTURE;

Solution 2: If you still get the same error, mark the table's corrupt blocks to be skipped:

BEGIN
  DBMS_REPAIR.SKIP_CORRUPT_BLOCKS(
    SCHEMA_NAME => 'SCOTT',
    OBJECT_NAME => 'DEPT',
    OBJECT_TYPE => DBMS_REPAIR.TABLE_OBJECT,
    FLAGS       => DBMS_REPAIR.SKIP_FLAG);
END;
/

Verify with this query:
SELECT OWNER, TABLE_NAME, SKIP_CORRUPT FROM DBA_TABLES WHERE OWNER = 'SCOTT';


OWNER  TABLE_NAME  SKIP_COR
-----  ----------  --------
SCOTT  ACCOUNT     DISABLED
SCOTT  BONUS       DISABLED
SCOTT  DEPT        ENABLED

Then rebuild the table:
  • Create a backup table with the same structure as the affected table.
  • Copy the data into the backup table and drop the original.
  • Rename the backup table to the original name.
  • If the table is very big, use BULK COLLECT to take the backup.
  • Gather statistics on the new table.
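For a small table, the backup-and-rename steps above can be as simple as the following sketch (table names are illustrative; run the RENAME as the owning user):

```sql
CREATE TABLE dept_bak AS SELECT * FROM dept;
DROP TABLE dept;
RENAME dept_bak TO dept;
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'DEPT');
```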