select to_number(replace(translate(upper(column_name),'ABCDEFGHIJKLMNOPQRSTUVWXYZ','X'),'X')) FOO from table_name order by FOO;
Example:
select to_number(replace(translate(upper('123xj456iu789'),'ABCDEFGHIJKLMNOPQRSTUVWXYZ','X'),'X')) FOO from dual
FOO
------
123456789
1 row selected
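Why the REPLACE is needed: TRANSLATE deletes every "from" character that has no counterpart in the "to" string, so 'B' through 'Z' simply vanish, but 'A' is mapped to 'X' and survives; the REPLACE then strips those leftover markers. The intermediate steps, sketched with a slightly different input that actually contains an 'a':

```sql
-- TRANSLATE maps 'A' (position 1) to 'X' and deletes 'B'..'Z',
-- because the "to" string is shorter than the "from" string.
select translate(upper('a123xj456'),
                 'ABCDEFGHIJKLMNOPQRSTUVWXYZ', 'X') from dual;
-- -> 'X123456'   ('A' became 'X'; the other letters were dropped)

-- REPLACE with no third argument then removes the leftover 'X' markers
select replace('X123456', 'X') from dual;
-- -> '123456'
```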
Wednesday, December 22, 2010
Friday, December 10, 2010
Snapshot Too Old Error During Export
ORA-31693: Table data object failed to load/unload and is being skipped due to error:
ORA-02354: error in exporting/importing data
ORA-01555: snapshot too old: rollback segment number 11 with name "_SYSSMU11_1238457026$" too small
This occurred after we began using the FLASHBACK_TIME parameter for exports.
Thank you to the person who posted the solution here: http://www.eggheadcafe.com/software/aspnet/36119822/snapshot-too-old-error.aspx
This error occurs when at least two conditions are met:
1.) a long-running query (q)
2.) at least one short transaction (t) that modifies AND commits data
that will be selected by (q)
When (q) starts, Oracle guarantees read consistency, meaning you can
be sure that the result set of (q) reflects the state the database
was in at the start of (q).
If transaction (t) - started AFTER you started (q) - modifies data
that will be selected by (q) later on (because (q) is a long running
query), (q) will read the old, unmodified data out of the undo
segments and everything is as it should be.
The problem is that after (t) commits, the undo segments still hold
the consistent data block(s), but they are released and may be
reused (and overwritten) by other transactions.
When (q) runs into this situation, it terminates with
ORA-01555.
There are basically two solutions to this problem:
a.) Don't run transactions while such a long query is executing (not
practical in real environments)
b.) Change the UNDO_RETENTION parameter of your database.
UNDO_RETENTION (the default is 900 seconds) tells the system how
long to wait before reusing undo segments after they have been
released by the transaction.
So if your Data Pump export needs, for example, one hour, you should
set this parameter to 3600 or higher.
Note, however, that UNDO_RETENTION applies to ALL undo segments
in your system, so, depending on your transaction structure, you
might need a bigger undo tablespace.
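To check the current setting and raise it before a long export, something like the following should work (a sketch; ALTER SYSTEM requires DBA privileges, and 3600 matches the one-hour example above):

```sql
-- Current undo settings (SQL*Plus)
show parameter undo_retention

-- Raise retention to one hour; this is a dynamic parameter, no restart needed
alter system set undo_retention = 3600;

-- Also worth checking: how much free space the undo tablespace actually has
select tablespace_name, sum(bytes)/1024/1024 mb
  from dba_free_space
 where tablespace_name like 'UNDO%'
 group by tablespace_name;
```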
Datapump Export Example Using FLASHBACK_TIME
Use this as the parameter value when creating a nightly export job in OEM. It uses SYSDATE for the FLASHBACK_TIME and appends the current date to the dump and log file names.
SET SERVEROUTPUT ON
variable exit_code NUMBER
DECLARE
h1 NUMBER;
dt VARCHAR2(12);
begin
:exit_code := 0;
select to_char(sysdate,'YYYYMONDD') into dt from dual;
h1 := dbms_datapump.open (operation => 'EXPORT', job_mode => 'FULL', job_name => 'PRODDB_EXPORT', version => 'COMPATIBLE');
dbms_datapump.set_parallel(handle => h1, degree => 1);
dbms_datapump.add_file(handle => h1, filename => 'PRODDBEXP' ||dt||'.LOG', directory => 'PRODDB_EXP_DIR', filetype => 3);
dbms_datapump.set_parameter(handle => h1, name => 'KEEP_MASTER', value => 0);
dbms_datapump.add_file(handle => h1, filename => 'PRODDBEXP'||dt||'.DMP', directory => 'PRODDB_EXP_DIR', filetype => 1);
dbms_datapump.set_parameter(handle => h1, name => 'FLASHBACK_TIME', value => 'TO_TIMESTAMP (to_char(sysdate,''dd.mm.yyyy HH24:MI:SS''),''dd.mm.yyyy HH24:MI:SS'')');
dbms_datapump.set_parameter(handle => h1, name => 'INCLUDE_METADATA', value => 1);
dbms_datapump.set_parameter(handle => h1, name => 'DATA_ACCESS_METHOD', value => 'AUTOMATIC');
dbms_datapump.start_job(handle => h1, skip_current => 0, abort_step => 0);
dbms_datapump.detach(handle => h1);
EXCEPTION
WHEN OTHERS THEN
dbms_output.put_line(TO_CHAR(SQLCODE)||' :'||SQLERRM);
:exit_code := 2;
ROLLBACK;
end;
/
exit :exit_code;
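Because the script detaches right after start_job, the export keeps running in the background and the exit code only reflects whether the job was submitted. One way to check on it afterwards (a sketch; the job name matches the one opened above):

```sql
select job_name, operation, job_mode, state, degree
  from dba_datapump_jobs
 where job_name = 'PRODDB_EXPORT';
```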
Find Records Having Column with Specific Value
SET SERVEROUTPUT ON
DECLARE
CURSOR tab_cur is select * from user_tab_columns where column_name = 'COLUMN_NAME';
cnt NUMBER;
query_str VARCHAR2(1000);
BEGIN
for rec in tab_cur
loop
query_str := 'select count (*) from ' || rec.table_name || ' where COLUMN_NAME = ''&value''';
EXECUTE IMMEDIATE query_str into cnt;
IF cnt > 0 THEN DBMS_OUTPUT.PUT_LINE('There are ' || cnt || ' records in ' || rec.table_name);
END IF;
--DBMS_OUTPUT.PUT_LINE(query_str);
END LOOP;
END;
/
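A variant that passes the search value as a bind instead of a SQL*Plus substitution variable avoids quoting problems when the value contains an apostrophe (a sketch; 'COLUMN_NAME' and 'value_to_find' are placeholders for your own column name and value):

```sql
DECLARE
  CURSOR tab_cur IS
    SELECT table_name FROM user_tab_columns WHERE column_name = 'COLUMN_NAME';
  cnt       NUMBER;
  query_str VARCHAR2(1000);
BEGIN
  FOR rec IN tab_cur LOOP
    query_str := 'select count(*) from "' || rec.table_name ||
                 '" where COLUMN_NAME = :v';
    -- the bind variable is substituted safely, no quote-escaping needed
    EXECUTE IMMEDIATE query_str INTO cnt USING 'value_to_find';
    IF cnt > 0 THEN
      DBMS_OUTPUT.PUT_LINE('There are ' || cnt || ' records in ' || rec.table_name);
    END IF;
  END LOOP;
END;
/
```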
Compile Invalid Objects
-- COMPILE INVALID TRIGGERS, FUNCTIONS, PROCEDURES, PACKAGES
EXEC DBMS_UTILITY.compile_schema(user);
-- COMPILE INVALID PACKAGES, TRIGGERS, VIEWS
BEGIN
FOR cur_rec IN (SELECT object_name, object_type FROM user_objects WHERE object_type in ('PACKAGE','TRIGGER','VIEW') AND status != 'VALID')
LOOP
BEGIN
EXECUTE IMMEDIATE 'ALTER ' || cur_rec.object_type ||
' "' || cur_rec.object_name || '" COMPILE';
END;
END LOOP;
END;
/
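Note that the ALTER ... COMPILE loop above will not touch invalid package bodies, which need COMPILE BODY instead. A sketch of the extra pass:

```sql
BEGIN
  FOR cur_rec IN (SELECT object_name FROM user_objects
                   WHERE object_type = 'PACKAGE BODY' AND status != 'VALID')
  LOOP
    BEGIN
      EXECUTE IMMEDIATE 'ALTER PACKAGE "' || cur_rec.object_name || '" COMPILE BODY';
    EXCEPTION
      WHEN OTHERS THEN NULL;  -- keep going if one body still won't compile
    END;
  END LOOP;
END;
/
```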
Identify Blocking Sessions
select
x.inst_id WAITING_INSTANCE,
x.sid WAITING_SID,
w.username WAITING_USER,
w.machine WAITING_MACHINE,
w.program WAITING_PROGRAM,
w.module WAITING_MODULE,
(select distinct sql_text from v$sql where sql_id=w.sql_id) WAITING_SQL,
w.sql_exec_start WAITING_SQL_START,
w.seconds_in_wait WAITING_SECONDS,
w.blocking_session BLOCKING_SID,
b.inst_id BLOCKING_INSTANCE,
b.username BLOCKING_USER,
b.machine BLOCKING_MACHINE,
b.program BLOCKING_PROGRAM,
b.module BLOCKING_MODULE,
(select distinct sql_text from v$sql where sql_id=b.prev_sql_id) BLOCKING_SQL,
b.prev_exec_start BLOCKING_SQL_START
from
gv$lock x, -- waiting lock
gv$lock y, -- blocking lock
gv$session w, -- waiting session
gv$session b -- blocking session
where
x.request>0
and w.sid=x.sid
and y.block>0
and b.sid=y.sid
and x.id1=y.id1
and x.id2=y.id2
and w.blocking_session=b.sid
and y.addr=b.taddr
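Once the blocker is identified, it can be terminated if necessary. A sketch (needs the ALTER SYSTEM privilege; substitute the SID reported in BLOCKING_SID above):

```sql
-- Look up the serial# that goes with the blocking SID
select sid, serial#, inst_id
  from gv$session
 where sid = :blocking_sid;

-- Then, only if killing it is really warranted:
-- alter system kill session 'sid,serial#' immediate;
```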
Preview dump file before importing
If using Data Pump import (impdp), the option is SQLFILE.
The file_name specifies where the import job will write the DDL that would be
executed during the job. The SQL is not actually executed, and the target system
remains unchanged. The file is written to the directory object specified in the
DIRECTORY parameter, unless another directory_object is explicitly specified
here. Any existing file that has a name matching the one specified with this parameter
is overwritten.
Example:
impdp hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp SQLFILE=dpump_dir2:filename
If using original import (imp), the option is SHOW=y.
The SHOW parameter can be used only with the FULL=y, FROMUSER, TOUSER, or
TABLES parameters.
Example (a sketch; the file and log names are placeholders):
imp hr FILE=expfull.dmp FULL=y SHOW=y LOG=import_preview.log