Statistics On Time

Today the development team reported a problem with a delete process: it was too slow. The delete removes 200,000 rows from a table that has 272 MB up to the HWM. The table has 597,160 rows, so it is a very small table, yet removing 200,000 rows was taking minutes.

This is the statement:

THOT*ANTO> explain plan for
  2  DELETE FROM PROCESS_BILLING A WHERE A.ROWID = :B1
  3  ;

------------------------------------------------------------------------------------
| Id  | Operation         | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------
|   0 | DELETE STATEMENT  |                 |     1 |    45 |     1   (0)| 00:00:01 |
|   1 |  DELETE           | PROCESS_BILLING |       |       |            |          |
|*  2 |   INDEX FULL SCAN | PROBILL_PK      |     1 |    45 |     1   (0)| 00:00:01 |
------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("A".ROWID=CHARTOROWID(:B1))

 

Looking at the execution plan, the delete is using the PK index with an INDEX FULL SCAN, and that is not right. A delete by rowid should go straight to the table and not go through any index.

Examining the statistics, the PK shows LAST_ANALYZED 07/01/11. As a first step we chose to update the statistics and rebuild the index, since large deletions are performed occasionally. After rebuilding the index, the statistics were updated at 15:52:53 on 19/MAY/2014; the slow process was run again and it now takes a few seconds. Examining the execution plan, the table is accessed with TABLE ACCESS BY USER ROWID.
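
The two maintenance steps can be sketched like this (a sketch only: the schema owner and options are assumptions, adjust them to your environment):

```sql
-- Rebuild the primary key index (ONLINE rebuild needs Enterprise Edition).
ALTER INDEX probill_pk REBUILD;

-- Refresh optimizer statistics for the table, cascading to its indexes.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'THOT',             -- assumption: owning schema
    tabname => 'PROCESS_BILLING',
    cascade => TRUE);              -- also gather index statistics
END;
/
```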

The execution plan now accesses the table correctly:

PLAN_TABLE_OUTPUT
___________________________________________________________
Plan hash value: 761203433

-----------------------------------------------------------------------------------------------
| Id  | Operation                    | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------
|   0 | DELETE STATEMENT             |                 |     1 |    45 |     1   (0)| 00:00:01 |
|   1 |  DELETE                      | PROCESS_BILLING |       |       |            |          |
|*  2 |   TABLE ACCESS BY USER ROWID | PROCESS_BILLING |     1 |    45 |     1   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access(CHARTOROWID(:B1))

 

Conclusion:

It is always important to keep statistics up to date; of course, that is not always easy, and sometimes there are limitations that prevent us from having updated statistics. Regarding indexes, at least the most heavily modified ones should also follow a rebuild policy, always after big loads or deletions.

 

HTH – Antonio NAVARRO

 


Enq: TM – Contention

This morning we found locks being held in a database, as detailed below:

SID     EVENT
______  _________________________________
282     SQL*Net message to client
163     enq: TM - contention
284     enq: TM - contention
217     enq: TM - contention
237     enq: TM - contention
286     enq: TM - contention
181     db file sequential read
428     db file sequential read
219     db file sequential read
192     enq: TX - row lock conten
231     enq: TM - contention
404     enq: TM - contention

 

The locks are of type enq: TM - contention which, normally, is about referential integrity: about indexing, in the child table, the columns that reference the parent. I do not agree with creating indexes on every FK, although many authors and publications recommend always creating indexes on FKs. I think that in large or huge databases with thousands of tables this is costly in CPU time, management complexity and space, even though hardware is becoming cheaper. I prefer to create an index on an FK only when it starts giving problems.
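
When problems do appear, a data dictionary query along these lines can list the FKs whose leading column is not the leading column of any index on the child table (a sketch: it only checks the first FK column, and the owner filter is an assumption):

```sql
-- Foreign keys whose first column has no index starting with that column:
-- classic candidates for enq: TM - contention on parent DML.
SELECT c.owner, c.table_name, c.constraint_name, cc.column_name
FROM   dba_constraints  c
JOIN   dba_cons_columns cc
       ON  cc.owner           = c.owner
       AND cc.constraint_name = c.constraint_name
       AND cc.position        = 1                 -- leading FK column
WHERE  c.constraint_type = 'R'
AND    c.owner = 'APP_OWNER'                      -- assumption: your schema
AND    NOT EXISTS (
         SELECT 1
         FROM   dba_ind_columns ic
         WHERE  ic.table_owner     = c.owner
         AND    ic.table_name      = c.table_name
         AND    ic.column_name     = cc.column_name
         AND    ic.column_position = 1            -- index must start with it
       );
```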

 

The present case shows that the following statement is waiting on the event enq: TM - contention:

INSERT INTO LIBRO
(
COD_EJEMPLAR,
COD_MEDIO_ALMACENAMIENTO,
COD_ISBN,
TITULO,
TCIN,
TCOUT )
VALUES
(
:V001 ,
:V002 ,
:V003 ,
:V004 ,
:V005 ,
:V006 )

 

If we look at the FKs for this table:

select constraint_name, constraint_type, r_constraint_name
from dba_constraints where table_name = 'LIBRO'
and constraint_type = 'R';

CONSTRAINT_NAME      C R_CONSTRAINT_NAME
____________________ _ __________________
LIBRO_EDITORIAL_FK   R EDITORIAL_PK

 

Now, on the parent table, we look for the indexed columns:

select column_name, column_position
from dba_ind_columns where index_name = 'EDITORIAL_PK';

COLUMN_NAME               COLUMN_POSITION
________________________  _______________
COD_EJEMPLAR                            1
COD_MEDIO_ALMACENAMIENTO                2


The solution will be to create an index on the child table on these columns:

create index fast_fk
on libro (cod_ejemplar, cod_medio_almacenamiento)
tablespace editorial_assm_inx
nologging
parallel 5
/
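
One caveat worth noting (my addition, not part of the original script): PARALLEL 5 remains as the index's default degree after the build and can trigger unwanted parallel plans later, so it is common to reset it once the build finishes:

```sql
ALTER INDEX fast_fk NOPARALLEL;   -- reset the default degree after the build
ALTER INDEX fast_fk LOGGING;      -- restore normal redo generation
```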

 

HTH – Antonio NAVARRO


ORA-01775 Querying Dba_waiters

Today an alarm arrived about locks in the database, so we ran the following query to gather more information:

 

CLOUD*ANTO> select * from dba_waiters;

 

We got the following error:

 

select * from dba_waiters
*
ERROR at line 1:
ORA-01775: looping chain of synonyms


We looked at the definition of the synonym to rule out errors there:

 

CLOUD*ANTO> SELECT OWNER, TABLE_OWNER, TABLE_NAME FROM DBA_SYNONYMS WHERE SYNONYM_NAME = 'DBA_WAITERS';

OWNER    TABLE_OWNER  TABLE_NAME
_______  ___________  ___________
PUBLIC   SYS          DBA_WAITERS

 

Reviewing the underlying object from SQL*Plus as SYSDBA:

 

SQL> select * from sys.dba_waiters;
select * from sys.dba_waiters
*
ERROR at line 1:
ORA-00942: table or view does not exist

 

It seems that the view either was never created or was created and dropped at some point. The solution is to recreate these objects by running the following script:

 

$ORACLE_HOME/rdbms/admin/catblock.sql
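
For reference, it is typically run from SQL*Plus as SYSDBA; "?" is the SQL*Plus shorthand for $ORACLE_HOME:

```sql
-- catblock.sql recreates the lock-related views (DBA_WAITERS, DBA_BLOCKERS, ...)
SQL> connect / as sysdba
SQL> @?/rdbms/admin/catblock.sql
SQL> select * from dba_waiters;
```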

 

HTH – Antonio NAVARRO


Deadlock Don’t Kill Any Session

From the documentation:

A deadlock occurs when two or more sessions are waiting for data locked by each other, resulting in all the sessions being blocked. Oracle automatically detects and resolves deadlocks by rolling back the statement associated with the transaction that detects the deadlock.

 

Normally, none of the connections involved in the deadlock dies; one of them simply receives the following error:

ORA-00060: deadlock detected while waiting for resource
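
For illustration, a minimal two-session deadlock (table and values are made up, not from the incident):

```sql
-- Assumed setup: create table t (id number primary key, c number);
--                insert into t values (1, 0); insert into t values (2, 0); commit;

-- Session 1                          -- Session 2
update t set c = 1 where id = 1;
                                      update t set c = 1 where id = 2;
update t set c = 1 where id = 2;      -- Session 1 now waits on Session 2
                                      update t set c = 1 where id = 1;
-- Oracle detects the cycle: one session gets ORA-00060 and only its
-- current statement is rolled back; both sessions remain connected.
```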

 
Today we were told that a Tibco adapter had gone down. A Tibco adapter is basically a connection pool from an application; in this case the pool opens 20 connections to the database. Looking at the alert.log and the trace files, it appears that the shared server process also died. The two processes had been involved in the deadlock.

 
FINDING

The shared server process died, possibly due to a bug, which we reported to MOS. The pool died because the connection received the ORA-00060 and failed to handle it, bringing down the whole pool.

So 21 connections went down....

 

HTH – Antonio NAVARRO

 

The Bloom Filter

The Bloom filter is a probabilistic data structure. It represents a data set for which a membership test gives one of two answers:

  • i) The element is definitely not in the set.
  • ii) The element may be in the set, with an approximate degree of certainty.

 
Basically there are two operations:

  • i) TEST tells us whether an element may be in the set.
  • ii) ADD adds an element to the structure.

 

The set may be on the order of millions or hundreds of millions of entries, and the filter is highly efficient at reducing disk I/O. This algorithm is commonly used in parallel join operations on tables and partitioned tables, in both Oracle and SQL Server; in Sybase it is also used in normal joins.

The algorithm is based on an array of m bits (b1, b2, ..., bm), all initially set to zero. Then, for a datum d1, k hash functions are applied; if we have k = 3 (the most usual value), the three hash functions are applied to d1, and each returns a position in the range 0 to m-1 (in the array). The positions returned by the hash functions are set to one in the array.

 

  • fk1 (d1)
  • fk2 (d1)
  • fk3 (d1)

 
When we later check whether an element is in the set, the three functions return positions that must all hold a one; if so, the element may be in the set, while if any bit is not set (equal to zero) the element is definitely not in the set.

Its main advantage is that we do not store the values themselves (number, varchar, etc., such as PK fields) in memory; instead we substitute each value with a handful of bits. If a varchar occupies 10 bytes (80 bits), we use a single bit per hash function against the eighty bits the value itself would occupy.
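
The description above can be sketched in PL/SQL, using ORA_HASH with k different seeds as the k hash functions (an illustration only: real Bloom filters use purpose-built hash functions, and the values here are made up):

```sql
-- Run with SET SERVEROUTPUT ON in SQL*Plus.
DECLARE
  m CONSTANT PLS_INTEGER := 1024;     -- number of bits in the array
  k CONSTANT PLS_INTEGER := 3;        -- number of hash functions (seeds 1..k)
  TYPE bit_tab IS TABLE OF BOOLEAN INDEX BY PLS_INTEGER;
  bits bit_tab;                       -- sparse bit array: EXISTS(i) means bit i = 1

  PROCEDURE add_val(p IN VARCHAR2) IS
    pos PLS_INTEGER;
  BEGIN
    FOR s IN 1 .. k LOOP
      SELECT ORA_HASH(p, m - 1, s) INTO pos FROM dual;  -- position in 0..m-1
      bits(pos) := TRUE;
    END LOOP;
  END;

  FUNCTION test_val(p IN VARCHAR2) RETURN VARCHAR2 IS
    pos PLS_INTEGER;
  BEGIN
    FOR s IN 1 .. k LOOP
      SELECT ORA_HASH(p, m - 1, s) INTO pos FROM dual;
      IF NOT bits.EXISTS(pos) THEN
        RETURN 'definitely not in the set';   -- one zero bit is conclusive
      END IF;
    END LOOP;
    RETURN 'maybe in the set';                -- all k bits set: probable member
  END;
BEGIN
  add_val('ISBN-84-376-0494-7');
  DBMS_OUTPUT.PUT_LINE(test_val('ISBN-84-376-0494-7'));  -- maybe in the set
  DBMS_OUTPUT.PUT_LINE(test_val('ISBN-00-000-0000-0'));
END;
/
```

Note that a value that was added always answers "maybe in the set"; a value that was not added usually answers "definitely not", with a small false-positive probability that depends on m, k and the number of entries.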

 

HTH – Antonio NAVARRO

 

Recycle Bin Demystified

The Recycle Bin is an Oracle feature that keeps tables and their indexes when we drop a table; the recyclebin parameter must be set to ON. Often this does not interest us and we deactivate the functionality. But what does the Recycle Bin feature really do? Basically, it renames the table and the indexes associated with it; of course, it also includes management of the objects in the bin, and a table that is in the bin can be purged if space is needed.

 

An example of doing it by hand would be like this:

  • i) Find all indexes associated with the table

SELECT index_name FROM DBA_INDEXES WHERE table_name = 'TABLE_TO_RECYCLE';

  • ii) Rename the indexes obtained in step i

ALTER INDEX index_name_n RENAME TO new_name_n;

  • iii) Rename the table

RENAME table_to_recycle TO table_recycled;

Remember that, for all purposes, a table handled this way is not deleted.
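
For comparison, the built-in feature in action (object names are made up):

```sql
DROP TABLE invoices_old;                      -- with recyclebin=on, goes to the bin

SELECT object_name, original_name, type
FROM   user_recyclebin;                       -- the table now shows a BIN$... name

FLASHBACK TABLE invoices_old TO BEFORE DROP;  -- restore it, indexes included

-- Or, to really reclaim the space:
PURGE TABLE invoices_old;
PURGE RECYCLEBIN;
```

Note that after FLASHBACK TABLE the dependent indexes come back still carrying their BIN$ names and must be renamed by hand, much like step ii above.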

HTH – Antonio NAVARRO