Force Log Switch In All Instances

Today I was working with a co-worker when I saw him force a redo log switch with the well-known command:

alter system switch logfile;

To be clear, this is completely correct and forces a redo log switch, but note that on a RAC it only does so on the instance where the command is executed. To switch the logs on all instances it is better to use the following command:

alter system archive log current;

This switches the logs on all nodes. Alternatively, you can always use the first option, running it node by node.
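As a quick sanity check (my addition, not from the original post), you can confirm from any node that every instance advanced its log sequence by querying gv$log before and after the switch:

-- Highest online redo log sequence per instance and thread
SELECT inst_id, thread#, MAX(sequence#) AS current_seq
FROM gv$log
GROUP BY inst_id, thread#
ORDER BY inst_id;

The sequence number should increase on every instance after the archive log current.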

HTH – Antonio NAVARRO


ORA-00245 Error

Today, while running an archivelog backup after a full refresh (via OGG), I got the following error:

 
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of show command at 04/10/2019 09:26:58
RMAN-03014: implicit resync of recovery catalog failed
RMAN-03009: failure of full resync command on default channel at 04/10/2019 09:26:58
ORA-00245: control file backup failed; in Oracle RAC, target might not be on shared storage

The error is raised by the first statement of the RMAN script (the catalog resync). The problem is that the snapshot controlfile is in an ASM diskgroup, which is not permitted; it must be outside the diskgroup. In this case, ACFS or local disk (ZFS) are possible locations.

As a first action I performed a manual controlfile backup, as shown below:

 

SQL> alter database backup controlfile to '/prod/backup/CONTROLFILE/snapcf_nube.f' reuse;

 
The second action is to make this change permanent. From RMAN you can use:

 

CONFIGURE CONTROLFILE AUTOBACKUP OFF;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK CLEAR;
CONFIGURE SNAPSHOT CONTROLFILE NAME CLEAR;
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/prod/backup/CONTROLFILE/snapcf_nube.f';
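To confirm the change took effect, you can check the setting from RMAN (a quick verification, not part of the original post):

SHOW SNAPSHOT CONTROLFILE NAME;

It should now report the location under /prod/backup/CONTROLFILE/.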

 
HTH – Antonio NAVARRO

Relocate MGMTDB Database To Another Node

Hello, today I would like to show how to move the MGMTDB database between the nodes of a cluster. This database (optional until version 12.1, if I remember correctly) is mainly used to store the telemetry that Oracle collects about the operation of the cluster on which it runs.

In my case, I'm going to move it because I need to run the cluster monitor (oclumon) from node 1. The main benefit of running oclumon on the same node as the MGMTDB database is reduced network traffic on the interconnect (the private network).
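For context, a typical oclumon invocation that benefits from this co-location (illustrative only; the node name and interval are my assumptions) pulls the last ten minutes of node metrics from MGMTDB:

griduser@node-1 $ oclumon dumpnodeview -n node-1 -last "00:10:00"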

 
First, check where the database is running;

griduser@node-1 $ srvctl status mgmtdb
Database is enabled
Instance -MGMTDB is running on node node-2

Now execute the relocate to node one;

griduser@node-1 $ srvctl relocate mgmtdb -node node-1
griduser@node-1 $

Check again;

griduser@node-1 $ srvctl status mgmtdb
Database is enabled
Instance -MGMTDB is running on node node-1
griduser@node-1 $

You can also check the presence of the pmon with the world famous command “ps -ef | grep pmon” as shown below;

griduser@node-1 $ ps -ef | grep pmon
    grid 10311     1   0   Nov 14 ?           6:12 asm_pmon_+ASM1
    grid  7723     1   0   Nov 14 ?           5:52 apx_pmon_+APX1
  oracle 10582     1   0   Nov 14 ?           9:59 ora_pmon_CLOUD1
    grid 29146     1   0 15:40:09 ?           0:00 mdb_pmon_-MGMTDB
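You can also review how the database is registered in the cluster (an extra check, not in the original post):

griduser@node-1 $ srvctl config mgmtdb

This prints, among other things, the database unique name and the spfile location.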

 

HTH – Antonio NAVARRO

 

Is It Necessary To Relink Grid Infrastructure After Applying An OS Patch?

The question in the title of this post I have heard many times; today I will answer it for the Clusterware part (the database engine I leave for another post ;)). According to the Oracle documentation: “You must relink the Oracle Clusterware and Oracle ASM binaries every time you apply an operating system patch or after an operating system upgrade.”

So the answer is yes, and this is the procedure:

As root:

 
# cd Grid_home/crs/install
# ./rootcrs.sh -unlock

As the Grid Infrastructure owner:

 
$ export ORACLE_HOME=Grid_home
$ Grid_home/bin/relink

As root:

 
# cd Grid_home/rdbms/install/
# ./rootadd_rdbms.sh
# cd Grid_home/crs/install
# ./rootcrs.sh -patch
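Once the stack is patched and relocked, a quick sanity check (my addition, not part of the documented procedure) confirms Clusterware is healthy again:

# crsctl check crs

CRS, CSS and EVM should all report as online.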

HTH – Antonio NAVARRO.

Updated Project Limits Are Not Reflected In The Installer

I'm installing a Grid Infrastructure; the issue is that it throws an error about the open file limit, as shown in the screenshot below.

[Screenshot: the installer prerequisite check fails reading the project limits (foto_installer_fail_reading_projects_v2)]

The error itself is quite simple and easy to fix. The problem is that if you then tell the installer to re-run the check, it fails with the same error. Basically, when we log in to the operating system with our user, the project is read and its values are fixed (hardcoded) for the session.

I have been researching (Google, MOS/ML, Stack Overflow) to see if there is any way to force the session to re-read the limits, but I have not found anything. If someone knows how, their help is always welcome.

In my case I solved it by closing the installer, logging out of the session, and reconnecting to the machine; naturally, I then ran the installer again from the beginning.
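For reference, here is a minimal sketch of the checks involved, assuming a Solaris projects setup and a grid owner assigned to a project named user.grid (a hypothetical name):

# Show the limit the CURRENT session actually has (this is what the installer sees)
$ prctl -n process.max-file-descriptor -i process $$

# Raise the limit in the project definition; only NEW sessions pick it up
# projmod -s -K "process.max-file-descriptor=(basic,65536,deny)" user.grid

The prctl value only changes for sessions started after the projmod, which is why logging out and back in was necessary.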

HTH – Antonio NAVARRO

 

How To Get DDL From a Diskgroup

This week I had to rebuild several ASM diskgroups for a new environment I have set up. The issue is that I did not find a DBMS_METADATA call or a script on the internet that did what I wanted; on the other hand, it is such a simple thing that perhaps, being relatively easy, nobody has published anything similar.

Here is the little script, in case it can help someone.

 

SET SERVEROUTPUT ON SIZE 100000
SET LINES 1000
SET PAGES 1000
SET VER OFF

DECLARE
  IS_FIRST_TIME  BOOLEAN := TRUE;
  LINEA          VARCHAR2 (200);
BEGIN
  DBMS_OUTPUT.PUT_LINE ('==========================================================');
  /* CREATE command */
  SELECT 'CREATE DISKGROUP ' || '&&1' INTO LINEA FROM DUAL;
  DBMS_OUTPUT.PUT_LINE (LINEA);
  /* Redundancy type */
  SELECT DECODE (type, 'EXTERN', 'EXTERNAL', type) || ' REDUNDANCY DISK' INTO LINEA
  FROM v$asm_diskgroup WHERE name = UPPER ('&&1');
  DBMS_OUTPUT.PUT_LINE (LINEA);
  /* Disks associated with the DG (paths quoted so the generated DDL is valid) */
  FOR i IN (
    SELECT path FROM v$asm_disk JOIN v$asm_diskgroup ADG USING (GROUP_NUMBER)
    WHERE ADG.name = UPPER ('&&1')
  ) LOOP
    /* Prefix a comma when more than one disk is returned */
    IF NOT IS_FIRST_TIME THEN
      DBMS_OUTPUT.PUT (',');
    END IF;
    DBMS_OUTPUT.PUT_LINE ('''' || i.path || '''');
    IS_FIRST_TIME := FALSE;
  END LOOP;
  /* Allocation unit */
  SELECT 'ATTRIBUTE ''au_size''=''' || ALLOCATION_UNIT_SIZE / 1048576 || 'M'','
  INTO LINEA FROM v$asm_diskgroup WHERE name = UPPER ('&&1');
  DBMS_OUTPUT.PUT_LINE (LINEA);
  /* Compatible ASM */
  SELECT '''compatible.asm''=''' || VALUE || ''','
  INTO LINEA FROM v$asm_attribute A JOIN v$asm_diskgroup ADG USING (GROUP_NUMBER)
  WHERE ADG.name = UPPER ('&&1') AND A.NAME = 'compatible.asm';
  DBMS_OUTPUT.PUT_LINE (LINEA);
  /* Compatible RDBMS */
  SELECT '''compatible.rdbms''=''' || VALUE || ''';'
  INTO LINEA FROM v$asm_attribute A JOIN v$asm_diskgroup ADG USING (GROUP_NUMBER)
  WHERE ADG.name = UPPER ('&&1') AND A.NAME = 'compatible.rdbms';
  DBMS_OUTPUT.PUT_LINE (LINEA);
END;
/
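A usage sketch (the file name get_dg_ddl.sql is my assumption): save the script and run it from SQL*Plus connected to the ASM instance, passing the diskgroup name as the only argument:

sqlplus / as sysasm @get_dg_ddl.sql DATA

As far as I recall, v$asm_attribute is only populated on the ASM instance (and only when compatible.asm is 11.1 or higher), so run it there rather than from a database instance.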

HTH – Antonio NAVARRO