Oracle 11gR2 Clusterware: Oracle Local Registry (OLR)
In 11gR2, Oracle introduced a new registry, called the Oracle Local Registry (OLR), to maintain the clusterware resources (css, crs, evm, gip and more).
Multiple processes on each node have simultaneous read and write access to the OLR particular to the node on which they reside, regardless of whether Oracle Clusterware is running or fully functional.
By default, OLR is located at Grid_home/cdata/host_name.olr on each node. The OCR still exists, but maintains only the cluster resources.
Until Oracle Database 11gR1, RAC configurations used just one registry when running Oracle Clusterware: the Oracle Cluster Registry (OCR). It maintained the cluster-level resource information, privileges and so on. To be precise, the OCR maintained information about two sets of node-level resources, namely the Oracle Clusterware components (CRS, CSS, EVM) as well as the cluster resources (database, listener etc.).
Why this method?
Before we get into this, we should look at one of the improvements in the Oracle 11gR2 RAC infrastructure. Until 11gR2, the CRS files, namely the OCR components and the voting disks, were maintained on raw devices or shared file systems. With 11gR2, the Oracle Clusterware related files can instead be maintained in Oracle ASM (Automatic Storage Management), a feature that was introduced with the Oracle 10g database release. This ability to host the OCR and voting disks in ASM poses an interesting situation.
For the cluster resources to come up, ASM needs to be up; but for ASM to be up, the clusterware components must already be functional. With all the CRS and cluster resource information stored in the OCR, this chicken-and-egg situation cannot be resolved unless the cluster-specific component details are maintained separately from the other resources and services.
As a solution, Oracle came up with a new approach: the Oracle Local Registry. The OLR maintains the node-specific information and is created during the Oracle Clusterware installation. Since it holds the node-specific resources, the clusterware components (crs, css, ctss, evm, gip and asm) can be brought up first; with ASM available, the OCR and voting disks become accessible, which in turn allows the various cluster resources and components to start.
Without the OLR, the clusterware resources will not start, which in turn prevents the dependent components from starting.
Important Notes for OLR:
- The OLR is backed up at the end of an installation or an upgrade. After that time, you can only manually back up the OLR. Automatic backups are not supported for the OLR. You should create a new backup when you migrate the OCR from Oracle ASM to other storage, or you migrate the OCR from other storage to Oracle ASM.
- By default, OLR is located at Grid_home/cdata/host_name.olr on each node.
- Oracle recommends that you use the -manualbackup and -restore commands and not the -import and -export commands.
- When exporting the OLR, Oracle recommends including "olr", the host name, and the timestamp in the name string. For example: olr_myhost1_20090603_0130_export
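The recommended name string can be generated from the shell; a minimal sketch (the target directory in the comment is illustrative, and the ocrconfig call is shown only as a comment since it requires a real cluster node):

```shell
# Build an OLR export name containing "olr", the host name, and a
# timestamp, following the naming convention recommended above.
name="olr_$(hostname -s)_$(date +%Y%m%d_%H%M)_export"
echo "$name"
# On a real cluster node the export itself would be run as root, e.g.:
#   ocrconfig -local -export /some/backup/dir/"$name"
```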
Information of the OLR:
A quick (dirty) peek at the OLR shows the resources that are being maintained:
!DAEMON_TRACING_LEVELS
ora!asm
ora!crsd
ora!cssd
ora!cssdmonitor
ora!ctssd
ora!diskmon
ora!drivers!acfs
ora!evmd
ora!gipcd
ora!gpnpd
ora!mdnsd
Information of the OCR:
A quick (dirty) peek at the OCR shows the resources that are being maintained:
ora!LOCD
ora!locd!db
ora!LSNRGRID!lsnr
ora!LISTENER!lsnr
ora!FRADG!dg
ora!DATADG!dg
ora!linux2!vip
ora!oc4j
ora!LISTENER_SCAN1!lsnr
ora!scan1!vip
ora!registry!acfs
ora!CRS!dg
ora!asm
ora!eons
ora!ons
ora!gsd
ora!linux1!vip
ora!net1!network
Important File
The olr.loc file in 11gR2 is located in /etc/oracle or /var/opt/oracle, depending on the OS.
Example of the olr.loc file
olrconfig_loc=/u01/rk/grid/cdata/rac1.olr
crs_home=/u01/rk/grid
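Since olr.loc is a plain key=value file, a script can resolve the OLR location from it. A hedged sketch (the helper name is hypothetical, not an Oracle tool; the optional file argument exists only so the function can be exercised against a sample file):

```shell
# Print the OLR path recorded in olr.loc.
# An explicit file argument takes precedence; otherwise the two
# standard locations are checked.
get_olr_location() {
    local f
    for f in "$1" /etc/oracle/olr.loc /var/opt/oracle/olr.loc; do
        if [ -r "$f" ]; then
            # olr.loc is key=value; extract the olrconfig_loc entry
            sed -n 's/^olrconfig_loc=//p' "$f"
            return 0
        fi
    done
    return 1
}
```

Against the sample olr.loc above, this prints /u01/rk/grid/cdata/rac1.olr.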
=========================================================
In Oracle Clusterware 11g Release 2, an additional component related to the OCR, called the Oracle Local Registry (OLR), is installed on each node in the cluster. The OLR is a local registry for node-specific resources and is not shared by the other nodes in the cluster. It is installed and configured when Oracle Clusterware is installed.
Purpose of OLR
————–
It is the very first file accessed to start up the clusterware when the OCR is stored in ASM. The OCR must be readable to find out which resources need to be started on a node, but if the OCR is in ASM, it cannot be read until ASM (which is itself a resource for the node, and whose information is stored in the OCR) is up. To resolve this problem, the information about the resources that need to be started on a node is stored in an operating system file called the Oracle Local Registry, or OLR. Since the OLR is an ordinary operating system file, it can be accessed by the various processes on the node for read/write irrespective of the status of the clusterware (up/down). Hence, when a node joins the cluster, the OLR on that node is read and the various resources, including ASM, are started on the node. Once ASM is up, the OCR is accessible and is used from then on to manage all the clusterware resources. If the OLR is missing or corrupted, the clusterware cannot be started on that node!
Where is OLR located?
———————
The OLR file is located in grid_home/cdata/ (by default, host_name.olr). The location of the OLR is stored in /etc/oracle/olr.loc and is used by OHASD.
What does OLR contain?
———————-
The OLR stores:
- the version of the clusterware
- the clusterware configuration
- the configuration of the various resources which need to be started on the node
etc.
A quick peek at the backup of the OLR shows the resources that are being maintained:
[root@host01 ~]# ocrconfig -local -manualbackup

host01 2013/01/18 01:20:27 /u01/app/11.2.0/grid/cdata/host01/backup_20130118_012027.olr

[root@host01 ~]# strings /u01/app/11.2.0/grid/cdata/host01/backup_20130118_012027.olr | grep -v type | grep ora!
ora!drivers!acfs
ora!crsd
ora!asm
ora!evmd
ora!ctssd
ora!cssd
ora!cssdmonitor
ora!diskmon
ora!gpnpd
ora!gipcd
ora!mdnsd
OLR administration
——————-
You can view the status of the OLR file on each node by using the ocrcheck command with the -local parameter, as seen here:
#ocrcheck -local

[root@qpass-test-rac-1 bin]# ./ocrcheck -local
Status of Oracle Local Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       2676
         Available space (kbytes) :     259444
         ID                       :  709934618
         Device/File Name         : /opt/app/oragrid/11.2.0/cdata/qpass-test-rac-1.olr
                                    Device/File integrity check succeeded
         Local registry integrity check succeeded
         Logical corruption check succeeded
[root@qpass-test-rac-1 bin]#
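For scripted monitoring, the "Used space" figure can be pulled out of saved ocrcheck output with standard text tools. A sketch assuming the output format shown above (the sample is inlined here rather than taken from a live cluster):

```shell
# Extract the used-space figure (in KB) from saved `ocrcheck -local` output.
cat > /tmp/ocrcheck_sample.txt <<'EOF'
Version                  :          3
Total space (kbytes)     :     262120
Used space (kbytes)      :       2676
EOF
used_kb=$(sed -n 's/^Used space (kbytes)[^0-9]*\([0-9][0-9]*\).*/\1/p' /tmp/ocrcheck_sample.txt)
echo "$used_kb"    # 2676
```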
ocrdump can be used to dump the contents of the OLR to the terminal:
#ocrdump -local -stdout
You can use the ocrconfig command to export and import the OLR, as seen in these examples:
#ocrconfig -local -export
#ocrconfig -local -import
And you can repair the OLR file, should it become corrupted, with the ocrconfig command as seen in this example:
#ocrconfig -local -repair olr
The OLR is backed up at the end of an installation or an upgrade. After that time, you can only manually back up the OLR. Automatic backups are not supported for the OLR.
To manually back up OLR:
# ocrconfig -local -manualbackup
To view the contents of the OLR backup file:
#ocrdump -local -backupfile olr_backup_file_name
To change the OLR backup location:
#ocrconfig -local -backuploc new_olr_backup_path
To restore OLR:
# crsctl stop crs
# ocrconfig -local -restore file_name
# ocrcheck -local
# crsctl start crs
$ cluvfy comp olr
=========================================================
Grid Plug and Play ( GPnP ) – new 11.2 feature
Contents
1. Overview
2. GPNPTOOL COMMAND REFERENCE
3. How to extract data from profile.xml in a readable format
4. Updating profile.xml and using of gpntool put
5. References
Overview
GPNP PROFILE
The GPnP profile is a small XML file located in GRID_HOME/gpnp//profiles/peer under the name profile.xml. It is stored in the local OCR and in the cluster OCR. In case of errors the GPnPd daemon re-creates the profile. Never change this XML file directly; instead use tools like:
- asmcmd
- OUI
- oifcfg
- ASMCA
GPnP has two parts, a WALLET and a PROFILE configuration:
PROFILE configuration:
# ls -l $GRID_HOME/gpnp/grac1/profiles/peer/profile.xml
-rw-r--r-- 1 grid oinstall 1891 Jul 17 18:27 /u01/app/11203/grid/gpnp/grac1/profiles/peer/profile.xml
The WALLET information can be found in :
/u01/app/11203/grid/gpnp/grac1/wallets/
This XML profile is used to establish the correct global personality of a node. It is a small XML file containing bootstrap information for the cluster. Each node maintains a local copy of the GPnP profile, which is maintained by the GPnP daemon (GPnPd). The profile doesn't contain any node-specific information and exists on every node in the GPnP cache.
GPnP Profile contains various attributes:
- Cluster name
- Network classifications (Public/Private)
- Storage to be used for CSS
- Storage to be used for ASM : SPFILE location,ASM DiskString etc
- Digital signature information :
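These attributes appear as XML attributes in profile.xml, so they can also be read with ordinary text tools when gpnptool is not available. A sketch against a simplified stand-in for the real profile (the real file is namespaced and digitally signed, so treat this as illustrative only):

```shell
# Pull ClusterName out of a GPnP-style profile. The XML below is a
# simplified stand-in for the real profile.xml schema.
cat > /tmp/sample_profile.xml <<'EOF'
<gpnp:GPnP-Profile ProfileSequence="4" ClusterName="GRACE2"
    ClusterUId="2ae3c3415014ef2abf2ff662c5bf8512"/>
EOF
cluster=$(sed -n 's/.*ClusterName="\([^"]*\)".*/\1/p' /tmp/sample_profile.xml)
echo "$cluster"    # GRACE2
```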
Using gpnptool to verify profile.xml
The gpnptool can be used for reading the gpnp profile.
# $GRID_HOME/bin/gpnptool get ( formatted output )
Warning: some command line parameters were defaulted. Resulting command line:
/u01/app/11203/grid/bin/gpnptool.bin get -o-
ProfileSequence="4"
ClusterUId="2ae3c3415014ef2abf2ff662c5bf8512"
ClusterName="GRACE2"
PALocation="">
The GPnPd daemon replicates changes to the profile, which is modified during the following operations:
- Installation
- System boot
- When updated
The XML profile is updated whenever changes are made to a cluster with configuration tools like:
- oifcfg (Change network)
- crsctl (change location of voting disk)
- asmcmd (change ASM_DISKSTRING, SPfile location) etc.
The first usage of the XML profile is during the booting of Clusterware and the reading of the ASM SPfile. To start the clusterware, the voting disk needs to be accessed. If the voting disk is on ASM, this information is read from the GPnP profile. The voting disk is read using the kfed utility even if ASM is not up. Next, the clusterware checks whether all the nodes have the updated GPnP profile, and the node joins the cluster based on the GPnP configuration. Whenever a node is started or added to the cluster, the clusterware software on the starting node starts a GPnP agent.
- If the node is already part of the cluster, the GPnP agent reads the existing profile on that node.
- If the node is being added to the cluster, the GPnP agent locates an agent on another existing node using the multicast protocol (provided by mDNS) and gets the profile from that agent.
Next, CRSD needs to read the OCR to start up the various resources on the node, and to update it as the status of resources changes. Since the OCR is also on ASM, the location of the ASM SPfile must be known.
The order of searching for the ASM SPfile is:
- GPnP profile
- $ORACLE_HOME/dbs/spfile
- $ORACLE_HOME/dbs/init<SID>.ora
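The fallback logic amounts to "use the first location that exists". A generic sketch of that pattern (the helper is hypothetical, not an Oracle tool, and the file names in the comment are illustrative; real SPfile discovery is done by the instance itself):

```shell
# Return the first readable file from an ordered list of candidates,
# mirroring the SPfile search order described above.
first_readable() {
    local f
    for f in "$@"; do
        if [ -r "$f" ]; then
            echo "$f"
            return 0
        fi
    done
    return 1
}
# e.g. first_readable "$ORACLE_HOME/dbs/spfile+ASM1.ora" \
#                     "$ORACLE_HOME/dbs/init+ASM1.ora"
```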
GPNPTOOL COMMAND REFERENCE
How to read the profile
# $GRID_HOME/bin/gpnptool get
How to check whether the GPnP daemon is running on the local node
# $GRID_HOME/bin/gpnptool lfind
Success. Local gpnpd found.
How to find the location of ASM spfile if the ASM is down
# $GRID_HOME/bin/gpnptool getpval -asm_spf -p=/u01/app/11203/grid/gpnp/grac1/profiles/peer/profile.xml
Warning: some command line parameters were defaulted. Resulting command line:
/u01/app/11203/grid/bin/gpnptool.bin getpval -asm_spf -p=/u01/app/11203/grid/gpnp/grac1/profiles/peer/profile.xml -o-
+DATA/grace2/asmparameterfile/registry.253.821039237
Check if the GPnP configuration is valid
# $GRID_HOME/bin/gpnptool check -p=/u01/app/11203/grid/gpnp/grac1/profiles/peer/profile.xml
Profile cluster="GRACE2", version=4
GPnP profile signed by peer, signature valid.
Got GPnP Service current profile to check against.
Current GPnP Service Profile cluster="GRACE2", version=4
Error: profile version 4 is older than- or duplicate of- GPnP Service current profile version 4.
Profile appears valid, but push will not succeed.
Verify the profile signature
# $GRID_HOME/bin/gpnptool verify -p=/u01/app/11203/grid/gpnp/grac1/profiles/peer/profile.xml -w="file://u01/app/11203/grid/gpnp/grac1/wallets/peer" -wu=peer
Profile signature is valid.
Check if a specific remote GPnPd is responding
# $GRID_HOME/bin/gpnptool find -h=grac2
Found 1 instances of service 'gpnp'.
mdns:service:gpnp._tcp.local.://grac2:37069/agent=gpnpd,cname=GRACE2,host=grac2,pid=3124/gpnpd h:grac2 c:GRACE2
Check whether all peers are responding
# $GRID_HOME/bin/gpnptool find -c=GRACE2
Found 2 instances of service 'gpnp'.
mdns:service:gpnp._tcp.local.://grac2:37069/agent=gpnpd,cname=GRACE2,host=grac2,pid=3124/gpnpd h:grac2 c:GRACE2
mdns:service:gpnp._tcp.local.://grac1:59485/agent=gpnpd,cname=GRACE2,host=grac1,pid=3196/gpnpd h:grac1 c:GRACE2
How to extract data from profile.xml in a readable format
Extract ProfileSequence ClusterName
[grid@grac41 ~]$ $GRID_HOME/bin/gpnptool get 2>/dev/null | xmllint --format - | awk '/ProfileSequence/ { printf("%s %s\n", $9,$11); }'
ProfileSequence="11" ClusterName="grac4"
Extract Network and ASM specific data
[grid@grac41 ~]$ $GRID_HOME/bin/gpnptool get 2>/dev/null | xmllint --format - | egrep 'CSS-Profile|ASM-Profile|Network id'
Script get_profile.sh to check your cluster nodes (note: the remote command must be quoted as a single argument, otherwise everything after the semicolon runs locally instead of over ssh)
#!/bin/bash
host1=grac41
host2=grac42
host3=grac43
echo "*** GPnP Info - Verify profile.xml on all nodes"
ssh $host1 "/bin/hostname; $GRID_HOME/bin/gpnptool get 2>/dev/null | xmllint --format - | awk '/ProfileSequence/ { printf(\"%s %s\n\", \$9, \$11); }'"
ssh $host2 "/bin/hostname; $GRID_HOME/bin/gpnptool get 2>/dev/null | xmllint --format - | awk '/ProfileSequence/ { printf(\"%s %s\n\", \$9, \$11); }'"
ssh $host3 "/bin/hostname; $GRID_HOME/bin/gpnptool get 2>/dev/null | xmllint --format - | awk '/ProfileSequence/ { printf(\"%s %s\n\", \$9, \$11); }'"
ssh $host1 "/bin/hostname; $GRID_HOME/bin/gpnptool get 2>/dev/null | xmllint --format - | egrep 'CSS-Profile|ASM-Profile|Network id'"
ssh $host2 "/bin/hostname; $GRID_HOME/bin/gpnptool get 2>/dev/null | xmllint --format - | egrep 'CSS-Profile|ASM-Profile|Network id'"
ssh $host3 "/bin/hostname; $GRID_HOME/bin/gpnptool get 2>/dev/null | xmllint --format - | egrep 'CSS-Profile|ASM-Profile|Network id'"
Output - should be identical on all nodes
[grid@grac41 ~]$ get_profile.sh
*** GPnP Info - Verify profile.xml on all nodes
grac41.example.com
ProfileSequence="11" ClusterName="grac4"
grac42.example.com
ProfileSequence="11" ClusterName="grac4"
grac43.example.com
ProfileSequence="11" ClusterName="grac4"
grac41.example.com
grac42.example.com
grac43.example.com
DiscoveryString="/dev/asm*,/dev/oracleasm/disks/*"
SPFile="+OCR/grac4/asmparameterfile/spfileCopyASM.ora"/>
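Since the profile must be identical on all nodes, a byte-for-byte comparison is another quick check. A minimal sketch (the helper name, host names and paths are illustrative; on a real cluster the remote copy would first be fetched with scp):

```shell
# Compare two copies of profile.xml; success means identical content.
profiles_match() {
    cmp -s "$1" "$2"
}
# On a real cluster, e.g.:
#   scp grac42:$GRID_HOME/gpnp/grac42/profiles/peer/profile.xml /tmp/p42.xml
#   profiles_match $GRID_HOME/gpnp/grac41/profiles/peer/profile.xml /tmp/p42.xml \
#       && echo "profiles match"
```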
====================================================