Check the location of the voting disk
$ crsctl query css votedisk
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 0b268c78cdb24f96bfd4d9e01083c9b0 (/dev/oracleasm/asm-data01) [DATA]
Located 1 voting disk(s).
$
– We can see that only one copy of the voting disk exists, on the DATA disk group, which has external redundancy. Oracle writes the voting files to the underlying disks at pre-designated locations so that it can read their contents when the cluster starts up.
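The disk group's redundancy level can be confirmed directly from ASM; a sketch assuming an OS-authenticated SYSASM login on the node (the DATA name comes from the output above, everything else is illustrative):

```shell
# Query ASM for the DATA disk group's redundancy level.
# TYPE is one of EXTERN, NORMAL, or HIGH. Requires a running
# ASM instance and SYSASM privileges; illustrative only.
sqlplus -S / as sysasm <<'EOF'
SELECT name, type FROM v$asm_diskgroup WHERE name = 'DATA';
EOF
```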
To relocate the voting disk
– Create a new disk group, OCRVD, with normal redundancy and two disks.
– Try to move the voting disk from the DATA disk group to the OCRVD disk group.
– The operation fails because a normal-redundancy disk group needs at least three disks (failure groups) to hold the voting files.
$ crsctl replace votedisk +OCRVD
Failed to create voting files on disk group OCRVD.
Change to configuration failed, but was successfully rolled back.
CRS-4000: Command Replace failed, or completed with errors.
– Add a third disk to the OCRVD disk group and mark it as a quorum disk. The quorum disk can be small (500 MB is on the safe side, since a voting file is only about 280 MB) and holds one mirror of the voting file. The other two disks each hold one voting file plus all the other stripes of the disk group, while the quorum disk holds only its voting file.
– Now try again to move the voting disk from the DATA disk group to the OCRVD disk group.
– This time the operation succeeds.
$ crsctl replace votedisk +OCRVD
Successful addition of voting disk 00ce3c95c6534f44bfffa645a3430bc3.
Successful addition of voting disk a3751063aec14f8ebfe8fb89fccf45ff.
Successful addition of voting disk 0fce89ac35834f99bff7b04ccaaa8006.
Successful deletion of voting disk 243ec3b2a3cf4fbbbfed6f20a1ef4319.
Successfully replaced voting disk group with +OCRVD.
CRS-4266: Voting file(s) successfully replaced
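The quorum-disk step above can be sketched as an ALTER DISKGROUP statement; the disk path and failure-group name here are hypothetical:

```shell
# Hedged sketch: add a small quorum failure group to OCRVD so the
# normal-redundancy disk group gains the third failure group that
# voting files require. A quorum failgroup holds only a voting file,
# never database data. Path and names are illustrative only.
sqlplus -S / as sysasm <<'EOF'
ALTER DISKGROUP OCRVD
  ADD QUORUM FAILGROUP fg_quorum DISK '/dev/oracleasm/asm-quorum01';
EOF
```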
Why an odd number of voting disks?
A node must be able to access a strict majority of the voting disks to remain in the cluster. With (2n-1) voting disks a majority is n, so up to n-1 failures can be tolerated; with 2n disks a majority is n+1, which also tolerates only n-1 failures. The 2n-th disk therefore adds no fault tolerance, so an odd number (2n-1) of voting disks is preferred.
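The majority arithmetic can be checked with a quick loop in plain shell, independent of Oracle:

```shell
# With n voting disks a node must see a strict majority, i.e.
# n/2 + 1 disks (integer division), so the cluster tolerates
# n - (n/2 + 1) failures. Note 3 and 4 disks tolerate the same
# single failure, as do 5 and 6.
for n in 1 2 3 4 5 6 7; do
  echo "$n disks: tolerates $(( n - n/2 - 1 )) failure(s)"
done
```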
Space occupied by the OCR and voting disk
[oracle@example1a ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 4
Total space (kbytes) : 409568
Used space (kbytes) : 4044
Available space (kbytes) : 405524
ID : 903643710
Device/File Name : +DATA
Device/File integrity check succeeded
Device/File not configured
Device/File not configured
Device/File not configured
Device/File not configured
Cluster registry integrity check succeeded
Logical corruption check bypassed due to non-privileged user
[oracle@example1a ~]$
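As a sanity check, the used and available figures reported by ocrcheck should sum to the total; the numbers from the output above check out:

```shell
# Verify the ocrcheck space figures are self-consistent:
# used + available should equal total (all in kbytes).
total=409568; used=4044; avail=405524
[ $(( used + avail )) -eq "$total" ] && echo "space figures consistent"
```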
Viewing Available OCR Backups
[oracle@example1a ~]$ ocrconfig -showbackup
example1a 2018/07/31 11:59:50 /u01/app/12.1.0/grid/cdata/example1/backup00.ocr 0
example1a 2018/07/31 07:59:50 /u01/app/12.1.0/grid/cdata/example1/backup01.ocr 0
example1a 2018/07/31 03:59:50 /u01/app/12.1.0/grid/cdata/example1/backup02.ocr 0
example1a 2018/07/30 03:59:48 /u01/app/12.1.0/grid/cdata/example1/day.ocr 0
example1a 2018/07/20 23:59:31 /u01/app/12.1.0/grid/cdata/example1/week.ocr 0
PROT-25: Manual backups for the Oracle Cluster Registry are not available
[oracle@example1a ~]$
To manually back up the contents of the OCR:
Log in as the root user.
Use the following command to force Oracle Clusterware to perform an immediate backup of the OCR:
# ocrconfig -manualbackup
The date and identifier of the newly generated OCR backup are displayed.
If you must change the location for the OCR backup files, then use the following command, where directory_name is the new location for the backups:
# ocrconfig -backuploc directory_name
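In addition to the automatic physical backups, a logical export of the OCR can be scripted to separate storage; a hedged sketch, run as root, where the /backup/ocr target directory is an assumption:

```shell
# Take a logical OCR export alongside the automatic physical backups.
# /backup/ocr is a hypothetical directory on shared or remote storage;
# ocrconfig -export must be run as root.
ocrconfig -export /backup/ocr/ocr_export_$(date +%Y%m%d).dmp
```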
Find the current/active version of Oracle Clusterware
[oracle@example1a ~]$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [12.1.0.2.0]
[oracle@example1a ~]$
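During a rolling upgrade the software version installed on an individual node can be higher than the cluster-wide active version; both can be queried to spot the difference:

```shell
# Software version installed on this node vs. the lowest version
# running anywhere in the cluster (the active version). They match
# except in the middle of a rolling upgrade.
crsctl query crs softwareversion
crsctl query crs activeversion
```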
Repairing an OCR Configuration on a Local Node
As the root user, run one or more of the following commands on the node on which Oracle Clusterware is stopped, depending on the number and type of changes that were made to the OCR configuration:
[root]# ocrconfig -repair -add new_ocr_file_name
[root]# ocrconfig -repair -delete ocr_file_name
[root]# ocrconfig -repair -replace source_ocr_file -replacement dest_ocr_file
These commands update the OCR configuration only on the node from which you run the command.
Restart Oracle Clusterware on the node you have just repaired.
As the root user, check the OCR configuration integrity of your cluster using the following command:
[root]# ocrcheck
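After restarting Clusterware on the repaired node, the stack and the OCR can also be verified cluster-wide; a sketch using the standard tools:

```shell
# Confirm the Clusterware stack is healthy on this node, then check
# OCR integrity across all cluster nodes with the cluster
# verification utility.
crsctl check crs
cluvfy comp ocr -n all
```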