Thursday, August 21, 2014

[Oracle] Installing Oracle RAC 11g R2 Cluster on Oracle Linux 5.4 and iSCSI + udev (Part 2)

-- Some parameters used during this Oracle Grid Infrastructure installation refer to the environment configured in the previous post (see Part 1)
-- Install Oracle Grid Infrastructure on node1
01. [grid@node1 ~]$ cd Desktop/grid/
02. [grid@node1 grid]$ ./runInstaller
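
-- runInstaller starts a GUI, so it needs a working X display; if nothing appears, a quick sanity check (a sketch, assuming an X session and the xdpyinfo utility are available to the grid user):
   [grid@node1 grid]$ echo $DISPLAY
   [grid@node1 grid]$ xdpyinfo | head -1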
03. In the [Select Installation Option] window, select [Install and Configure Grid Infrastructure for a Cluster], then click [Next]

04. In the [Select Installation Type] window, select [Advanced Installation], then click [Next]

05. In the [Select Product Languages] window, add [Traditional Chinese], then click [Next]

06. In the [Grid Plug and Play Information] window, enter the following, uncheck [Configure GNS], then click [Next] (since this environment resolves the SCAN through /etc/hosts, resolution can be verified first, as shown below)
       Cluster Name: node-scan
       SCAN Name: node-scan.dba.local
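
       -- A quick check of SCAN name resolution, as mentioned above (getent honors /etc/hosts, unlike nslookup, which queries DNS directly):
       [grid@node1 ~]$ getent hosts node-scan.dba.local
       [grid@node1 ~]$ ping -c 1 node-scan.dba.local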

07. In the [Cluster Node Information] window, click [Add] at the bottom

08. In the [Edit Cluster Node Details] window, enter the following, then click [OK]
       Hostname: node2.dba.local
       Virtual IP Name: node2-vip.dba.local

09. Back in the [Cluster Node Information] window, click [SSH Connectivity] at the bottom

10. In the [Cluster Node Information] window, enter the password for the [grid] account, then click [Setup] (the screenshot mistakenly shows Test)

11. In the [Oracle Grid Infrastructure] dialog, click [OK]

12. In the [Cluster Node Information] window, click [Test]

13. In the [Oracle Grid Infrastructure] dialog, click [OK]

14. Back in the [Cluster Node Information] window, click [Next]

15. In the [Specify Network Interface Usage] window, click [Next]

16. In the [Storage Option Information] window, select [Automatic Storage Management (ASM)], then click [Next]

17. In the [Create ASM Disk Group] window, enter and select the following, then click [Next] (if the disk is missing, see the re-scan commands after this list)
       Disk Group Name: CRS
       Redundancy: External
       Add Disks: select [Candidate Disks]
                  and check [ORCL:CRSVOL1]
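
       -- If [ORCL:CRSVOL1] does not appear under [Candidate Disks], the ASMLib labels created during the Part 1 setup can be re-scanned and listed first (as root, on each node):
       [root@node1 ~]# /usr/sbin/oracleasm scandisks
       [root@node1 ~]# /usr/sbin/oracleasm listdisks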

18. In the [Specify ASM Password] window, select [Use same passwords for these accounts], enter the ASM administrative account password, then click [Next]

19. In the [Oracle Grid Infrastructure] dialog, click [OK]

20. In the [Failure Isolation Support] window, select [Do not use Intelligent Platform Management Interface (IPMI)], then click [Next]

21. In the [Privileged Operating System Groups] window, click [Next]

22. In the [Specify Installation Location] window, click [Next]

24. In the [Create Inventory] window, click [Next]

25. In the [Prerequisite Checks] window, the listed packages are fine as long as the installed version of each is at least as new as the version shown in the screenshot; check [Ignore All], then click [Next]

26. In the [Summary] window, click [Next]

27. When the [Execute Configuration scripts] window appears, switch to the root account and run the .sh scripts on each node in the order shown; once they all finish, click [OK]
      -- Run on node1
         27-1. [grid@node1 ~]$ su
         27-2. [root@node1 grid]# /u01/app/oraInventory/orainstRoot.sh
                      Changing permissions of /u01/app/oraInventory.
                      Adding read,write permissions for group.
                      Removing read,write,execute permissions for world.

                      Changing groupname of /u01/app/oraInventory to oinstall.
                     The execution of the script is complete.

       -- Run on node2
          27-3. [root@node2 ~]# /u01/app/oraInventory/orainstRoot.sh
                        Changing permissions of /u01/app/oraInventory.
                        Adding read,write permissions for group.
                        Removing read,write,execute permissions for world.

                        Changing groupname of /u01/app/oraInventory to oinstall.
                        The execution of the script is complete.

       -- Then run root.sh on node1 first, and on node2 once node1 completes
          27-4. [root@node1 ~]# /u01/app/11.2.0/grid/root.sh
                        Output shown at the bottom of this post (Note 1)
          27-5. [root@node2 ~]# /u01/app/11.2.0/grid/root.sh
                        Output shown at the bottom of this post (Note 2)
 
28. When the [INS-20802: Oracle Cluster Verification Utility failed] error message appears, click [OK]. This error can be ignored only under certain conditions; see the bottom of this post (Note 3)
 
29. In the [Setup] window, click [Next]
 
30. In the [Finish] window, click [Close] to complete the installation
 
-- Verify the Oracle RAC installation on each node
31. [grid@node1 ~]$ crsctl check crs
  CRS-4638: Oracle High Availability Services is online
  CRS-4537: Cluster Ready Services is online
  CRS-4529: Cluster Synchronization Services is online
  CRS-4533: Event Manager is online

32. [grid@node2 ~]$ crsctl check crs
  CRS-4638: Oracle High Availability Services is online
  CRS-4537: Cluster Ready Services is online
  CRS-4529: Cluster Synchronization Services is online
  CRS-4533: Event Manager is online
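
-- The cluster services can also be checked on all nodes at once from a single node:
  [grid@node1 ~]$ crsctl check cluster -all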

33. [grid@node1 ~]$ crs_stat -t -v
  Name           Type           R/RA   F/FT   Target    State     Host       
  ----------------------------------------------------------------------
  ora.CRS.dg     ora....up.type 0/5    0/     ONLINE    ONLINE    node1      
  ora....ER.lsnr ora....er.type 0/5    0/     ONLINE    ONLINE    node1      
  ora....N1.lsnr ora....er.type 0/5    0/0    ONLINE    ONLINE    node1      
  ora.asm        ora.asm.type   0/5    0/     ONLINE    ONLINE    node1      
  ora.eons       ora.eons.type  0/3    0/     ONLINE    ONLINE    node1      
  ora.gsd        ora.gsd.type   0/5    0/     OFFLINE   OFFLINE              
  ora....network ora....rk.type 0/5    0/     ONLINE    ONLINE    node1      
  ora....SM1.asm application    0/5    0/0    ONLINE    ONLINE    node1      
  ora....E1.lsnr application    0/5    0/0    ONLINE    ONLINE    node1      
  ora.node1.gsd  application    0/5    0/0    OFFLINE   OFFLINE              
  ora.node1.ons  application    0/3    0/0    ONLINE    ONLINE    node1      
  ora.node1.vip  ora....t1.type 0/0    0/0    ONLINE    ONLINE    node1      
  ora....SM2.asm application    0/5    0/0    ONLINE    ONLINE    node2      
  ora....E2.lsnr application    0/5    0/0    ONLINE    ONLINE    node2      
  ora.node2.gsd  application    0/5    0/0    OFFLINE   OFFLINE              
  ora.node2.ons  application    0/3    0/0    ONLINE    ONLINE    node2      
  ora.node2.vip  ora....t1.type 0/0    0/0    ONLINE    ONLINE    node2      
  ora.oc4j       ora.oc4j.type  0/5    0/0    OFFLINE   OFFLINE              
  ora.ons        ora.ons.type   0/3    0/     ONLINE    ONLINE    node1      
  ora....ry.acfs ora....fs.type 0/5    0/     ONLINE    ONLINE    node1      
  ora.scan1.vip  ora....ip.type 0/0    0/0    ONLINE    ONLINE    node1 

34. [grid@node2 ~]$ crs_stat -t -v
  Name           Type           R/RA   F/FT   Target    State     Host       
  ----------------------------------------------------------------------
  ora.CRS.dg     ora....up.type 0/5    0/     ONLINE    ONLINE    node1      
  ora....ER.lsnr ora....er.type 0/5    0/     ONLINE    ONLINE    node1      
  ora....N1.lsnr ora....er.type 0/5    0/0    ONLINE    ONLINE    node1      
  ora.asm        ora.asm.type   0/5    0/     ONLINE    ONLINE    node1      
  ora.eons       ora.eons.type  0/3    0/     ONLINE    ONLINE    node1      
  ora.gsd        ora.gsd.type   0/5    0/     OFFLINE   OFFLINE              
  ora....network ora....rk.type 0/5    0/     ONLINE    ONLINE    node1      
  ora....SM1.asm application    0/5    0/0    ONLINE    ONLINE    node1      
  ora....E1.lsnr application    0/5    0/0    ONLINE    ONLINE    node1      
  ora.node1.gsd  application    0/5    0/0    OFFLINE   OFFLINE              
  ora.node1.ons  application    0/3    0/0    ONLINE    ONLINE    node1      
  ora.node1.vip  ora....t1.type 0/0    0/0    ONLINE    ONLINE    node1      
  ora....SM2.asm application    0/5    0/0    ONLINE    ONLINE    node2      
  ora....E2.lsnr application    0/5    0/0    ONLINE    ONLINE    node2      
  ora.node2.gsd  application    0/5    0/0    OFFLINE   OFFLINE              
  ora.node2.ons  application    0/3    0/0    ONLINE    ONLINE    node2      
  ora.node2.vip  ora....t1.type 0/0    0/0    ONLINE    ONLINE    node2      
  ora.oc4j       ora.oc4j.type  0/5    0/0    OFFLINE   OFFLINE              
  ora.ons        ora.ons.type   0/3    0/     ONLINE    ONLINE    node1      
  ora....ry.acfs ora....fs.type 0/5    0/     ONLINE    ONLINE    node1      
  ora.scan1.vip  ora....ip.type 0/0    0/0    ONLINE    ONLINE    node1 
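
-- Note that crs_stat is deprecated in 11g R2; the supported equivalent, which shows the same resources in a tabular layout, is:
  [grid@node1 ~]$ crsctl status resource -t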

35. [grid@node1 ~]$ olsnodes -n
  node1   1
  node2   2

36. [grid@node2 ~]$ olsnodes -n
  node1   1
  node2   2
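
-- olsnodes can also report each node's VIP, active status, and pinned state in one pass:
  [grid@node1 ~]$ olsnodes -n -i -s -t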

37. [grid@node1 ~]$ ps -ef | grep lsnr | grep -v 'grep' | grep -v 'ocfs' | awk '{print $9}'
  LISTENER_SCAN1
  LISTENER

38. [grid@node2 ~]$ ps -ef | grep lsnr | grep -v 'grep' | grep -v 'ocfs' | awk '{print $9}'
  LISTENER
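
-- The SCAN listener found in step 37 can also be checked via srvctl, or queried directly on the node currently hosting it:
  [grid@node1 ~]$ srvctl status scan_listener
  [grid@node1 ~]$ lsnrctl status LISTENER_SCAN1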

39. [grid@node1 ~]$ srvctl status asm -a
  ASM is running on node1,node2
  ASM is enabled.

40. [grid@node2 ~]$ srvctl status asm -a
  ASM is running on node1,node2
  ASM is enabled.
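
-- srvctl can also show how ASM is registered with the cluster (Oracle home, listener, enabled flag):
  [grid@node1 ~]$ srvctl config asm -a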

41. [grid@node1 ~]$ ocrcheck
  Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       2404
         Available space (kbytes) :     259716
         ID                       :  231138437
         Device/File Name         :       +CRS
                                    Device/File integrity check succeeded

                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
         Cluster registry integrity check succeeded
         Logical corruption check bypassed due to non-privileged user
42. [grid@node2 ~]$ ocrcheck
  Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       2404
         Available space (kbytes) :     259716
         ID                       :  231138437
         Device/File Name         :       +CRS
                                    Device/File integrity check succeeded

                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
         Cluster registry integrity check succeeded
         Logical corruption check bypassed due to non-privileged user
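
-- Clusterware also keeps automatic OCR backups (taken roughly every four hours); as root, their locations can be listed with:
  [root@node1 ~]# /u01/app/11.2.0/grid/bin/ocrconfig -showbackup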
43. [grid@node1 ~]$ crsctl query css votedisk
  ##  STATE    File Universal Id                File Name Disk group
  --  -----    -----------------                --------- ---------
   1. ONLINE   29ef427a0ded4f0ebfd38ab442a46622 (ORCL:CRSVOL1) [CRS]
  Located 1 voting disk(s).

44. [grid@node2 ~]$ crsctl query css votedisk
  ##  STATE    File Universal Id                File Name Disk group
  --  -----    -----------------                --------- ---------
   1. ONLINE   29ef427a0ded4f0ebfd38ab442a46622 (ORCL:CRSVOL1) [CRS]
  Located 1 voting disk(s).

 
 -- Back up root.sh on each node
45. [root@node1 ~]# cd /u01/app/11.2.0/grid
46. [root@node1 grid]# cp root.sh root.sh.racnode1.AFTER_INSTALL_20140821
47. [root@node2 ~]# cd /u01/app/11.2.0/grid
48. [root@node2 grid]# cp root.sh root.sh.racnode2.AFTER_INSTALL_20140821
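
-- Beyond keeping a copy of root.sh, taking a manual OCR backup right after installation is a reasonable safeguard; a minimal sketch, run as root on one node:
   [root@node1 ~]# /u01/app/11.2.0/grid/bin/ocrconfig -manualbackup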


Note 1:
[root@node1 ~]# /u01/app/11.2.0/grid/root.sh
  Running Oracle 11g root.sh script...

  The following environment variables are set as:
      ORACLE_OWNER= grid
      ORACLE_HOME=  /u01/app/11.2.0/grid

  Enter the full pathname of the local bin directory: [/usr/local/bin]:
     Copying dbhome to /usr/local/bin ...
     Copying oraenv to /usr/local/bin ...
     Copying coraenv to /usr/local/bin ...


  Creating /etc/oratab file...
  Entries will be added to the /etc/oratab file as needed by
  Database Configuration Assistant when a database is created
  Finished running generic part of root.sh script.
  Now product-specific root actions will be performed.
  2014-08-20 14:27:41: Parsing the host name
  2014-08-20 14:27:41: Checking for super user privileges
  2014-08-20 14:27:41: User has super user privileges
  Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
  Creating trace directory
  LOCAL ADD MODE
  Creating OCR keys for user 'root', privgrp 'root'..
  Operation successful.
    root wallet
    root wallet cert
    root cert export
    peer wallet
    profile reader wallet
    pa wallet
    peer wallet keys
    pa wallet keys
    peer cert request
    pa cert request
    peer cert
    pa cert
    peer root cert TP
    profile reader root cert TP
    pa root cert TP
    peer pa cert TP
    pa peer cert TP
    profile reader pa cert TP
    profile reader peer cert TP
    peer user cert
    pa user cert
  Adding daemon to inittab
  CRS-4123: Oracle High Availability Services has been started.
  ohasd is starting
  CRS-2672: Attempting to start 'ora.gipcd' on 'node1'
  CRS-2672: Attempting to start 'ora.mdnsd' on 'node1'
  CRS-2676: Start of 'ora.mdnsd' on 'node1' succeeded
  CRS-2676: Start of 'ora.gipcd' on 'node1' succeeded
  CRS-2672: Attempting to start 'ora.gpnpd' on 'node1'
  CRS-2676: Start of 'ora.gpnpd' on 'node1' succeeded
  CRS-2672: Attempting to start 'ora.cssdmonitor' on 'node1'
  CRS-2676: Start of 'ora.cssdmonitor' on 'node1' succeeded
  CRS-2672: Attempting to start 'ora.cssd' on 'node1'
  CRS-2672: Attempting to start 'ora.diskmon' on 'node1'
  CRS-2676: Start of 'ora.diskmon' on 'node1' succeeded
  CRS-2676: Start of 'ora.cssd' on 'node1' succeeded
  CRS-2672: Attempting to start 'ora.ctssd' on 'node1'
  CRS-2676: Start of 'ora.ctssd' on 'node1' succeeded

  ASM created and started successfully.

  DiskGroup CRS created successfully.

  clscfg: -install mode specified
  Successfully accumulated necessary OCR keys.
  Creating OCR keys for user 'root', privgrp 'root'..
  Operation successful.
  CRS-2672: Attempting to start 'ora.crsd' on 'node1'
  CRS-2676: Start of 'ora.crsd' on 'node1' succeeded
  CRS-4256: Updating the profile
  Successful addition of voting disk 29ef427a0ded4f0ebfd38ab442a46622.
  Successfully replaced voting disk group with +CRS.
  CRS-4256: Updating the profile
  CRS-4266: Voting file(s) successfully replaced
  ##  STATE    File Universal Id                File Name Disk group
  --  -----    -----------------                --------- ---------
   1. ONLINE   29ef427a0ded4f0ebfd38ab442a46622 (ORCL:CRSVOL1) [CRS]
  Located 1 voting disk(s).
  CRS-2673: Attempting to stop 'ora.crsd' on 'node1'
  CRS-2677: Stop of 'ora.crsd' on 'node1' succeeded
  CRS-2673: Attempting to stop 'ora.asm' on 'node1'
  CRS-2677: Stop of 'ora.asm' on 'node1' succeeded
  CRS-2673: Attempting to stop 'ora.ctssd' on 'node1'
  CRS-2677: Stop of 'ora.ctssd' on 'node1' succeeded
  CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'node1'
  CRS-2677: Stop of 'ora.cssdmonitor' on 'node1' succeeded
  CRS-2673: Attempting to stop 'ora.cssd' on 'node1'
  CRS-2677: Stop of 'ora.cssd' on 'node1' succeeded
  CRS-2673: Attempting to stop 'ora.gpnpd' on 'node1'
  CRS-2677: Stop of 'ora.gpnpd' on 'node1' succeeded
  CRS-2673: Attempting to stop 'ora.gipcd' on 'node1'
  CRS-2677: Stop of 'ora.gipcd' on 'node1' succeeded
  CRS-2673: Attempting to stop 'ora.mdnsd' on 'node1'
  CRS-2677: Stop of 'ora.mdnsd' on 'node1' succeeded
  CRS-2672: Attempting to start 'ora.mdnsd' on 'node1'
  CRS-2676: Start of 'ora.mdnsd' on 'node1' succeeded
  CRS-2672: Attempting to start 'ora.gipcd' on 'node1'
  CRS-2676: Start of 'ora.gipcd' on 'node1' succeeded
  CRS-2672: Attempting to start 'ora.gpnpd' on 'node1'
  CRS-2676: Start of 'ora.gpnpd' on 'node1' succeeded
  CRS-2672: Attempting to start 'ora.cssdmonitor' on 'node1'
  CRS-2676: Start of 'ora.cssdmonitor' on 'node1' succeeded
  CRS-2672: Attempting to start 'ora.cssd' on 'node1'
  CRS-2672: Attempting to start 'ora.diskmon' on 'node1'
  CRS-2676: Start of 'ora.diskmon' on 'node1' succeeded
  CRS-2676: Start of 'ora.cssd' on 'node1' succeeded
  CRS-2672: Attempting to start 'ora.ctssd' on 'node1'
  CRS-2676: Start of 'ora.ctssd' on 'node1' succeeded
  CRS-2672: Attempting to start 'ora.asm' on 'node1'
  CRS-2676: Start of 'ora.asm' on 'node1' succeeded
  CRS-2672: Attempting to start 'ora.crsd' on 'node1'
  CRS-2676: Start of 'ora.crsd' on 'node1' succeeded
  CRS-2672: Attempting to start 'ora.evmd' on 'node1'
  CRS-2676: Start of 'ora.evmd' on 'node1' succeeded
  CRS-2672: Attempting to start 'ora.asm' on 'node1'
  CRS-2676: Start of 'ora.asm' on 'node1' succeeded
  CRS-2672: Attempting to start 'ora.CRS.dg' on 'node1'
  CRS-2676: Start of 'ora.CRS.dg' on 'node1' succeeded
  CRS-2672: Attempting to start 'ora.registry.acfs' on 'node1'
  CRS-2676: Start of 'ora.registry.acfs' on 'node1' succeeded

  node1     2014/08/20 14:32:34     /u01/app/11.2.0/grid/cdata/node1/backup_20140820_143234.olr
  Configure Oracle Grid Infrastructure for a Cluster ... succeeded
  Updating inventory properties for clusterware
  Starting Oracle Universal Installer...

  Checking swap space: must be greater than 500 MB.   Actual 5951 MB    Passed
  The inventory pointer is located at /etc/oraInst.loc
  The inventory is located at /u01/app/oraInventory
  'UpdateNodeList' was successful.

 
Note 2:
[root@node2 ~]# /u01/app/oraInventory/orainstRoot.sh
  Changing permissions of /u01/app/oraInventory.
  Adding read,write permissions for group.
  Removing read,write,execute permissions for world.

  Changing groupname of /u01/app/oraInventory to oinstall.
  The execution of the script is complete.
  [root@node2 ~]# /u01/app/11.2.0/grid/root.sh
  Running Oracle 11g root.sh script...

  The following environment variables are set as:
      ORACLE_OWNER= grid
      ORACLE_HOME=  /u01/app/11.2.0/grid

  Enter the full pathname of the local bin directory: [/usr/local/bin]:
     Copying dbhome to /usr/local/bin ...
     Copying oraenv to /usr/local/bin ...
     Copying coraenv to /usr/local/bin ...


  Creating /etc/oratab file...
  Entries will be added to the /etc/oratab file as needed by
  Database Configuration Assistant when a database is created
  Finished running generic part of root.sh script.
  Now product-specific root actions will be performed.
  2014-08-20 14:34:12: Parsing the host name
  2014-08-20 14:34:12: Checking for super user privileges
  2014-08-20 14:34:12: User has super user privileges
  Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
  Creating trace directory
  LOCAL ADD MODE
  Creating OCR keys for user 'root', privgrp 'root'..
  Operation successful.
  Adding daemon to inittab
  CRS-4123: Oracle High Availability Services has been started.
  ohasd is starting
  CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node node1, number 1, and is terminating
  An active cluster was found during exclusive startup, restarting to join the cluster
  CRS-2672: Attempting to start 'ora.mdnsd' on 'node2'
  CRS-2676: Start of 'ora.mdnsd' on 'node2' succeeded
  CRS-2672: Attempting to start 'ora.gipcd' on 'node2'
  CRS-2676: Start of 'ora.gipcd' on 'node2' succeeded
  CRS-2672: Attempting to start 'ora.gpnpd' on 'node2'
  CRS-2676: Start of 'ora.gpnpd' on 'node2' succeeded
  CRS-2672: Attempting to start 'ora.cssdmonitor' on 'node2'
  CRS-2676: Start of 'ora.cssdmonitor' on 'node2' succeeded
  CRS-2672: Attempting to start 'ora.cssd' on 'node2'
  CRS-2672: Attempting to start 'ora.diskmon' on 'node2'
  CRS-2676: Start of 'ora.diskmon' on 'node2' succeeded
  CRS-2676: Start of 'ora.cssd' on 'node2' succeeded
  CRS-2672: Attempting to start 'ora.ctssd' on 'node2'
  CRS-2676: Start of 'ora.ctssd' on 'node2' succeeded
  CRS-2672: Attempting to start 'ora.drivers.acfs' on 'node2'
  CRS-2676: Start of 'ora.drivers.acfs' on 'node2' succeeded
  CRS-2672: Attempting to start 'ora.asm' on 'node2'
  CRS-2676: Start of 'ora.asm' on 'node2' succeeded
  CRS-2672: Attempting to start 'ora.crsd' on 'node2'
  CRS-2676: Start of 'ora.crsd' on 'node2' succeeded
  CRS-2672: Attempting to start 'ora.evmd' on 'node2'
  CRS-2676: Start of 'ora.evmd' on 'node2' succeeded

  node2     2014/08/20 14:36:49     /u01/app/11.2.0/grid/cdata/node2/backup_20140820_143649.olr
  Configure Oracle Grid Infrastructure for a Cluster ... succeeded
  Updating inventory properties for clusterware
  Starting Oracle Universal Installer...

  Checking swap space: must be greater than 500 MB.   Actual 5951 MB    Passed
  The inventory pointer is located at /etc/oraInst.loc
  The inventory is located at /u01/app/oraInventory
  'UpdateNodeList' was successful.

 
Note 3:
-- The INS-20802 Oracle Cluster Verification Utility failed error message appears
In this environment the SCAN IP is defined in /etc/hosts rather than in DNS, which is why the error appears; if you can ping the SCAN name manually and it succeeds, the error can be ignored.
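
-- The verification the installer ran can be repeated by hand; with a hosts-file SCAN it will still flag the single-IP setup, which is expected in this environment:
[grid@node1 ~]$ /u01/app/11.2.0/grid/bin/cluvfy comp scan -verbose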
-- On node1
[grid@node1 ~]$ ping node-scan.dba.local
PING node-scan.dba.local (192.168.200.50) 56(84) bytes of data.
64 bytes from node-scan.dba.local (192.168.200.50): icmp_seq=1 ttl=64 time=0.022 ms
64 bytes from node-scan.dba.local (192.168.200.50): icmp_seq=2 ttl=64 time=0.029 ms
64 bytes from node-scan.dba.local (192.168.200.50): icmp_seq=3 ttl=64 time=0.029 ms
64 bytes from node-scan.dba.local (192.168.200.50): icmp_seq=4 ttl=64 time=0.025 ms

-- On node2
[root@node2 ~]# ping node-scan.dba.local
PING node-scan.dba.local (192.168.200.50) 56(84) bytes of data.
64 bytes from node-scan.dba.local (192.168.200.50): icmp_seq=1 ttl=64 time=1.27 ms
64 bytes from node-scan.dba.local (192.168.200.50): icmp_seq=2 ttl=64 time=0.313 ms
64 bytes from node-scan.dba.local (192.168.200.50): icmp_seq=3 ttl=64 time=0.287 ms
64 bytes from node-scan.dba.local (192.168.200.50): icmp_seq=4 ttl=64 time=0.217 ms
64 bytes from node-scan.dba.local (192.168.200.50): icmp_seq=5 ttl=64 time=0.226 ms
64 bytes from node-scan.dba.local (192.168.200.50): icmp_seq=6 ttl=64 time=0.223 ms


The following is excerpted from the Oracle support note (PRVF-4664 PRVF-4657: Found inconsistent name resolution entries for SCAN name (Doc ID 887471.1)):
Cause 1. SCAN name is expected to be resolved by local hosts file

SCAN name is resolved by the local hosts file (/etc/hosts or %SystemRoot%\system32\drivers\etc\hosts) instead of DNS or GNS.
Solution: Oracle strongly recommends using DNS or GNS for SCAN name resolution, as the hosts file supports only one IP for the SCAN.
If the intention is to use the hosts file for SCAN name resolution, and the ping command returns the correct SCAN VIP, you can ignore the error and move forward.
If the intention is to use DNS or GNS for SCAN name resolution, comment out the SCAN name entries in the local hosts file on all nodes, and re-run "$GRID_HOME/bin/cluvfy comp scan" to confirm.
 

Cause 2. nslookup fails to find a record for the SCAN name:

nslookup cluscan.us.oracle.com
 ..

 ** server can't find cluscan.us.oracle.com: NXDOMAIN

Solution: Engage the System Administrator (SA) to check the resolver configuration (/etc/resolv.conf on Linux/UNIX), correct any misconfiguration on all nodes, and re-run "$GRID_HOME/bin/cluvfy comp scan" to confirm.

Cause 3. SCAN name is a canonical name (CNAME record) in DNS

nslookup cluscan.us.oracle.com
 ..
cluscan.us.oracle.com     canonical name = cluscan.a3.oracle.com
 Name:   cluscan.a3.oracle.com
 Address: 10.4.0.201
 Name:   cluscan.a3.oracle.com
 Address: 10.4.0.202
 Name:   cluscan.a3.oracle.com
 Address: 10.4.0.203


Solution: Engage the SA to update the SCAN record in DNS to an A record instead of a CNAME.

Cause 4. SCAN is configured properly in DNS, but another name resolution service (NIS, LDAP, ...) is being used and does not have the proper SCAN info.
Solution: Engage the SA to check the name service switch configuration (/etc/nsswitch.conf on Linux, Solaris, and HP-UX, or /etc/netsvc.conf on AIX) and correct any misconfiguration on all nodes. Example hosts entry in nsswitch.conf:

hosts:      files dns nis

Once it is corrected, execute "$GRID_HOME/bin/cluvfy comp scan" to confirm.

Cause 5. The persistent cache for nscd has incorrect information.
Solution: Engage the SA to restart nscd and clear its persistent cache on all nodes. Example on Linux:

# /sbin/service nscd restart
# /usr/sbin/nscd --invalidate=hosts

Once it is corrected, execute "$GRID_HOME/bin/cluvfy comp scan" to confirm.
 
------------------------------------------------------------------------------------------------------------
What's the expected output when executing nslookup

nslookup cluscan.us.oracle.com
 ..
 Name:   cluscan.us.oracle.com
 Address: 10.4.0.201
 Name:   cluscan.us.oracle.com
 Address: 10.4.0.202
 Name:   cluscan.us.oracle.com
 Address: 10.4.0.203


ping -c 1 cluscan.us.oracle.com
 PING cluscan.us.oracle.com (10.4.0.201) 56(84) bytes of data.
 64 bytes from cluscan.us.oracle.com (10.4.0.201): icmp_seq=1 ttl=64 time=0.258 ms

 --- cluscan.us.oracle.com ping statistics ---
 1 packets transmitted, 1 received, 0% packet loss, time 0ms
 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms

ping -c 1 cluscan.us.oracle.com
 PING cluscan.us.oracle.com (10.4.0.202) 56(84) bytes of data.
 64 bytes from cluscan.us.oracle.com (10.4.0.202): icmp_seq=1 ttl=64 time=0.258 ms

 --- cluscan.us.oracle.com ping statistics ---
 1 packets transmitted, 1 received, 0% packet loss, time 0ms
 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms

ping -c 1 cluscan.us.oracle.com
 PING cluscan.us.oracle.com (10.4.0.203) 56(84) bytes of data.
 64 bytes from cluscan.us.oracle.com (10.4.0.203): icmp_seq=1 ttl=64 time=0.258 ms

 --- cluscan.us.oracle.com ping statistics ---
 1 packets transmitted, 1 received, 0% packet loss, time 0ms
 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms

From the above you can see:
   1. nslookup returns the proper SCAN name and IPs
   2. ping of the SCAN name is round-robin resolved by DNS to a different IP each time

If you see behaviour different from the above, please engage the SA to investigate.
 ------------------------------------------------------------------------------------------------------------
 
Ping command reference
 

Linux:       /bin/ping -c 1 cluscan.us.oracle.com
Solaris:     /usr/sbin/ping -s cluscan.us.oracle.com 1500 1
HP-UX:       /usr/sbin/ping cluscan.us.oracle.com -n 1
AIX:         /usr/sbin/ping -c 1 cluscan.us.oracle.com
Windows:     ping -n 1 cluscan.us.oracle.com


 ------------------------------------------------------------------------------------------------------------

Installing Oracle RAC 11g R2 Cluster on Oracle Linux 5.4 and iSCSI + udev (Part 1)
Installing Oracle RAC 11g R2 Cluster on Oracle Linux 5.4 and iSCSI + udev (Part 3)

 
