How to install Oracle RAC on Sun Cluster?

 


Step-By-Step Installation of RAC on Sun Cluster v3

This document will provide the reader with step-by-step instructions on how to install a cluster, install Oracle Real Application Clusters (RAC) and start a cluster database on Sun Cluster v3. For additional explanation or information on any of these steps, please see the references listed at the end of this document.

 

1. Configuring the Cluster Hardware


1.1 Minimal Hardware List / System Requirements

For a two-node cluster, the following is the minimum recommended hardware list.

1.1.1 Hardware

·         For Sun™ servers, Sun or third-party storage products, cluster interconnects, public networks, switch options, and memory, swap, and CPU requirements, consult the operating system or hardware vendor (a few quick resource checks are shown after this list). Sun's interconnect, RSM, is now supported in v9.2.0.2 with patch number 2454989.

·         System disk partitions

·         /globaldevices - a 100 Mbyte file system that will be used by the scinstall(1M) utility for global devices.

·         Volume manager - a 10 Mbyte partition for volume manager use on a slice at the end of the disk (slice 7). If your cluster uses VERITAS Volume Manager (VxVM) and you intend to encapsulate the root disk, you need two unused slices available for use by VxVM.
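
As a quick sanity check before installation, the following standard Solaris utilities report a node's memory, swap, and processor resources (these are generic Solaris commands rather than anything Sun Cluster specific; output varies by system):

$ prtconf | grep Memory
$ swap -l
$ psrinfo -v

Run them on every node and compare the results against the vendor requirements above.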

As with any other system running the Solaris Operating System (SPARC) environment, you can configure the root (/), /var, /usr, and /opt directories as separate file systems, or you can include all the directories in the root (/) file system. The following describes the software contents of the root (/), /var, /usr, and /opt directories in a Sun Cluster configuration. Consider this information when you plan your partitioning scheme.

root (/) - The Sun Cluster software itself occupies less than 40 Mbytes of space in the root (/) file system. For best results, you need to configure ample additional space and inode capacity for the creation of both block special devices and character special devices used by VxVM software, especially if a large number of shared disks are in the cluster. Therefore, add at least 100 Mbytes to the amount of space you would normally allocate for your root (/) file system.

/var - The Sun Cluster software occupies a negligible amount of space in the /var file system at installation time. However, you need to set aside ample space for log files. Also, more messages might be logged on a clustered node than would be found on a typical standalone server. Therefore, allow at least 100 Mbytes for the /var file system.

/usr - Sun Cluster software occupies less than 25 Mbytes of space in the /usr file system. VxVM software requires less than 15 Mbytes.

/opt - Sun Cluster framework software uses less than 2 Mbytes in the /opt file system. However, each Sun Cluster data service might use between 1 Mbyte and 5 Mbytes. VxVM software can use over 40 Mbytes if all of its packages and tools are installed. In addition, most database and applications software is installed in the /opt file system. If you use Sun Management Center software to monitor the cluster, you need an additional 25 Mbytes of space on each node to support the Sun Management Center agent and Sun Cluster module packages.

An example system disk layout follows:

 

Slice  Contents         Allocation (Mbytes)  Description
0      /                1168                 441 Mbytes for Solaris Operating System (SPARC) environment software; 100 Mbytes extra for root (/); 100 Mbytes extra for /var; 25 Mbytes for Sun Cluster software; 55 Mbytes for volume manager software; 1 Mbyte for Sun Cluster HA for NFS software; 25 Mbytes for the Sun Management Center agent and Sun Cluster module agent packages; 421 Mbytes (the remaining free space on the disk) for possible future use by database and application software.
1      swap             750                  Minimum size when physical memory is less than 750 Mbytes.
2      overlap          2028                 The entire disk.
3      /globaldevices   100                  The Sun Cluster software later assigns this slice a different mount point and mounts it as a cluster file system.
4      unused           -                    Available as a free slice for encapsulating the root disk under VxVM.
5      unused           -
6      unused           -
7      volume manager   10                   Used by VxVM for installation after you free the slice.
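
To confirm that a boot disk has actually been partitioned along these lines, you can print its volume table of contents with the standard Solaris prtvtoc utility (the device name below is only an example; substitute your own boot disk):

# prtvtoc /dev/rdsk/c0t0d0s2

The output lists each slice with its starting sector and size, which can be checked against the layout table above.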

1.1.2 Software

·         For Solaris Operating System (SPARC)™, Sun Cluster, volume manager, and file system support, consult the operating system vendor. Sun Cluster provides scalable services with Global File Systems (GFS) based around the Proxy File System (PXFS). PXFS makes file access location-transparent and is Sun's implementation of cluster file systems. Currently, Sun does not support anything relating to GFS for RAC; check with Sun for updates on this status.

1.1.3 Patches

The Sun Cluster nodes might require patches in the following areas:

·         Solaris Operating System (SPARC) Environment patches

·         Storage Array interface firmware patches

·         Storage Array disk drive firmware patches

·         Veritas Volume Manager patches

Some patches, such as those for Veritas Volume Manager, cannot be installed until after the volume management software installation is complete. Before installing any patches, always do the following:

·         make sure all cluster nodes have the same patch levels

·         do not install any firmware-related patches without qualified assistance

·         always obtain the most current patch information

·         read all patch README notes carefully.

Specific Solaris Operating System (SPARC) patches may be required, and it is recommended that the latest Solaris Operating System (SPARC) release, Sun's recommended patch clusters, and the Sun Cluster updates are applied. Current Sun Cluster updates include release 11/00, update one (07/01), update two (12/01), and update three (05/02). To determine which patches have been installed, enter the following command:

$ showrev -p

For the latest Sun Cluster 3.0 required patches, see SunSolve document ID 24617, Sun Cluster 3.0 Early Notifier.
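
Because all cluster nodes should be at the same patch level, one simple approach is to capture the patch list on each node and compare them with diff. The node name node2 below is a placeholder, and remote shell access between the nodes is assumed:

$ showrev -p | sort > /tmp/patches.node1
$ rsh node2 "showrev -p | sort" > /tmp/patches.node2
$ diff /tmp/patches.node1 /tmp/patches.node2

Any lines reported by diff indicate patches present on one node but not the other.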

1.2 Installing Sun StorEdge Disk Arrays

Follow the procedures for an initial installation of StorEdge disk enclosures or arrays before installing the Solaris Operating System (SPARC) environment and Sun Cluster software. Perform this procedure in conjunction with the procedures in the Sun Cluster 3.0 Software Installation Guide and your server hardware manual. Multihost storage in clusters uses the multi-initiator capability of the Small Computer System Interface (SCSI) specification. For conceptual information on multi-initiator capability, see the Sun Cluster 3.0 Concepts document.

 

1.3 Installing Cluster Interconnect and Public Network Hardware

The following procedures are needed for installing cluster hardware during an initial cluster installation, before Sun Cluster software is installed. Separate procedures need to be followed for installing Ethernet-based interconnect hardware, PCI-SCI-based interconnect hardware, and public network hardware (see Sun's current installation notes).

·         If not already installed, install host adapters in your cluster nodes. For the procedure on installing host adapters, see the documentation that shipped with your host adapters and node hardware. Install the transport cables (and optionally, transport junctions), depending on how many nodes are in your cluster:

·         A cluster with only two nodes can use a point-to-point connection, requiring no cluster transport junctions. Use a point-to-point (crossover) Ethernet cable if you are connecting 100BaseT or TPE ports of a node directly to ports on another node. Gigabit Ethernet uses the standard fiber optic cable for both point-to-point and switch configurations.

Note: If you use a transport junction in a two-node cluster, you can add additional nodes to the cluster without bringing the cluster offline to reconfigure the transport path.

·         A cluster with more than two nodes requires two cluster transport junctions. These transport junctions are Ethernet-based switches (customer-supplied).

You install the cluster software and configure the interconnect after you have installed all other hardware.
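
Before installing the cluster software, it can be useful to record which network adapters each node has available for the public network and the interconnect. The following standard Solaris commands help with this; the driver name ge below is only an example (your adapters might be hme, qfe, and so on):

# ifconfig -a
# grep ge /etc/path_to_inst

The first command lists the interfaces that are already plumbed; the second lists the installed instances of a given adapter driver.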

 

 

2. Creating a Cluster

2.1 Sun Cluster Software Installation

The Sun Cluster v3 host system (node) installation process is completed in several major steps. The general process is:

·         repartition the boot disks to meet Sun Cluster v3 requirements

·         install the Solaris Operating System (SPARC) Environment software

·         configure the cluster host systems environment

·         install Solaris 8 Operating System (SPARC) Environment patches

·         install hardware-related patches

·         install Sun Cluster v3 on the first cluster node

·         install Sun Cluster v3 on the remaining nodes

·         install any Sun Cluster patches and updates

·         perform postinstallation checks and configuration

You can use two methods to install the Sun Cluster v3 software on the cluster nodes:

·         interactive installation using the scinstall installation interface

·         automatic JumpStart installation (requires a pre-existing Solaris Operating System (SPARC) JumpStart server)

This note assumes an interactive installation of Sun Cluster v3 with update 2. The Sun Cluster installation program, scinstall, is located on the Sun Cluster v3 CD in the /cdrom/suncluster_3_0_u2/SunCluster_3.0/Tools directory. When you start the program without any options, it prompts you for cluster configuration information that is stored for use later in the process. Although the Sun Cluster software can be installed on all nodes in parallel, the simplest approach is to complete the installation on the first node and then run scinstall on all other nodes in parallel. The additional nodes obtain some basic information from the first, or sponsoring, node that was configured.

2.2 Form a One-Node Cluster

As root:

# cd /cdrom/suncluster_3_0_u2/SunCluster_3.0/Tools

# ./scinstall

*** Main Menu ***

Please select from one of the following (*) options:

* 1) Establish a new cluster using this machine as the first node
* 2) Add this machine as a node in an established cluster
3) Configure a cluster to be JumpStarted from this install server
4) Add support for new data services to this cluster node
5) Print release information for this cluster node

* ?) Help with menu options
* q) Quit

Option: 1

*** Establishing a New Cluster ***
...
Do you want to continue (yes/no) [yes]? yes

When prompted whether to continue to install Sun Cluster software packages, type yes.

>>> Software Package Installation <<<

Installation of the Sun Cluster framework software packages will take a few minutes to complete.

Is it okay to continue (yes/no) [yes]? yes

** Installing SunCluster 3.0 **
SUNWscr.....done
...Hit ENTER to continue:

After all packages are installed, press Return to continue to the next screen.

Specify the cluster name.

>>> Cluster Name <<<
...
What is the name of the cluster you want to establish? clustername

Run the preinstallation check.

>>> Check <<<

This step runs sccheck(1M) to verify that certain basic hardware and
software pre-configuration requirements have been met. If sccheck(1M)
detects potential problems with configuring this machine as a cluster
node, a list of warnings is printed.

Hit ENTER to continue:

Specify the names of the other nodes that will become part of this cluster.

>>> Cluster Nodes <<<
...
Node name: node2
Node name (Ctrl-D to finish): <Control-D>

This is the complete list of nodes:
...
Is it correct (yes/no) [yes]?

Specify whether to use Data Encryption Standard (DES) authentication.

By default, Sun Cluster software permits a node to connect to the cluster only if the node is physically connected to the private interconnect and if the node name was specified. However, the node actually communicates with the sponsoring node over the public network, since the private interconnect is not yet fully configured. DES authentication provides an additional level of security at installation time by enabling the sponsoring node to more reliably authenticate nodes that attempt to contact it to update the cluster configuration.

If you choose to use DES authentication for additional security, you must configure all necessary encryption keys before any node can join the cluster. See the keyserv(1M) and publickey(4) man pages for details.

>>> Authenticating Requests to Add Nodes <<<
...
Do you need to use DES authentication (yes/no) [no]?
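
If you do enable DES authentication, one way to create the required key material on a per-host basis is the standard Solaris newkey(1M) utility, run as root on the sponsoring node (node2 is a placeholder; this is only a sketch, and the keyserv(1M) and publickey(4) man pages remain the authoritative reference):

# newkey -h node2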

Specify the private network address and netmask.

>>> Network Address for the Cluster Transport <<<
...
Is it okay to accept the default network address (yes/no) [yes]?
Is it okay to accept the default netmask (yes/no) [yes]?

Note: You cannot change the private network address after the cluster is successfully formed.
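
After the cluster is formed, the private network address and netmask chosen here can be reviewed (but not changed) with scconf. The exact labels in the output can vary between releases, so the filter below is only a suggestion:

# scconf -p | grep -i private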

Specify whether the cluster uses transport junctions.

If this is a two-node cluster, specify whether you intend to use transport junctions.

>>> Point-to-Point Cables <<<
...
Does this two-node cluster use transport junctions (yes/no) [yes]?

Tip - You can specify that the cluster uses transport junctions, regardless of whether the nodes are directly connected to each other. If you specify that the cluster uses transport junctions, you can more easily add new nodes to the cluster in the future.

If this cluster has three or more nodes, you must use transport junctions. Press Return to continue to the next screen.

>>> Point-to-Point Cables <<<
...
Since this is not a two-node cluster, you will be asked to configure two transport junctions.
Hit ENTER to continue:

Does this cluster use transport junctions?

If yes, specify names for the transport junctions. You can use the default names switchN or create your own names.

>>> Cluster Transport Junctions <<<
...
What is the name of the first junction in the cluster [switch1]?
What is the name of the second junction in the cluster [switch2]?

Specify the first cluster interconnect transport adapter.

Type help to list all transport adapters available to the node.

>>> Cluster Transport Adapters and Cables <<<
...
What is the name of the first cluster transport adapter (help) [adapter]?

Name of the junction to which "adapter" is connected [switch1]?
Use the default port name for the "adapter" connection (yes/no) [yes]?

Hit ENTER to continue:

Note: If your configuration uses SCI adapters, do not accept the default when you are prompted for the adapter connection (the port name). Instead, provide the port name (0, 1, 2, or 3) found on the Dolphin switch itself, to which the node is physically cabled. The following example shows the prompts and responses for declining the default port name and specifying the Dolphin switch port name 0.

Use the default port name for the "adapter" connection (yes/no) [yes]? no
What is the name of the port you want to use? 0

Choose the second cluster interconnect transport adapter.

Type help to list all transport adapters available to the node.

What is the name of the second cluster transport adapter (help) [adapter]?

You can configure up to two adapters by using the scinstall command. You can configure additional adapters after Sun Cluster software is installed by using the scsetup utility.

If your cluster uses transport junctions, specify the name of the second transport junction and its port.

Name of the junction to which "adapter" is connected [switch2]?
Use the default port name for the "adapter" connection (yes/no) [yes]?

Hit ENTER to continue:

Note: If your configuration uses SCI adapters, do not accept the default when you are prompted for the adapter port name. Instead, provide the port name (0, 1, 2, or 3) found on the Dolphin switch itself, to which the node is physically cabled. The following example shows the prompts and responses for declining the default port name and specifying the Dolphin switch port name 0.

Use the default port name for the "adapter" connection (yes/no) [yes]? no
What is the name of the port you want to use? 0

Specify the global devices file system name.

>>> Global Devices File System <<<
...
The default is to use /globaldevices.

Is it okay to use this default (yes/no) [yes]?

Do you have any Sun Cluster software patches to install?

>>> Automatic Reboot <<<
...
Do you want scinstall to reboot for you (yes/no) [yes]?

Accept or decline the generated scinstall command. The scinstall command generated from your input is displayed for confirmation.

>>> Confirmation <<<

Your responses indicate the following options to scinstall:

scinstall -ik \
...
Are these the options you want to use (yes/no) [yes]?
Do you want to continue with the install (yes/no) [yes]?

If you accept the command and continue the installation, scinstall processing continues. Sun Cluster installation output is logged in the /var/cluster/logs/install/scinstall.log.pid file, where pid is the process ID number of the scinstall instance.
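
To watch the installation from another window while scinstall runs, you can follow that log file (substitute the actual process ID for pid):

# tail -f /var/cluster/logs/install/scinstall.log.pid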

After scinstall returns you to the Main Menu, you can rerun menu option 1 and provide different answers. Your previous session answers display as the defaults.

Install any Sun Cluster software patches. See the Sun Cluster 3.0 Release Notes for the location of patches and installation instructions. Reboot the node to establish the cluster. If you rebooted the node after you installed patches, you do not need to reboot the node a second time.

The first node reboot after Sun Cluster software installation forms the cluster and establishes this node as the first-installed node of the cluster. During the final installation process, the scinstall utility performs the following operations on the first cluster node:

·         installs cluster software packages

·         disables routing on the node (touch /etc/notrouter)

·         creates an installation log (/var/cluster/logs/install)

·         reboots the node

·         creates the Disk ID devices during the reboot

You can then install additional nodes in the cluster.


2.3 Installing Additional Nodes

After you complete the Sun Cluster software installation on the first node, you can run scinstall in parallel on all remaining cluster nodes. The additional nodes are placed in install mode so they do not have a quorum vote. Only the first node has a quorum vote.

As the installation on each new node completes, each node reboots and comes up in install mode without a quorum vote. If you reboot the first node at this point, all the other nodes will panic because they cannot obtain quorum. You can, however, reboot the second or later nodes freely; they should come up and join the cluster without errors.

Cluster nodes remain in install mode until you use the scsetup command to reset the install mode.

You must perform postinstallation configuration to take the nodes out of install mode and also to establish quorum disk(s).

·         Ensure that the first-installed node is successfully installed with Sun Cluster software and that the cluster is established.

·         If you are adding a new node to an existing, fully installed cluster, ensure that you have performed the following tasks.

·         Prepare the cluster to accept a new node.

·         Install Solaris Operating System (SPARC) software on the new node.

·         Become superuser on the cluster node to install.

·         Start the scinstall utility.
# ./scinstall

*** Main Menu ***

Please select from one of the following (*) options:

* 1) Establish a new cluster using this machine as the first node
* 2) Add this machine as a node in an established cluster
3) Configure a cluster to be JumpStarted from this install server
4) Add support for new data services to this cluster node
5) Print release information for this cluster node

* ?) Help with menu options
* q) Quit

Option: 2

*** Adding a Node to an Established Cluster ***
...
Do you want to continue (yes/no) [yes]? yes

·         When prompted whether to continue to install Sun Cluster software packages, type yes.
>>> Software Installation <<<

Installation of the Sun Cluster framework software packages will only
take a few minutes to complete.

Is it okay to continue (yes/no) [yes]? yes

** Installing SunCluster 3.0 **
SUNWscr.....done
...Hit ENTER to continue:

·         Specify the name of any existing cluster node, referred to as the sponsoring node.
>>> Sponsoring Node <<<
...
What is the name of the sponsoring node? node1

>>> Cluster Name <<<
...
What is the name of the cluster you want to join? clustername

>>> Check <<<

This step runs sccheck(1M) to verify that certain basic hardware and software pre-configuration requirements have been met. If sccheck(1M) detects potential problems with configuring this machine as a cluster node, a list of warnings is printed.

Hit ENTER to continue:

·         Specify whether to use autodiscovery to configure the cluster transport.
>>> Autodiscovery of Cluster Transport <<<

If you are using ethernet adapters as your cluster transport adapters, autodiscovery is the best method for configuring the cluster transport.

Do you want to use autodiscovery (yes/no) [yes]?
...
The following connections were discovered:

node1:adapter switch node2:adapter
node1:adapter switch node2:adapter


Is it okay to add these connections to the configuration (yes/no) [yes]?

·         Specify whether this is a two-node cluster.
>>> Point-to-Point Cables <<<
...
Is this a two-node cluster (yes/no) [yes]?

Does this two-node cluster use transport junctions (yes/no) [yes]?

·         Did you specify that the cluster will use transport junctions? If yes, specify the transport junctions.
>>> Cluster Transport Junctions <<<
...
What is the name of the first junction in the cluster [switch1]?
What is the name of the second junction in the cluster [switch2]?

·         Specify the first cluster interconnect transport adapter.
>>> Cluster Transport Adapters and Cables <<<
...
What is the name of the first cluster transport adapter (help)? adapter

·         Specify what the first transport adapter connects to. If the transport adapter uses a transport junction, specify the name of the junction and its port.
Name of the junction to which "adapter" is connected [switch1]?
...
Use the default port name for the "adapter" connection (yes/no) [yes]?

OR
Name of adapter on "node1" to which "adapter" is connected? adapter

·         Specify the second cluster interconnect transport adapter.

What is the name of the second cluster transport adapter (help)? adapter

·         Specify what the second transport adapter connects to. If the transport adapter uses a transport junction, specify the name of the junction and its port.
Name of the junction to which "adapter" is connected [switch2]?
Use the default port name for the "adapter" connection (yes/no) [yes]?

Hit ENTER to continue:
OR
Name of adapter on "node1" to which "adapter" is connected? adapter

·         Specify the global devices file system name.
>>> Global Devices File System <<<
...
The default is to use /globaldevices.

Is it okay to use this default (yes/no) [yes]?

·         Do you have any Sun Cluster software patches to install? If not:
>>> Automatic Reboot <<<
...
Do you want scinstall to reboot for you (yes/no) [yes]?


>>> Confirmation <<<

Your responses indicate the following options to scinstall:

scinstall -i \
...
Are these the options you want to use (yes/no) [yes]?
Do you want to continue with the install (yes/no) [yes]?

·         Install any Sun Cluster software patches.

·         Reboot the node to establish the cluster unless you rebooted the node after you installed patches.

Do not reboot or shut down the first-installed node while any other nodes are being installed, even if you use another node in the cluster as the sponsoring node. Until quorum votes are assigned to the cluster nodes and cluster install mode is disabled, the first-installed node, which established the cluster, is the only node that has a quorum vote. If the cluster is still in install mode, you will cause a system panic because of lost quorum if you reboot or shut down the first-installed node. Cluster nodes remain in install mode until the first time you run the scsetup(1M) command, during the procedure Post-Installation Configuration.

 

2.4 Post-Installation Configuration

Post-installation can include a number of tasks, such as installing a volume manager and/or database software. However, the following tasks must be completed first:

·         taking the cluster nodes out of install mode

·         defining quorum disks

Before a new cluster can operate normally, the install mode attribute must be reset on all nodes. You can do this in a single step using the scsetup utility. This utility is a menu-driven interface that prompts for quorum device information the first time it is run on a new cluster installation. Once the quorum device is defined, the install mode attribute is reset on all nodes. Use the scconf command as follows to disable or enable install mode:

·         scconf -c -q reset (reset install mode)

·         scconf -c -q installmode (enable install mode)

# /usr/cluster/bin/scsetup
>>> Initial Cluster Setup <<<

This program has detected that the cluster "installmode" attribute is set ...

Please do not proceed if any additional nodes have yet to join the cluster.

Is it okay to continue (yes/no) [yes]? yes

Which global device do you want to use (d<N>) ? dx

Is it okay to proceed with the update (yes/no) [yes]? yes

scconf -a -q globaldev=dx

Do you want to add another quorum disk (yes/no) ? no
Is it okay to reset "installmode" (yes/no) [yes] ? yes

scconf -c -q reset
Cluster initialization is complete.

Although it appears that the scsetup utility uses two simple scconf commands to define the quorum device and reset install mode, the process is more complex. The scsetup utility performs numerous verification checks for you. It is recommended that you do not use scconf manually to perform these functions.
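
As a quick check that initialization completed, you can confirm that install mode is now disabled and that the quorum device is registered. The exact wording of the scconf output may vary by release:

# scconf -p | grep -i install
# scstat -q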

2.5 Post-Installation Verification

When you have completed the Sun Cluster software installation on all nodes, verify the following information:

·         DID device configuration

·         general Cluster Configuration Repository (CCR) configuration information

Each attached system sees the same DID devices but might use a different logical path to access them. You can verify the DID device configuration with the scdidadm command. The following scdidadm output demonstrates how a DID device can have a different logical path from each connected node.

# scdidadm -L

The list on each node should be the same. Output resembles the following.

1 phys-schost-1:/dev/rdsk/c0t0d0 /dev/did/rdsk/d1
2 phys-schost-1:/dev/rdsk/c1t1d0 /dev/did/rdsk/d2
2 phys-schost-2:/dev/rdsk/c1t1d0 /dev/did/rdsk/d2
3 phys-schost-1:/dev/rdsk/c1t2d0 /dev/did/rdsk/d3
3 phys-schost-2:/dev/rdsk/c1t2d0 /dev/did/rdsk/d3

...
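
Rather than comparing the listings by eye, you can capture the output on each node and diff the two files. The node name phys-schost-2 follows the example output above, and remote shell access between the nodes is assumed:

# scdidadm -L > /tmp/did.local
# rsh phys-schost-2 '/usr/cluster/bin/scdidadm -L' > /tmp/did.remote
# diff /tmp/did.local /tmp/did.remote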

The scstat utility displays the current status of various cluster components. You can use it to display the following information:

·         the cluster name and node names

·         names and status of cluster members

·         status of resource groups and related resources

·         cluster interconnect status

The following scstat command option displays the cluster membership and quorum vote information.

# /usr/cluster/bin/scstat -q

Cluster configuration information is stored in the CCR on each node. You should verify that the basic CCR values are correct. The scconf -p command displays general cluster information along with detailed information about each node in the cluster.

$ /usr/cluster/bin/scconf -p

2.6 Basic Cluster Administration

Checking Status Using the scstat Command

Without any options, the scstat command displays general information for all cluster nodes. You can use options to restrict the status information to a particular type of information and/or to a particular node.

The following command displays the cluster transport status for a single node with Gigabit Ethernet:

$ /usr/cluster/bin/scstat -W -h <node1>

-- Cluster Transport Paths --

Endpoint Endpoint Status
-------- -------- ------
Transport path: <node2>:ge1 <node1>:ge1 Path online
Transport path: <node2>:ge0 <node1>:ge0 Path online


Checking Status Using the sccheck Command

The sccheck command verifies that all of the basic global device structure is correct on all nodes. Run the sccheck command af