RAC: Frequently Asked Questions (RAC FAQ) (Doc ID 220970.1)


QUESTIONS AND ANSWERS

RAC - Real Application Clusters

RAC

RAC One Node

QoS - Quality of Service Management

Clusterware

Autonomous Computing

Rapid Home Provisioning

Answers

How do I measure the bandwidth utilization of my NIC or my interconnect?

Oracle RAC depends on both (a) latency and (b) bandwidth.
(a) Latency is best measured by running an AWR or Statspack report and reviewing the cluster section.
(b) Bandwidth can be measured using OS-provided utilities such as iptraf, Netperf, or topas (AIX). Note that some of these utilities may not be available on all platforms.

Keep in mind that if the network is utilized at 50% bandwidth, it is busy and unavailable to potential users 50% of the time. In this case, delays (due to network collisions) will increase the latency even though the bandwidth might look "reasonable". So always keep an eye on both latency and bandwidth.
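For example, on Linux one quick way to sample NIC throughput is the sar utility (a minimal sketch, assuming the sysstat package is installed and the interconnect is on eth1; adjust the interface and interval to your environment):

$ sar -n DEV 5 3 | grep eth1    # rxkB/s and txkB/s for eth1, three 5-second samples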


How can I validate the scalability of my shared storage? (Tightly related to RAC / Application scalability)

RAC scalability is dependent on the storage unit's ability to process I/Os per second (throughput) in a scalable fashion, specifically from multiple sources (nodes).

Oracle recommends using ORION (the Oracle I/O test tool), which simulates Oracle I/O. Note: Starting with 11.2, the orion tool is included with the RDBMS/RAC software; see ORACLE_HOME/bin. On other Unix platforms (as well as Linux) one can use IOzone; if a prebuilt binary is not available, build it from source. Make sure to use version 3.271 or later, and if testing raw/block devices add the "-I" flag.
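For illustration, a minimal ORION read test might look like the following (a sketch only; the LUN file contents and the -num_disks value are placeholders that must match your own storage):

$ cat mytest.lun
/dev/sdc
/dev/sdd
$ $ORACLE_HOME/bin/orion -run simple -testname mytest -num_disks 2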

In a basic read test you will try to demonstrate that a certain I/O throughput can be maintained as nodes are added. Try to simulate your database I/O patterns as much as possible, i.e. block size, number of simultaneous readers, rates, etc. For example, on a 4 node cluster, from node 1 you measure 20MB/sec; then you start a read stream on node 2 and see another 20MB/sec while the first node shows no decrease. You then run another stream on node 3 and get another 20MB/sec; in the end you run 4 streams on 4 nodes and get an aggregated 80MB/sec, or close to that. This proves that the shared storage is scalable. Obviously, if you see poor scalability in this phase, it will carry over and be observed or interpreted as poor RAC / application scalability.


Is Oracle RAC supported on logical partitions (i.e. LPARs) or other virtualization technologies?

Check http://www.oracle.com/technetwork/database/virtualizationmatrix-172995.html for more details on supported virtualization and partitioning technologies.


How should voting disks be implemented in an extended cluster environment? Can I use standard NFS for the third site voting disk?

Standard NFS is only supported for the tie-breaking voting disk in an extended cluster environment; see the platform and mount option restrictions here. Otherwise, just as with database files, we only support voting files on certified NAS devices, with the appropriate mount options. Please refer to My Oracle Support Document 359515.1 for a full description of the required mount options.


What are the cdmp directories in the background_dump_dest used for?

These directories are produced by the diagnosability daemon process (DIAG). DIAG is a database process which, as one of its tasks, performs cache dumping. The DIAG process dumps out tracing to file when it discovers the death of an essential process (foreground or background) in the local instance. A dump directory named something like cdmp_<timestamp> is created in the bdump or background_dump_dest directory, and all the trace dump files DIAG creates are placed in this directory.


How do I gather all relevant Oracle and OS log/trace files in an Oracle RAC cluster to provide to Support?

We recommend installing TFA (Trace File Analyzer) in every cluster.

This is a great tool to collect the different logs across the cluster for database and cluster diagnostics. It can be run manually, automatically, or at any given interval. It is included in 12.1.0.2.
If you are on 12.1.0.1, you need to download it; you can do so via My Oracle Support Document 1513912.1.

11.2.0.3 Grid Infrastructure deployments do not include TFA, but 11.2.0.4 deployments do include it.

TFA narrows the data down to only what is relevant to the range of time you are analyzing, creates a zip file, and uploads it to Support.

The TFA analyzer takes this zip file and provides an easy way to navigate the data, showing relations between logs across nodes and making it easier to analyze the data.
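For illustration, an on-demand collection for a recent incident might look like this (a sketch; the exact tfactl options vary by TFA version, so check 'tfactl diagcollect -h' on your system):

$ tfactl diagcollect -since 4h    # collect relevant logs from all nodes for the last 4 hours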


How should one review the ability to scale out to more nodes in your cluster?

Once a customer is using RAC on a two node cluster and wants to see how far they can actually scale it, the following are some handy tips to follow:
1. Ensure the workload is realistic enough that it does not introduce false bottlenecks.
2. Tune the application so it is reasonably scalable on the current RAC environment.
3. Make sure you are measuring a valid scalability measure. This should be either completing very large batch jobs more quickly (via parallelism) or supporting a greater number of short transactions in a shorter time.
4. Actual scalability will vary for each application and its bottlenecks; hence the request to do the above items. You would see similar scalability if scaling up on an SMP.
5. For failover, you should see what happens if you lose a node. If you have 2 nodes, you lose half your capacity and either really get into trouble or need lots of extra headroom.
6. Measure that load balancing is working properly. Make sure you are using RCLB and a FAN-aware connection pool.
7. The customer should also test using DB Services.
8. Get familiar with EM Grid Control to manage the cluster and help eliminate a lot of the complexity of many nodes.
9. Why stop at 6 nodes? A maximum of 3-way messaging ensures RAC can scale much, much further.


Can I ignore 10.2 CLUVFY on Solaris warning about failed package existence checks?

The complete error is:

Package existence check failed for "SUNWscucm:3.1".

Package existence check failed for "SUNWudlmr:3.1".

Package existence check failed for "SUNWudlm:3.1".

Package existence check failed for "ORCLudlm:Dev_Release_06/11/04,_64bit_3.3.4.8_reentrant".

Package existence check failed for "SUNWscr:3.1".

Package existence check failed for "SUNWscu:3.1".

Cluvfy checks all possible prerequisites and reports whether your system passed the checks or not. You should then cross-reference with the install guide to see if the failed checks are required for your type of installation. In the above case, if you are not planning on using Sun Cluster, you can continue the install.


What is a CVU component?

CVU (Cluster Verification Utility) supports the notion of component verification. The verifications in this category are not associated with any specific stage. The user can verify the correctness of a specific cluster component. A component can range from a basic one, like free disk space, to a complex one like the CRS stack. The integrity check for the CRS stack will transparently span verification of the multiple sub-components associated with it. This encapsulation of a set of tasks within a specific component verification should greatly ease the user's work.


Why cluvfy reports "unknown" on a particular node?

According to the Cluster Verification Utility Reference from the Clusterware Administration and Deployment Guide from the 12c Documentation:

If a cluvfy command responds with UNKNOWN for a particular node, then this is because CVU cannot determine whether a check passed or failed. The cause could be a loss of reachability or the failure of user equivalence to that node. The cause could also be any system problem that was occurring on that node when CVU was performing a check.

The following is a list of possible causes for an UNKNOWN response:

  • The node is down
  • Executables that CVU requires are missing in Grid_home/bin or the Oracle home directory
  • The user account that ran CVU does not have privileges to run common operating system executables on the node
  • The node is missing an operating system patch or a required package
  • The node has exceeded the maximum number of processes or maximum number of open files, or there is a problem with IPC segments, such as shared memory or semaphores

What are the requirements for CVU?

According to the Cluster Verification Utility Reference from the Clusterware Administration and Deployment Guide from the 12c Documentation, CVU requirements are:

  • At least 30 MB free space for the CVU software on the node from which you run CVU

  • A work directory with at least 25 MB free space on each node. The default location of the work directory is /tmp on Linux and UNIX systems, and the value specified in the TEMP environment variable on Windows systems. You can specify a different location by setting the CV_DESTLOC environment variable.

    When using CVU, the utility attempts to copy any needed information to the CVU work directory. It checks for the existence of the work directory on each node. If it does not find one, then it attempts to create one. Make sure that the CVU work directory either exists on all nodes in your cluster or proper permissions are established on each node for the user running CVU to create that directory.

  • Java 1.4.1 on the local node


What about discovery? Does CVU discover installed components?

CVU performs system checks in preparation for installation, patch updates, or other system changes. Checks performed by CVU include:

  • Free disk space
  • Clusterware Integrity
  • Memory
  • Processes
  • Other important cluster components
  • All available network interfaces
  • Shared Storage
  • Clusterware home

For more information please check the Cluster Verification Utility Reference from the Clusterware Administration and Deployment Guide from the 12c Documentation.


How do I check Oracle RAC certification?

Please refer to My Oracle Support (https://support.oracle.com/) for information regarding certification of Oracle RAC and all other products in the stack.


Is Veritas Storage Foundation supported with Oracle RAC?

Veritas certifies Veritas Storage Foundation for Oracle RAC with each release. Check Veritas Support Matrix for the latest details. Also visit My Oracle Support for a list of certified 3rd party products with Oracle RAC.


Can I use ASM as mechanism to mirror the data in an Extended RAC cluster?

Yes, please refer to the Extended Clusters Technical Brief for more information on Extended Clusters and the RAC Stack.


What are the changes in memory requirements from moving from single instance to RAC?

If you are keeping the workload requirements per instance the same, then about 10% more buffer cache and 15% more shared pool are needed. The additional memory requirement is due to data structures for coherency management. The values are heuristic and are mostly upper bounds. Actual resource usage can be monitored by querying the current and maximum utilization columns for the gcs resource/locks and ges resource/locks entries in V$RESOURCE_LIMIT, as in the sketch below.
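The V$RESOURCE_LIMIT check mentioned above can be done with a query such as the following (a minimal sketch; run it on each instance):

$ sqlplus -s / as sysdba <<'EOF'
SELECT resource_name, current_utilization, max_utilization
  FROM v$resource_limit
 WHERE resource_name LIKE 'gcs_%' OR resource_name LIKE 'ges_%';
EOF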

But in general, please take into consideration that memory requirements per instance are reduced when the same user population is distributed over multiple nodes. In this case:

Assuming the same user population, with N = number of nodes and M = buffer cache size for a single system, the per-instance requirement is:

(M / N) + ((M / N) * 0.10) [ + extra memory to compensate for failed-over users ]

Thus, for example, with M = 2G, N = 2, and no extra memory for failed-over users:

= (2G / 2) + ((2G / 2) * 0.10)

= 1G + 100M


What are the default values for the command line arguments?

Here are the default values and behavior for different stage and component commands:

For component nodecon:
If no -i or -a argument is provided, then cluvfy will get into discovery mode.

For component nodereach:
If no -srcnode is provided, then the local node (the node of invocation) will be used as the source node.

For components cfs, ocr, crs, space, clumgr:
If no -n argument is provided, then the local node will be used.

For components sys and admprv:
If no -n argument is provided, then the local node will be used.
If no -osdba argument is provided, then 'dba' will be used. If no -orainv argument is provided, then 'oinstall' will be used.

For component peer:
If no -osdba argument is provided, then 'dba' will be used.
If no -orainv argument is provided, then 'oinstall' will be used.

For stage -post hwos:
If no -s argument is provided, then cluvfy will get into the discovery mode.

For stage -pre clusvc:
If no -c argument is provided, then cluvfy will skip OCR related checks.
If no -q argument is provided, then cluvfy will skip voting disk related checks.
If no -osdba argument is provided, then 'dba' will be used.
If no -orainv argument is provided, then 'oinstall' will be used.

For stage -pre dbinst:
If -cfs_oh flag is not specified, then cluvfy will assume Oracle home is not on a shared file system.
If no -osdba argument is provided, then 'dba' will be used.
If no -orainv argument is provided, then 'oinstall' will be used.


Do I have to be root to use CVU?

No. CVU is intended for database and system administrators. CVU assumes the current user is the Grid/Database software owner.


What is nodelist?

Nodelist is a comma-separated list of hostnames without the domain. Cluvfy will ignore any domain while processing the nodelist. If duplicate entries exist after the domain is removed, cluvfy will eliminate the duplicate names while processing. Wherever supported, you can use '-n all' to check all the cluster nodes.


How do I check minimal system requirements on the nodes?

The component verification command 'sys' is meant for that. To check the system requirements for RAC, use the '-p database' argument. To check the system requirements for CRS, use the '-p crs' argument.
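For example (a sketch; node names are hypothetical):

$ cluvfy comp sys -n all -p crs -verbose
$ cluvfy comp sys -n node1,node2 -p database -verbose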


How do I get detail output of a check?

Cluvfy supports a verbose feature. By default, cluvfy reports in non-verbose mode and just reports the summary of a test. To get detailed output of a check, use the '-verbose' flag on the command line. This will produce detailed output of individual checks and, where applicable, will show per-node results in a tabular fashion.


Why the peer comparison with -refnode says passed when the group or user does not exist?

Peer comparison with the -refnode argument acts like a baseline feature. It compares the system properties of other nodes against the reference node. If a value does not match (i.e. is not equal to the reference node value), then it flags that as a deviation from the reference node. If a group or user exists on neither the reference node nor the other node, it will report this as 'matched', since there is no deviation from the reference node. Similarly, it will report 'mismatched' for a node with higher total memory than the reference node, for the same reason.


At what point cluvfy is usable? Can I use cluvfy before installing Oracle Clusterware?

You can run cluvfy at any time, even before CRS installation. In fact, cluvfy is designed to assist the user as soon as the hardware and OS are up. If you invoke a command which requires CRS or RAC on the local node, cluvfy will report an error if those required products are not yet installed.

Cluvfy can also be invoked after the install to check whether any new hardware components added after the install (such as additional shared disks) are accessible from all the nodes.


Is there a way to compare nodes?

You can use the peer comparison feature of cluvfy for this purpose. The command 'comp peer' will list the values of several pre-selected properties on different nodes. You can use the peer command with the -refnode argument to compare those properties of other nodes against the reference node.
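For example, to compare node2 and node3 against node1 as the baseline (node names are hypothetical):

$ cluvfy comp peer -refnode node1 -n node2,node3 -verbose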


What is a stage?

CVU supports the notion of stage verification. It identifies all the important stages in a RAC deployment and provides each stage with its own entry and exit criteria. The entry criteria for a stage define a specific set of verification tasks to be performed before initiating that stage. This pre-check saves the user from entering a stage unless its prerequisite conditions are met. The exit criteria for a stage define another specific set of verification tasks to be performed after completion of the stage. This post-check ensures that the activities for that stage have completed successfully, and identifies any stage-specific problem before it propagates to subsequent stages, where it would be much harder to trace back to its root cause. An example of a stage is "pre-check of database installation", which checks whether the system meets the criteria for a RAC install.


How do I know about cluvfy commands? The usage text of cluvfy does not show individual commands.

Cluvfy has context-sensitive help built into it. Cluvfy shows the most appropriate usage text based on the cluvfy command line arguments. If you type 'cluvfy' at the command prompt, cluvfy displays the high level generic usage text, which covers valid stage and component syntax. If you type 'cluvfy comp -list', cluvfy will show valid components with a brief description of each. If you type 'cluvfy comp -help', cluvfy will show detailed syntax for each of the valid components. Similarly, 'cluvfy stage -list' and 'cluvfy stage -help' will list valid stages and their syntax respectively. If you type an invalid command, cluvfy will show the appropriate usage for that particular command. For example, if you type 'cluvfy stage -pre dbinst', cluvfy will show the syntax for the pre-check of the dbinst stage.


Can I check if the storage is shared among the nodes?

Yes, you can use the 'comp ssa' command to check the sharedness of the storage. Please refer to the known issues section in the cluvfy help command output for the types of storage supported by cluvfy.
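For example (a sketch; the device path is a placeholder for your own shared storage):

$ cluvfy comp ssa -n all -s /dev/sdc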


Does Database blocksize or tablespace blocksize affect how the data is passed across the interconnect?

Oracle ships database block buffers, i.e. blocks in a tablespace configured for 16K will result in 16K data buffers being shipped; blocks residing in a tablespace with the base block size (8K) will be shipped as base blocks, and so on. The data buffers are broken down into packets of MTU size.

There are optimizations in newer releases, like compression, that are beyond the scope of this FAQ.


What is Oracle's position with respect to supporting RAC on Polyserve CFS?

Please check the certification matrix available through My Oracle Support for your specific release.


How do I check network or node connectivity related issues?

Use component verification commands like 'nodereach' or 'nodecon' for this purpose. For detailed syntax of these commands, type 'cluvfy comp -help' at the command prompt. If the 'cluvfy comp nodecon' command is invoked without -i, cluvfy will attempt to discover all the available interfaces and the corresponding IP addresses and subnets. Then cluvfy will try to verify the node connectivity per subnet. You can run this command in verbose mode to find out the mappings between the interfaces, IP addresses and subnets. You can check the connectivity among the nodes by specifying the interface name(s) through the -i argument, as in the examples below.
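For example (a sketch; the interface name is hypothetical):

$ cluvfy comp nodecon -n all -verbose
$ cluvfy comp nodecon -n node1,node2 -i eth1 -verbose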


What is CVU? What are its objectives and features?

CVU eases the RAC user's job by verifying all the important components that need to be verified at different stages in a RAC environment. The wide domain of deployment of CVU ranges from initial hardware setup through a fully operational cluster for RAC deployment, and covers all the intermediate stages of installation and configuration of various components. The command line tool is cluvfy. Cluvfy is a non-intrusive utility and will not adversely affect the system or operations stack.


Is there a cluster file system (CFS) Available for Linux?

Yes, ACFS (ASM Cluster File System with Oracle Database 11g Release 2) and OCFS (Oracle Cluster Filesystem) are available for Linux. The following My Oracle Support document has information for obtaining the latest version of OCFS:

Document 238278.1 - How to find the current OCFS version for Linux


Is OCFS2 certified with Oracle RAC 10g?

Yes. See Certify to find out which platforms are currently certified.


How do I check the Oracle Clusterware stack and other sub-components of it?

Cluvfy provides commands to check a particular sub-component of the CRS stack as well as the whole CRS stack. You can use the 'comp ocr' command to check the integrity of OCR. Similarly, you can use the 'comp crs' and 'comp clumgr' commands to check the integrity of the crs and cluster manager sub-components. To check the entire CRS stack, run the stage command 'cluvfy stage -post crsinst'.


Where can I find the CVU trace files?

CVU log files can be found under the $CV_HOME/cv/log directory. The log files are automatically rotated, and the latest log file has the name cvutrace.log.0. It is a good idea to clean up unwanted log files, or archive them, to reclaim disk space.

In recent releases, CVU trace files are generated by default. Setting SRVM_TRACE=false before invoking cluvfy disables the trace generation for that invocation.


Can I use Oracle Clusterware for failover of the SAP Enqueue and VIP services when running SAP in a RAC environment?

Oracle has created SAPCTL to do this, and it is available for certain platforms. SAPCTL is available for download from the SAP Service Marketplace for AIX and Linux. Please check the Marketplace for other platforms.


How do I turn on tracing?

Set the environment variable SRVM_TRACE to true. For example, in tcsh "setenv SRVM_TRACE true" will turn on tracing. It may also help to run cluvfy with the -verbose attribute:
$ script run.log
$ export SRVM_TRACE=TRUE
$ cluvfy <stage/component command> -verbose
$ exit


How do I check whether OCFS is properly configured?

You can use the cluvfy component command 'cfs' to check this. Provide the OCFS file system you want to check through the -f argument. Note that the sharedness check for the file system is supported for OCFS version 1.0.14 or higher.


My customer is about to install 10.2.0.2 Clusterware on new Linux machines and is getting a "No ORACM running" error when rootpre.sh runs and exits. Should he worry about this message?

It is an informational message. Generally for such scripts, you can issue echo "$?" to ensure that it returns a zero value. The message is basically saying that it did not find an oracm. If the customer were installing 10g on an existing 9i cluster (which would have an oracm), then this message would have been serious. But since the customer is installing on a fresh new box, they can continue the install.


Can different releases of Oracle RAC be installed and run on the same physical Linux cluster?

Yes!!!

The detailed answer is broken into the following categories:

  • Oracle Version 10g and above only
    We only require that the Oracle Clusterware version be higher than or equal to the database release. Customers can run multiple releases on the same cluster.
  • Oracle Version 10g and higher alongside Oracle Version less than 10g
    Oracle Clusterware (CRS) will not support an Oracle 9i RAC database, so you will have to leave the current configuration in place. You can install Oracle Clusterware and Oracle RAC 10g or 11g into the same cluster. On Windows and Linux, you must run the 9i Cluster Manager for the 9i database and Oracle Clusterware for the 10g database. When you install Oracle Clusterware, your 9i srvconfig file will be converted to the OCR. Oracle 9i RAC, Oracle RAC 10g, and Oracle RAC 11g will all use the OCR. Do not restart the 9i gsd after you have installed Oracle Clusterware. Remember to check Certify for details of which vendor clusterware can be run with Oracle Clusterware. Oracle Clusterware must be at the highest level (down to the patchset); e.g. Oracle Clusterware 11g Release 2 will support Oracle RAC 10g and Oracle RAC 11g databases, while Oracle Clusterware 10g can only support Oracle RAC 10g databases.

Oracle Clusterware fails to start after a reboot due to permissions on raw devices reverting to default values. How do I fix this?

After a successful installation of Oracle Clusterware, a simple reboot can leave Oracle Clusterware unable to start. This is because the permissions on the raw devices for the OCR and voting disks, e.g. /dev/raw/raw{x}, revert to their default values (root:disk) and become inaccessible to Oracle. This change of behavior started with the 2.6 kernel, i.e. in RHEL4, OEL4, RHEL5, OEL5, SLES9 and SLES10. In RHEL3 the raw devices maintained their permissions across reboots, so this symptom was not seen.

The way to fix this on RHEL4, OEL4 and SLES9 is to create /etc/udev/permission.d/40-udev.permissions (you must choose a number lower than 50). You can do this by copying /etc/udev/permission.d/50-udev.permissions and removing the lines that are not needed (50-udev.permissions gets replaced on upgrades, so you do not want to edit it directly; also, a typo in 50-udev.permissions can render the system unusable). Example permissions file:
# raw devices
raw/raw[1-2]:root:oinstall:0640
raw/raw[3-5]:oracle:oinstall:0660

Note that this applies to all raw device files; here just the voting and OCR devices were specified.

On RHEL5, OEL5 and SLES10 a different file is used, /etc/udev/rules.d/99-raw.rules; notice that now the number must be higher than 50. Also, the syntax of the rules differs from the permissions file; here's an example:
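A sketch of what such rules might look like, mirroring the permissions above (see Document 414897.1 for the authoritative version; the exact syntax may vary by udev version):

ACTION=="add", KERNEL=="raw[1-2]", OWNER="root", GROUP="oinstall", MODE="0640"
ACTION=="add", KERNEL=="raw[3-5]", OWNER="oracle", GROUP="oinstall", MODE="0660"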

This is explained in detail in Document 414897.1 .


Customer did not load the hangcheck-timer before installing RAC, Can the customer just load the hangcheck-timer ?

YES. hangcheck-timer is a kernel module that is shipped with the Linux kernel; all you have to do is load it, as sketched below.
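A minimal sketch of loading the module (the parameter values here are illustrative; use the values recommended in Document 726833.1 for your release):

# /sbin/modprobe hangcheck-timer hangcheck_tick=30 hangcheck_margin=180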

For more details see Document 726833.1


After installing patchset 9013 and patch_2313680 on Linux, the startup was very slow

Please carefully read the following new information about configuring Oracle Cluster Management on Linux, provided as part of the patch README:

Three parameters affect the startup time:

soft_margin (defined at watchdog module load)

-m (watchdogd startup option)

WatchdogMarginWait (defined in nmcfg.ora).

WatchdogMarginWait is calculated using the formula:

WatchdogMarginWait = soft_margin (converted to msec) + -m + 5000 (msec)

[the 5000 (msec) is hardcoded]

Note that soft_margin is measured in seconds, while -m and WatchdogMarginWait are measured in milliseconds.

Based on benchmarking, it is recommended to set soft_margin between 10 and 20 seconds. Use the same value for -m (converted to milliseconds) as used for soft_margin. Here is an example:

soft_margin=10, -m=10000: WatchdogMarginWait = 10000 + 10000 + 5000 = 25000

If CPU utilization in your system is high and you experience unexpected node reboots, check the wdd.log file. If there are any 'ping came too late' messages, increase the value of the above parameters.


Is there a way to verify that the Oracle Clusterware is working properly before proceeding with RAC install?

Yes. You can use the post-check command for cluster services setup (-post clusvc) to verify CRS status. A more appropriate test would be the pre-check command for database installation (-pre dbinst). This will check whether the current state of the system is suitable for a RAC install; see the sketch below.
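For example (a sketch; run as the software owner):

$ cluvfy stage -post clusvc -n all -verbose
$ cluvfy stage -pre dbinst -n all -verbose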


How do I configure my RAC Cluster to use the RDS Infiniband?

Ensure that the IB (Infiniband) Card is certified for the OS, Driver, Oracle version etc.

You may need to relink Oracle using the command

$ cd $ORACLE_HOME/rdbms/lib
$ make -f ins_rdbms.mk ipc_rds ioracle

You can check your interconnect through the alert log at startup. Check for the string "cluster interconnect IPC version: Oracle RDS/IP (generic)" in the alert.log file.

See Document 751343.1 for more details.


Why is validateUserEquiv failing during install (or cluvfy run)?

SSH must be set up as per the pre-installation tasks. It is also necessary to have file permissions set as described below for features such as public key authorization to work. If your permissions are not correct, public key authentication will fail and will fall back to password authentication with no helpful message as to why. The following server configuration files and/or directories must be owned by the account owner or by root, and GROUP and WORLD WRITE permission must be disabled.

$HOME
$HOME/.rhosts
$HOME/.shosts
$HOME/.ssh
$HOME/.ssh/authorized_keys
$HOME/.ssh/authorized_keys2 # OpenSSH-specific for the ssh2 protocol.
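A hedged example of tightening these permissions (paths as listed above; adjust to your environment):

$ chmod go-w $HOME
$ chmod 700 $HOME/.ssh
$ chmod 600 $HOME/.ssh/authorized_keys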

SSH (from OUI) will also fail if you have not connected to each machine in your cluster as per the note in the installation guide:

The first time you use SSH to connect to a node from a particular system, you may see a message similar to the following:

The authenticity of host 'node1 (140.87.152.153)' can't be established. RSA key fingerprint is 7z:ez:e7:f6:f4:f2:4f:8f:9z:79:85:62:20:90:92:z9.
Are you sure you want to continue connecting (yes/no)?

Enter yes at the prompt to continue. You should not see this message again when you connect from this system to that node. Answering yes to this question causes an entry to be added to the "known_hosts" file in the .ssh directory, which is why subsequent connection requests do not ask again.
This is known to work on Solaris and Linux but may work on other platforms as well.


What is Runtime Connection Load Balancing?

Runtime Connection Load Balancing enables the connection pool to route incoming work requests to the available database connection that will provide the best service. This provides the best service times globally, and routing responds quickly to changing conditions in the system. Oracle has implemented runtime connection load balancing with ODP.NET and JDBC connection pools. Runtime Connection Load Balancing is tightly integrated with the automatic workload balancing features introduced with Oracle Database 10g, i.e. services, the Automatic Workload Repository, and the Load Balancing Advisory.


How do I enable the load balancing advisory?

The load balancing advisory requires the use of services and Oracle Net connection load balancing.
To enable it on the server, set a goal (SERVICE_TIME or THROUGHPUT) and set CLB_GOAL=SHORT for the service.
On the client, you must be using a connection pool.
For JDBC, enable the datasource parameter FastConnectionFailoverEnabled.
For ODP.NET, enable the connection string attribute "Load Balancing=true".


What are the network requirements for an extended RAC cluster?

Necessary Connections

Interconnect, SAN, and IP Networking need to be kept on separate channels, each with required redundancy. Redundant connections must not share the same Dark Fiber (if used), switch, path, or even building entrances. Keep in mind that cables can be cut.

The SAN and interconnect connections need to be on dedicated point-to-point connections; no WAN or shared connections are allowed. Traditional cables are limited to about 10 km if you are to avoid using repeaters. Dark Fiber networks allow the communication to occur without repeaters, and since latency is thus limited, they allow for a greater separation between the nodes. The disadvantage of Dark Fiber networks is that they can cost hundreds of thousands of dollars, so generally they are only an option if they already exist between the two sites.

If direct connections are used (for short distances), this is generally done by just stringing long cables from a switch. If a DWDM or CWDM is used, then these are directly connected via a dedicated switch on either side.
Note of caution: Do not run the RAC interconnect over a WAN. This is the same as running it over the public network, which is not supported, and other uses of the network (e.g. large FTPs) can cause performance degradation or even node evictions.

For SAN networks make sure you are using SAN buffer credits if the distance is over 10km.
If Oracle Clusterware is being used, we also require that a single subnet be set up for the public connections so we can fail over VIPs from one side to the other.


What is the maximum distance between nodes in an extended RAC environment?

The high impact of latency creates practical limitations as to where this architecture can be deployed. While there is no fixed distance limitation, the additional round-trip latency on I/O and on one-way cache fusion traffic will affect performance as the distance increases. For example, tests at 100km showed a 3-4 ms impact on I/O and a 1 ms impact on cache fusion; the greater the distance, the greater the impact on performance. This architecture fits best where the two datacenters are relatively close (<~25km) and the impact is negligible. Most customers implement under this distance, with only a handful above it, and the farthest known example is at 100km. Customers considering larger distances than are commonly implemented should estimate or measure the performance hit on their application before implementing. Do ensure a proper setup of SAN buffer credits to limit the impact of distance at the I/O layer.


Are crossover cables supported as an interconnect with Oracle RAC on any platform?

NO. CROSSOVER CABLES ARE GENERALLY NOT SUPPORTED. The requirement is to use a switch.

The only exception is the Oracle Database Appliance (ODA), for which crossover cables are used.

Detailed Reasons:

1) Cross-cabling limits the expansion of Oracle RAC to two nodes.

2) Cross-cabling is unstable. Experience has shown that many adapters misbehave when used in a crossover configuration, leading to unexpected behavior in the cluster.


What do I do if I see GC CR BLOCK LOST in my top 5 Timed Events in my AWR Report?

You should never see this or BLOCK RETRY events. A number of issues can lead to waits on this event; they are covered in Document 563566.1. Work with your system administrator and/or network administrator to diagnose the issue.

A common symptom of packet loss is reflected in netstat -s output, as shown below:

Ip:
84884742 total packets received
1201 fragments dropped after timeout
3384 packet reassembles failed

Fragments dropped and packet reassembly failures should either be 0 or not increasing over time.

ifconfig -a:

eth0 Link encap:Ethernet HWaddr
inet addr: Bcast: Mask:
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:21721236 errors:135 dropped:0 overruns:0 frame:95
TX packets:273120 errors:0 dropped:0 overruns:0 carrier:0

A high number of errors increasing over time indicates a problem that should be diagnosed and fixed. Document 563566.1 provides a useful guide to common issues and the solutions that can be used to fix them.


Will adding a new instance to my Oracle RAC database (new node to the cluster) allow me to scale the workload?

YES! Oracle RAC allows you to dynamically scale out your workload by adding another node to the cluster. You must remember that adding more work to the database means that in addition to the CPU and Memory that the new node brings, you will have to ensure that your I/O subsystem can support the additional I/O requirements. In an Oracle RAC environment, you need to look at the total I/O across all instances in the cluster.

Are Red Hat GFS and GULM certified for DLM?

Both are part of Red Hat RHEL 5. Oracle Database 10g Release 2 on Linux x86 and Linux x86-64 is certified on OEL5 and RHEL5, as per Certify. GFS is not certified yet; certification is in progress by Red Hat. OCFS2 is certified and is the preferred choice for Oracle; ASM is the recommended storage for the database. Since GFS is part of the RHEL5 distribution and Oracle fully supports RHEL under the Unbreakable Linux Program, Oracle will support GFS as part of RHEL5 for customers buying Unbreakable Linux Support. This only applies to RHEL5 and not to RHEL4, where GFS is distributed for an additional fee.


How do I configure raw devices in order to install Oracle Clusterware 10g on RHEL5 or OEL5?

The raw device OS support scripts like /etc/sysconfig/rawdevices are not shipped on RHEL5 or OEL5, because raw devices are being deprecated on Linux. This means that in order to install Oracle Clusterware 10g you have to manually bind the raw devices to the block devices for the OCR and voting disks so that the 10g installer can proceed without error.

Refer to Document 465001.1 for exact details on how to do the above.

Oracle Clusterware 11g and higher releases don't require this configuration since the installer can detect and use block devices directly.


Is Server Side Load Balancing supported/recommended/proven technology in the Oracle E-Business Suite?

Yes, customers are using it successfully today. It is recommended to set up both client and server side load balancing. Connections coming from the 8.0.6 home (Forms and CCM) are directed to a RAC instance based on the sequence in which it is listed in the TNS entry description list and may not get load balanced optimally. For Oracle RAC 10.2 or higher, do NOT set PREFER_LEAST_LOADED_NODE = OFF in your listener.ora.
Please set the CLB_GOAL on the service instead.


How do I change my Veritas SF RAC installation to use UDP instead of LLT?

Using UDP with Veritas Clusterware and Oracle RAC 10g seems to require an exception from Veritas, so this may be something you should check with them.

To make it easier for customers to convert their LLT environments to UDP, Oracle has created Patch 6846006 on 10.2.0.3, which contains the libraries that were overwritten by the Veritas installation (i.e. those mentioned above). Converting from specialized protocols to UDP requires a relink after the Oracle libraries have been restored. This needs a complete cluster shutdown and cannot be accomplished in a rolling fashion.

NOTE: Oracle RAC 11g will not support LLT for the interconnect.


How to reorder or rename logical network interface (NIC) names in Linux

The Linux operating system assigns logical network interface (NIC) names based on device discovery during the boot process. This is fairly consistent between reboots; however, a kernel upgrade or driver update can sometimes affect discovery and cause the logical names to change. For example, what used to be eth0 may now be eth1.

Udev rules can be used to persist logical NIC names. For more details on logical NIC names, please refer here.
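As an illustration, a persistent-naming udev rule might look like this (a sketch; the MAC address is a placeholder, and the file location, e.g. /etc/udev/rules.d/70-persistent-net.rules, varies by distribution):

SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:16:3e:aa:bb:cc", NAME="eth0"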


How does UDP over Infiniband compare to UDP over Gigabit Ethernet when used for the RAC interconnect?

Infiniband in general provides lower latency and higher bandwidth than Ethernet and hence is commonly assumed to provide better performance. On the other hand, Infiniband infrastructures are generally more expensive than traditional Ethernet infrastructures, which may need to be considered depending on the use case.

Using Infiniband, customers can choose between two different protocols for the inter-instance communication of RAC-enabled database instances, while Oracle Clusterware will use UDP (starting with 11g Rel. 2) over IPoIB (IP over Infiniband). By default, inter-instance communication of RAC-enabled database instances will use the same approach. Alternatively, the RDS (Reliable Datagram Sockets) protocol can be used with Infiniband, which must then be enabled explicitly by linking RDS protocol support into the database home.

Oracle Exadata Database Machines and other Engineered Systems use RDS for inter-instance communication by default. For generic systems, RDS support is provided on certain hardware and software configurations. More details can be found on the Oracle Technology sites for Linux, Unix, and Windows.


Is the hangcheck timer still needed with Oracle RAC 10g and 11gR1?

YES! The hangcheck-timer is required for 10g and 11gR1 (11.1.*). It is no longer needed with Oracle Clusterware 11gR2 and higher releases.

The hangcheck-timer module monitors the Linux kernel for extended operating system hangs that could affect the reliability of a RAC node (I/O fencing) and cause database corruption. To verify that the hangcheck-timer module is running on every node, see the sketch below.
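A minimal sketch of the check and load commands (parameter values are illustrative; use the values from Document 726833.1 for your release):

$ /sbin/lsmod | grep hangcheck
# /sbin/modprobe hangcheck-timer hangcheck_tick=30 hangcheck_margin=180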

To ensure the module is loaded every time the system reboots, verify that the local system startup file (/etc/rc.d/rc.local) contains the command above.

For additional information please review the Oracle RAC Install and Configuration Guide (5-41) and Document 726833.1.


Can I have different servers in my Oracle RAC? Can they be from different vendors? Can they be of different sizes?

Oracle Real Application Clusters (RAC) requires all the nodes in a cluster to run the same operating system binary (i.e. all nodes must be Windows 2008, or all nodes must be Oracle Linux 6). All nodes must also be the same architecture (i.e. all nodes must be 32-bit, or all nodes must be 64-bit, or all nodes must be HP-UX PA-RISC, since you cannot mix PA-RISC with Itanium).

Oracle RAC does support a cluster with nodes that have different hardware configurations. An example is a cluster with 3 nodes with 4 CPUs and another node with 6 CPUs. This can easily occur when adding a new node after the cluster has been in production for a while. For this type of configuration, customers must consider some additional features to get the optimal cluster performance. The servers used in the cluster can be from different vendors; this is fully supported as long as they run the same binaries. Since many customers implement Oracle RAC for high availability, you must make sure that your hardware vendor will support the configuration. If you have a failure, will you get support for the hardware configuration?

The installation of Oracle Clusterware expects the network interfaces to have the same name on all nodes in the cluster (e.g. eth0). If you are using different hardware, you may need to work with your operating system vendor to make sure the network interface names are the same on all nodes. Customers implementing uneven cluster configurations need to consider how they will balance the workload across the cluster. Some customers have chosen to manually assign different workloads to different nodes. This can be done using database services; however, it is often difficult to predict workloads, and the system cannot dynamically react to changes in workload. Changes to workload require the DBA to modify the service. You will also need to consider how you will survive failures in the cluster. Will the service levels be maintained if the largest node in the cluster fails? Especially in a small cluster, the impact of losing a node could affect the ability to continue processing the application workload.

The impact of differently sized nodes depends on how large the difference is. If there is a large difference between the nodes in terms of memory and CPU capacity, then the "bigger" nodes will obviously attract more load, and in the case of failure the "smaller" node(s) can become overwhelmed. In such a case, static routing of workload via services (e.g. batch and certain other services, which can be suspended/stopped if the large node fails and the cluster has significantly reduced capacity) may be advisable. The general recommendation is that the nodes should be sized in such a way that the aggregated peak load of the large node(s) can be absorbed by the smaller node(s), i.e. the smaller node(s) should have sufficient capacity to run the essential services alone. Another option is to add another small node to the cluster on demand in case the large one fails.

It should also be noted, especially if there is a large difference between the sizes of the nodes, that the small nodes can slow down the larger node. This could be critical if the smaller node is very busy and must serve data to the large node.

To help balance workload across a cluster, Oracle RAC 10g Release 2 and above provides the Load Balancing Advisory (LBA). The load balancing advisory runs in an Oracle RAC database and monitors the work executed by the service on all instances where the service is active in the cluster. The LBA provides recommendations to its subscribed clients about the state of the service and where the client should direct connection requests. Setting the GOAL on the service activates the load balancing advisory. Clients that can utilize the load balancing advisory are the Oracle JDBC Implicit Connection Cache, Oracle Universal Connection Pool for Java, Oracle Call Interface Session Pool, ODP.NET Connection Pool, and Oracle Net Services Connection Manager. The Oracle Listener also uses the Load Balancing Advisory if the CLB_GOAL parameter is set to SHORT (the recommended best practice when using an integrated Oracle client mentioned here). If CLB_GOAL is set to LONG (the default), the Listener will load balance the number of sessions for the service across the instances where the service is available. See the Oracle Real Application Clusters Administration and Deployment Guide for details on implementing services and the various parameter settings.


How many nodes can one have in an HP-UX/Solaris/AIX/Windows/Linux cluster?

Technically and since Oracle RAC 10g Release 2, 100 nodes are supported in one cluster. This includes running 100 database instances belonging to the same (production) database on this cluster, using the Oracle Database Enterprise Edition (EE) with the Oracle RAC option and Oracle Clusterware only (no third party / vendor cluster solution underneath).

Note that using the Oracle Database Standard Edition (SE), which includes the Oracle RAC functionality, further restrictions regarding the number of nodes per cluster apply. Also note that one cannot use a third party or vendor cluster for an Oracle Database Standard Edition based Oracle RAC cluster. For more information see the licensing information.

When using a third party / vendor cluster software the following limits apply (subject to change without notice):
Solaris Cluster: 8
HP-UX Service Guard: 16
HP Tru64: 8
IBM AIX:
* 8 nodes for Physical Shared (CLVM) SSA disk
* 16 nodes for Physical Shared (CLVM) non-SSA disk
* 128 nodes for Virtual Shared Disk (VSD)
* 128 nodes for GPFS
* Subject to storage subsystem limitations
Veritas: 8-16 nodes (check with Veritas)

Node limitations should always be checked with the cluster software vendor.


Are 3rd party cluster solutions supported on Linux?

For certified third party cluster solutions, please refer to the certification section of Oracle Support (My Oracle Support).

If a third party cluster solution is certified with Oracle Real Application Clusters (RAC) or Oracle Clusterware, it will be listed as certified. Note that Oracle RAC One Node certification follows Oracle RAC certification in principle.

Note also that no third party cluster solution is certified under Oracle RAC used with the Oracle Standard Edition, regardless of the operating system used. This is a licensing restriction and can be found in the Oracle Licensing guide.


How many nodes are supported or can be used in an Oracle RAC Database?

Technically and *since* Oracle RAC 10g Release 2, 100 nodes are supported in one cluster. This includes running 100 database instances belonging to the same (production) database on this cluster, using the Oracle Database Enterprise Edition (EE) with the Oracle RAC option and Oracle Clusterware only (no third party / vendor cluster solution underneath).

In previous releases, the DBCA (as a result of further MAXINSTANCES-parameter related restrictions) would only allow creating 63 instances per database. These restrictions have been lifted with Oracle 11g Release 1 and later versions, in favor of supporting 100 nodes as described. For completeness: With Oracle RAC 10g Release 1 the maximum was 63. In Oracle RAC 9i the maximum is platform specific due to the different cluster software support by different vendors.

Note that using the Oracle Database Standard Edition (SE), which includes the Oracle RAC functionality, further restrictions regarding the number of nodes per cluster apply. For more information, see: Special-Use Licensing


What are my options for setting the Load Balancing Advisory GOAL on a Service?

The load balancing advisory is enabled by setting the GOAL on your service, either through the PL/SQL DBMS_SERVICE package or the EM DBControl Clustered Database Services page. There are 3 options for GOAL:
NONE - Default setting; turns the advisory off.
THROUGHPUT - Work requests are directed based on throughput. This should be used when the work in a service completes at homogeneous rates. An example is a trading system where work requests are of similar length.
SERVICE_TIME - Work requests are directed based on response time. This should be used when the work in a service completes at various rates. An example is an internet shopping system where work requests are of various lengths.
Note: If using GOAL, you should set CLB_GOAL=SHORT
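For example, setting both goals on a service via the DBMS_SERVICE package might look like this (a sketch; the service name 'oltp' is hypothetical, and on Oracle RAC srvctl is generally the preferred interface for managing services):

$ sqlplus / as sysdba <<'EOF'
BEGIN
  DBMS_SERVICE.MODIFY_SERVICE(
    service_name => 'oltp',
    goal         => DBMS_SERVICE.GOAL_SERVICE_TIME,
    clb_goal     => DBMS_SERVICE.CLB_GOAL_SHORT);
END;
/
EOF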


Is Oracle Database on VMware supported? Is Oracle RAC on VMware supported?

Oracle Database support on VMware is outlined in My Oracle Support Document 249212.1. Effectively, for most customers, this means they are not willing to run production Oracle databases on VMware. Regarding Oracle RAC: the explicit mention not to run RAC on VMware was removed in 11.2.0.2 (November 2010).


What is 'cvuqdisk' rpm? Why should I install this rpm?

CVU requires root privileges to gather information about the SCSI disks during discovery. A small binary uses the setuid mechanism to query disk information as root. Note that this process is purely read-only, with no adverse impact on the system. To keep it secure, the binary is packaged in the cvuqdisk rpm and requires root privilege to install on a machine. If this package is installed on all the nodes, CVU will be able to perform discovery and shared storage accessibility checks for SCSI disks. Otherwise, it complains about the missing package 'cvuqdisk'. Note that this package should be installed only on the RedHat Linux 3.0 distribution. Discovery of SCSI disks on RedHat Linux 2.1 is not supported.


What is the Load Balancing Advisory?

To assist in balancing application workload across designated resources, Oracle Database 10g Release 2 provides the Load Balancing Advisory. This advisory monitors the current workload activity across the cluster and, for each instance where a service is active, provides a percentage value of how much of the total workload should be sent to that instance, as well as a service quality flag. The feedback is provided as an entry in the Automatic Workload Repository, and a FAN event is published. The easiest way for an application to take advantage of the load balancing advisory is to enable Runtime Connection Load Balancing with an integrated client.


A customer installed 10g Release 2 on Linux RH4 Update 2, 2.6.9-22.ELsmp #1 SMP x86_64 GNU/Linux, and got the error "Error in invoking target 'all_no_orcl'". The customer ignored the error, the install succeeded without any other errors, and Oracle apparently worked fine. What should they do?

Because of compatibility with their storage array (EMC DMX with PowerPath 4.5) they must use Update 2. The Oracle install guide states that RH4 64-bit Update 1 "or higher" should be used for 10g R2.
The binutils patch binutils-2.15.92.0.2-13.0.0.0.2.x86_64.rpm is needed to relink without error. Red Hat is aware of the bug. Customers should use the latest update (or at least Update 3) to fix it.


Are Oracle Applications certified with RAC?

For Siebel and PeopleSoft, see http://realworld.us.oracle.com/isv/siebel.htm. Oracle9i RAC (9.2) and Oracle RAC 10g (10.1) are certified with the Oracle E-Business Suite.


Do I have to type the nodelist every time for the CVU commands? Is there any shortcut?

You do not have to type the nodelist every time for the CVU commands. Typing the nodelist for a large cluster is painful and error prone. Here are a few shortcuts.

To provide all the nodes of the cluster, type '-n all'. Cluvfy will attempt to get the nodelist in the following order:
1. If vendor clusterware is available, it will pick all the configured nodes from the vendor clusterware using the lsnodes utility.
2. If CRS is installed, it will pick all the configured nodes from Oracle Clusterware using the olsnodes utility.
3. If none of the above, it will look for the CV_NODE_ALL environment variable. If this variable is not defined, it will complain.

To provide a partial list (some of the nodes of the cluster), you can set an environment variable and use it in the CVU command. For example:

setenv MYNODES node1,node3,node5
cluvfy comp nodecon -n $MYNODES


How do I check user accounts and administrative permissions related issues?

Use the admprv component verification command. Refer to the usage text for detailed instructions and the types of supported operations. To check whether the privileges are sufficient for user equivalence, use the '-o user_equiv' argument. Similarly, '-o crs_inst' will verify whether the user has the correct permissions for installing CRS. '-o db_inst' will check for permissions required for installing RAC, and '-o db_config' will check for permissions required for creating a RAC database or modifying a RAC database configuration.
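For example (a sketch; node names are hypothetical):

$ cluvfy comp admprv -n all -o user_equiv -verbose
$ cluvfy comp admprv -n node1,node2 -o db_inst -verbose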


How to configure bonding on Suse SLES9.


How to configure bonding on Suse SLES8.

Please see Document 291958.1


How to configure concurrent manager in a RAC environment?

Large clients commonly put the concurrent manager on a separate server (in the middle tier) to reduce the load on the database server. The concurrent manager programs can be tied to a specific middle tier (e.g., you can have CMs running on more than one middle tier box). It is advisable to use specialized CMs. CM middle tiers are set up to point to the appropriate database instance based on the product module being used.


What is the optimal migration path to be used while migrating the E-Business suite to Oracle RAC?

Following is the recommended and most optimal path to migrate your E-Business Suite to an Oracle RAC environment:

1. Migrate the existing application to new hardware (if applicable).

2. Use a clustered file system (ASM recommended) for all database files, or migrate all database files to raw devices. (Use dd for Unix or ocopy for NT.)

3. Install/upgrade to the latest available E-Business Suite.

4. Ensure the database version is supported with Oracle RAC.

5. In step 4, install the Oracle RAC option and use the Installer to perform the install for all the nodes.

6. Clone the Oracle Application code tree.

Reference Documents:
Oracle E-Business Suite Release 11i with 9i RAC: Installation and Configuration: <>
E-Business Suite 11i on RAC: Configuring Database Load Balancing & Failover: <>
Oracle E-Business Suite 11i and Database - FAQ: Document 285267.1


Can I use TAF with e-Business in a RAC environment?

TAF itself does not work with the E-Business Suite due to Forms/TAF limitations, but you can configure the TNS failover clause. On instance failure, when the user logs back into the system, their session will be directed to a surviving instance and the user will be taken to the Navigator tab. Their committed work will be available; any uncommitted work must be restarted.

We also recommend configuring the Forms error URL to identify a fallback middle tier server for Forms processes, if no router is available to accomplish switching across servers.


Can I use Automatic Undo Management with Oracle Applications?

Yes. In a RAC environment we highly recommend it.


Which E-Business Suite version is preferable?

Versions 11.5.5 onwards are certified with Oracle9i and hence with Oracle9i RAC. However, we recommend the latest available version.


Should functional partitioning be used with Oracle Applications?

We do not recommend functional partitioning unless throughput on your server architecture demands it. Cache fusion has been optimized to scale well with non-partitioned workload.

If your processing requirements are extreme and your testing proves you must partition your workload in order to reduce internode communications, you can use Profile Options to designate that sessions for certain application Responsibilities are created on a specific middle tier server. That middle tier server would then be configured to connect to a specific database instance.

To determine the correct partitioning for your installation you would need to consider several factors like number of concurrent users, batch users, modules used, workload characteristics etc.


Is the Oracle E-Business Suite (Oracle Applications) certified against RAC?

Yes. (There is no separate certification required for RAC.)


I am seeing the wait events 'ges remote message', 'gcs remote message', and/or 'gcs for action'. What should I do about these?

These are idle wait events and can be safely ignored. The 'ges remote message' event might show up in a 9.0.1 Statspack report as one of the top wait events. To stop this event from showing up, you can add it to the PERFSTAT.STATS$IDLE_EVENT table so that it is not listed in Statspack reports.
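A sketch of adding the event (assuming the standard PERFSTAT schema; verify the table definition in your Statspack installation first):

$ sqlplus perfstat <<'EOF'
INSERT INTO stats$idle_event (event) VALUES ('ges remote message');
COMMIT;
EOF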


Do I need to relink the Oracle Clusterware / Grid Infrastructure home after an OS upgrade?

Using Oracle Clusterware 10g and 11.1, Oracle Clusterware binaries cannot be relinked. However, the client shared libraries, which are part of the home, can be relinked; in most cases there should not be a need to relink them. See Document 743649.1 for more information.

Using Oracle Grid Infrastructure 11.2 and higher, there are some executables in the Grid home that can and should be relinked after an OS upgrade. The following steps describe how to relink an Oracle Grid Infrastructure for Clusters home:

As root:

# cd Grid_home/crs/install
# perl rootcrs.pl -unlock

As the grid infrastructure for a cluster owner:

$ export ORACLE_HOME=Grid_home
$ Grid_home/bin/relink

As root again:

# cd Grid_home/crs/install
# perl rootcrs.pl -patch

Note: If using Oracle Grid Infrastructure for Standalone Environments (Oracle Restart), see the Oracle Documentation for more information: https://docs.oracle.com/database/121/LADBI/oraclerestart.htm#LADBI999

Note: It is recommended to use the Perl version that comes along with your Grid Infrastructure install, i.e., Grid_home/perl/bin/perl rootcrs.pl -patch.


How can I configure database instances to run on 12.1.0.1 Oracle Flex Cluster Leaf nodes?

Oracle 12c introduces a new cluster topology called Flex Cluster where servers in the cluster can assume the specific roles - HUB and LEAF. In the 12.1.0.1 release, the LEAF nodes can only be configured for non-database applications. Database instances are not supported to run on the 12.1.0.1 Flex Cluster LEAF nodes. Please see the 12c Flex Cluster FAQ statement under Oracle Clusterware.


When configuring the NIC cards and switch for a GigE Interconnect should it be set to FULL or Half duplex in Oracle RAC?

You must use Full Duplex for all network communication. Half Duplex means you can only either send or receive at any one time.

Note that modern operating systems default to Full Duplex unless there is a cable problem or a misconfiguration in the switch.


How can a NAS storage vendor certify their storage solution for Oracle RAC ?

Please refer to this link on OTN for details on Oracle RAC Technologies Matrix (storage being part of it).


Is Infiniband supported for the Oracle RAC interconnect?

Yes, it is supported.


What kind of HW components do you recommend for the interconnect?

The general recommendation for the interconnect is to provide the highest bandwidth interconnect, together with the lowest latency protocol that is available for a given platform.

You should use redundant 1 Gigabit (or faster) Ethernet interfaces and load balancing across them; we recommend using HAIPs (the Redundant Interconnect Usage feature) for this. Do remember that if you use this feature, you must use different subnets for the interconnect interfaces.


Where can I find a list of supported solutions to ensure NIC availability / redundancy (for the interconnect) per platform?

IBM AIX - available solutions:

• EtherChannel (OS based)
• HACMP based network failover solution

HP-UX - available solutions:

• APA - Auto Port Aggregation (OS based)
• MC/ServiceGuard based network failover solution
• Combination of both solutions

More information: Document 296874.1 and the Auto Port Aggregation (APA) Support Guide

Sun Solaris - available solutions:

• Sun Trunking (OS based)
• Sun IPMP (OS based)
• Sun Cluster based network failover solution (clprivnet)

More information for Oracle RAC 10g and Oracle RAC 11g Release 1:

• Configure IPMP for the Oracle VIP and IPMP introduction
• How to Setup IPMP as Cluster Interconnect

More information for Oracle RAC 11g Release 2:

• My Oracle Support Document 1069584.1 - Solaris IPMP and Trunking for the cluster interconnect in Oracle Grid Infrastructure

Linux - available solutions:

• Bonding

More information: Document 298891.1

Windows - available solutions:

• Teaming

On Windows, teaming solutions to ensure NIC availability are usually part of the network card driver. Thus, they depend on the network card used. Please contact the respective hardware vendor for more information.

OS independent solution:

• Redundant Interconnect Usage enables load-balancing and high availability across multiple (up to four) private networks (also known as interconnects).
• Oracle RAC 11g Release 2, Patch Set One (11.2.0.2) enables Redundant Interconnect Usage as a feature for all platforms, except Windows.
• On systems that use Solaris Cluster, Redundant Interconnect Usage will use clprivnet.


What is Cache Fusion and how does this affect applications?

Cache Fusion is a parallel database architecture for exploiting clustered computers to achieve scalability of all types of applications. It is a shared cache architecture that uses the high speed, low latency interconnects available today on clustered systems to maintain database cache coherency. Database blocks are shipped across the interconnect to the node where access to the data is needed. This is accomplished transparently to the application and users of the system. As Cache Fusion uses at most a three-point protocol, it scales easily to clusters with a large number of nodes.

For further information please refer to

Cache Fusion and the Global Cache Service
Part Number A96597-01
http://docs.oracle.com/cd/B10501_01/rac.920/a96597/pslkgdtl.htm

Additional Information can be found at:

Document 139436.1 Understanding 9i Real Application Clusters Cache Fusion


Can I run more than one clustered database on a single Oracle RAC cluster?

You can run multiple databases in an Oracle RAC cluster: either one instance per node (with different databases having different subsets of nodes in the cluster), or multiple instances per node (all databases running across all nodes), or some combination in between. Running multiple instances per node does cause memory and resource fragmentation, but this is no different from running multiple instances on a single node in a single instance environment, which is quite common. It does provide the flexibility of being able to share CPU on the node, but the Oracle Resource Manager will not currently limit resources between multiple instances on one node; you will need to use an OS level resource manager to do this.


What are the restrictions on the SID with an Oracle RAC database? Is it limited to 5 characters?

The SID prefix in 10g Release 1 and prior versions was restricted to five characters by the install/config tools, so that an ORACLE_SID of up to 8 characters (5 + 3) could be supported in an Oracle RAC environment. The SID prefix limit is relaxed to eight characters in 10g Release 2; see Bug 4024251 for more information.
With Oracle RAC 11g Release 2, SIDs of a policy-managed database are dynamically allocated by the system when the instance starts. This supports a dynamic grid infrastructure, which allows the instance to start on any server in the cluster.


Is it supported to install Oracle Clusterware and Oracle RAC as different users?

Yes, Oracle Clusterware and Oracle RAC can be installed as different users. The Oracle Clusterware user and the Oracle RAC user must both have OINSTALL as their primary group. Every Database home can have a different OSDBA group with a different username.


Is it difficult to transition (migrate) from Single Instance to Oracle RAC?

If the cluster and the cluster software are not present, these components must be installed and configured. The Oracle RAC option must be added using the Oracle Universal Installer, which requires that the existing DB instance be shut down. There are no changes necessary to the user data within the database. However, a shortage of freelists and freelist groups can cause contention on the header blocks of tables and indexes as multiple instances vie for the same block. This may cause a performance problem and require data partitioning. However, the need for these changes should be rare.

Recommendation: apply automatic segment space management to perform these changes automatically. The free space management will replace freelists and freelist groups and is better. The database requires one redo thread and one undo tablespace for each instance, which are easily added with SQL commands or with Enterprise Manager tools. NOTE: With Oracle RAC 11g Release 2, you do not need to pre-create redo threads or undo tablespaces if you are using Oracle Managed Files (e.g., with ASM).

Datafiles will need to be moved to shared storage, such as a clustered file system (CFS), so that all nodes can access them; Oracle recommends the use of Automatic Storage Management (ASM). Also, the MAXINSTANCES parameter in the control file must be greater than or equal to the number of instances you will start in the cluster.

For more detailed information, please see Migrating from single-instance to RAC in the Oracle Documentation.

With Oracle Database 10g Release 2, the $ORACLE_HOME/bin/rconfig tool can be used to convert a single instance database to Oracle RAC. This tool takes an XML input file and converts the single instance database whose information is provided in the XML. You can run this tool in "verify only" mode prior to performing the actual conversion. This is documented in the Oracle RAC Admin and Deployment Guide, and a sample XML can be found at $ORACLE_HOME/assistants/rconfig/sampleXMLs/ConvertToRAC.xml. This tool only supports databases using a clustered file system or ASM; you cannot use it with raw devices. Grid Control 10g Release 2 provides an easy-to-use wizard to perform this function.
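
For illustration, a typical invocation might look like the following sketch (it assumes the sample XML layout, where a verify="ONLY" attribute on the Convert element enables "verify only" mode):

$ cp $ORACLE_HOME/assistants/rconfig/sampleXMLs/ConvertToRAC.xml /tmp/convert.xml
$ vi /tmp/convert.xml    (fill in database, node and storage details; set verify="ONLY" for a test run)
$ $ORACLE_HOME/bin/rconfig /tmp/convert.xml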

Oracle Enterprise Manager includes workflows to assist with migrations (e.g., migrating to ASM, creating a standby, converting a standby to RAC). The migration is automated in Enterprise Manager Grid Control 10.2.0.5.


Is rcp and/or rsh required for normal Oracle RAC operation ?

rcp"" and ""rsh"" are not required for normal Oracle RAC operation. However in older versions ""rsh"" and ""rcp"" were used by the installer and therefore should to be enabled for Oracle RAC and patchset installation. In later releases, ssh is used by default for these operations.


Does Oracle Clusterware or Oracle Real Application Clusters support heterogeneous platforms?

Oracle Clusterware and Oracle Real Application Clusters do not support heterogeneous platforms in the same cluster. We do support machines of different speeds and sizes in the same cluster. All nodes must run the same operating system (i.e., they must be binary compatible). In an active data-sharing environment, like Oracle RAC, we do not support machines having different chip architectures.


What are the dependencies between OCFS and ASM in Oracle Database 10g ?

In an Oracle RAC 10g environment, there is no dependency between Automatic Storage Management (ASM) and Oracle Cluster File System (OCFS).

OCFS is not required if you are using Automatic Storage Management (ASM) for database files. You can use OCFS on Windows (or OCFS2 on Linux) for files that ASM does not handle - binaries (shared Oracle home), trace files, etc. Alternatively, you could place these files on local file systems, even though that is not as convenient given the multiple locations.

Oracle recommends using ASM/ACFS for your database files.


Why does the NOAC attribute need to be set on NFS mounted RAC Binaries?

The noac attribute is required because the installer determines sharedness by creating a file and checking for that file's existence on the remote node. If the noac attribute is not enabled, this test will incorrectly fail, which confuses the installer and OPatch. Among other minor issues, an spfile kept in the default $ORACLE_HOME/dbs location will also be affected.


My customer has an XA Application with a Oracle RAC Database, can I do Load Balancing across the Oracle RAC instances?

No, not in the traditional Oracle Net Services load balancing. We have written a document that explains the best practices for 9i, 10g Release 1 and 10g Release 2. With the introduction of Services in Oracle Database 10g, life gets easier. To understand services, read the Oracle RAC Admin and Deployment Guide.
With Oracle RAC 11g, Oracle provides transparent support for XA global transactions in an Oracle RAC environment which supports load balancing with Oracle Net Services across Oracle RAC instances.


What would you recommend to customer, Oracle Clusterware or Vendor Clusterware (I.E. HP Service Guard, HACMP, Sun Cluster, Veritas etc.) with Oracle Real Application Clusters?

You will be installing and using Oracle Clusterware whether or not you use the Vendor Clusterware. Oracle Clusterware provides a complete clustering solution and is required for Oracle RAC or Automatic Storage Management (including ACFS).
Vendor clusterware is only required with Oracle 9i RAC. Check the certification matrix in My Oracle Support for details of certified vendor clusterware.


Do we have to have Oracle Database on all nodes?

Each node of a cluster that is being used for a clustered database will typically have the database and Oracle RAC software loaded on it, but not actual datafiles (these need to be available via shared disk). For example, if you wish to run Oracle RAC on 2 nodes of a 4-node cluster, you would need to install the clusterware on all nodes, Oracle RAC on 2 nodes and it would only need to be licensed on the two nodes running the Oracle RAC database. Note that using a clustered file system, or NAS storage can provide a configuration that does not necessarily require the Oracle binaries to be installed on all nodes.
With Oracle RAC 11g Release 2, if you are using policy managed databases, then you should have the Oracle RAC binaries accessible on all nodes in the cluster.


What software is necessary for Oracle RAC? Does it have a separate installation CD to order?

Oracle Real Application Clusters is an option of Oracle Database and therefore part of the Oracle Database CD. With Oracle 9i, Oracle 9i RAC is part of Oracle9i Enterprise Edition. If you install 9i EE onto a cluster, and the Oracle Universal Installer (OUI) recognizes the cluster, you will be provided the option of installing RAC. In versions prior to 10g, most UNIX platforms require a vendor supplied clusterware. For Intel platforms (Linux and Windows), Oracle provides the Clusterware software within the Oracle9i Enterprise Edition release.

With Oracle Database 10g and higher releases, Oracle RAC is an option of EE and available as part of SE. Oracle provides Oracle Clusterware on its own CD for all platforms included in the database CD pack.

Please check the certification matrix (Note 184875.1) or with the appropriate platform vendor for more information.

With Oracle Database 11g Release 2, Oracle Clusterware and Automatic Storage Management are installed as a single set of binaries called the Grid Infrastructure. The media for the grid infrastructure is on a separate CD or under the grid directory. For standalone servers, Automatic Storage Management and Oracle Restart are installed as the grid infrastructure for a standalone server which is installed from the same media.


I have changed my spfile with alter system set parameter_name =.... scope=spfile. The spfile is on ASM storage and the database will not start.

How to recover:

In $ORACLE_HOME/dbs:

. oraenv <instance_name>

sqlplus "/ as sysdba"

startup nomount

create pfile='recoversp' from spfile
/
shutdown immediate
quit

Now edit the newly created pfile to change the parameter to something sensible.

Then:

sqlplus "/ as sysdba"

startup pfile='recoversp' (or whatever you called it in step one)

create spfile='+<diskgroup>/<dbname>/spfile<dbname>.ora' from pfile='recoversp'
/
N.B. The name of the spfile is in your original init<instance_name>.ora, so adjust to suit.

shutdown immediate
startup
quit


How to use VLANs for Oracle RAC and the Oracle Clusterware Interconnect?

It is Oracle's standing requirement to separate the various types of communication in an Oracle RAC cluster. This requirement addresses the following separation of communication:

• Each node in an Oracle RAC cluster must have at least one public network.

• Each node in an Oracle RAC cluster must have at least one private network, also referred to as "cluster interconnect".

• Each node in an Oracle RAC cluster must have at least one additional network interface, if the shared storage is accessed using a network based connection.

Cluster interconnect network separation can be satisfied either by using standalone, dedicated switches, which provide the highest degree of network isolation, or by Virtual Local Area Networks (VLANs) defined on the Ethernet switch, which provide broadcast domain isolation between IP networks. VLANs are fully supported for Oracle Clusterware interconnect deployments. Partitioning the Ethernet switch with VLANs allows for:

• Sharing the same switch for private and public communication.

• Sharing the same switch for the private communication of more than one cluster.

• Sharing the same switch for private communication and shared storage access.

The following best practices should be followed:

The Cluster Interconnect VLAN must be on a non-routed IP subnet.
All Cluster Interconnect networks must be configured with non-routed IPs. The server-server communication should be single hop through the switch via the interconnect VLAN. There is no VLAN-VLAN communication.

Oracle recommends maintaining a 1:1 mapping of subnet to VLAN.
The most common VLAN deployments maintain a 1:1 mapping of subnet to VLAN. It is strongly recommended to avoid multi-subnet mapping to a single VLAN. Best practice recommends a single access VLAN port configured on the switch for the cluster interconnect VLAN. The server side network interface should have access to a single VLAN.

The shared switch should be configured to mitigate the cost of Spanning Tree.
The switch vendor's best practices should be followed to either disable or limit the cost of Spanning Tree convergence for the cluster interconnect VLAN.

Sharing the same switch for private communication and shared storage access.
This configuration is supported if the underlying network infrastructure supports Data Center Bridging (DCB) and zero packet loss, and can satisfy the latency and throughput requirements defined for the application. This may require imposing a Quality of Service (QoS) policy on the shared switch to prioritize network based communication to the storage. Fibre Channel over Ethernet (FCoE) converged networks are supported for certified configurations.

For more details, review the technical brief "Oracle RAC and Oracle Clusterware Interconnect VLAN Deployment Considerations (PDF)".


What storage is supported with Standard Edition Oracle RAC?

As per the licensing documentation, you must use ASM for all database files with SE Oracle RAC. There is no support for CFS or NFS.
From Oracle Database 10g Release 2 Licensing Doc:
Oracle Standard Edition and Oracle Real Application Clusters (RAC) When used with Oracle Real Application Clusters in a clustered server environment, Oracle Database Standard Edition requires the use of Oracle Clusterware. Third-party clusterware management solutions are not supported. In addition, Automatic Storage Management (ASM) must be used to manage all database-related files, including datafiles, online logs, archive logs, control file, spfiles, and the flash recovery area. Third-party volume managers and file systems are not supported for this purpose.


Can I use Oracle RAC in a distributed transaction processing environment?

YES. Best practice is that all tightly coupled branches of a distributed transaction running against an Oracle RAC database run on the same instance. Between transactions and between services, transactions can be load balanced across all of the database instances.
Prior to Oracle RAC 11g, you must use services to manage DTP environments. By defining the DTP property of a service, the service is guaranteed to run on one instance at a time in an Oracle RAC database. All global distributed transactions performed through the DTP service are ensured to have their tightly-coupled branches running on a single Oracle RAC instance.
Oracle RAC 11g provides transparent support for XA global transactions in an Oracle RAC environment and you do not need to use DTP services.
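
For pre-11g environments, one sketch of declaring a DTP service (database, service, and node names here are illustrative, and the -x flag assumes the 10.2/11g srvctl syntax for the DTP property):

$ srvctl add service -d racdb -s xa_svc -r racnode1 -a racnode2 -x TRUE
$ srvctl start service -d racdb -s xa_svc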


Does changing uid or gid of the Oracle User affect Oracle Clusterware?

Yes, changing the UID/GID does affect Oracle Clusterware and should not be done. There are a lot of files in the Oracle Clusterware home, and outside of it, that are chown'ed/chgrp'ed to the appropriate users and groups for security and appropriate access. The file system records the uid and gid (not the user or group name), so if you change them, the files end up owned by the wrong user or group.


Can I use iSCSI storage with my Oracle RAC cluster?

For iSCSI, Oracle has made the statement that, as a block protocol, this technology does not require validation for single instance databases. There are many early adopter customers of iSCSI running Oracle9i and Oracle Database 10g. As for Oracle RAC, Oracle has chosen to validate the iSCSI technology (not each vendor's targets) for the 10g platforms - this has been completed for Linux and Windows. For Windows we have tested up to 4 nodes - any Windows iSCSI products that are supported by the host and storage device are supported by Oracle. We do not support NAS devices for Windows; however, some NAS devices (e.g., NetApp) can also present themselves as iSCSI devices. If this is the case, then a customer can use this iSCSI device with Windows as long as the iSCSI device vendor supports Windows as an initiator OS. No vendor-specific information will be posted on Certify.


Can we designate the place of archive logs on both ASM disk and regular file system, when we use SE RAC?

Yes - customers may want to create a standby database for their SE RAC database, so placing the archive logs additionally outside ASM is OK.


Is it a good idea to add anti-virus software to my RAC cluster?

For customers who choose to run anti-virus (AV) software on their database servers, be aware that the nature of AV software is that disk I/O bandwidth is reduced slightly, as most AV software checks disk writes/reads. Also, as the AV software runs, it will use CPU cycles that would normally be consumed by other server processes (e.g., your database instance). As such, databases will perform faster when not using AV software. Since some AV software is known to lock files while it scans, it is a good idea to exclude the Oracle datafiles, controlfiles, and logfiles from regular AV scans.


I want to use rconfig to convert a single instance to Oracle RAC but I am using raw devices in Oracle RAC. Does rconfig support RAW ?

No. rconfig supports ASM and shared file system only.


What combinations of Oracle Clusterware, Oracle RAC and ASM versions can I use?

See Document 337737.1 for a detailed support matrix. Basically the Clusterware version must be equal to or higher than the ASM or RAC Database version.

Note: With Oracle Database 11g Release 2, You must upgrade Oracle Clusterware and ASM to 11g Release 2 at the same time.


I had a 3 node Oracle RAC. One of the nodes had to be completely rebuilt as a result of a problem. As there are no backups, What is the proper procedure to remove the 3rd node from the cluster so it can be added back in?

Follow the documentation for removing a node, but you can skip all the steps in the node-removal doc that need to be run on the node being removed, like steps 4, 6 and 7 (see Chapter 10 of the Oracle RAC Admin and Deployment Guide). Make sure that you remove any database instances that were configured on the failed node with srvctl, and the listener resources also; otherwise rootdeletenode.sh will have trouble removing the nodeapps.

Just running rootdeletenode.sh isn't really enough, because you need to update the installer inventory as well, otherwise you won't be able to add back the node using addNode.sh. And if you don't remove the instances and listeners you'll also have problems adding the node and instance back again.

Can we output the backupset onto regular file system directly (not onto flash recovery area) using RMAN command, when we use SE RAC?

Yes - customers might want to back up their database to offline storage, so this is also supported.


How do I check for network problems on my interconect?

1. Confirm that full duplex is set correctly for all interconnect links on all interfaces on both ends. Do not rely on auto-negotiation.
2. "ifconfig -a" will give you an indication of collisions/errors/overruns and dropped packets.
3. "netstat -s" will give you a listing of receive packet discards, fragmentation and reassembly errors for IP and UDP.
4. Set the UDP buffers correctly (see the example below).
5. Check your cabling.
Note: If you are seeing issues with RAC, RAC uses UDP as the protocol. Oracle Clusterware uses TCP/IP.
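
As a sketch for step 4 on Linux (the values below are illustrative; always use the settings documented for your Oracle version and platform):

$ sysctl net.core.rmem_default net.core.rmem_max net.core.wmem_default net.core.wmem_max

Typical 11g-era Linux recommendations set, for example, net.core.rmem_max to 4194304 and net.core.wmem_max to 1048576; persist any changes in /etc/sysctl.conf.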

Review Note <> for more details on Network issues


How many NICs do I need to implement Oracle RAC?

At minimum you need two NICs: external (public) and interconnect (private). When storage for Oracle RAC is provided by Ethernet based networks (e.g., NAS/NFS or iSCSI), you will need a third interface for I/O, so a minimum of three. Anything else will cause performance and stability problems under load. From an HA perspective, you want these to be redundant, thus needing a total of six.


Are there any issues for the interconnect when sharing the same switch as the public network by using VLAN to separate the network?

Oracle RAC and Oracle Clusterware deployment best practices recommend that the interconnect be deployed on a stand-alone, physically separate, dedicated switch.

Many customers, however, have consolidated these stand-alone switches into larger managed switches. A consequence of this consolidation is a merging of IP networks on a single shared switch, segmented by VLANs. There are caveats associated with such deployments.

The Oracle RAC cache fusion protocol exercises the IP network more rigorously than non-RAC Oracle databases. The latency and bandwidth requirements as well as availability requirements of the Oracle RAC / Oracle Clusterware interconnect IP network are more in-line with high performance computing.

Deploying the Oracle RAC / Oracle Clusterware interconnect on a shared switch, segmented by a VLAN may expose the interconnect links to congestion and instability in the larger IP network topology.

If deploying the interconnect on a VLAN, there should be a 1:1 mapping of the VLAN to a non-routable subnet, and the interconnect should not span multiple VLANs (tagged) or multiple switches.

Deployment concerns in this environment include Spanning Tree loops when the larger IP network topology changes, Asymmetrical routing that may cause packet flooding, and lack of fine grained monitoring of the VLAN/port.


I could not get the user equivalence check to work on my Solaris 10 server when trying to install 10.2.0.1 Oracle Clusterware. The install ran fine without issue. << Message: Result: User equivalence check failed for user "oracle". >>

Cluvfy and the OUI try to find SSH on Solaris at /usr/local/bin. The workaround is to create a symbolic link in /usr/local/bin pointing to /usr/bin/ssh.
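
For example, as root on each node (the scp link is included on the assumption that scp is checked from the same location):

# mkdir -p /usr/local/bin
# ln -s /usr/bin/ssh /usr/local/bin/ssh
# ln -s /usr/bin/scp /usr/local/bin/scp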

Note: User equivalence is required for installations (i.e., using OUI) and patching. DBCA, NETCA, and DB Control also require user equivalence.


Why does NETCA always create a listener that listens on the public IP and not on the VIP only?

This is for backward compatibility with existing clients: consider a pre-10g to 10g server upgrade. If the upgraded listener listened only on the VIP, then clients that have not been upgraded would no longer be able to reach this listener.


Can my customer use Veritas Agents to manage their Oracle RAC database on Unix with SFRAC installed?

For details on the support of SFRAC and Veritas Agents with RAC 10g, please see Document 397460.1 Oracle's Policy for Supporting Oracle RAC 10g (applies to Oracle RAC 11g too) with Symantec SFRAC on Unix and Document 332257.1 Using Oracle Clusterware with Vendor Clusterware FAQ


The Veritas installation document on page 219 asks for setting LD_LIBRARY_PATH_64. Should I remove this?

Yes. You do not need to set LD_LIBRARY_PATH for Oracle.


Can I run Oracle RAC 10g with Oracle RAC 11g?

Yes. Oracle Clusterware should always run at the highest level. With Oracle Clusterware 11g, you can run both Oracle RAC 10g and Oracle RAC 11g databases. If you are using ASM for storage, you can use either Oracle Database 10g ASM or Oracle Database 11g ASM; however, to get the 11g features, you must be running Oracle Database 11g ASM. It is recommended to use Oracle Database 11g ASM.
Note: When you upgrade to 11g Release 2, you must upgrade both Oracle Clusterware and Automatic Storage Management to 11g Release 2. This will support Oracle Database 10g and Oracle Database 11g (both RAC and single instance).
Yes, you can run Oracle 9i RAC in the cluster as well. 9i RAC requires the clusterware that is certified with Oracle 9i RAC to be running in addition to Oracle Clusterware 11g.


Are Sun Logical Domains (ldoms) supported with RAC?

Sun Logical Domains (ldoms) are supported with Oracle Database (both single instance and RAC). Check certify for the latest version specific information.


What Application Design considerations should I be aware of when moving to Oracle RAC?

The general principles are that fundamentally no different design and coding practices are required for RAC; however, application flaws in execution or design have a higher impact in RAC. Performance and scalability in RAC will be more sensitive to bad plans or bad schema design. Serializing contention makes applications less scalable. If your customer uses standard SQL and schema tuning, it solves more than 80% of performance problems.

Some of the scalability pitfalls they should look for are:
* Serializing contention on a small set of data/index blocks
--> monotonically increasing key
--> frequent updates of small cached tables
--> segment without automatic segment space management (ASSM) or Free List Group (FLG)

* Full table scans
--> Optimization for full scans in 11g can save CPU and latency

* Frequent invalidation and parsing of cursors
--> Requires data dictionary lookups and synchronizations

* Concurrent DDL ( e.g. truncate/drop )

Look for:
* Indexes with right-growing characteristics
--> Use reverse key indexes
--> Eliminate indexes which are not needed

* Frequent updates and reads of “small” tables
--> “small” = fits into a single buffer cache
--> Use sparse blocks ( PCTFREE 99 ) to reduce serialization (see the example after this list)

* SQL which scans large amounts of data
--> Perhaps more efficient when parallelized
--> Direct reads do not need to be globally synchronized ( hence less CPU for global cache )
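
For example, the reverse key index and sparse-block suggestions above might look like this (object names are hypothetical):

-- reverse key index for a right-growing (monotonically increasing) key
CREATE INDEX orders_id_ix ON orders (order_id) REVERSE;

-- sparse blocks for a small, frequently updated table; existing rows
-- must be reorganized (e.g. ALTER TABLE ... MOVE) before this takes effect
ALTER TABLE app_counters PCTFREE 99;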


Should the SCSI-3 reservation bit be set for our Oracle Clusterware only installation?

Oracle Clusterware and Oracle RAC neither require nor use SCSI-3 Persistent Group Reservation (PGR). In a native Oracle RAC stack (no third party or vendor cluster, and no Oracle Solaris Cluster), SCSI-3 PGR is not required by Oracle and should be disabled on the storage (for disks / LUNs used in the stack).

When using a third party or vendor cluster solution such as Symantec Veritas SFRAC, the third party cluster solution may require that SCSI-3 PGR is enabled on the storage, as those solutions use SCSI-3 PGR as part of their I/O fencing procedures. In general, SCSI-3 PGR is enabled at the array level; for example, on the EMC hypervolume level.

Additional information:

• Enabling SCSI-3 PGR on the storage level (for disks / LUNs used in the cluster stack) only enables SCSI-3 PGR capabilities. If set, a cluster or application using this piece of storage may make use of SCSI-3 PGR. Oracle Solaris Cluster and Veritas Cluster by default use SCSI-3 PGR under certain circumstances; Oracle Clusterware does not.
• As Oracle Clusterware does not use SCSI-3 PGR, and if you do not use and do not plan on using any third party software that requires it, it is recommended to disable SCSI-3 PGR for the disks / LUNs used in the Oracle RAC stack. Reason: if PR is enabled on a device, the device exhibits a default behavior regarding PR and expects the client that makes use of PR to issue the right commands as required. However, the default behavior of a device for which PR is enabled depends on the platform, the device driver, and the PR setting, and may not work for Oracle Clusterware and the RAC stack, since Oracle does not issue these commands to operate the PR behavior.
• Example: On AIX one can set the reserve_policy for a disk on the disk driver level to enable PR. AIX supports four values for the reserve_policy (at the time of authoring this entry): "no_reserve", "single_path", "PR_exclusive", and "PR_shared". Oracle recommends using the first one, "no_reserve", to avoid problems. If Oracle issued PR command calls, "PR_shared" could be used. However, if you set "PR_shared" on a device and do not make the right PR command calls on this device, the default might prevent concurrent access to the device, which would be bad for any device used in the RAC stack (ASM disks / Voting / OCR). This may be very particular to AIX and other platforms may be different, but in general this could be an issue.

We are using Transparent Data Encryption (TDE). We create a wallet on node 1 and copy to nodes 2 & 3. Open the wallet and we are able to select encrypted data on all three nodes. Now, we want to REKEY the MASTER KEY. What do we have to do?

After a re-key on node 1, issue 'alter system set wallet close' on all other nodes, copy the wallet with the new master key to all other nodes, then issue 'alter system set wallet open identified by "password"' on all other nodes to load the (obfuscated) master key into each node's SGA.
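
As a sketch of the sequence (wallet-based TDE; the exact WALLET CLOSE clause varies slightly by release, with some releases requiring an IDENTIFIED BY clause):

-- on node 1: generate a new master key
ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "wallet_password";

-- on nodes 2 and 3: close the wallet
ALTER SYSTEM SET WALLET CLOSE;

-- copy the updated wallet files from node 1 to nodes 2 and 3, then on each node:
ALTER SYSTEM SET WALLET OPEN IDENTIFIED BY "wallet_password";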


I have the 11.2 Grid Infrastructure installed and now I want to install an earlier version of Oracle Database (11.1 or 10.2), is this supported ?

Yes, however you need to "pin" the nodes in the cluster before trying to create a database using an earlier version of Oracle Database (i.e., not 11.2). The command to pin a node is: crsctl pin css -n nodename. You should also apply the patch for unpublished Bug 8288940 to make DBCA work in an 11.2 cluster.


I get an error with DBCA from 10.2 or 11.1 after I have installed the 11.2 Grid Infrastructure?

You will need to apply the patch for <Bug 8288940> to your database home in order for it to recognize ASM running from the new grid infrastructure home. Also make sure you have "pinned" the nodes.

crsctl pin css -n nodename


What is Standard Edition Oracle RAC?

As of Oracle Database 10g, a customer who has purchased Standard Edition is allowed to use the Oracle RAC option within the limitations of Standard Edition (SE). For licensing restrictions you should read the Oracle Database License Doc. At a high level, this means that you can have a maximum of 4 sockets in the cluster and you must use ASM for all database files. As of Oracle Database 11g Release 2, ASM includes ACFS (a cluster file system). ASM Cluster File System (ACFS) or a local OS file system must be used to store all non-database files, including the Oracle Home, application and system files, and user files.
NOTE: 3rd party clusterware and clustered file systems(other than ASM) are not supported. This includes OCFS and OCFS2.

Here is the text from the appropriate footnote in the Price List (as of January 2010; please check the price list for any changes):

Oracle Database Standard Edition can only be licensed on servers that have a maximum capacity of 4 sockets. If licensing by Named User Plus, the minimum is 5 Named User Plus licenses. Oracle Database Standard Edition, when used with Oracle Real Application Clusters, may only be licensed on a single cluster of servers supporting up to a total maximum capacity of 4 sockets.

NOTE: This means that the server capacity must meet the restriction even if the sockets are empty, since they count towards capacity.


Are jumbo frames supported for the RAC interconnect?

Yes. For details see Document 341788.1 Cluster Interconnect and Jumbo Frames


What is SCAN?

Single Client Access Name (SCAN) is a single name that allows client connections to connect to any database in an Oracle cluster independently of which node in the cluster the database (or service) is currently running on. The SCAN should be used in all client connection strings and does not change when you add/remove nodes from the cluster. SCAN allows clients to use EZConnect or the thin JDBC URL, as shown below.

sqlplus system/<password>@<scan_name>:<port>/<service_name>

jdbc:oracle:thin:@<scan_name>:<port>/<service_name>
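
For example, with a hypothetical SCAN name sales-scan.example.com, port 1521 and service oltp:

sqlplus system@//sales-scan.example.com:1521/oltp

jdbc:oracle:thin:@//sales-scan.example.com:1521/oltp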

SCAN is defined as a single name resolving to 3 IP addresses in either the cluster's GNS or your corporate DNS.

Refer to the SCAN technical brief on OTN for more details on SCAN.


How do I determine which node in the cluster is the "Master" node?

For the cluster synchronization service (CSS), the master can be found by searching ORACLE_HOME/log/<nodename>/cssd/ocssd.log, where ORACLE_HOME is set to the Grid Infrastructure home.
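
For example (a sketch; the exact message text can vary by version):

$ grep -i "master node" $ORACLE_HOME/log/`hostname`/cssd/ocssd.log | tail -1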

For the master of an enqueue resource with Oracle RAC, you can select from v$ges_resource; there is a master_node column.


Can I have multiple public networks accessing my Oracle RAC?

Yes, you can have multiple networks however with Oracle RAC 10g and Oracle RAC 11g, the cluster can only manage a single public network with a VIP and the database can only load balance across a single network. FAN will only work on the public network with the Oracle VIPs.
Oracle RAC 11g Release 2 supports multiple public networks. You must set the new init.ora parameter LISTENER_NETWORKS so users are load balanced across their network. Services are tied to networks so users connecting with network 1 will use a different service than network 2. Each network will have its own VIP.

For more information refer to the srvctl add reference in the Oracle documentation.


Where do I find Oracle Clusterware binaries and ASM binaries with Oracle Database 11g Release 2?

With Oracle Database 11g Release 2, the binaries for Oracle Clusterware and Automatic Storage Management (ASM) are distributed in a single set of binaries called the Grid Infrastructure. To install Grid Infrastructure, go to the grid directory on your 11g Release 2 media and run the Oracle Universal Installer. Choose the Grid Infrastructure for a Cluster. If you want to install ASM for a single instance of Oracle Database on a standalone server, choose the Grid Infrastructure for a Standalone Server; this installation includes Oracle Restart.


Can I run Oracle 9i RAC and Oracle RAC 10g in the same cluster?

YES. However, Oracle Clusterware (CRS) will not support an Oracle 9i RAC database, so you will have to leave the current configuration in place. You can install Oracle Clusterware and Oracle RAC 10g into the same cluster in different Oracle homes. On Windows and Linux, you must run the 9i Cluster Manager for the 9i database and Oracle Clusterware for the 10g database. When you install Oracle Clusterware, your 9i srvconfig file will be converted to the OCR. Both Oracle 9i RAC and Oracle RAC 10g will use the OCR. Do not restart the 9i gsd after you have installed Oracle Clusterware. With Oracle Clusterware 11g Release 2, the GSD resource is disabled by default; you only need to enable this resource if you are running Oracle 9i RAC in the cluster.
Remember to check certify for details of what vendor clusterware can be run with Oracle Clusterware.

For example on Solaris, your Oracle 9i RAC will be using Sun Cluster. You can install Oracle Clusterware and Oracle RAC 10g in the same cluster that is running Sun Cluster and Oracle 9i RAC.


If my OCR and Voting Disks are in ASM, can I shutdown the ASM instance?

No. You will have to stop the Oracle Clusterware stack on the node on which you need to stop the Oracle ASM instance. Either use "crsctl stop cluster -n node_name" or "crsctl stop crs" for this purpose.


Does Oracle support Oracle RAC in Solaris containers (a.k.a. Solaris Zones)?

YES, for Oracle RAC 10g Release 2 onwards. While global containers have been supported for a while, Oracle added support for local containers after they were extended to allow direct hardware modification.

Lifting this restriction allows Oracle Clusterware to operate directly on hardware resources, such as the network for the Oracle VIP, enabling Oracle RAC to run in local containers.

More information about Solaris container support can be found in Oracle Certify.


Are block devices supported for OCR, Voting Disks, and ASM devices?

Block Devices are only supported on Linux. On other Unix platforms, the directIO semantics are not applicable (or rather not implemented) for block devices.

Note: The Oracle Database 10g Oracle Universal Installer does not support block devices; Oracle Clusterware and ASM do.

With Oracle RAC 11g Release 2, the Oracle Universal Installer and the Configuration Assistants do not support raw or block devices anymore. The Command Line Interfaces still support raw and block devices and hence the Oracle Clusterware files can be moved after the initial installation.

Note: Direct use of raw or block devices (for database files or Clusterware files) will be de-supported with Oracle Database 12c. Using raw or block devices under Oracle ASM will remain supported.


Is there a need to renice LMS processes in Oracle RAC 10g Release 2?

LMS processes should be running in RT by default since 10.2, so there's NO need to renice them, or otherwise mess with them.
Check with ps -efl:
0 S oracle 31191 1 0 75 0 - 270857 - 10:01 ? 00:00:00 ora_lmon_appsu01
0 S oracle 31193 1 5 75 0 - 271403 - 10:01 ? 00:00:07 ora_lmd0_appsu01
0 S oracle 31195 1 0 58 - - 271396 - 10:01 ? 00:00:00 ora_lms0_appsu01
0 S oracle 31199 1 0 58 - - 271396 - 10:01 ? 00:00:00 ora_lms1_appsu01

The 7th column shows the priority: if it is 75 or 76, the process is running Time Share; 58 means Real Time.

You can also use chrt to check:
LMS (Real Time):
$ chrt -p 31199
pid 31199's current scheduling policy: SCHED_RR
pid 31199's current scheduling priority: 1

LMD (Time Share)

$ chrt -p 31193
pid 31193's current scheduling policy: SCHED_OTHER
pid 31193's current scheduling priority: 0


I get the following error starting my Oracle RAC database, what do I do? WARNING: No cluster interconnect has been specified.

This simply means that you neither have the cluster_interconnects parameter set for the database, nor was any cluster interconnect specification found in the OCR, so the private interconnect is picked at random by the database; hence the warning.

You can either set the cluster_interconnects parameter in the initialization file (spfile / pfile) of the database to specify a private interconnect IP, or you can use "oifcfg setif" (type "oifcfg" for help) to classify a certain network as the cluster interconnect network.
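
For example, to classify a network as the interconnect (the interface name and subnet are illustrative):

$ oifcfg setif -global eth2/192.168.10.0:cluster_interconnect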

$ oifcfg getif
eth0 global public
eth2 global cluster_interconnect

Note that oifcfg enables you to specify "local" as well as "global" settings. With Oracle Clusterware 10g Rel. 1 and Rel. 2 as well as Oracle Clusterware 11g Rel. 1, it is, however, only supported to use global settings. If the hardware (network interface) meant to be used for the interconnect is not the same on all nodes in the cluster, the configuration needs to be changed on the hardware / OS level accordingly.


How is Oracle RAC One Node licensed and priced?

Oracle RAC One Node is an option to Oracle Database Enterprise Edition and is licensed based upon the number of CPUs in the server on which it is installed. The current list price can be checked at the Oracle Public Price List site.

Unlike the Oracle RAC feature, Oracle RAC One Node is not available with the Oracle Standard Edition.

The 10-day rule applies to Oracle RAC One Node; this means that it is allowed to relocate or fail over a database to another node as long as this does not exceed ten days per year. Once the original or primary node is back up, a switch back to it must occur. If the relocated or failed-over database exceeds ten days on the other node, then additional licensing fees will apply.


How does RAC One Node compare with traditional cold fail over solutions ?

How does RAC One Node compare with traditional cold fail over solutions like HP Serviceguard, IBM HACMP, Sun Cluster, and Symantec/Veritas Cluster Server?

RAC One Node is a better high availability solution than traditional cold fail over solutions.


RAC One Node operates in a cluster but only a single instance of the database is running on one node in the cluster. If that database instance has a problem, RAC One Node detects that and can attempt to restart the instance on that node. If the whole node fails, RAC One Node will detect that and will bring up that database instance on another node in the cluster. Unlike traditional cold failover solutions, Oracle Clusterware will send out notifications (FAN events) to clients to speed reconnection after failover. 3rd-party solutions may simply wait for potentially lengthy timeouts to expire.

RAC One Node goes beyond the traditional cold fail over functionality by offering administrators the ability to proactively migrate instances from one node in the cluster to another. For example, let's say you wanted to upgrade the operating system on the node the RAC One Node database is running on. The administrator would activate "Omotion", a new Oracle facility that migrates the instance to another node in the cluster. Once the instance and all of the connections have migrated, the server can be shut down, upgraded and restarted. Omotion can then be invoked again to migrate the instance and the connections back to the now-upgraded node. This non-disruptive rolling upgrade and patching capability of RAC One Node exceeds the current functionality of the traditional cold fail over solutions.

Also, RAC One Node provides a load balancing capability that is attractive to DBAs and Sys Admins. For example, if you have two different database instances running on a RAC One Node Server and it becomes apparent that the load against these two instances is impacting performance, the DBA can invoke OMotion and migrate one of the instances to another less-used node in the cluster. RAC One Node offers this load balancing capability, something that the traditional cold fail over solutions do not.

Lastly, many 3rd-party solutions do not support ASM storage. This can slow down failover and prevent consolidation of storage across multiple databases, increasing the management burden on the DBA.

The following table summarizes the differences between RAC One Node and 3rd-party fail over solutions:

Feature: Out of the box experience
• RAC One Node: provides everything necessary to implement database failover.
• EE plus 3rd-party clusterware: 3rd-party fail over solutions require a separate install and a separate management infrastructure.

Feature: Single vendor
• RAC One Node: is 100% supported by Oracle.
• EE plus 3rd-party clusterware: EE is supported by Oracle, but the customer must rely on the 3rd party to support their clusterware.

Feature: Fast failover
• RAC One Node: supports FAN events, to send notifications to clients after failovers and to speed re-connection.
• EE plus 3rd-party clusterware: 3rd-party fail over solutions rely on timeouts for clients to detect failover and initiate a reconnection. It could take several minutes for a client to detect there had been a failover.

Feature: Rolling DB patching; OS, Clusterware and ASM patching and upgrades
• RAC One Node: can migrate a database from one server to another to enable online rolling patching. Most connections should migrate with no disruption.
• EE plus 3rd-party clusterware: the database must be failed over from one node to another, which means all connections will be dropped and must reconnect. Some transactions will be dropped and must reconnect. Reconnection could take several minutes.

Feature: Workload management
• RAC One Node: can migrate a database from one server to another while online to enable load balancing of databases across servers in the cluster. Most connections should migrate with no disruption.
• EE plus 3rd-party clusterware: the database must be failed over from one node to another, with the same connection and transaction disruption as above.

Feature: Online scale out
• RAC One Node: online upgrade to multi-node RAC.
• EE plus 3rd-party clusterware: a complete reinstall, including Oracle Grid Infrastructure, is required.

Feature: Standardized tools and processes
• RAC One Node: RAC and RAC One Node use the same tools, management interfaces, and processes.
• EE plus 3rd-party clusterware: EE and RAC use different tools, management interfaces, and processes. 3rd-party clusterware requires additional interfaces.

Feature: Storage virtualization
• RAC One Node: supports use of ASM to virtualize and consolidate storage. Because it's shared across nodes, it eliminates the lengthy failover of volumes and file systems.
• EE plus 3rd-party clusterware: traditional 3rd-party solutions rely on local file systems and volumes that must be failed over. Large volumes can take a long time to fail over. Dedicated storage is also more difficult to manage.


If I add or remove nodes from the cluster, how do I inform RAC One Node?

There are no separate steps required. RAC One Node is tightly integrated with Oracle Clusterware and is therefore aware of the nodes. Customers with multiple nodes should ensure that the candidate server list is up to date.

Hint: Use srvctl add database -h to get additional help
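
For example, a sketch assuming 11.2.0.2+ srvctl syntax (the database and node names are illustrative; -e sets the candidate server list for an administrator-managed RAC One Node database):

$ srvctl config database -d racone
$ srvctl modify database -d racone -e node1,node2,node3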


How do I install the command line tools for RAC One Node?

The command line tools are installed when you install the RAC One Node patch 9004119 on top of 11.2.0.1.

Note: The patch is only required in 11.2.0.1. 11.2.0.2 and higher releases have the tools integrated with srvctl. Srvctl can be used to manage both RAC and RAC One Node databases


How do I get Oracle Real Application Clusters One Node (Oracle RAC One Node)?

Oracle RAC One Node is only available with Oracle Database 11g Release 2. Oracle Grid Infrastructure for 11g Release 2 must be installed as a prerequisite.

DBCA now provides an option to create a RAC One Node Database.


Is Oracle RAC One Node supported with 3rd party clusterware and/or 3rd party CFS?

Yes, Oracle RAC One Node offers support for 3rd-party clusterware such as Veritas Storage Foundation for Oracle RAC. For further information regarding releases and version certification, please visit My Oracle Support.

Please bear in mind that Oracle encourages the use of Grid Infrastructure as the preferred clusterware and storage management for all RAC deployments.


If a current customer has an Enterprise License Agreement (ELA), are they entitled to use Oracle RAC One Node?

Yes, assuming the existing ELA/ULA includes Oracle RAC. The license guide states that all Oracle RAC option licenses (not SE RAC) include all the features of Oracle RAC One Node. Customers with existing RAC licenses or Oracle RAC ELAs can use those licenses as Oracle RAC One Node. This amounts to "burning" an Oracle RAC license for Oracle RAC One Node, which is expensive long term. Obviously, if the ELA/ULA does not include Oracle RAC, then they are not entitled to use Oracle RAC One Node.


Does Enterprise Manager Support RAC One Node?

Yes, you can use Enterprise Manager Cloud Console (and Grid Control) to manage RAC One Node databases.


How does RAC One Node compare with a single instance Oracle Database protected with Oracle Clusterware?

Feature: Out of the box experience
• RAC One Node: a complete solution that provides everything necessary to implement a database protected from failures by a failover solution.
• EE plus Oracle Clusterware: using Oracle Clusterware to protect an EE database is possible by customizing some sample scripts Oracle provides to work with EE. This requires custom script development by the customer, who also needs to set up the environment and install the scripts manually.

Feature: Supportability
• RAC One Node: 100% supported.
• EE plus Oracle Clusterware: while EE is 100% supported, the scripts customized by the customer are not supported by Oracle.

Feature: DB Control support
• RAC One Node: fully supports failover of DB Control in a transparent manner.
• EE plus Oracle Clusterware: DB Control must be reconfigured after a failover (unless the customer scripts are modified to support DB Control failover).

Feature: Rolling DB patching; OS, Clusterware and ASM patching and upgrades
• RAC One Node: can online migrate a database from one server to another to enable online rolling patching. Most connections should migrate with no disruption.
• EE plus Oracle Clusterware: EE must be failed over from one node to another, which means all connections will be dropped and must reconnect. Some transactions will be dropped and must reconnect. Reconnection could take several minutes.

Feature: Workload management
• RAC One Node: can online migrate a database from one server to another to enable load balancing of databases across servers in the cluster. Most connections should migrate with no disruption.
• EE plus Oracle Clusterware: EE must be failed over from one node to another, with the same connection and transaction disruption as above.

Feature: Online scale out
• RAC One Node: online upgrade to multi-node RAC.
• EE plus Oracle Clusterware: take a DB outage, re-link to upgrade to multi-node RAC, and re-start the DB.

Feature: Standardized tools and processes
• RAC One Node: RAC and RAC One Node use the same tools, management interfaces, and processes.
• EE plus Oracle Clusterware: EE and RAC use different tools, management interfaces, and processes.

How does RAC One Node compare with database DR products like Data Guard or Golden Gate?

The products are entirely complementary. RAC One Node is designed to protect a single database.

It can be used for rolling database patches, OS upgrades/patches, and grid infrastructure (ASM/Clusterware) rolling upgrades and patches. This is less disruptive than switching to a database replica. Switching to a replica for patching, or for upgrading the OS or grid infrastructure requires that you choose to run Active/Active (and deal with potential conflicts) or Active/Passive (and wait for work on the active primary database to drain before allowing work on the replica).

You need to make sure replication supports all data types you are using. You need to make sure the replica can keep up with your load. You need to figure out how to re-point your clients to the replica (not an issue with RAC One Node because it's the same database, and we use VIPs). And lastly, RAC One Node allows a spare node to be used 10 days per year without licensing.

Our recommendation is to use RAC or RAC One Node to protect from local failures and to support rolling maintenance activities. Use Data Guard or replication technology for DR, data protection, and for rolling database upgrades. Both are required as part of a comprehensive HA solution.


Are we certifying applications specifically for RAC One Node?

No. If the 3rd party application is certified for Oracle Database 11g Release 2 Enterprise Edition or onwards, it is certified for RAC One Node.


How does RAC One Node compare with virtualization solutions like VMware?

RAC One Node offers greater benefits and performance than VMware in the following ways:

- Server Consolidation: VMware offers physical server consolidation but imposes a 10%+ processing overhead to enable this consolidation and to have the hypervisor control access to the system's resources. RAC One Node enables both physical server consolidation and database consolidation without the additional overhead of a hypervisor-based solution like VMware.

- High Availability: VMware offers the ability to fail over a failed virtual machine – everything running in that vm must be restarted and connections re-established in the event of a virtual machine failure. VMware cannot detect a failed process within the vm – just a failed virtual machine. RAC One Node offers a finer-grained, more intelligent and less disruptive high availability model. RAC One Node can monitor the health of the database within a physical or virtual server. If it fails, RAC One Node will either restart it or migrate the database instance to another server. Oftentimes, database issues or problems will manifest themselves before the whole server or virtual machine is affected. RAC One Node will discover these problems much sooner than a VMware solution and take action to correct it. Also, RAC One Node allows database and OS patches or upgrades to be made without taking a complete database outage. RAC One Node can migrate the database instance to another server, patches or upgrades can be installed on the original server and then RAC One Node will migrate the instance back. VMware offers a facility, Vmotion, that will do a memory-to-memory transfer from one virtual machine to another. This DOES NOT allow for any OS or other patches or upgrades to occur in a non-disruptive fashion (an outage must be taken). It does allow for the hardware to be dusted and vacuumed, however.

- Scalability: VMware allows you to “scale” on a single physical server by instantiating additional virtual machines – up to an 8-core limit per VM. RAC One Node allows online scaling by migrating a RAC One Node implementation from one server to another, more powerful server without taking a database outage. Additionally, a RAC One Node database can be upgraded online to a full Real Application Clusters implementation by adding database instances to the cluster, thereby gaining almost unlimited scalability.

- Operational Flexibility and Standardization: VMware only works on x86-based servers. RAC One Node will be available for all of the platforms that Oracle Real Application Clusters supports, including Linux, Windows, Solaris, AIX, and HP-UX.


Does RAC One Node make sense in a stretch cluster environment?

Yes.

Please do remember that Extended Clusters can be classified in three cases:

- Nodes of the cluster are really close to each other, maybe in the same room and there are two storage areas.

- Nodes in the cluster are separated by a door in the building (different rooms) and there are two storage areas.

- Nodes are separated across different physical locations and there are two storage areas.

In all three cases, having two storage areas has additional impacts, such as write latency, that still need to be considered, since ASM is still writing blocks to both sites.


Where do I find the documentation for RAC One Node?

Please refer to

Oracle RAC ONE Node Technical Brief

Installing Oracle RAC and Oracle RAC ONE Node

Administering Oracle RAC ONE Node


Is RAC One Node supported with database versions prior to 11.2?

No. RAC One Node requires at least version 11.2 of Oracle Grid Infrastructure, and the RAC One Node database must be at least 11.2. Earlier versions of the rdbms can coexist with 11.2 RAC One Node databases.


What is RAC One Node Omotion?

Omotion is a utility that is distributed as part of Oracle RAC One Node. The Omotion utility allows you to move the Oracle RAC One Node instance from one node to another in the cluster. There are several reasons you may want to move the instance: for example, the node is overloaded and you need to balance the workload by moving the instance, or you need to do operating system maintenance on the node but want to avoid an outage for application users by moving the instance to another node in the cluster.
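In patched 11.2 releases (11.2.0.2 onward) the Omotion functionality is integrated into srvctl. A minimal sketch, assuming a hypothetical RAC One Node database rone1 and target node node2:

$ srvctl relocate database -d rone1 -n node2 -w 30 -v

The -w option bounds (in minutes) how long existing sessions are given to drain before the original instance is shut down.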


Can I use Oracle RAC One Node for Standard Edition Oracle RAC?

No, Oracle RAC One Node is only part of Oracle Database 11g Release 2 Enterprise Edition. It is not licensed or supported for use with any other editions.


What is Oracle Real Application Clusters One Node (RAC One Node)?

Oracle RAC One Node is an option available with Oracle Database 11g Release 2. Oracle RAC One Node is a single instance of Oracle RAC running on one node in a cluster.

This option adds to the flexibility that Oracle offers for reducing costs via consolidation. It allows customers to more easily consolidate their less mission critical, single instance databases into a single cluster, with most of the high availability benefits provided by Oracle Real Application Clusters (automatic restart/failover, rolling patches, rolling OS and clusterware upgrades), and many of the benefits of server virtualization solutions like VMware.

RAC One Node offers better high availability functionality than traditional cold failover cluster solutions because of a new Oracle technology Omotion, which is able to intelligently relocate database instances and connections to other cluster nodes for high availability and system load balancing.


Does QoS Management support admin-managed databases?

Beginning with Oracle 12.1 Grid Infrastructure, QoS Management supports admin-managed databases from 11.2 forward. The support is being phased in: 12.1.0.2 supports both measuring and monitoring admin-managed databases (CDB or non-CDB), and 12.2 will bring full management support. Please see "Ensuring Your Oracle RAC Databases Meet Your Business Objectives at Runtime" for details.


What are the different modes of QoS Management's operation?

QoS Management can operate in three different modes depending upon the policy that is active at the time.

  1. Measure-Only mode is usually the initial mode, where one evaluates the breakdown of resource use and wait components that make up the response time for an application. This mode provides data to set the performance objectives used in the other modes.
  2. Monitor mode is set in a policy when you add performance objectives to each performance class but still keep the class marked as Measure-Only in the policy. This permits trying out performance objectives and getting alerts when they are exceeded, without actually recommending or making resource re-allocations.
  3. Management mode is set in a policy by removing the Measure-Only check box for one or more Performance Classes and ranking them relative to each other. In this case, when an objective is exceeded, a recommendation will be made to re-allocate resources, and it will actually be performed if the policy is running in auto mode.

Please see the technical brief "Best Practices for RAC Runtime Mgmt 12c" for more details.


What is Memory Guard and how does it work?

Memory Guard is an exclusive QoS Management feature that uses metrics from Cluster Health Monitor to evaluate the stress of each server in the cluster once a minute. Should it detect a node has over-committed memory, it will prevent new database requests from being sent to that node until the current load is relieved. It does this by turning off the services to that node transactionally at which point existing work will begin to drain off. Once the stress is no longer detected, services will automatically be started and new connections will resume. Beginning with Oracle Grid Infrastructure 12c Release 1 (12.1.0.2), Memory Guard no longer requires that QoS Management be enabled and therefore is installed and active by default.


Does QoS Management require any specific database deployment?

Oracle databases must be created as RAC or RAC One Node Policy-Managed databases. This means the databases are deployed in one or more server pools and applications and clients connect using CRS-managed database services. Each managed database must also have Resource Manager enabled and be enabled for QoS Management. It is also recommended that connection pools that support Fast Application Notification (FAN) events be used for maximum functionality and performance management. If the Oracle databases are only going to be monitored by QoS Management, then administrator-managed RAC and RAC One Node databases are supported beginning in Oracle Grid Infrastructure 12c Release 1 (12.1.0.2).


Does QoS Management Support Multitenant databases and PDBs?

In Oracle 12cR1, QoS Management supports measuring and monitoring of Oracle Multitenant databases and PDBs. It fully supports standard RAC and RAC One Node 12c non-CDB databases. Full Multitenant support is planned for a future release.


What type of applications does Oracle QoS Management manage?

QoS Management is currently able to manage OLTP open workload types for database applications where clients or middle tiers connect to the Oracle database through OCI or JDBC. Open workloads are those whose demand is unaffected by increases in response time and are typical of Internet-facing applications.


What QoS Management functionality is in Oracle Enterprise Manager?

Enterprise Manager Cloud Control supports the full range of QoS Management functionality organized by task. A Policy Editor wizard presents a simple workflow that specifies the server pools to manage; defines performance classes that map to the database applications and associated SLAs or objectives; and specifies performance policies that contain performance objectives and relative rankings for each performance class, plus baseline server pool resource allocations. An easy-to-monitor dashboard presents the entire cluster performance status at a glance, as well as recommended actions should resources need to be re-allocated due to performance issues. Finally, a set of comprehensive graphs track the performance and metrics of each performance class.


What are Server Pools?

Server Pools are a management entity introduced in Oracle Clusterware 11g to give IT administrators the ability to better manage their applications and datacenters along actual workload lines. A Server Pool is a logical container in which like hardware and work can be organized and given importance and availability semantics. This allows administrators, as well as QoS Management, to actively grow and shrink these groups to meet hour-to-hour, day-to-day application demands with optimum utilization of available resources. The use of Server Pools does not require any application code changes, re-compiling, or re-linking. Server Pools also allow older, non-QoS-Management-supported databases and middleware to co-exist in a single cluster without interfering with the management of newer supported versions.
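As a sketch of how a server pool is defined with srvctl (11.2 syntax; the pool name backoffice and the sizing values are hypothetical), -l and -u set the minimum and maximum number of servers, and -i sets the pool's importance:

$ srvctl add srvpool -g backoffice -l 1 -u 3 -i 5
$ srvctl config srvpool -g backoffice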


What methods does QoS Management support for classifying applications and workloads?

QoS Management uses database entry points to “tag” the application or workload with user-specified names. Database sessions are evaluated against classifiers, which are sets of Boolean expressions made up of Service Name, Program, User, Module, and Action.


What type of user interfaces does QoS Management support?

QoS Management is integrated into Enterprise Manager Database Control 11g Release 2 and Enterprise Manager 12c Cloud Control and is accessible from the cluster administration page.


Does QoS Management negatively affect an application’s availability?

No, the QoS Management server is not in the transaction path and only adjusts resources through already existing database and cluster infrastructure. In fact, it can improve availability by distributing workloads within the cluster and preventing node evictions caused by memory stress through its automatic Memory Guard feature.


How does QoS Management enable the Private Database Cloud?

The Private Database Cloud fundamentally depends upon shared resources. Whether deploying a database service or a separate database, both depend upon being able to deliver performance with competing workloads. QoS Management provides both the monitoring and management of these shared resources, thus complementing the flexible deployment of databases as a service to also maintain a consistent level of performance and availability.


Where can I find documentation for QoS Management?

The Oracle Database Quality of Service Management User’s Guide is the source for documentation and covers all aspects of its use. It is currently delivered as part of the Oracle Database Documentation Library starting in 11g Release 2. Collateral is also available on OTN at http://www.oracle.com/technetwork/products/clustering/overview/qosmanageent-508184.html


What is the overhead of using QoS Management?

The QoS Management Server is a set of Java MBeans that run in a single J2EE container running on one node in the cluster. Metrics are retrieved from each database once every five seconds. Workload classification and tagging only occurs at connect time or when a client changes session parameters. Therefore the overhead is minimal and is fully accounted for in the management of objectives.


Which type of consolidation does QoS Management support?

QoS Management currently controls the CPU resource across several dimensions. It manages the CPU shares of competing workloads within a database in support of schema-based consolidation. It manages the number of CPUs per database hosted on a server in support of consolidating multiple database instances. Finally, it supports managing the number of servers within a server pool in support of consolidating multiple databases.


What types of resources does QoS Management manage?

Currently QoS Management manages CPU resources both within a database and between databases running on shared or dedicated servers. It also monitors wait times for I/O, Global Cache, and Other database waits.


Is this a product to be used by an IT administrator or DBA?

The primary user of QoS Management is expected to be the IT or systems administrator who has QoS administrative privileges on the RAC cluster. As QoS Management actively manages all of the databases in a cluster, it is not designed for use by a DBA unless that individual also has cluster administration responsibility. DBA-level experience is not required to be a QoS Management administrator.


What does QoS Management manage?

In datacenters where applications share databases or databases share servers, performance is made up of the sum of the time spent using and waiting to use resources. Since an application’s use of resources is determined during development, test, and tuning, it cannot be managed at runtime; however, the wait for resources can be. QoS Management manages resource wait times.


What is Oracle’s goal in developing QoS Management?

QoS Management is a full Oracle stack development effort to provide effective runtime management of datacenter SLAs: it ensures that when there are sufficient resources to meet all objectives, those resources are properly allocated, and that should demand or failures exceed capacity, the most business-critical SLAs are preserved at the cost of less critical ones.


What happens should the QoS Management Server fail?

The QoS Management Server is a managed Clusterware singleton resource that is restarted or failed over to another node in the cluster should it hang or crash. Even if a failure occurs, there is no disruption to the databases and their workloads running in the cluster. Once the restart completes, QoS Management will continue managing in the exact state it was when the failure occurred.


Which versions of Oracle databases does QoS Management support?

QoS Management is supported on Oracle RAC EE and RAC One EE databases from 11g Release 2 (11.2.0.2) forward deployed on Oracle Exadata Database Machine. It is also supported in Measure-Only Mode with Memory Guard support on Oracle RAC EE and RAC One EE databases from 11g Release 2 (11.2.0.3) forward. Please consult the Oracle Database License Guide for details.


What types of performance objectives can be set?

QoS Management currently supports response time objectives. Response time objectives up to one second for database client requests are supported. Additional performance objectives are planned for future releases.


How do I use DBCA in silent mode to set up RAC and ASM?

If you already have an ASM instance/diskgroup then the following creates a RAC database on that diskgroup (run as the Oracle user):

$ORACLE_HOME/bin/dbca -silent -createDatabase -templateName General_Purpose.dbc \
  -gdbName $SID -sid $SID \
  -sysPassword $PASSWORD -systemPassword $PASSWORD \
  -sysmanPassword $PASSWORD -dbsnmpPassword $PASSWORD \
  -emConfiguration LOCAL -storageType ASM -diskGroupName $ASMGROUPNAME \
  -datafileJarLocation $ORACLE_HOME/assistants/dbca/templates \
  -nodeinfo $NODE1,$NODE2 -characterset WE8ISO8859P1 \
  -obfuscatedPasswords false -sampleSchema false -oratabLocation /etc/oratab

The following will create an ASM instance and one diskgroup (run as the ASM/Oracle user):

$ORA_ASM_HOME/bin/dbca -silent -configureASM -gdbName NO -sid NO \
  -emConfiguration NONE -diskList $ASM_DISKS -diskGroupName $ASMGROUPNAME \
  -nodeinfo $NODE1,$NODE2 -obfuscatedPasswords false -oratabLocation /etc/oratab \
  -asmSysPassword $PASSWORD -redundancy $ASMREDUNDANCY

where ASM_DISKS = '/dev/sda1,/dev/sdb1' and ASMREDUNDANCY='NORMAL'


Is it supported to rerun root.sh from the Oracle Clusterware installation ?

Oracle RAC versions 11.2 and higher provide checkpoint capabilities. This feature allows root.sh to be re-executed after a failure. There are cases where cleanup may be required; this can be done by executing rootcrs.pl with the -deconfig option. After cleanup, root.sh can be reattempted.
For versions prior to 11.2, the rootdelete script may need to be executed before attempting a re-run of root.sh.
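On 11.2, the cleanup and retry could look like the following sketch, run as root on the affected node ($GRID_HOME stands in for your Grid Infrastructure home; add -lastnode when deconfiguring the last node of the cluster):

# perl $GRID_HOME/crs/install/rootcrs.pl -deconfig -force
# $GRID_HOME/root.sh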


Can I configure a firewall (iptables) on the cluster interconnect?

No. Please do not run iptables on the private network, for two reasons:
(a) RAC and Oracle Clusterware use dynamic ports on the private network, so running iptables will cause problems.
(b) The private network should be configured to be accessible only from the nodes of the RAC cluster, and hence should already satisfy the security requirements.


Can I run the fixup script generated by the 11.2 OUI or CVU on a running system?

It depends on the problems that were listed to be fixed. If the fixup scripts change system parameters that can affect the database or applications, then it is prudent to run them during a planned downtime.


How is the Oracle Cluster Registry (OCR) stored when I use ASM?

The OCR is stored similarly to how Oracle Database files are stored. The extents are spread across all the disks in the diskgroup, and the redundancy (which is at the extent level) is based on the redundancy of the disk group. You can have only one OCR in a diskgroup. The best practice for ASM is to have 2 diskgroups, and the best practice for the OCR in ASM is to have a copy of the OCR in each diskgroup.
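For example, adding a second OCR location in another diskgroup and verifying it might look like this (run as root; +DATA2 is a hypothetical diskgroup name):

# ocrconfig -add +DATA2
# ocrcheck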


Is it possible to use ASM for the OCR and voting disk?

Yes. As of Oracle Real Application Clusters 11g Release 2, the OCR and Voting Disks can be stored in ASM. This is the recommended best practice for this release.

For releases prior to 11g Release 2, the OCR and voting disk must be on RAW devices or CFS (cluster filesystem).

RAW devices (or block devices on Linux) are the best practice for Oracle RAC 10g and Oracle RAC 11g Release 1.


Do I need to have user equivalence (ssh, etc...) set up after GRID/RAC is already installed?

Yes. Many assistants and scripts depend on user equivalence being set up.


Why does Oracle Clusterware use an additional 'heartbeat' via the voting disk, when other cluster software products do not?

Oracle uses this implementation because Oracle clusters always have access to a shared disk environment.

This is different from classical clustering which assumes shared nothing architectures, and changes the decision of what strategies are optimal when compared to other environments. Oracle also supports a wide variety of storage types, instead of limiting it to a specific storage type (like SCSI), allowing the customer quite a lot of flexibility in configuration.


Why does Oracle still use the voting disks when other cluster software is present?

Voting disks are still used when 3rd party vendor clusterware is present, because vendor clusterware is not able to monitor/detect all failures that matter to Oracle Clusterware and the database. For example one known case is when the vendor clusterware is set to have its heartbeat go over a different network than RAC traffic. Continuing to use the voting disks allows CSS to resolve situations which would otherwise end up in cluster hangs.


How much I/O activity should the voting disk have?

Approximately 2 read + 1 write per second per node.


I am trying to move my voting disks from one diskgroup to another and getting the error "crsctl replace votedisk – not permitted between ASM Disk Groups." Why?

You need to review the ASM and crsctl logs to see why the command is failing.
To put your voting disks in ASM, the diskgroup must be set up properly. There must be enough failure groups to support the redundancy of the voting disks as set by the redundancy of the disk group, e.g. normal redundancy requires 3 failure groups and high redundancy requires 5. Note: by default, each disk in a diskgroup is put in its own failure group. The compatible.asm attribute of the diskgroup must be set to 11.2, and you must be using the 11.2 version of Oracle Clusterware and ASM.
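Assuming the target diskgroup is named NEWDG (a hypothetical name), the preparation and the move could look like this sketch:

$ asmcmd setattr -G NEWDG compatible.asm 11.2.0.0.0
$ crsctl replace votedisk +NEWDG
$ crsctl query css votedisk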


I have a 2 node Oracle RAC cluster, if I pull the interconnect on node 1 to simulate a failure, why does node 2 reboot?

In case of a private network failure and in order to prevent a split brain scenario, Oracle Clusterware always tries to let the biggest sub-cluster survive.

In case of a 2-node cluster, the decision is based on the node number. The node with the lowest node number is meant to survive an eviction decision. The node with the lowest node number is typically the node that started first (between the two nodes). In this context, it needs to be noted that the decision is independent of where in the stack the private network failure occurred. Given the example phrased in the question:

I have a 2 node Oracle RAC cluster, if I pull the interconnect on node 1 to simulate a failure, why does node 2 reboot?

The first node that started (joined / created) the cluster is node 1. This node will have the lowest node number. Regardless of which node a private interconnect cable is pulled on, node 1 should survive, and hence node 2 is rebooted.
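You can verify the node numbers, and therefore which node is expected to survive, with olsnodes (output format shown with hypothetical node names):

$ olsnodes -n
racnode1 1
racnode2 2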


OCR stored in ASM - What happens, if my ASM instance fails on a node?

If an ASM instance fails on any node, the OCR becomes unavailable on this particular node, but the node remains operational.

If the (RAC) databases use ASM, too, they cannot access their data on this node anymore during the time the ASM instance is down. If a RAC database is used, access to the same data can be established from another node.

If the CRSD process running on the node affected by the ASM instance failure is the OCR writer, AND the majority of the OCR locations is stored in ASM, AND an IO is attempted on the OCR during the time the ASM instance is down on this node, THEN CRSD stops and becomes inoperable. Hence cluster management is affected on this particular node.

Under no circumstances will the failure of one ASM instance on one node affect the whole cluster.


With GNS, do ALL public addresses have to be DHCP managed (public IP, public VIP, public SCAN VIP)?

No. The choice to use DHCP for the public IPs is outside of Oracle. Oracle Clusterware and Oracle RAC will work with both static and DHCP-assigned IPs for the hostnames. When using GNS, Oracle Clusterware will use DHCP for all VIPs in the cluster, which means the node VIPs and the SCAN VIPs.


Voting Files stored in ASM - How many disks per disk group do I need?

If Voting Files are stored in ASM, the ASM disk group that hosts the Voting Files will place the appropriate number of Voting Files in accordance to the redundancy level. Once Voting Files are managed in ASM, a manual addition, deletion, or replacement of Voting Files will fail, since users are not allowed to manually manage Voting Files in ASM.

If the redundancy level of the disk group is set to "external", 1 Voting File is used.
If the redundancy level of the disk group is set to "normal", 3 Voting Files are used.
If the redundancy level of the disk group is set to "high", 5 Voting Files are used.

Note that Oracle Clusterware records which disks within the disk group hold the Voting Files and accesses them directly; Oracle Clusterware does not rely on ASM to access the Voting Files.

In addition, note that there can be only one Voting File per failure group. In the above list of rules, it is assumed that each disk that is supposed to hold a Voting File resides in its own, dedicated failure group.

In other words, a disk group that is supposed to hold the above mentioned number of Voting Files needs to have the respective number of failure groups with at least one disk. (1 / 3 / 5 failure groups with at least one disk)

Consequently, a normal redundancy ASM disk group, which is supposed to hold Voting Files, requires 3 disks in separate failure groups, while a normal redundancy ASM disk group that is not used to store Voting Files requires only 2 disks in separate failure groups.


What happens if I lose my voting disk(s)?

If you lose half or more of your voting disks, then nodes get evicted from the cluster, or nodes kick themselves out of the cluster. This does not threaten database corruption. Alternatively, you can use external redundancy, which means you provide redundancy at the storage level using RAID.
For this reason, when using Oracle for the redundancy of your voting disks, Oracle recommends that customers use 3 or more voting disks in Oracle RAC 10g Release 2. Note: for best availability, the 3 voting files should be on physically separate disks. It is recommended to use an odd number, as 4 disks are no more highly available than 3: a cluster needs a strict majority of voting disks, which is 2 of 3 but 3 of 4, so with either 3 or 4 voting disks the cluster fails once 2 disks are lost.
Restoring corrupted voting disks is easy, since there isn't any significant persistent data stored in the voting disk. See the Oracle Clusterware Admin and Deployment Guide for information on backup and restore of voting disks.


How should I test the failure of the public network (IE Oracle VIP failover) in my Oracle RAC environment?

Prior to 10.2.0.3, It was possible to test VIP failover by simply running

ifconfig <interface_name> down.

The intended behaviour was that the VIP would fail over to another node. In 10.2.0.3 this is still the behaviour on Linux; however, on other operating systems the VIP will NOT fail over, instead the interface will be plumbed again. To test VIP failover on platforms other than Linux, the switch can be turned off or the physical cable pulled; this is the best way to test. NOTE: if you have other databases that share the same IPs, they will be affected. Your tests should simulate production failures, which are generally switch errors or interface errors.


What do I do, I have a corrupt OCR and no valid backup?

Document 399482.1 describes how to recreate the OCR/voting disks if you have accidentally deleted them and cannot recover them from backups.


I am installing Oracle Clusterware with a 3rd party vendor clusterware however in the "Specify Cluster Configuration Page" , Oracle Clusterware installer doesn't show the existing nodes. Why?

This shows that Oracle Clusterware does not detect that the 3rd party clusterware is installed. Make sure you have followed the installation instructions provided by the vendor for integrating with Oracle RAC, and make sure LD_LIBRARY_PATH is not set.
For example, with Sun Cluster, make sure the libskgxn* files have been copied to the /opt/ORCLcluster directory, and check that lsnodes returns the correct list of nodes in the Sun Cluster.


I have a 2-node RAC running. I notice that it is always node2 that is evicted when I test private network failure scenario by disconnecting the private network cable. Doesn't matter whether it is node1's or node2's private network cable that is disconnected, it is always the node2 that is evicted. What happens in a 3-nodes RAC cluster if node1's cable is disconnected?

The node with the lowest node number will survive (the first node to join the cluster). In the case of 3 nodes, 2 nodes will survive and the one whose cable you pulled will go away. With 4 nodes, the sub-cluster containing the lowest node number will survive.


How do I use multiple network interfaces to provide High Availability and/or Load Balancing for my interconnect with Oracle Clusterware?

This needs to be done externally to Oracle Clusterware, usually by some OS-provided NIC bonding, which gives Oracle Clusterware a single IP address for the interconnect but provides failover (high availability) and/or load balancing across multiple NIC cards. These solutions are provided externally to Oracle at a much lower level than Oracle Clusterware, hence Oracle supports using them. The solutions are OS dependent, and therefore the best source of information is your OS vendor; however, there are several articles in My Oracle Support on how to do this. For example, for Sun Solaris, search for IPMP (IP network MultiPathing).

Note: Customers should pay close attention to the bonding setup/configuration/features and ensure their objectives are met, since some solutions provide only failover, some only load balancing, and others claim to provide both. As always, it is important to test your setup to ensure it does what it was designed to do.

When bonding with network interfaces that connect to separate switches (for redundancy), you must test whether the NICs are configured for active/active mode. The most reliable configuration for this architecture is to configure the NICs for active/passive.
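As one common illustration (RHEL-style configuration files; device names and addresses are placeholders), an active-backup bond for the interconnect might be sketched as follows:

/etc/sysconfig/network-scripts/ifcfg-bond0:
DEVICE=bond0
IPADDR=192.168.10.1
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
BONDING_OPTS="mode=active-backup miimon=100"

/etc/sysconfig/network-scripts/ifcfg-eth2 (and similarly for ifcfg-eth3):
DEVICE=eth2
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none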


What are the licensing rules for Oracle Clusterware? Can I run it without RAC?

Check the Oracle® Database Licensing Information 11g Release 1 (11.1) Part Number B28287-01 Look in the Special Use section under Oracle Database Editions.


What is Flex Cluster and how can I use it?

Oracle 12.1.0.1 introduces a new cluster topology called Oracle Flex Cluster. This topology allows loosely coupled application servers to form a cluster with tightly coupled database servers. Tightly coupled servers are HUB servers that share storage for database, OCR and Voting devices as well as peer-to-peer communication with other HUB servers in the cluster. A loosely coupled server is a LEAF server that has a loose communication association with a single HUB server in the cluster and does not require shared storage nor peer-to-peer communication with other HUB or LEAF servers in the cluster, except to communicate with the HUB to which it is associated. In 12.1.0.1, LEAF servers are designed for greater application high availability and multi-tier resource management. Please see the 12c Flex Cluster FAQ statement under High Availability.


How do I identify the voting file location ?

Run the following command from <GRID_HOME>/bin:
"crsctl query css votedisk"


Can I use Oracle Clusterware to monitor my EM Agent?

Check out Chapter 3 of the EM advanced configuration guide, specifically the section on active passive configuration of agents. You should be able to model those to your requirements. There is nothing special about the commands, but you do need to follow the startup/shutdown sequence to avoid any discontinuity of monitoring. The agent does start a watchdog that monitors the health of the actual monitoring process. This is done automatically at agent start. Therefore you could use Oracle Clusterware but you should not need to.


Does Oracle Clusterware have to be the same or higher release than all instances running on the cluster?

Yes - Oracle Clusterware must be the same or a higher release with regards to the RDBMS or ASM Homes.
Please refer to Note#337737.1


My customer has noticed tons of log files generated under $CRS_HOME/log/<hostname>/client. Is there an automated way, set up through Oracle Clusterware, to prevent/minimize/remove these aggressively generated files?

Check Note 5187351.8. You can either apply the patchset, if it is available for your platform, or use a cron job that removes these files until the patch is available.


A customer is hitting Bug 4462367 with an error message about a low open file descriptor limit. How do I work around this until the fix is released with the Oracle Clusterware Bundle for 10.2.0.3 or 10.2.0.4?

The fix for the "low open file descriptor" problem is to increase the ulimit for Oracle Clusterware. Please be careful when you make this type of change, and make a backup copy of init.crsd before you start! To do this, you can modify init.crsd as follows while you wait for the patch:
1. Stop Oracle Clusterware on the node (crsctl stop crs)
2. Copy /etc/init.d/init.crsd
3. Modify the file, changing:
# Allow the daemon to drop a diagnostic core file
ulimit -c unlimited
ulimit -n unlimited

to

# Allow the daemon to drop a diagnostic core file
ulimit -c unlimited
ulimit -n 65536

4. Restart Oracle Clusterware on the node (crsctl start crs)


What is the voting disk used for?

A voting disk is a backup communications mechanism that allows CSS daemons to negotiate which sub-cluster will survive. These voting disks keep a status of who is currently alive and counts votes in case of a cluster reconfiguration. It works as follows:
a) Ensures that you cannot join the cluster if you cannot access the voting disk(s)
b) Leave the cluster if you cannot communicate with it (to ensure we do not have aberrant nodes)
c) Should multiple sub-clusters form, it will only allow one to continue. It prefers a greater number of nodes, and secondly the node with the lowest incarnation number.
d) Is kept redundant by Oracle in 10g Release 2 (you need to access a majority of existing voting disks)
At most only one sub-cluster will continue and a split brain will be avoided.


I am trying to install Oracle Clusterware (10.2) and when I run the OUI, at the Specify Cluster Configuration screen, the Add, Edit and Remove buttons are grayed out. Nothing comes up in the cluster nodes either. Why?

Check for 3rd Party Vendor clusterware (such as Sun Cluster or Veritas Cluster) that was not completely removed. IE Look for /opt/ORCLcluster directory, it should be removed.


Can I use Oracle Clusterware to provide cold failover of my single instance Oracle Databases?

Oracle does not provide the necessary wrappers to fail over single-instance databases using Oracle Clusterware. However, since customers can use Oracle Clusterware to wrap arbitrary applications, it is possible for them to wrap single-instance databases this way. A sample can be found in the demos that are distributed with Oracle Database 11g.


How do I protect the OCR and Voting in case of media failure?

In Oracle Database 10g Release 1, the OCR and voting device are not mirrored within Oracle; hence both must be mirrored via a storage vendor method, like RAID 1.

Starting with Oracle Database 10g Release 2 Oracle Clusterware will multiplex the OCR and Voting Disk (two for the OCR and three for the Voting).


How can I register the listener with Oracle Clusterware in RAC 10g Release 2?

NetCA is the only tool that configures listener and you should be always using it. It will register the listener with Oracle Clusterware. There are no other supported alternatives.


Why is the home for Oracle Clusterware / Oracle Grid Infrastructure not recommended to be a subdirectory of the Oracle base directory?

If anyone other than root has write permissions to the parent directories of the Oracle Clusterware home / Oracle Grid Infrastructure for a Cluster home, then they can give themselves root escalations. This is a security issue.

Consequently, it is strongly recommended to place the Oracle Grid Infrastructure / Oracle Clusterware home outside of the Oracle Base. The Oracle Universal Installer will confirm deviating settings during the Oracle Grid Infrastructure 11g Release 2 and later installation.

The Oracle Clusterware home itself is a mix of root and non-root permissions, as appropriate to the security requirements. Please, follow the installation guides regarding OS users and groups and how to structure the Oracle software installations on a given system.


In the course of failure testing in an extended RAC environment we find entries in the cssd logfile which indicate actions like 'diskShortTimeout set to (value)' and 'diskLongTimeout set to (value)'. Can anyone please explain the meaning of these two timeouts in addition to disktimeout?

Having a short and a long disktimeout, rather than just one disktimeout, is due to the patch for unpublished Bug 4748797 (included in 10.2.0.2). The long disktimeout is 200 seconds by default, unless set differently via 'crsctl set css disktimeout', and applies to time outside a reconfiguration. The short disktimeout is in effect during a reconfiguration and is misscount minus 3 seconds. The point is that we can tolerate a long disktimeout when all nodes are running fine, but have to revert to a short disktimeout if there is a reconfiguration.


How do I put my application under the control of Oracle Clusterware to achieve higher availability?

First, write a control agent (action script). It must accept 3 different parameters: start (the control agent should start the application), check (the control agent should check the application), and stop (the control agent should stop the application). Secondly, you must create a profile for your application using crs_profile. Thirdly, you must register your application as a resource with Oracle Clusterware (crs_register). See the RAC Admin and Deployment Guide for details.
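A minimal action script sketch (all paths and names are hypothetical; consult the Clusterware documentation for the exact crs_profile syntax of your release):

#!/bin/sh
# action_myapp.scr - called by Oracle Clusterware with start|check|stop
case "$1" in
start)
    /u01/app/myapp/bin/myapp &      # start the application in the background
    exit 0
    ;;
check)
    # exit 0 if the application is healthy, non-zero otherwise
    pgrep -f /u01/app/myapp/bin/myapp > /dev/null
    exit $?
    ;;
stop)
    pkill -f /u01/app/myapp/bin/myapp
    exit 0
    ;;
esac

The script would then be profiled and registered along the lines of crs_profile -create myapp -t application -a /u01/app/action_myapp.scr, followed by crs_register myapp.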


Is it supported to allow 3rd Party Clusterware to manage Oracle resources (instances, listeners, etc) and turn off Oracle Clusterware management of these?

In 10g we do not support using 3rd Party Clusterware for failover and restart of Oracle resources. Oracle Clusterware resources should not be disabled.


Does Oracle Clusterware support application vips?

Yes, with Oracle Database 10g Release 2, Oracle Clusterware now supports an "application" vip. This is to support putting applications under the control of Oracle Clusterware using the new high availability API and allow the user to use the same URL or connection string regardless of which node in the cluster the application is running on. The application vip is a new resource defined to Oracle Clusterware and is a functional vip. It is defined as a dependent resource to the application. There can be many vips defined, typically one per user application under the control of Oracle Clusterware. You must first create a profile (crs_profile), then register it with Oracle Clusterware (crs_register). The usrvip script must run as root.


Can the Network Interface Card (NIC) device names be different on the nodes in a cluster, for both public and private?

All public NICs must have the same name on all nodes in the cluster

Similarly, all private NICs must also have the same names on all nodes

Do not mix NICs with different interface types (infiniband, ethernet, hyperfabric, etc.) for the same subnet/network.


What is the High Availability API?

An application-programming interface to allow processes to be put under the High Availability infrastructure that is part of the Oracle Clusterware distributed with Oracle Database 10g. A user written script defines how Oracle Clusterware should start, stop and relocate the process when the cluster node status changes. This extends the high availability services of the cluster to any application running in the cluster. Oracle Database 10g Real Application Clusters (RAC) databases and associated Oracle processes (E.G. listener) are automatically managed by the clusterware.


What are the IP requirements for the private interconnect?

The install guide lists the following requirements that the private IP address must satisfy:
1. Must be separate from the public network
2. Must be accessible on the same network interface on each node
3. Must have a unique address on each node
4. Must be specified in the /etc/hosts file on each node
The best-practice recommendation is to use the TCP/IP standard for non-routable networks. Reserved address ranges for private (non-routed) use (see TCP/IP RFC 1918):
* 10.0.0.0 -> 10.255.255.255
* 172.16.0.0 -> 172.31.255.255
* 192.168.0.0 -> 192.168.255.255
Cluvfy will give you an error if your private interconnect is not in the ranges above; you should not ignore this error (see the cluvfy example below).
If you use an IP address from the public range for the private network interfaces, you are pretty much messing up the IP addressing, and possibly the routing tables, for the rest of the corporation. IP addresses are a sparse commodity; use them wisely. If you use them on a non-routable network, there is nothing to prevent someone else from using them on the normal corporate network, and then when those RAC nodes find out that there is another path to that address range (through RIP), they just might start sending traffic to those other IP addresses instead of the interconnect. This is just a bad idea.
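For example, cluvfy's node connectivity component check can be run on demand to validate the interconnect addressing:

$ cluvfy comp nodecon -n all -verbose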


Can I set up failover of the VIP to another card in the same machine or what do I do if I have different network interfaces on different nodes in my cluster (I.E. eth0 on node1,2 and eth1 on node 3,4)?

With srvctl, you can modify the nodeapps for the VIP to list the NICs it can use. The VIP will then try to start on the eth0 interface and, if that fails, try the eth1 interface:
./srvctl modify nodeapps -n <node> -A <vip_address>/<netmask>/eth0\|eth1
Note how the interfaces are a list separated by the '|' symbol, and how you need to quote this with a '\' character or the Unix shell will interpret the character as a 'pipe'. So on a node called <node> with a VIP address of <node>-vip and a netmask of (say) 255.255.255.0, we have:
./srvctl modify nodeapps -n <node> -A <node>-vip/255.255.255.0/eth0\|eth1
To check which interfaces are configured as public or private, use oifcfg getif; example output:
eth0 global public
eth1 global public
eth2 global cluster_interconnect
An ifconfig on your machine will show the hardware names of the installed interface cards.


How is the voting disk used by Oracle Clusterware?

The voting disk is accessed exclusively by CSS (one of the Oracle Clusterware daemons). This is totally different from a database file. The database looks at the database files and interacts with the CSS daemon (at a significantly higher level conceptually than any notion of "voting disk").

"Non-synchronized access" (i.e. database corruption) is prevented by ensuring that the remote node is down before reassigning its locks. The voting disk, network, and the control file are used to determine when a remote node is down, in different, parallel, independent ways that allow each to provide additional protection compared to the other. The algorithms used for each of these three things are quite different.

As far as voting disks are concerned, a node must be able to access strictly more than half of the voting disks at any time. So if you want to be able to tolerate a failure of n voting disks, you must have at least 2n+1 configured. (n=1 means 3 voting disks). You can configure up to 32 voting disks, providing protection against 15 simultaneous disk failures, however it's unlikely that any customer would have enough disk systems with statistically independent failure characteristics that such a configuration is meaningful. At any rate, configuring multiple voting disks increases the system's tolerance of disk failures (i.e. increases reliability).

Configuring a smaller number of voting disks on some kind of RAID system can allow a customer to use some means of reliability other than CSS's multiple-voting-disk mechanism. However, there seem to be quite a few RAID systems that decide that 30-60 second (or, in the case of Veritas, 45 minute) IO latencies are acceptable, and we have to wait for at least the longest IO latency before we can declare a node dead and allow the database to reassign database blocks. So while using an independent RAID system for the voting disk may appear appealing, sometimes there are failover latency consequences.


When a customer runs the command 'onsctl start', they receive the message "Unable to open libhasgen10.so". Any idea why?

Most likely you are trying to start ONS from ORACLE_HOME instead of Oracle Clusterware (or Grid Infrastructure in 11.2) home. Please try to start it from the Oracle Clusterware home.


I made a mistake when I created the VIP during the install of Oracle Clusterware, can I change the VIP?

Yes. The details of how to do this are described in My Oracle Support Document 276434.1.


Does the hostname have to match the public name or can it be anything else?

When there is no vendor clusterware, only Oracle Clusterware, then the public node name must match the host name. When vendor clusterware is present, it determines the public node names, and the installer doesn't present an opportunity to change them. So, when you have a choice, always choose the hostname.


Which processes access the OCR ?

Oracle Cluster Registry (OCR) is used to store the cluster configuration information among other things. OCR needs to be accessible from all nodes in the cluster. If OCR became inaccessible the CSS daemon would soon fail, and take down the node. PMON never needs to write to OCR. To confirm if OCR is accessible, try ocrcheck from your ORACLE_HOME and Oracle Clusterware / GRID_HOME.


When does the Oracle node VIP fail over to another node and subsequently return to its home node?

The handling of the VIP with respect to a failover to another node and subsequent return to its home node is handled differently depending on the Oracle Clusterware version. In general, one can distinguish between Oracle Clusterware 10g & 11g Release 1 and Oracle Clusterware 11g Release 2 behavior.

For Oracle Clusterware 10g & 11g Release 1 the VIP will fail over to another node either after a network or a node failure. However, the VIP will automatically return to its home node only after a node failure and a subsequent restart of the node. Since the network is not constantly monitored in this Oracle Clusterware version, there is no way that Oracle Clusterware can detect the recovery of the network and initiate an automatic return of the node VIP to its home node.

Exception: With Oracle Patch Set 10.2.0.3, a new behavior was introduced that allowed the node VIP to return to its home node after the network recovered. The required network check was part of the database instance check. However, this new check introduced quite a few side effects and hence was disabled with subsequent bundle patches and Oracle Patch Set 10.2.0.4.

Starting with 10.2.0.4 and for Oracle Clusterware 11g Release 1 the default behavior is to avoid an automatic return of the node VIP to its home node after the network recovered. This behavior can be activated, if required, using the "ORA_RACG_VIP_FAILBACK" parameter. This parameter should only be used after reviewing support note 805969.1 (VIP does not relocate back to the original node starting from 10.2.0.4 and 11.1 even after the public network problem is resolved.)

With Oracle Clusterware 11g Release 2, the default behavior is to automatically initiate a return of the node VIP to its home node as soon as the network recovers after a failure. It needs to be noted that this behavior is not based on the parameter mentioned above and therefore does not induce the same side effects. Instead, a new network resource is used in Oracle Clusterware 11g Release 2, which monitors the network constantly, even after the network failed and the resource became "OFFLINE". This feature is called "OFFLINE resource monitoring" and is enabled by default for the network resource.
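In 11.2 you can observe the state of this monitored network resource with crsctl; the resource is typically named ora.net1.network (adjust if your network number differs):

$ crsctl status resource ora.net1.network -t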


Can I change the name of my cluster after I have created it when I am using Oracle Clusterware?

No, you must properly deinstall Oracle Clusterware and then re-install. To properly de-install Oracle Clusterware, you MUST follow the directions in the Grid Infrastructure Installation Guide (chapter 6). This will ensure the ocr gets cleaned out.


What should the permissions be set to for the voting disk and ocr when doing an Oracle RAC Install?

The Oracle Real Application Clusters install guide is correct. It describes the PRE-INSTALL ownership/permission requirements for the OCR and voting disk. This step is needed to make sure that the Oracle Clusterware install succeeds. Please don't use those values to determine what the ownership/permissions should be POST-INSTALL. The root script will change the ownership/permissions of the OCR and voting disk as part of the install. The POST-INSTALL permissions will end up being: OCR - root:oinstall 640; voting disk - oracle:oinstall 644.


How do I restore OCR from a backup? On Windows, can I use ocopy?

The only recommended way to restore an OCR from a backup is "ocrconfig -restore". The ocopy command will not be able to perform the restore action for OCR.


With Oracle Clusterware 10g, how do you backup the OCR?

There is an automatic backup mechanism for the OCR. The default location is: <CLUSTERWARE_HOME>/cdata/<clustername>/
To display backups:
#ocrconfig -showbackup
To restore a backup:
#ocrconfig -restore <backup_file>
The automatic backup mechanism keeps copies up to about a week old. So, if you want to retain a backup copy for longer than that, you should copy that backup file to some other name.
Unfortunately, with Oracle RAC 10g Release 1 there are a couple of bugs regarding backup file manipulation and changing the default backup directory on Windows; these were fixed in 10.1.0.4. On Windows, OCR backups may be absent: the only file in the backup directory is temp.ocr, which would be the last backup. You can restore this most recent backup by using the command ocrconfig -restore temp.ocr
With Oracle RAC 10g Release 2 or later, you can also use the export command:
#ocrconfig -export <file_name> -s online, and use the -import option to restore the contents back.
With Oracle RAC 11g Release 1, you can do a manual backup of the OCR with the command:
# ocrconfig -manualbackup


How to Restore a Lost Voting Disk used by Oracle Clusterware 10g

As long as you can confirm via the CSS daemon logfile that it considers the voting disk bad, you can restore the voting disk from backup while the cluster is online. This is the backup that you took with dd (as requested by the manual) after the most recent addnode, deletenode, or install operation. If by accident you restore a voting disk that the CSS daemon does NOT consider bad, then the entire cluster will probably go down.
crsctl add css votedisk <disk> - adds a new voting disk
crsctl delete css votedisk <disk> - removes a voting disk
Note: the cluster has to be down for these commands. You can also restore the backup via dd when the cluster is down.
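A sketch of the dd backup and restore, with hypothetical device and file names (use the same block size for the restore as for the backup):

$ dd if=/dev/raw/raw2 of=/backup/votedisk_raw2.bak bs=4k
$ dd if=/backup/votedisk_raw2.bak of=/dev/raw/raw2 bs=4k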


Is it a requirement to have the public interface linked to ETH0 or does it only need to be on a ETH lower than the private interface?: - public on ETH1 - private on ETH2

There is no requirement for interface name ordering. You could have - public on ETH2 - private on ETH0 Just make sure you choose the correct public interface in VIPCA, and in the installer's interconnect classification screen.


How to move the OCR location ?

For Oracle RAC 10g Release 1
- stop the CRS stack on all nodes using "init.crs stop"
- Edit /var/opt/oracle/ocr.loc on all nodes and set up ocrconfig_loc=new OCR device
- Restore from one of the automatic physical backups using ocrconfig -restore.
- Run ocrcheck to verify.
- reboot to restart the CRS stack.

For Oracle RAC 10g Release 2 or later, please use the ocrconfig command to replace the OCR with the new location:
# ocrconfig -replace ocr /dev/newocr
# ocrconfig -replace ocrmirror /dev/newocrmirror

Manual editing of ocr.loc or equivalent is not recommended, and will not work.


Can I run a 10.1.0.x database with Oracle Clusterware 10.2 ?

Yes. Oracle Clusterware 10.2 will support both 10.1 and 10.2 databases (and ASM too!). A detailed matrix is available in Document 337737.1


Is there any example to query mgmtdb directly to get information about performance issues on cluster?

No, this is not the purpose of the GIMR. The GIMR abstracts the repository from its clients: all information about the performance and diagnostics of a cluster is made available to users through those clients.


Is moving the GIMR to a different diskgroup a best practice?

Yes, moving the GIMR to a different diskgroup is a best practice. This is because the users can then control the retention time and sizing of GIMR independent of their other requirements.


How to specify the new diskgroup for the MGMTDB during the installation GUI?

There is an opportunity to create a diskgroup during the installation, and this diskgroup can be used for the GIMR. It will initially also contain the Clusterware files, which can later be moved out.


If GIMR is already discovered and managed in EMCC, the recommendation is to remove that from EM Cloud Control Managed Target List?

Yes, it is recommended to do so.


If the GIMR does not come up due to some problem, will the cluster restart, or will the cluster not come up until the GIMR is up and available as part of the CRS restart?

No, there is no availability impact if mgmtdb does not start. Alerts will be raised, but none of the existing databases will be impacted. CHM will log locally during that period.


How do I configure the interval of data writes to mgmtdb?

The interval of data writes to mgmtdb cannot be configured by end users; it depends on the clients of mgmtdb. If a client chooses to expose its write interval as an option, then end users can configure it. In the case of CHM, this option is not exposed to end users.


What does it mean that the node listener will go away from the current GIMR topology in the upcoming releases?

It simply means that remote clients that currently come in through the SCAN will be handled by the management listener instead of the node listener. The management database will be registered with the node VIP instead of the node listener, so that public traffic can be redirected using the management listener.


How to remove GIMR database if it needs to be replaced?

One should contact Oracle support for deleting a GIMR database and restoring it. No function has been provided to the users for this.


Can the data files of GIMR database be placed in any other disk group apart from its default one?

Yes, the data files of the GIMR can be placed in a disk group shared with other databases, apart from its default one. However, this is not recommended, as it will take up a lot of space in the disk group that contains the user database and increase the user's storage costs. Also note that if the GIMR data files are placed in another disk group, they will inherit the redundancy of that disk group.


While using mdbutil.pl with the add and db options, it will recreate the GIMR database. So should the GIMR database be deleted before issuing this command?

Here, the idea is to use the -add option when the GIMR database is to be newly created. If the GIMR already exists, there is another option to move the GIMR database, which should be used instead. This option will automatically move the database and its contents and recreate it at the new location. The move option with mdbutil.pl is used as follows:

mdbutil.pl --mvmgmtdb --target=+GIMR


How do I cd into the GIMR's trace or log directory?

Remember that the directory name begins with a hyphen, so prefix it with ./ to keep the shell from treating it as an option: cd ./-MGMTDB


If EMCC is not supposed to monitor it, why is it discovered as a target ?

The discovery of the GIMR will be masked in an upcoming release.


How much shared disk space does the GIMR installation require?

This is documented in the GI Installation Guide for each platform and depends on the number of nodes and redundancy. For a Linux cluster of 2-4 nodes, it is 4.5 GB with external redundancy.


Does the GIMR get configured in a Oracle Restart single server install?

No, as it does not currently have clients in that deployment type.


Do I need to regularly backup the GIMR?

It is optional at this time, as its data is regularly windowed through dropping partitions. You can use oclumon to regularly archive data.
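For example, to archive the last six hours of CHM node views to a file (duration in HH:MM:SS; the output file name is arbitrary):

$ oclumon dumpnodeview -allnodes -last "06:00:00" > /tmp/chm_6h.txt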


Will I lose my GIMR data when upgrading or applying a patch?

You will lose your CHM data but not your RHP data during an upgrade. Whether it happens for a PSU will depend upon the level of GIMR patch.


Do I need to separately patch the GIMR?

No, any patches for the GIMR will be included in the GI PSU and applied during the GI patching process.


Why does the GIMR use hugepages?

The GIMR only uses a small quantity of hugepages (371) to prevent its SGA from swapping since some of its clients have timing windows.


Will I lose my cluster or database availability should the GIMR go down?

No, the GIMR clients are designed to locally cache data if the GIMR is down for a period of time. Should this happen, CRS will restart the GIMR or fail it over to another node.


Can I disable the GIMR?

No, it is not supported to run 12.1.0.2 clusters on Tier One platforms without the GIMR enabled and running.


Who are the clients of GIMR?

Clients of the GIMR include both the clients that put their data in the GIMR and the ones that retrieve data from it. Currently, GIMR clients include Cluster Health Monitor (CHM), Rapid Home Provisioning (RHP), Enterprise Manager Cloud Control (EMCC), and Trace File Analyzer (TFA). CHM puts all of its OS metric data in the GIMR. RHP uses the GIMR to persist metadata about each of the database homes that it is servicing. EMCC and TFA retrieve data from the GIMR, predominantly CHM data.

In future releases, QoS Management and Cluster Activity Log will also use GIMR to store their data. And Cluster Health Advisor, an evolution of CHM, will use GIMR to retrieve model and diagnostic data.


Why was GIMR implemented?

There has been a long-standing requirement for a diagnostic data repository. The cluster generates a large amount of diagnostic and performance data through tools such as Cluster Health Monitor and Quality of Service Management, and there is no storage available in EMCC for this data. Previously, each client stored this data in its own local repository, which consumed considerable local disk space; this scarcity of local space grew worse as multiple databases were consolidated onto the cluster. There were also increasing requirements for off-cluster access to the data. For these reasons, the data is now stored in a separate repository called GIMR.


What is Grid Infrastructure Management Repository (GIMR)?

Grid Infrastructure Management Repository (GIMR) is a centralized infrastructure database for diagnostic and performance data that runs out of the Oracle GI Home. It is a single-instance CDB with a single PDB and uses partitioning for data lifecycle management.

GIMR is a license-free database which is enabled by default and is always running. It is a cluster-managed resource, with CRS managing its availability, restarting it and failing it over when required. It uses ASM and by default stores its data files in the disk group used for the cluster files (OCR and voting files). It also uses a fixed set of resources and therefore does not overwhelm the cluster. GIMR is an autonomous database with automatic data lifecycle management.
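Since the GIMR is a CRS-managed resource, it can be inspected like any other database; a minimal sketch using the srvctl mgmtdb commands available in 12c:

srvctl status mgmtdb
srvctl config mgmtdb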


Is RHP supported on Exadata?

Yes. RHP is supported with Grid Infrastructure 12.1 and later, on any system other than Windows. It is hardware-agnostic.


Does the RHP Server need to be running the same OS as the targets it manages?

No. The RHP Server can be Linux, Solaris or AIX, and the targets can be any mix of the same.

Note that Windows is not supported, either as an RHP Server or as a target.


How is Rapid Home Provisioning licensed?

Rapid Home Provisioning (RHP) is a feature of Grid Infrastructure 12.1 and later. The architecture consists of a Server and one or more Clients. The Server may provision and patch Homes locally without any extra license needed. If Clients are configured, they require Lifecycle Management Pack licensing.


