RAC: Frequently Asked Questions (RAC FAQ) (Doc ID 220970.1)


QUESTIONS AND ANSWERS

RAC - Real Application Clusters

RAC

RAC One Node

QoS - Quality of Service Management

Clusterware

Autonomous Computing

Rapid Home Provisioning

Answers

How do I measure the bandwidth utilization of my NIC or my interconnect?

Oracle RAC depends on both (a) latency and (b) bandwidth.
(a) Latency is best measured by running an AWR or Statspack report and reviewing the cluster section.
(b) Bandwidth can be measured using OS-provided utilities like iptraf, netperf, or topas (AIX). Note that some of these utilities may not be available on all platforms.

Keep in mind that if the network is utilized at 50% of its bandwidth, it is busy 50% of the time and not available to potential users. In this case, delays (due to network collisions) will increase latency even though the bandwidth might look "reasonable". So always keep an eye on both latency and bandwidth.
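As a minimal sketch (Linux-only; it assumes the interface appears in /proc/net/dev, and the interface name and interval are illustrative), received throughput can be sampled from the kernel's cumulative byte counters:

```shell
#!/bin/sh
# Sketch: estimate RX throughput of a NIC by sampling /proc/net/dev twice.
# Point IFACE at your interconnect NIC and compare the result against
# the NIC's rated bandwidth.

rx_bytes() {
    # The first field after the colon on an interface line is cumulative RX bytes.
    awk -F: -v ifc="$1" '{gsub(/ /, "", $1)} $1 == ifc {split($2, f, " "); print f[1]}' /proc/net/dev
}

IFACE=${1:-lo}
INTERVAL=${2:-1}

before=$(rx_bytes "$IFACE")
sleep "$INTERVAL"
after=$(rx_bytes "$IFACE")
echo "$(( (after - before) / INTERVAL )) bytes/sec received on $IFACE"
```

Run it once per node while your workload is active; iptraf or netperf give the same information with more detail.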


How can I validate the scalability of my shared storage? (Tightly related to RAC / Application scalability)

RAC scalability is dependent on the storage unit's ability to process I/Os per second (throughput) in a scalable fashion, specifically from multiple sources (nodes).

Oracle recommends using ORION (the Oracle I/O test tool), which simulates Oracle I/O. Note: starting with 11.2 the orion tool is included with the RDBMS/RAC software; see ORACLE_HOME/bin. On other Unix platforms (as well as Linux) one can use IOzone. If a prebuilt binary is not available, build it from source; make sure to use version 3.271 or later, and if testing raw/block devices add the "-I" flag.

In a basic read test you try to demonstrate that a certain I/O throughput can be maintained as nodes are added. Try to simulate your database I/O patterns as closely as possible, i.e. block size, number of simultaneous readers, rates, etc. For example, on a 4-node cluster, from node 1 you measure 20MB/sec; then you start a read stream on node 2 and see another 20MB/sec while the first node shows no decrease. You then run another stream on node 3 and get another 20MB/sec. In the end you run 4 streams on 4 nodes and get an aggregated 80MB/sec, or close to it. This proves that the shared storage is scalable. Obviously, if you see poor scalability in this phase, it will carry over and be observed or interpreted as poor RAC / application scalability.
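As a rough sketch, a single-node orion read test might be launched like this (the test name, LUN file contents, and disk count are assumptions; repeat the run concurrently from additional nodes to test scaling):

```shell
# mytest.lun (in the current directory) lists the candidate devices,
# one per line, e.g. /dev/sdc. The 'simple' run exercises small random
# and large sequential reads against those devices.
$ORACLE_HOME/bin/orion -run simple -testname mytest -num_disks 4
```

Compare the reported MB/sec across the single-node and multi-node runs to judge whether aggregate throughput scales.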


Is Oracle RAC supported on logical partitions (i.e. LPARs) or other virtualization technologies?

Check http://www.oracle.com/technetwork/database/virtualizationmatrix-172995.html for more details on supported virtualization and partitioning technologies.


How should voting disks be implemented in an extended cluster environment?

Can I use standard NFS for the third-site voting disk?

Standard NFS is supported only for the tie-breaking voting disk in an extended cluster environment (see the platform and mount-option restrictions). Otherwise, just as with database files, voting files are supported only on certified NAS devices, with the appropriate mount options. Please refer to My Oracle Support Document 359515.1 for a full description of the required mount options.


What are the cdmp directories in the background_dump_dest used for?

These directories are produced by the diagnosability daemon process (DIAG). DIAG is a database process which, as one of its tasks, performs cache dumping. The DIAG process dumps tracing to file when it discovers the death of an essential process (foreground or background) in the local instance. A dump directory named something like cdmp_<timestamp> is created in the bdump or background_dump_dest directory, and all the trace dump files DIAG creates are placed in this directory.


How do I gather all relevant Oracle and OS log/trace files in an Oracle RAC cluster to provide to Support?

We recommend installing TFA (Trace File Analyzer) on every cluster.

TFA is a great tool for collecting the different logs across the cluster for database and cluster diagnostics. It can be run manually, automatically, or at any given interval. It is included in 12.1.0.2.
If you are on 12.1.0.1, you need to download it; see My Oracle Support note 1513912.1.

11.2.0.3 Grid Infrastructure deployments do not include TFA, but 11.2.0.4 deployments do include it.

TFA narrows the collected data to only what is relevant to the time range you are analyzing, creates a zip file, and uploads it to Support.

The TFA analyzer takes this zip file and provides an easy way to navigate the data, showing relationships between logs across nodes and making the data easier to analyze.


How should one review the ability to scale out to more nodes in your cluster?

Once a customer is using RAC on a two-node cluster and wants to see how far they can actually scale it, the following are some handy tips to follow:
1. Ensure the workload is realistic enough that it does not introduce false bottlenecks.
2. Tune the application so it is reasonably scalable on the current RAC environment.
3. Make sure you are measuring a valid scalability metric. This should be either completing very large batch jobs faster (via parallelism) or supporting a greater number of short transactions in a shorter time.
4. Actual scalability will vary for each application and its bottlenecks, hence the items above. You would see similar scalability when scaling up on an SMP.
5. For failover, test what happens if you lose a node. If you have 2 nodes, you either lose half your power and really get into trouble, or you have lots of extra capacity.
6. Verify that load balancing is working properly. Make sure you are using runtime connection load balancing (RCLB) and a FAN-aware connection pool.
7. The customer should also test using database services.
8. Get familiar with Enterprise Manager Grid Control to manage the cluster; it helps eliminate much of the complexity of managing many nodes.
9. Why stop at 6 nodes? Since messaging is at most 3-way, RAC can scale much, much further.


Can I ignore 10.2 CLUVFY on Solaris warning about failed package existence checks?

The complete error is:

Package existence check failed for "SUNWscucm:3.1".

Package existence check failed for "SUNWudlmr:3.1".

Package existence check failed for "SUNWudlm:3.1".

Package existence check failed for "ORCLudlm:Dev_Release_06/11/04,_64bit_3.3.4.8_reentrant".

Package existence check failed for "SUNWscr:3.1".

Package existence check failed for "SUNWscu:3.1".

Cluvfy checks all possible prerequisites and reports whether your system passed the checks or not. You should then cross reference with the install guide to see if the checks that failed are required for your type of installation. In the above case, if you are not planning on using Sun Cluster, then you can continue the install.


What is a CVU component?

CVU (Cluster Verification Utility) supports the notion of component verification. The verifications in this category are not associated with any specific stage. The user can verify the correctness of a specific cluster component. A component can range from a basic one, like free disk space, to a complex one like the CRS stack. The integrity check for the CRS stack transparently spans verification of the multiple sub-components associated with it. This encapsulation of a set of tasks within a specific component verification greatly simplifies things for the user.


Why does cluvfy report "unknown" on a particular node?

According to the Cluster Verification Utility Reference from the Clusterware Administration and Deployment Guide from the 12c Documentation:

If a cluvfy command responds with UNKNOWN for a particular node, then this is because CVU cannot determine whether a check passed or failed. The cause could be a loss of reachability or the failure of user equivalence to that node. The cause could also be any system problem that was occurring on that node when CVU was performing a check.

The following is a list of possible causes for an UNKNOWN response:

  • The node is down
  • Executables that CVU requires are missing in Grid_home/bin or the Oracle home directory
  • The user account that ran CVU does not have privileges to run common operating system executables on the node
  • The node is missing an operating system patch or a required package
  • The node has exceeded the maximum number of processes or maximum number of open files, or there is a problem with IPC segments, such as shared memory or semaphores

What are the requirements for CVU?

According to the Cluster Verification Utility Reference from the Clusterware Administration and Deployment Guide from the 12c Documentation, CVU requirements are:

  • At least 30 MB free space for the CVU software on the node from which you run CVU

  • A work directory with at least 25 MB free space on each node. The default location of the work directory is /tmp on Linux and UNIX systems, and the value specified in the TEMP environment variable on Windows systems. You can specify a different location by setting the CV_DESTLOC environment variable.

    When using CVU, the utility attempts to copy any needed information to the CVU work directory. It checks for the existence of the work directory on each node. If it does not find one, then it attempts to create one. Make sure that the CVU work directory either exists on all nodes in your cluster or proper permissions are established on each node for the user running CVU to create that directory.

  • Java 1.4.1 on the local node


What about discovery? Does CVU discover installed components?

CVU performs system checks in preparation for installation, patch updates, and other system changes. Checks performed by CVU include:

  • Free disk space
  • Clusterware Integrity
  • Memory
  • Processes
  • Other important cluster components
  • All available network interfaces
  • Shared Storage
  • Clusterware home

For more information please check the Cluster Verification Utility Reference from the Clusterware Administration and Deployment Guide from the 12c Documentation.


How do I check Oracle RAC certification?

Please refer to My Oracle Support (https://support.oracle.com/) for information regarding certification of Oracle RAC and all other products in the stack.


Is Veritas Storage Foundation supported with Oracle RAC?

Veritas certifies Veritas Storage Foundation for Oracle RAC with each release. Check Veritas Support Matrix for the latest details. Also visit My Oracle Support for a list of certified 3rd party products with Oracle RAC.


Can I use ASM as mechanism to mirror the data in an Extended RAC cluster?

Yes, please refer to the Extended Clusters Technical Brief for more information on Extended Clusters and the RAC Stack.


What are the changes in memory requirements from moving from single instance to RAC?

If you are keeping the workload requirements per instance the same, then about 10% more buffer cache and 15% more shared pool is needed. The additional memory requirement is due to data structures for coherency management. The values are heuristic and are mostly upper bounds. Actual resource usage can be monitored by querying current and maximum columns for the gcs resource/locks and ges resource/locks entries in V$RESOURCE_LIMIT.

But in general, please take into consideration that memory requirements per instance are reduced when the same user population is distributed over multiple nodes. In this case:

Assuming the same user population, with N = number of nodes and M = buffer cache for a single system, the per-instance buffer cache is approximately:

(M / N) + ((M / N) * 0.10) [ + extra memory to compensate for failed-over users ]

Thus, for example, with M = 2G, N = 2, and no extra memory for failed-over users:

= (2G / 2) + ((2G / 2) * 0.10)

= 1G + 100M
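The formula above can be checked with simple shell arithmetic (values in MB; the example values are the same M=2G, N=2 case):

```shell
#!/bin/sh
# Per-instance buffer cache estimate: (M/N) + (M/N)*0.10
M=2048   # single-system buffer cache in MB (example value)
N=2      # number of nodes (example value)
per_node=$(( M / N ))
overhead=$(( per_node / 10 ))   # the ~10% coherency overhead
echo "$(( per_node + overhead )) MB per instance"   # 1024 + 102 MB, i.e. ~1G + 100M
```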


What are the default values for the command line arguments?

Here are the default values and behavior for different stage and component commands:

For component nodecon:
If neither the -i nor the -a argument is provided, then cluvfy enters discovery mode.

For component nodereach:
If no -srcnode argument is provided, then the local node (the node of invocation) is used as the source node.

For components cfs, ocr, crs, space, clumgr:
If no -n argument is provided, then the local node will be used.

For components sys and admprv:
If no -n argument is provided, then the local node will be used.
If no -osdba argument is provided, then 'dba' will be used. If no -orainv argument is provided, then 'oinstall' will be used.

For component peer:
If no -osdba argument is provided, then 'dba' will be used.
If no -orainv argument is provided, then 'oinstall' will be used.

For stage -post hwos:
If no -s argument is provided, then cluvfy enters discovery mode.

For stage -pre clusvc:
If no -c argument is provided, then cluvfy will skip OCR related checks.
If no -q argument is provided, then cluvfy will skip voting disk related checks.
If no -osdba argument is provided, then 'dba' will be used.
If no -orainv argument is provided, then 'oinstall' will be used.

For stage -pre dbinst:
If -cfs_oh flag is not specified, then cluvfy will assume Oracle home is not on a shared file system.
If no -osdba argument is provided, then 'dba' will be used.
If no -orainv argument is provided, then 'oinstall' will be used.


Do I have to be root to use CVU?

No. CVU is intended for database and system administrators. CVU assumes the current user is the Grid/Database software owner.


What is nodelist?

A nodelist is a comma-separated list of hostnames without the domain. Cluvfy ignores any domain while processing the nodelist. If duplicate entries exist after the domain is removed, cluvfy eliminates the duplicate names while processing. Wherever supported, you can use '-n all' to check all the cluster nodes.


How do I check minimal system requirements on the nodes?

The component verification command sys is meant for that. To check the system requirement for RAC, use '-p database' argument. To check the system requirement for CRS, use '-p crs' argument.


How do I get detailed output of a check?

Cluvfy supports a verbose feature. By default, cluvfy runs in non-verbose mode and reports just the summary of a test. To get detailed output of a check, use the '-verbose' flag on the command line. This produces detailed output of the individual checks and, where applicable, shows per-node results in a tabular fashion.


Why does the peer comparison with -refnode say "passed" when the group or user does not exist?

Peer comparison with the -refnode feature acts like a baseline comparison: it compares the system properties of other nodes against the reference node. If a value does not match (i.e., is not equal to the reference node's value), it is flagged as a deviation from the reference node. If a group or user does not exist on the reference node as well as on the other node, it is reported as 'matched', since there is no deviation from the reference node. Similarly, a node with more total memory than the reference node is reported as 'mismatched', for the same reason.


At what point is cluvfy usable? Can I use cluvfy before installing Oracle Clusterware?

You can run cluvfy at any time, even before CRS installation. In fact, cluvfy is designed to assist the user as soon as the hardware and OS are up. If you invoke a command which requires CRS or RAC on the local node, cluvfy reports an error if those required products are not yet installed.

Cluvfy can also be invoked after the install to check whether any new hardware components added after the install (such as additional shared disks) are accessible from all the nodes.


Is there a way to compare nodes?

You can use the peer comparison feature of cluvfy for this purpose. The command 'comp peer' will list the values of different nodes for several pre-selected properties. You can use the peer command with -refnode argument to compare those properties of other nodes against the reference node.


What is a stage?

CVU supports the notion of stage verification. It identifies all the important stages in RAC deployment and provides each stage with its own entry and exit criteria. The entry criteria for a stage define a specific set of verification tasks to be performed before initiating that stage. This pre-check saves the user from entering a stage unless its prerequisite conditions are met. The exit criteria for a stage define another specific set of verification tasks to be performed after completion of the stage. The post-check ensures that the activities for that stage have completed successfully, and identifies any stage-specific problem before it propagates to subsequent stages, where finding its root cause would be harder. An example of a stage is "pre-check of database installation", which checks whether the system meets the criteria for a RAC install.


How do I know about cluvfy commands? The usage text of cluvfy does not show individual commands.

Cluvfy has context-sensitive help built into it. Cluvfy shows the most appropriate usage text based on the cluvfy command-line arguments. If you type 'cluvfy' at the command prompt, cluvfy displays the high-level generic usage text, which covers valid stage and component syntax. If you type 'cluvfy comp -list', cluvfy shows the valid components with a brief description of each. If you type 'cluvfy comp -help', cluvfy shows the detailed syntax for each of the valid components. Similarly, 'cluvfy stage -list' and 'cluvfy stage -help' list the valid stages and their syntax, respectively. If you type an invalid command, cluvfy shows the appropriate usage for that particular command. For example, if you type 'cluvfy stage -pre dbinst', cluvfy shows the syntax for the pre-check of the dbinst stage.


Can I check if the storage is shared among the nodes?

Yes, you can use 'comp ssa' command to check the sharedness of the storage. Please refer to the known issues section for the type of storage supported by cluvfy in the cluvfy help command output.


Does Database blocksize or tablespace blocksize affect how the data is passed across the interconnect?

Oracle ships database block buffers: blocks in a tablespace configured for 16K result in a 16K data buffer being shipped, blocks residing in a tablespace with the base block size (8K) are shipped as base blocks, and so on; the data buffers are broken down into packets of MTU size.

There are optimizations in newer releases, such as compression, that are beyond the scope of this FAQ.


What is Oracle's position with respect to supporting RAC on Polyserve CFS?

Please check the certification matrix available through My Oracle Support for your specific release.


How do I check network or node connectivity related issues?

Use component verification commands like 'nodereach' or 'nodecon' for this purpose. For detailed syntax of these commands, type 'cluvfy comp -help' at the command prompt. If the 'cluvfy comp nodecon' command is invoked without -i, cluvfy attempts to discover all the available interfaces and the corresponding IP addresses and subnets. Then cluvfy tries to verify the node connectivity per subnet. You can run this command in verbose mode to find out the mappings between the interfaces, IP addresses, and subnets. You can check the connectivity among specific nodes by specifying the interface name(s) through the -i argument.
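For illustration, such checks might look like the following (node and interface names are hypothetical):

```shell
# Discover all interfaces and verify connectivity per subnet, verbosely:
cluvfy comp nodecon -n all -verbose
# Check reachability of two nodes from a named source node:
cluvfy comp nodereach -n node1,node2 -srcnode node1 -verbose
# Restrict the connectivity check to a specific interface:
cluvfy comp nodecon -n node1,node2 -i eth1 -verbose
```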


What is CVU? What are its objectives and features?

CVU brings ease to RAC users by verifying all the important components that need to be verified at different stages in a RAC environment. The wide domain of deployment of CVU ranges from initial hardware setup through fully operational cluster for RAC deployment and covers all the intermediate stages of installation and configuration of various components. The command line tool is cluvfy. Cluvfy is a non-intrusive utility and will not adversely affect the system or operations stack.


Is there a cluster file system (CFS) Available for Linux?

Yes, ACFS (ASM Cluster File System with Oracle Database 11g Release 2) and OCFS (Oracle Cluster Filesystem) are available for Linux. The following My Oracle Support document has information for obtaining the latest version of OCFS:

Document 238278.1 - How to find the current OCFS version for Linux


Is OCFS2 certified with Oracle RAC 10g?

Yes. See Certify to find out which platforms are currently certified.


How do I check the Oracle Clusterware stack and other sub-components of it?

Cluvfy provides commands to check a particular sub-component of the CRS stack as well as the whole CRS stack. You can use the 'comp ocr' command to check the integrity of the OCR. Similarly, you can use the 'comp crs' and 'comp clumgr' commands to check the integrity of the crs and cluster manager sub-components. To check the entire CRS stack, run the stage command 'cluvfy stage -post crsinst'.
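Illustrative invocations of the commands above (the '-n all' node list is an assumption; adjust as needed):

```shell
# Check individual sub-components:
cluvfy comp ocr -n all -verbose
cluvfy comp crs -n all -verbose
cluvfy comp clumgr -n all -verbose
# Check the entire Clusterware stack after installation:
cluvfy stage -post crsinst -n all -verbose
```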


Where can I find the CVU trace files?

CVU log files can be found under the $CV_HOME/cv/log directory. The log files are automatically rotated, and the latest log file is named cvutrace.log.0. It is a good idea to clean up unwanted log files, or archive them, to reclaim disk space.

In recent releases, CVU trace files are generated by default. Setting SRVM_TRACE=false before invoking cluvfy disables the trace generation for that invocation.


Can I use Oracle Clusterware for failover of the SAP Enqueue and VIP services when running SAP in a RAC environment?

Oracle has created SAPCTL to do this, and it is available for certain platforms. SAPCTL is available for download from the SAP Service Marketplace for AIX and Linux. Please check the marketplace for other platforms.


How do I turn on tracing?

Set the environment variable SRVM_TRACE to true. For example, in tcsh, "setenv SRVM_TRACE true" turns on tracing. It may also help to run cluvfy with the -verbose attribute and capture the session (Bourne/bash syntax shown):
$ script run.log
$ export SRVM_TRACE=true
$ cluvfy <stage|comp> <options> -verbose
$ exit


How do I check whether OCFS is properly configured?

You can use the cluvfy component command 'cfs' to check this. Provide the OCFS file system you want to check through the -f argument. Note that the sharedness check for the file system is supported for OCFS version 1.0.14 or higher.


My customer is about to install 10.2.0.2 clusterware on new Linux machines. He is getting a "No ORACM running" error when running rootpre.sh, and it exits. Should he worry about this message?

It is an informational message. Generally for such scripts, you can issue echo "$?" to ensure that it returns a zero value. The message is basically saying that it did not find an oracm. If the customer were installing 10g on an existing 9i cluster (which would have an oracm), then this message would have been serious. But since the customer is installing on a fresh new box, they can continue the install.


Can different releases of Oracle RAC be installed and run on the same physical Linux cluster?

Yes.

The detailed answer is broken into the following categories:

  • Oracle Version 10g and above only
    We only require that Oracle Clusterware version be higher than or equal to the Database release. Customer can run multiple releases on the same cluster.
  • Oracle Version 10g and higher alongside Oracle Version less than 10g
    Oracle Clusterware (CRS) will not support an Oracle 9i RAC database, so you will have to leave the current configuration in place. You can install Oracle Clusterware and Oracle RAC 10g or 11g into the same cluster. On Windows and Linux, you must run the 9i Cluster Manager for the 9i database and Oracle Clusterware for the 10g database. When you install Oracle Clusterware, your 9i srvconfig file will be converted to the OCR. Oracle 9i RAC, Oracle RAC 10g, and Oracle RAC 11g will use the OCR. Do not restart the 9i gsd after you have installed Oracle Clusterware. Remember to check Certify for details of which vendor clusterware can be run with Oracle Clusterware. Oracle Clusterware must be at the highest level (down to the patchset). For example, Oracle Clusterware 11g Release 2 will support Oracle RAC 10g and Oracle RAC 11g databases, whereas Oracle Clusterware 10g can only support Oracle RAC 10g databases.

Oracle Clusterware fails to start after a reboot due to permissions on raw devices reverting to default values. How do I fix this?

After a successful installation of Oracle Clusterware, a simple reboot leaves Oracle Clusterware unable to start. This is because the permissions on the raw devices for the OCR and voting disks, e.g. /dev/raw/raw{x}, revert to their default values (root:disk) and are inaccessible to Oracle. This change of behavior started with the 2.6 kernel, i.e. in RHEL4, OEL4, RHEL5, OEL5, SLES9 and SLES10. In RHEL3 the raw devices maintained their permissions across reboots, so this symptom was not seen.

The way to fix this on RHEL4, OEL4 and SLES9 is to create /etc/udev/permissions.d/40-udev.permissions (you must choose a number lower than 50). You can do this by copying /etc/udev/permissions.d/50-udev.permissions and removing the lines that are not needed (50-udev.permissions gets replaced on upgrades, so you do not want to edit it directly; also, a typo in 50-udev.permissions can render the system unusable). Example permissions file:
# raw devices
raw/raw[1-2]:root:oinstall:0640
raw/raw[3-5]:oracle:oinstall:0660

Note that this applies to all raw device files; here just the voting and OCR devices were specified.

On RHEL5, OEL5 and SLES10 a different file is used: /etc/udev/rules.d/99-raw.rules. Notice that now the number must be higher than 50. Also, the syntax of a rules file differs from that of the permissions file.
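A minimal sketch of such a rules file (the device names, owner, group and mode are illustrative; match them to your own OCR and voting devices, and see Document 414897.1 for the authoritative syntax):

```shell
# /etc/udev/rules.d/99-raw.rules
ACTION=="add", KERNEL=="raw1", OWNER="root", GROUP="oinstall", MODE="0640"
ACTION=="add", KERNEL=="raw[2-3]", OWNER="oracle", GROUP="oinstall", MODE="0660"
```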

This is explained in detail in Document 414897.1 .


Customer did not load the hangcheck-timer before installing RAC. Can the customer just load the hangcheck-timer now?

Yes. The hangcheck-timer is a kernel module that is shipped with the Linux kernel; all you have to do is load it.
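A sketch of loading the module (the parameter values below are illustrative; confirm the recommended values for your release in Document 726833.1):

```shell
# Load the module with commonly used timing parameters (run as root):
modprobe hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
# Verify it is loaded:
lsmod | grep hangcheck
# To load it automatically at boot, add the modprobe line to /etc/rc.local
# (or configure the options in /etc/modprobe.conf).
```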

For more details see Document 726833.1


After installing patchset 9013 and patch_2313680 on Linux, the startup was very slow

Please carefully read the following new information about configuring Oracle Cluster Management on Linux, provided as part of the patch README:

Three parameters affect the startup time:

soft_margin (defined at watchdog module load)

-m (watchdogd startup option)

WatchdogMarginWait (defined in nmcfg.ora).

WatchdogMarginWait is calculated using the formula:

WatchdogMarginWait = soft_margin(msec) + -m + 5000(msec).

[5000(msec) is hardcoded]

Note that soft_margin is measured in seconds, while -m and WatchdogMarginWait are measured in milliseconds.

Based on benchmarking, it is recommended to set soft_margin between 10 and 20 seconds. Use the same value for -m (converted to milliseconds) as used for soft_margin. Here is an example:

soft_margin = 10 (sec), -m = 10000 (msec): WatchdogMarginWait = 10000 + 10000 + 5000 = 25000 (msec)

If CPU utilization in your system is high and you experience unexpected node reboots, check the wdd.log file. If there are any 'ping came too late' messages, increase the value of the above parameters.
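The example calculation above can be reproduced with shell arithmetic (values are the recommended example; 5000 msec is the hardcoded constant):

```shell
#!/bin/sh
# WatchdogMarginWait = soft_margin (converted to msec) + -m + 5000 (msec)
soft_margin=10   # seconds
m=10000          # msec; same value as soft_margin, converted to msec
echo "$(( soft_margin * 1000 + m + 5000 )) msec"
```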


Is there a way to verify that the Oracle Clusterware is working properly before proceeding with RAC install?

Yes. You can use the post-check command for cluster services setup (-post clusvc) to verify CRS status. A more appropriate test would be to use the pre-check command for database installation (-pre dbinst). This checks whether the current state of the system is suitable for a RAC install.


How do I configure my RAC Cluster to use the RDS Infiniband?

Ensure that the IB (Infiniband) Card is certified for the OS, Driver, Oracle version etc.

You may need to relink Oracle using the commands:

$ cd $ORACLE_HOME/rdbms/lib
$ make -f ins_rdbms.mk ipc_rds ioracle

You can check your interconnect through the alert log at startup: look for the string "cluster interconnect IPC version: Oracle RDS/IP (generic)" in the alert.log file.
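As an illustration, such a search might look like the following (the diagnostic path is an assumption for ADR-based releases; older releases write alert_<SID>.log under background_dump_dest):

```shell
# Path is illustrative; adjust to your diagnostic layout.
grep -i "cluster interconnect IPC version" \
    "$ORACLE_BASE"/diag/rdbms/*/*/trace/alert_*.log
```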

See Document 751343.1 for more details.


Why is validateUserEquiv failing during install (or cluvfy run)?

SSH must be set up as per the pre-installation tasks. It is also necessary to have file permissions set as described below for features such as public key authorization to work. If your permissions are not correct, public key authentication will fail and will fall back to password authentication, with no helpful message as to why. The following server configuration files and directories must be owned by the account owner or by root, and GROUP and WORLD WRITE permissions must be disabled on them.

$HOME
$HOME/.rhosts
$HOME/.shosts
$HOME/.ssh
$HOME/.ssh/authorized_keys
$HOME/.ssh/authorized_keys2 # OpenSSH-specific for the ssh2 protocol.
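A sketch of tightening those permissions follows. HOME_DIR defaults to a scratch directory here so the sketch is safe to run as-is; point it at the real $HOME (as the software owner, on every node) to apply the fix:

```shell
#!/bin/sh
# Tighten the permissions SSH public-key authentication requires.
HOME_DIR=${HOME_DIR:-$(mktemp -d)}
mkdir -p "$HOME_DIR/.ssh"
touch "$HOME_DIR/.ssh/authorized_keys"
chmod go-w "$HOME_DIR" "$HOME_DIR/.ssh"     # disable group/world write
chmod 600 "$HOME_DIR/.ssh/authorized_keys"  # key file: owner-only access
ls -ld "$HOME_DIR/.ssh"
```

After fixing the permissions, 'ssh <node> date' from each node should succeed without a password prompt.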

SSH (from OUI) will also fail if you have not connected to each machine in your cluster as per the note in the installation guide:

The first time you use SSH to connect to a node from a particular system, you may see a message similar to the following:

The authenticity of host 'node1 (140.87.152.153)' can't be established. RSA key fingerprint is 7z:ez:e7:f6:f4:f2:4f:8f:9z:79:85:62:20:90:92:z9.
Are you sure you want to continue connecting (yes/no)?

Enter yes at the prompt to continue. You should not see this message again when you connect from this system to that node. Answering yes causes an entry to be added to the "known_hosts" file in the .ssh directory, which is why subsequent connection requests do not re-ask.
This is known to work on Solaris and Linux but may work on other platforms as well.


What is Runtime Connection Load Balancing?

Runtime connection load balancing enables the connection pool to route incoming work requests to the available database connection that will provide the best service. This provides the best service times globally, and routing responds quickly to changing conditions in the system. Oracle has implemented runtime connection load balancing with the ODP.NET and JDBC connection pools. Runtime connection load balancing is tightly integrated with the automatic workload management features introduced with Oracle Database 10g, i.e. services, the Automatic Workload Repository, and the Load Balancing Advisory.


How do I enable the load balancing advisory?

The Load Balancing Advisory requires the use of services and Oracle Net connection load balancing.
To enable it on the server, set a goal (SERVICE_TIME or THROUGHPUT) and set CLB_GOAL=SHORT for the service.
On the client, you must be using a connection pool.
For JDBC, enable the datasource parameter FastConnectionFailoverEnabled.
For ODP.NET, enable the connection string parameter "Load Balancing=true".
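On the server side, a sketch of creating a service with those goals using srvctl (database, service, and instance names are hypothetical; -B sets the runtime load-balancing goal and -j the connection load-balancing goal in pre-12c syntax):

```shell
srvctl add service -d ORCL -s oltp_svc -r "orcl1,orcl2" \
    -B SERVICE_TIME -j SHORT
srvctl start service -d ORCL -s oltp_svc
```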


What are the network requirements for an extended RAC cluster?

Necessary Connections

Interconnect, SAN, and IP networking need to be kept on separate channels, each with the required redundancy. Redundant connections must not share the same dark fiber (if used), switch, path, or even building entrances. Keep in mind that cables can be cut.

The SAN and interconnect connections need to be on dedicated point-to-point connections; no WAN or shared connections are allowed. Traditional cables are limited to about 10 km if you are to avoid using repeaters. Dark fiber networks allow the communication to occur without repeaters, and since latency is limited, they allow a greater separation between the nodes. The disadvantage of dark fiber networks is that they can cost hundreds of thousands of dollars, so generally they are only an option if they already exist between the two sites.

If direct connections are used (for short distances), this is generally done by just stringing long cables from a switch. If DWDM or CWDM is used, then these are directly connected via a dedicated switch on either side.
Note of caution: do not run the RAC interconnect over a WAN. This is the same as running it over the public network, which is not supported, and other uses of the network (e.g., large FTPs) can cause performance degradation or even node evictions.

For SAN networks, make sure you are using SAN buffer credits if the distance is over 10 km.
If Oracle Clusterware is being used, we also require that a single subnet be set up for the public connections so that VIPs can fail over from one side to the other.


What is the maximum distance between nodes in an extended RAC environment?

The high impact of latency creates practical limitations as to where this architecture can be deployed. While there is no fixed distance limitation, the additional round-trip latency on I/O and the one-way latency on Cache Fusion messages affect performance as distance increases. For example, tests at 100 km showed a 3-4 ms impact on I/O and a 1 ms impact on Cache Fusion; thus, the farther the distance, the greater the impact on performance. This architecture fits best where the two datacenters are relatively close (<~25 km) and the impact is negligible. Most customers implement under this distance, with only a handful above it; the farthest known example is at 100 km. Customers considering larger distances than those commonly implemented may want to estimate or measure the performance hit on their application before implementing. Do ensure a proper setup of SAN buffer credits to limit the impact of distance at the I/O layer.


Are crossover cables supported as an interconnect with Oracle RAC on any platform?

No. Crossover cables are generally not supported; the requirement is to use a switch.

The only exception is the Oracle Database Appliance (ODA), for which crossover cables are used.

Detailed reasons:

1) Cross-cabling limits the expansion of Oracle RAC to two nodes.
