Steven Edwards' thoughts on various topics, Oracle related and not. Note: I reserve the right to delete comments that are not contributing to the overall theme of the Blog or are insulting or demeaning to anyone.
Sunday, April 19, 2009
Bug 7171446 - NUMA Issues from 10.2.0.4 Patchset
Unless the system has been specifically set up and tuned for NUMA at both the OS and database levels, disable Oracle NUMA optimizations by setting the following in the pfile / spfile / init.ora used to start the instances:
_enable_NUMA_optimization=FALSE
_db_block_numa=1
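Here is a minimal sketch of setting these, assuming the instances use an spfile (with a pfile you would just add the two lines directly); note that hidden parameters must be quoted in ALTER SYSTEM:

# A sketch, assuming an spfile; run as the oracle software owner
sqlplus -s "/ as sysdba" <<EOF
alter system set "_enable_NUMA_optimization"=FALSE scope=spfile;
alter system set "_db_block_numa"=1 scope=spfile;
EOF
# Restart the instance for the changes to take effect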
At some point, Oracle should release a patch for bug 7171446. Once it is released, we will install the patch and remove the hidden parameters from the parameter file.
CRS Log Directory Permissions
Here is a hint: if you are having CRS issues, always check first that all of the directories the CRS processes need for creating log files actually exist.
What was totally crazy was that on the other node the owner, group, and permissions were all correct. I don't know if the ownership got goofed up during the 10.2.0.4 patchset install or during a Clusterware merge patch. We never did figure out the why of it.
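A sketch of the kind of check I mean; the ORA_CRS_HOME variable and the directory list are assumptions that vary by release, so compare against a healthy node:

# Sketch only; ORA_CRS_HOME and the directory names are assumptions
HOST=`hostname`
for dir in crsd cssd evmd racg client; do
    ls -ld $ORA_CRS_HOME/log/$HOST/$dir || echo "MISSING: $dir"
done
# Recreate anything missing with the same owner/group/mode as a good node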
11.1 CLUVFY HP-UX ia64 Issues
Different MTU values used across network interface(s).
These different values are necessary because we are using InfiniBand as the private interconnect, and you want the larger Maximum Transmission Unit (MTU) value for the more robust interconnect. The public interface is a standard gigabit network, so a lower value makes sense. So we basically ignored that error, because lowering the InfiniBand MTU just to get a clean cluvfy run before installing the Clusterware is not practical. For more info on MTU and Oracle RAC, see MetaLink note 341788.1.
This is a known bug, discussed in MetaLink note 758102.1. The root cause is bug 7493420, fixed in 11.2. The configuration is valid: as long as each interface has the same MTU across the nodes, you are good to go.
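An easy way to see exactly what cluvfy is complaining about is to compare the MTU column per interface on each node; the node names here are placeholders for our two-node cluster:

# Node names are placeholders; the MTU column should match per interface
for node in node1 node2; do
    echo "=== $node ==="
    remsh $node netstat -in
done
# The same interface should show the same MTU on every node, even though
# the public and private interfaces differ from each other.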
The next issue had to do with shared storage. With HP-UX ia64 11.31, we created shared raw ASM LUNs and then created aliases to those LUNs for the ASM diskstring. The storage is shared between the two nodes on EVA8000 storage. Cluvfy does not recognize that the shared storage is available and working correctly, so the failure message can be ignored. Here is the message:
Shared storage check failed on nodes "xxxxx"
In the known limitations section of the cluvfy readme, it clearly states the following:
"Sharedness check of SCSI disks is currently not supported."
If these are SCSI disks, then this error is expected because cluvfy cannot handle this check. As long as the disks can be seen from both nodes and have the correct permissions and ownership, ASM should install and work fine.
As long as your storage is working correctly, you can ignore the shared storage check, because cluvfy is not able to verify multipath / autopath type software like the virtual disk devices built into HP-UX 11.31 on EVA8000 storage.
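Before ignoring the message, it is worth a manual sanity check that both nodes really do see the same devices with the right ownership; the node names and device aliases below are placeholders:

# Placeholders for node names and our ASM device aliases
for node in node1 node2; do
    echo "=== $node ==="
    remsh $node "ls -lL /dev/rdsk/asm*"
done
# Expect identical devices on both nodes, owned by your ASM owner, mode 660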
HP-UX Async IO
Ioctl ASYNC_CONFIG error, errno = 1
After further analysis on MetaLink, and assistance from Support, we determined that async I/O was not configured. The following are the steps we performed as root to resolve the issue:
- Created /etc/privgroup and added the following entries to the file:
dba RTPRIO RTSCHED MLOCK
oinstall RTPRIO RTSCHED MLOCK
- /usr/sbin/setprivgrp -f /etc/privgroup
- getprivgrp dba
- getprivgrp oinstall
- cd /dev
- chown oracle:dba async
- chmod 660 async
We just had to say "Isn't that interesting..."
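For completeness, here is the same sequence as one root script. The mknod line is my addition for the case where /dev/async does not exist at all; major number 101 is the HP-UX async disk driver, but verify it on your system (see also MetaLink note 139272.1) before running:

# Run as root; verify the async driver major number (101 here) first
[ -c /dev/async ] || /usr/sbin/mknod /dev/async c 101 0x0
/usr/sbin/setprivgrp -f /etc/privgroup
getprivgrp dba
getprivgrp oinstall
chown oracle:dba /dev/async
chmod 660 /dev/async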
Saturday, July 07, 2007
ASM Disk Group and Securepath
The best practice is to ignore the individual devices in the path and use the secure virtual device file instead when creating your ASM disk groups, for example /hpap/dsk/hpap??.
The secure virtual device file takes away the confusion of the underlying devices, which are just different paths to the same disk; Securepath handles the rest. Also, by using the secure virtual device file, the ASM disk group automatically shows up on all instances that use the same virtual device file.
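A sketch of what that looks like at disk group creation time; the device names and disk group name are hypothetical, external redundancy assumes the array is doing the mirroring, and an spfile is assumed:

# Run against the ASM instance; names below are hypothetical
sqlplus -s "/ as sysdba" <<EOF
alter system set asm_diskstring='/hpap/dsk/hpap*' scope=both;
create diskgroup DATA external redundancy
  disk '/hpap/dsk/hpap1', '/hpap/dsk/hpap2';
EOF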
Cannot Lock /etc/mnttab
bdf: cannot lock /etc/mnttab; still trying ...
bdf: cannot lock /etc/mnttab; still trying ...
bdf: cannot lock /etc/mnttab; still trying ...
bdf: cannot lock /etc/mnttab; still trying ...
The /etc/mnttab file is the mounted file system table. Our SAN was showing that 3 disks had failed in the same 7-disk group. Oracle was trying to expdp to the mount points while NetBackup was trying to read from the same mount points. We couldn't get either Oracle or NetBackup to shut down.
HP physically went and pulled out one of the bad disks. When the bad disk was physically ejected, Oracle came down and so did NetBackup. Turns out that there was only one bad disk in the Data Replication Group. The bad disk did not eject itself cleanly and hung up the other 2 disks. So, it looked like there were 3 bad disks. Once the bad disk was replaced, the disk group began leveling as normal. The /etc/mnttab became available and all data was present. There was no data loss. We started Oracle and everything looked fine. So, what looked like a bad issue, turned out to be a disk that was not ejected cleanly from the disk group that also locked up the /etc/mnttab file.
Go figure; I'm still not impressed with HP storage, especially the EVA5000. We have had lots of bad issues with EVA5000 storage, everything from strange things like the above to data corruption of the Oracle database to losing mount points.
Tuesday, October 03, 2006
Opatch with RAC
Wednesday, September 27, 2006
ORA-29701 Cluster Manager
After stopping everything on the node, the ASM instance would not start and immediately barked at me with the same ORA-29701 error. At this point I asked someone else who has more experience with Oracle Clusterware than I do.
They checked it out and found that somehow the ASM /dev/rdsk special files and the special files for the OCR and voting disks had changed ownership. Someone with root access must have run insf -e, which reinstalled the special files. Oh, great!
A sysadmin had already created a shell script to change the ownership of the /dev/rdsk files to oracle:dba and chmod them to 660, so all we had to do was ask one of our sysadmins to run the script. They also had to manually chown root:oinstall /dev/ocr and chown oracle:oinstall /dev/voting.
So, if you get an ORA-29701 and can't figure it out, check the owner and permissions of your Oracle devices.
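A sketch of the kind of fix script involved; the ASM device names are placeholders for our aliases:

# Run as root; /dev/rdsk/asm* is a placeholder for your ASM devices
chown oracle:dba /dev/rdsk/asm*
chmod 660 /dev/rdsk/asm*
chown root:oinstall /dev/ocr        # OCR is maintained by root-owned crsd
chown oracle:oinstall /dev/voting   # voting disk is written by ocssd as oracle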
Sunday, September 10, 2006
Database Control RMAN Wizard Bug
Now, without the undo tablespace, database recovery is not possible. However, the EM RMAN wizard does not give you the option of adding the undo tablespace to the list of tablespaces selected for backup.
So, here is the workaround:
- Modify the RMAN script that the wizard creates and manually include the undo tablespace in the list of tablespaces to be backed up. Then submit your job. See the sketch below.
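Here is a sketch of the edited script with hypothetical tablespace names; the only point is that the undo tablespace is added by hand:

# Tablespace names are hypothetical; UNDOTBS1 is the manual addition
rman target / <<EOF
run {
  backup tablespace SYSTEM, SYSAUX, USERS, UNDOTBS1;
}
EOF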
I will be creating some RMAN backups for a 10.2 database soon and will let you know if this bug is also in 10gR2.
Saturday, September 09, 2006
Old Dog New Tricks
tnsnames.ora changes do not require a restart of the listener. Kind of cool. And my favorite is "lsnrctl reload". The reload resets the listener without stopping and starting it.
Friday, September 08, 2006
HP EVA8000 Autopath Tuning
The UNIX administrators, HP, and my peers began looking into the problem. I wasn't totally engaged, but I want to post the information anyway. Props to my colleagues. This is more system administration territory; however, as DBAs we need to know the impact of EVA tuning.
HP checked the load balancing policy on the nodes. It was set to “No Load Balancing”, which greatly impacts performance. Now, how it got changed to “No Load Balancing” after the first I/O benchmarks is still a mystery.
To see the LUNs, issue the command “autopath display” as root. This will list all the LUNs and show the HP Autopath load balancing policy.
root@hostname:/ # autopath display
...
Load Balancing Policy : No Load Balancing
...
So, HP and my peers recommended that the HP Autopath load balancing policy be set to round robin for all LUNs. The autopath set_lbpolicy command sets the load balancing policy for the specified device path.
autopath set_lbpolicy <{policy name} {path}>
description: sets load balancing policy
usage: autopath set_lbpolicy <{policy name} {path}>
Policy name: The load balancing policy to set
Valid policies are
RR : Round Robin.
SST : Shortest Service Time.
SQL : Shortest Queue Length.
NLB/OFF : No load Balancing.
Path : Device Special File, e.g. /dev/dsk/c#t#d#
Example:
# autopath set_lbpolicy RR /dev/dsk/c0t0d0
The example above sets the policy to Round Robin.
Here is a little more information about what the policies mean:
- RR : Round Robin. I/O is routed through the paths of the active controller in round-robin order.
- SST : Shortest Service Time. I/O is routed based on the average service time of the I/Os on each path.
- SQL : Shortest Queue Length. I/O is routed based on the device queue depth on each path.
- NLB/OFF : No Load Balancing. No consideration is given to service times or queue lengths; this typically hurts performance.
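Because set_lbpolicy works on one device path at a time, a loop like this covers all the LUNs; the device list is hypothetical and should come from the autopath display output:

# Device list is hypothetical; take the real paths from "autopath display"
for dev in /dev/dsk/c10t0d1 /dev/dsk/c10t0d2 /dev/dsk/c10t0d3; do
    autopath set_lbpolicy RR $dev
done
autopath display   # confirm the policy now reads Round Robin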
Friday, August 25, 2006
ASM Instance Changes
First, we had to increase the large pool to 100M. Next, we had to increase the number of processes: the processes parameter should be the default (40) times the number of nodes in the Oracle cluster (40 * number of nodes).
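A sketch of the changes for a two-node cluster, assuming the ASM instance uses an spfile; the SID is a placeholder:

# +ASM1 is a placeholder SID; processes = 40 * 2 nodes
export ORACLE_SID=+ASM1
sqlplus -s "/ as sysdba" <<EOF
alter system set large_pool_size=100M scope=spfile;
alter system set processes=80 scope=spfile;
EOF
# Restart the ASM instance for the new values to take effect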
Two MetaLink Notes - Bookmark
- 368055.1 Deployment of very large databases (10TB to PB range) with Automatic Storage Management (ASM)
- 139272.1 HP-UX: Asynchronous i/o
I found both of these notes useful during our recent VLDB 10g RAC HP-UX Itanium install. A friend of mine at work found the second note and told me about it. Thanks.
Tuesday, August 22, 2006
We Have An Oracle Cluster
As a DBA, after installation your tasks are to administer your RAC environment at three levels:
- Instance Administration
- Database Administration
- Cluster Administration
For administering Real Application Clusters, use the following tools to perform administrative tasks in RAC:
- Cluster Verification Utility (CVU)—Install and use CVU before you install RAC to ensure that your configuration meets the minimum RAC installation requirements. Also use the CVU for on-going administrative tasks, such as node addition and node deletion.
- Enterprise Manager—Oracle recommends that you use Enterprise Manager to perform administrative tasks whenever feasible.
- Task-specific GUIs such as the Database Configuration Assistant (DBCA) and the Virtual Internet Protocol Configuration Assistant (VIPCA)
- Command-line tools such as SQL*Plus, Server Control (SRVCTL), the Oracle Clusterware command-line interface, and the Oracle Interface Configuration tool (OIFCFG)
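To give a flavor of the command-line side, here are a few typical srvctl calls; the database, instance, and node names are placeholders:

# Placeholders: orcl / orcl1 / node1
srvctl status database -d orcl            # status of all instances
srvctl start instance -d orcl -i orcl1    # start a single instance
srvctl stop database -d orcl              # stop every instance of the database
srvctl status nodeapps -n node1           # VIP, GSD, ONS, listener on one node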
I've got plenty of posts saved up and will enter them either tonight or soon. It has been crazy at work. Funny, how you learn so much more while in a storm than you do when things are calm. It is like that in life as well as at work. God has a way of tempering you in the storms that you face and bringing you thru them stronger and better than before. Man, I could preach on that especially with things going on in my personal life right now.
Tuesday, August 15, 2006
ORA-00600 [KGHALO4]
Solutions:
Apply the one-off patch for bug 4414666, or set _enable_NUMA_optimization=FALSE in the init.ora as a workaround. For an ASM instance, two parameters are needed:
_enable_NUMA_optimization=FALSE
_disable_instance_params_check=TRUE
Another solution is to apply the 10.2.0.2 patchset, which fixes this problem.
Because this bug hits whenever you try to create the ASM instance, it is advisable to apply the one-off patch for the bug first, and then install the 10.2.0.2 patchset as normal once everything is created and working.
Monday, August 14, 2006
OUI Trace
./runInstaller -J-DTRACING.ENABLED=true -J-DTRACING.LEVEL=2
We used this method to review what the OUI was doing and determine that the OUI saw ServiceGuard (SG) and SG libraries on the cluster nodes. See my previous posts for more information.
HP Autopath and Oracle
/dev/rdsk/ocr
/dev/rdsk/voting
It was all because of HP Autopath and the virtual storage platform (EVA8000). The Oracle Installer only allows entry of one path to the OCR and voting disks, so we had to create the special file names in order to use the OUI; mksf is the command. See the HP portion of Note 293819.1.
We are now wondering about the creation of the ASM disk groups. Will HP autopathing cause us a problem? We should know soon.
Saturday, August 12, 2006
Cleanup Failed CRS Install on HP-UX Itanium
- srvctl stop nodeapps -n <node_name>
(we didn't have to do this because our failed installs never got this far)
- As root:
- rm /sbin/init.d/init.cssd
- rm /sbin/init.d/init.crs
- rm /sbin/init.d/init.crsd
- rm /sbin/init.d/init.evmd
- rm /sbin/rc2.d/K001init.crs
- rm /sbin/rc2.d/K960init.crs
- rm /sbin/rc3.d/K001init.crs
- rm /sbin/rc3.d/K960init.crs
- rm /sbin/rc3.d/S960init.crs
- rm -Rf /var/opt/oracle/scls_scr
- rm -Rf /var/opt/oracle/oprocd
- rm /etc/inittab.crs
- cp /etc/inittab.orig /etc/inittab
- If they are not already down, kill the EVM, CRS, and CSS processes.
- rm -Rf /var/tmp/.oracle
- rm -Rf /tmp/.oracle
- remove the ocr.loc file
- rm -Rf <CRS install location>
- De-install the CRS home in the OUI
- Clean out the OCR and voting files with dd commands. Example:
- dd if=/dev/zero of=/dev/rdsk/voting bs=8192 count=2560
- dd if=/dev/zero of=/dev/rdsk/ocr bs=8192 count=12800
- rm -Rf /app/oracle/oraInventory
Once those steps are done, you can restart the OUI install of Clusterware from the very beginning. Oracle has also just released a new cleanup utility for failed CRS installs; here is the link:
http://download-west.oracle.com/otndocs/products/clustering/deinstall/clusterdeconfig.zip
The new script didn't work for us either. Of course, I didn't try it after our system admins removed ServiceGuard so it may work now.
Friday, August 11, 2006
Clusterware Install Tips HP-UX Itanium
Make sure, if you are not going to use HP ServiceGuard on your RAC cluster, that ServiceGuard has been completely stopped and uninstalled. Don't leave any libraries lying around. We found this out the hard way, after spending over a week trying to figure out why the Oracle Installer was acting so crazy and bizarre.
Here are the symptoms. First, if the OUI does not give you the option to add the other nodes and you have to use a configuration file, this is a red flag that the OUI thinks you are using some vendor cluster software (in this case HP ServiceGuard) instead of Oracle's. Second, if you have variables that are not assigned (see previous post) in the rootconfig script, this indicates that the OUI is not really trying to install; rather, it is trying to upgrade/update the OCR.
If for some reason you get the Clusterware services running on one of the nodes but they don't start on the others and the install locks up, it probably means your removal of ServiceGuard was incomplete and left a few SG libraries lying around.
We found all of this out the hard way because HP installed and started ServiceGuard when they installed the HP 9000 Superdome!
Finally, here is an undocumented procedure for the HP-UX Itanium Clusterware 10gR2 installation: shut down the VIP interface BEFORE beginning the install. If you don't, an error message will appear saying that the VIP interface is being used by another system, and then you have to shut the VIPs down before continuing the install. This is weird, because the VIP must be up in order for cluvfy nodecon to work.
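On HP-UX this just means downing the logical interface that carries the VIP address before launching runInstaller; the interface name here is hypothetical:

# lan1:1 is a hypothetical logical interface holding the VIP; run as root
ifconfig lan1:1 down
# Confirm the VIP address no longer answers ping before starting the install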
Friday, August 04, 2006
CVU Shared Storage Accessibility Check
ERROR>/tmp/9999//bin/lsnodes: cannot get local node number
Although the Cluster Verification Utility is an excellent tool for checking prerequisites, there is room for improvement. Because we did not run cluvfy as thoroughly on our first attempt, we did not encounter Bug 4714708 - CVU Cannot See Shared Drives.
It turns out that CVU currently does not work with devices other than SCSI devices.