Monday, June 19, 2006

Moving Objects To Another Tablespace

We had some tablespaces that were full, so I moved some tables to another tablespace and thought it would be useful to document the process here. Moving objects between tablespaces comes in handy whenever you need to reorganize tables. We've seen cases where moving objects from one tablespace to another and back again helps keep the number of extents down. It also helps when a power user puts data in a tablespace they shouldn't have: I can simply move it back where it belongs. And if you decide to do some tablespace consolidation, reorganization, or other changes, moving the objects is useful there as well. Beware that ALTER TABLE ... MOVE TABLESPACE has some limitations, mainly around data types (tables with LONG columns, for example, cannot be moved this way); please consult the Oracle documentation. Below are some SQL examples moving a table, an index, a partition, and a sub-partition, followed by a sketch of the dynamic SQL approach. If you need to move lots of objects, you will want to write dynamic SQL that pulls object information from the DBA views.

alter table OWNER.TABLE_NAME move tablespace NEW_TBSP parallel 8 nologging;

alter index OWNER.INDEX_NAME rebuild tablespace NEW_TBSP online;

alter table OWNER.TABLE_NAME move partition XX tablespace NEW_TBSP parallel 16;

alter table OWNER.TABLE_NAME move subpartition SUB_PART_NAME tablespace NEW_TBSP parallel 8;

Note: The degree of parallelism above is only used to perform the move itself; it does not change the default degree of parallelism stored with the object. Also note that moving a table or partition changes its rowids, so any indexes on it are marked UNUSABLE and must be rebuilt afterward, as in the index example above.
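
As a minimal sketch of the dynamic SQL approach (OLD_TBSP and NEW_TBSP are placeholder tablespace names), you can spool generated statements from the DBA views in SQL*Plus and then run the resulting script:

set pagesize 0 linesize 200 feedback off
spool move_objects.sql

-- Generate a move statement for every table in the old tablespace.
select 'alter table ' || owner || '.' || table_name ||
       ' move tablespace NEW_TBSP parallel 8 nologging;'
from dba_tables
where tablespace_name = 'OLD_TBSP';

-- Generate rebuilds to relocate the indexes in the old tablespace as well.
select 'alter index ' || owner || '.' || index_name ||
       ' rebuild tablespace NEW_TBSP online;'
from dba_indexes
where tablespace_name = 'OLD_TBSP';

spool off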

Tuesday, June 13, 2006

Aggregate NIC Ports For Better Performance and Redundancy

We are going to aggregate Network Interface Card (NIC) 1 Gbit/s ports in order to get a faster connection between servers. Better aggregate throughput is just one of the advantages; another is redundancy. If one interface goes down, the other interfaces in the aggregate still work. This is very similar to bonding NICs. We will be using HP Auto Port Aggregation (APA) software in combination with the Cisco PAgP protocol.

From the HP document "APA Support Guide" at http://www.docs.hp.com:

"HP Auto Port Aggregation (APA) is a software product that creates link aggregates, often called 'trunks,' which provide a logical grouping of two or more physical ports into a single 'fat pipe.' This port arrangement provides more data bandwidth than would otherwise be available. Two additional features are automatic link failure detection and recovery; it also offers optional support for load balancing of network traffic across all of the links in the aggregation. This enables you to build large bandwidth "logical" links into the server that are highly available and completely transparent to the client and server applications."

Now, how does this help our Oracle configurations? Simple: I will use the aggregated link as the host for a database link between servers. That way, we will be able to move data between Oracle databases much faster.
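
As a sketch (all names below are hypothetical), the database link simply points at the hostname assigned to the aggregated interface:

create database link remote_db
connect to remote_user identified by secret_pw
using '(DESCRIPTION=
         (ADDRESS=(PROTOCOL=TCP)(HOST=agg-host)(PORT=1521))
         (CONNECT_DATA=(SERVICE_NAME=remote_svc)))';

-- agg-host resolves to the IP bound to the aggregated APA interface,
-- so data pulled through the link travels over the fat pipe:
insert /*+ APPEND */ into local_copy
select * from big_table@remote_db;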

Our plan is to aggregate 4 ports, which should give us the capability to transfer between 3.7 and 3.9 Gbit/sec. APA supports up to 8 aggregated ports. Once you go over 2 aggregated ports you begin to see diminishing returns, which is why we will not get the theoretical total throughput of 4 Gbit/sec.

The Da Vinci Code

For anyone who believes the book "The Da Vinci Code" contains anything remotely true, please consider the following links:

http://www.enrichmentjournal.ag.org

http://www.enrichmentjournal.ag.org/top/DaVinciCode_article.cfm

By the way, I did read the book and it was FICTION.

Friday, June 09, 2006

Importance of Table Statistics

The importance of table statistics was driven home to me today. I've always known this, but sometimes you forget how important it is to keep fresh statistics on your objects. We are running with the CHOOSE optimizer mode, which resolves to the cost-based optimizer when statistics are present. A large join operation that usually takes 2-3 hours was suddenly taking 8 hours to finish. So, we spent a few minutes trying to figure out why the runtime had increased. The problem with analyzing the change was the current load on the server; we were hammering the box. Our first thought was that the overloaded server was causing the job to run longer than normal.

Then I had an epiphany from God: it could be stale table statistics on one or both of the tables in the join. So this is what I did:

-- Drop the stale statistics first.
ANALYZE TABLE OWNER.TABLENAME DELETE STATISTICS;

-- Then regather them; the fourth parameter is estimate_percent,
-- so this samples 0.5% of the rows.
BEGIN
DBMS_STATS.GATHER_TABLE_STATS ('OWNER', 'TABLENAME', NULL, 0.5);
END;
/

I did this for both tables in the join, and the runtime for the job is back to 2 hours. Now, the more experienced DBAs among you will comment about using DBMS_STATS to automate statistics gathering. However, we always assumed our processing cycle and the size of our databases ruled out the automatic statistics gathering option.
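
Incidentally, a quick way to check whether statistics are stale in the first place is to look at LAST_ANALYZED in the DBA views (owner and table names below are placeholders):

-- When did we last gather statistics on the tables in the join?
select owner, table_name, num_rows, last_analyzed
from dba_tables
where owner = 'OWNER'
and table_name in ('TABLE_A', 'TABLE_B');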

Yes, I do need to visit the automatic statistics gathering options available in 10g. It has been quite a few years since I looked at gathering object statistics. Expect this to be another post later on ;-)

Saturday, June 03, 2006

Don't Update Compressed Objects

We created a huge table (over 1.7 billion rows), hash partitioned with each partition compressed, and it had horrible performance during an update. Here is an explanation of why updating a compressed table is not a good idea.

Oracle table compression is only useful for tables where data is inserted and not updated. Upon update the row is uncompressed and not re-compressed. Compression occurs when data is inserted with a bulk (direct-path) insert operation. A table can consist of compressed and uncompressed blocks transparently. Any DML operation can be applied to a table storing compressed blocks. However, conventional DML operations cause records to be stored uncompressed after the operations and the operations themselves are subject to a small performance overhead due to the nature of the table compression.
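
As a sketch of the insert-only pattern that actually keeps blocks compressed (table and column names here are hypothetical):

-- Create the table compressed, then load it with a direct-path insert.
create table sales_archive compress
as select * from sales where 1 = 0;

insert /*+ APPEND */ into sales_archive
select * from sales
where sale_date < to_date('01-JAN-2006', 'DD-MON-YYYY');
commit;

-- A conventional insert or update against sales_archive would leave
-- the affected rows stored uncompressed.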

Bottom line: Consider using table compression when your data is mostly read only. Do not use table compression for tables that are updated frequently. UPDATE operations are 10-20% slower for compressed tables on average, mainly due to some complex optimizations that have been implemented for uncompressed tables, and not yet implemented for compressed tables.

In our case, the update to just one of these huge partitions had been running for over 12 hours, so for all practical purposes it was never going to finish. IMHO, don't compress any table that may be updated later.

Friday, June 02, 2006

Using Tables in Views With a DBLink

I have dealt with views a lot this week. We were trying to create a view joining a table on a distributed database to a table on the local database. The view creation failed, and I assumed it was because we were trying to create a view using a dblink. The error message received during the view creation was ORA-01031: insufficient privileges. I was wrong.

It turns out the problem was in the way the owner of the view was given select privileges on the underlying tables. See document 271587.1 on MetaLink - "Cannot create a view on a table granted via a role". Because select access on the underlying tables had been granted to the view owner only through a role, rather than explicitly, the view could not be created. Once select access was granted directly to the user, the view creation succeeded.
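
In concrete terms, the fix was a direct grant rather than one through a role (all names here are hypothetical):

-- Granting access through a role is not enough for view creation:
-- grant select_role to view_owner;   -- insufficient for CREATE VIEW
-- The privilege has to be granted directly to the view owner:
grant select on data_owner.base_table to view_owner;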

So, the view creation worked. However, the users of the view kept getting an error saying the table could not be found: they didn't have select access on the table used in the view on the remote database (across the dblink). We couldn't grant select on the remote table because the user community wasn't legally allowed to see all of the data in it. Only some users had direct access to the remote database.

To create a view, you must meet the following requirements (from the Oracle 9i documentation):

  • To create a view in your schema, you must have the CREATE VIEW privilege. To create a view in another user's schema, you must have the CREATE ANY VIEW system privilege. You can acquire these privileges explicitly or through a role.
  • The owner of the view (whether it is you or another user) must have been explicitly granted privileges to access all objects referenced in the view definition. The owner cannot have obtained these privileges through roles. Also, the functionality of the view is dependent on the privileges of the view's owner. For example, if the owner of the view has only the INSERT privilege for Scott's emp table, the view can only be used to insert new rows into the emp table, not to SELECT, UPDATE, or DELETE rows.
  • If the owner of the view intends to grant access to the view to other users, the owner must have received the object privileges to the base objects with the GRANT OPTION or the system privileges with the ADMIN OPTION.

I think the problem was the third requirement: the owner of the view did not have the object privileges with the GRANT OPTION or the system privileges with the ADMIN OPTION. However, I never got to test it.
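
If that was indeed the culprit, the fix would presumably have looked like this (again with hypothetical names):

-- Give the view owner the base-table privilege with GRANT OPTION...
grant select on data_owner.base_table to view_owner with grant option;

-- ...so the view owner can in turn grant access on the view:
grant select on view_owner.combined_v to report_user;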

It looks like this was a combined problem of permissions and views that use dblinks to get remote data. So, we decided to work around it by placing a copy of the remote table on the local database, taking the dblink out of the equation. I will let you know if this worked.