DBA Unit - 4

 M.Sc. (CS)/ (SE) SY Oracle Database Administration 

UNIT 4  

Logical Database Layouts 

4.1 Logical structure of a database 

A tablespace is a logical database structure that is designed to store other logical database structures. Oracle sees a tablespace as a large area of space into which Oracle can place new objects. Space in a tablespace is allocated in segments. A segment is an allocation of space used to store the data of a table, index, undo segment, or temporary object.

When a database object runs out of space in its segment and needs to store more data, Oracle lets it allocate more space in the form of an extent. An extent is similar to a segment in that it stores information corresponding to a table, index, undo segment, or temporary object.

4.1.1 Space Management in Tablespaces  

Free space management is an important task for Oracle because without it, Oracle would not know where to put things like tables or indexes when we wanted to create and modify them. Prior to Oracle8i, all tablespaces were created as dictionary-managed tablespaces. Dictionary-managed tablespaces rely on Oracle populating data dictionary tables housed in the SYSTEM tablespace to track free space utilization. With Oracle8i and later, there is a new type of tablespace called the locally managed tablespace. Locally managed tablespaces use bitmaps stored within the headers of the datafiles comprising a tablespace to track the space utilization of the tablespace. This bitmap represents every block in the datafile, and each bit in the map records whether that block is free or not.
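As an illustrative sketch (the file path, tablespace name, and sizes here are assumptions, not from the text), a locally managed tablespace is declared with the extent management local clause, optionally with uniform extent sizes:

SQL> CREATE TABLESPACE LM_DATA
  DATAFILE 'E:\demo\data\lm_data01.dbf' SIZE 50M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;

With uniform allocation, every extent in the tablespace is the same size, so the bitmap in the datafile header can track free and used extents without ongoing data dictionary activity.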

Within a tablespace, Oracle manages free space by coalescing it into contiguous extents. The system monitor (SMON) background process in Oracle handles this coalescing activity automatically. When a new database object is created, Oracle acquires the requested amount of contiguous storage space in the form of a segment for the new object.


The amount of space SMON will use is based either on the object's own storage clause for that tablespace, or on the uniform extent allocation configured for the tablespace.

4.1.2 Permanent versus Temporary Tablespaces

Regardless of the configuration for tablespace, the tablespace itself will generally be  configured to store two types of segments: Permanent & temporary. A permanent  segment is one designed to store data for a table or other object that’s going to house  data for a long time in the database. 

In contrast, some forms of data in Oracle are stored only temporarily, for the duration of a particular database operation. An example of such an operation is a disk sort. A sort operation can temporarily require a lot of memory. When the sort exceeds the available memory, Oracle starts writing records to disk in a temporary segment.

When Oracle finishes with the sort, it no longer needs the data housed in the temporary segment, so Oracle deletes the temporary segment automatically behind the scenes, hidden from both the user and the DBA.

4.1.3 Default Temporary Tablespace  

Because it is possible for any user in Oracle to issue a long-running query that returns data in sorted order, every user in Oracle must be assigned a tablespace where Oracle can write temporary segments in case Oracle must perform a disk sort on behalf of that user. Prior versions of Oracle handled this assignment when the user was created, using the optional but highly recommended temporary tablespace tablespacename clause of the create user command:

SQL> create user COCSIT
  identified by COCSITLTR
  default tablespace USERS
  temporary tablespace TEMP;

A space quota on the user's default tablespace can then be granted separately:

SQL> alter user COCSIT quota 100M on USERS;




4.2 Creating Tablespaces 

Previously in Oracle, the SYSTEM tablespace was the only tablespace that needed to be created. This is done by explicitly specifying the location of the SYSTEM tablespace datafile in the CREATE DATABASE command. Starting with Oracle9i, we can also create two other types of tablespaces at the same time: the DEFAULT TEMPORARY TABLESPACE and the UNDO TABLESPACE.
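As a hedged sketch (the database name, file paths, and sizes are illustrative assumptions), the default temporary and undo tablespaces can be declared directly in the CREATE DATABASE command:

SQL> CREATE DATABASE demo
  DATAFILE 'e:\demo\data\system01.dbf' SIZE 250M
  DEFAULT TEMPORARY TABLESPACE temp
    TEMPFILE 'e:\demo\data\temp01.dbf' SIZE 40M
  UNDO TABLESPACE undotbs
    DATAFILE 'e:\demo\data\undotbs01.dbf' SIZE 100M;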

On a typical database, we can separate different types of data into different tablespaces. At a minimum, we typically create the following tablespaces:

SYSTEM: Every database must have a SYSTEM tablespace. This is created when we create the database.

DATA: A DATA tablespace is used to house table data. 

INDEX: An Index tablespace is used to house indexes separate from other types of  objects. 

UNDOTBS: An UNDOTBS tablespace houses undo segments (rollback segments). These need to be kept separate from other types of objects due to volatility in allocating extents.

TEMP: A TEMP tablespace houses temporary segments. These also need to be kept separate from other types of objects due to volatility in allocating extents.

TOOLS: A TOOLS tablespace houses objects that support administrative or other tools we might use in conjunction with our database, such as Oracle Enterprise Manager.


4.2.1 Creating Tablespace to house permanent Segments 

Tablespaces are created using the create tablespace command after the database has been created. Let's look at an example where we create a locally managed tablespace, for a database hosted in a Windows environment, that we will use to hold table data in the database.

SQL> CREATE TABLESPACE DATA
  DATAFILE
    'E:\demo\data\data_03.dbf' SIZE 20M,
    'E:\demo\data\data_04.dbf' SIZE 20M
      AUTOEXTEND ON NEXT 10M MAXSIZE 50M
  EXTENT MANAGEMENT LOCAL
  PERMANENT ONLINE;



The creation of a tablespace can take a long time, depending on how large we specify the tablespace datafiles to be. This is because Oracle has to physically allocate a file of whatever size we specified.

While creating a tablespace, we have to specify the tablespace name in the create tablespace command. Notice that one of our datafiles has an AUTOEXTEND clause defined for it. This feature enables the datafile to grow past its originally defined size automatically in order to accommodate data growth.

The next step is to specify the default storage clause to set options that will be applied  to database object creation if the object created does not have storage parameters  defined for it.  

If an object in this tablespace has its own storage clause defined, then the object's storage clause settings will override the tablespace's default storage settings. Notice that we have defined this tablespace to be locally managed using the extent management local clause. This is the default space management setting for tablespaces in Oracle9i and later releases. To specify a dictionary-managed tablespace, we would have to use the extent management dictionary clause.
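As a sketch of the dictionary-managed alternative (the file path and storage values are illustrative assumptions), the default storage clause and the extent management dictionary clause would appear as follows:

SQL> CREATE TABLESPACE DATA_DICT
  DATAFILE 'E:\demo\data\data_dict01.dbf' SIZE 20M
  DEFAULT STORAGE (INITIAL 100K NEXT 100K
                   MINEXTENTS 1 MAXEXTENTS 50 PCTINCREASE 0)
  EXTENT MANAGEMENT DICTIONARY;

Any object created in this tablespace without its own storage clause inherits these defaults.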


The PERMANENT clause specifies that the tablespace houses permanent database objects. However, Oracle assumes that the tablespace is a permanent tablespace if the PERMANENT keyword is omitted. Finally, we instruct Oracle to bring the tablespace online after creating it using the ONLINE keyword. This is the default availability status of a tablespace after creation, so if we omit the ONLINE keyword from our CREATE TABLESPACE command, the tablespace will still be brought online.

4.2.2 Creating Tablespaces for Temporary Segments 

We need to create temporary tablespaces for two reasons:

1. To take advantage of Oracle’s more efficient usage of sort segments compared to  temporary segments. 

2. To prevent anyone from creating a permanent database object, such as a table or an index with data, in the tablespace used for temporary segments.

To create a temporary tablespace, use the TEMPORARY keyword in the CREATE TABLESPACE command, as shown below:

SQL> CREATE TEMPORARY TABLESPACE LM_Temp
  TEMPFILE 'e:\DEMO\data\lmtemp01.dbf'
  SIZE 20M REUSE
  EXTENT MANAGEMENT LOCAL;



Another way of creating a temporary tablespace is to include the TEMPORARY keyword at the end of the CREATE TABLESPACE command. Note that we use the DATAFILE keyword rather than TEMPFILE when we create a temporary tablespace in this manner.

SQL> CREATE TABLESPACE LM_Temp01
  DATAFILE 'e:\DEMO\data\Temp02.dbf'
  SIZE 20M
  TEMPORARY;




Like permanent tablespaces, a temporary tablespace is brought online automatically after Oracle creates it. We can view the tempfiles of temporary tablespaces in the data dictionary view DBA_TEMP_FILES. We can also see whether a tablespace is temporary or not by viewing the CONTENTS column of the DBA_TABLESPACES data dictionary view.
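For example (the column choices here are illustrative), the two dictionary views mentioned above can be queried as follows:

SQL> SELECT file_name, tablespace_name FROM dba_temp_files;

SQL> SELECT tablespace_name, contents FROM dba_tablespaces;

A temporary tablespace shows TEMPORARY in the CONTENTS column, while a permanent one shows PERMANENT.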

4.2.3 Default Storage Options  

The default storage clause defines storage options that will be applied to newly created  database objects if the CREATE statement does not have storage parameters defined  for it.  

The initial and next options specify the size of the object's initial extent and its next allocated extent, respectively.

If minimum extent is defined for the tablespace we put our object in, and the value  specified for next on our database object is less than minimum extent, Oracle rounds  up to the next highest multiple of minimum extent and creates the initial or next extent  as that size. This feature can reduce the amount of fragmentation in a tablespace. 

The minextents & maxextents options specify the minimum and maximum number of  extents the object can allocate in the tablespace. If we specify minextents greater than  one and the tablespace has more than one datafile, Oracle will tend to spread extents  over multiple datafiles, which can improve performance if those datafiles are also  located on different disk resources. 

Finally, pctincrease enables us to specify a percentage increase in the amount of space allocated for each subsequent extent of the object. For example, if next is set to 200KB and pctincrease is 50, the second extent would be 200KB in size, the third extent would be 300KB (50 percent more than the second extent), the fourth extent would be 450KB (50 percent more than the third extent), and so on. The minimum value is 0, and the default value is 50.

The calculated value is rounded up to the next data block boundary, which is a multiple of five times DB_BLOCK_SIZE. To make all extents the same size, specify pctincrease as zero.
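Putting these options together, a hypothetical table (the name and columns are assumptions for illustration) could override the tablespace defaults with its own storage clause:

SQL> CREATE TABLE emp_history
  (emp_id NUMBER, change_date DATE)
  TABLESPACE data
  STORAGE (INITIAL 200K NEXT 200K
           MINEXTENTS 1 MAXEXTENTS 100 PCTINCREASE 0);

With pctincrease set to zero, every extent after the first is exactly 200KB.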


4.2.4 Changing Tablespace Size 

Once a tablespace is created, there are a few different ways to modify its size. The first is by adding new datafiles to the tablespace. This task is accomplished with the alter tablespace add datafile statement. We cannot, however, add so many datafiles to the database that they exceed the physical size of our disk resources.

For Ex. 

SQL>ALTER TABLESPACE DATA 

ADD DATAFILE ‘e:\demo\data\data05.dbf’ SIZE 50M;



The other restriction relates to maxdatafiles: if we have already added the maximum number of datafiles permitted for our database as specified by this parameter, and we still need more room, we can increase the size of existing datafiles using the resize keyword. Resizing a datafile upward is rarely met with difficulty, unless there is not enough space in the file system. To do so, issue the following statement:

SQL> ALTER DATABASE DATAFILE
  'e:\demo\data\data_03.dbf'
  RESIZE 1500M;



A third way to expand the size of existing datafiles is through the use of the autoextend clause. To enable automatic extension of a datafile, execute the following statement:

SQL> ALTER DATABASE DATAFILE
  'e:\demo\data\data_03.dbf'
  AUTOEXTEND ON NEXT 10M
  MAXSIZE 1500M;




4.3 Allocating Space for temporary Segments 

Temporary segments can be housed in both permanent tablespaces and temporary tablespaces. This functionality is provided for backward compatibility; however, we should always design our database so that temporary segments are housed in temporary tablespaces and permanent segments are housed in permanent tablespaces.

We cannot put a permanent database object in a temporary tablespace. We can switch a tablespace between being permanent and temporary, provided the permanent tablespace does not contain any permanent database objects when we try to switch it to a temporary tablespace, as shown below:

SQL> CREATE TABLESPACE test
  DATAFILE 'D:\test\data.dbf' SIZE 1M
  DEFAULT STORAGE (INITIAL 10K NEXT 10K PCTINCREASE 0
                   MINEXTENTS 1 MAXEXTENTS 5)
  TEMPORARY;

SQL> CREATE TABLE dummy (dummy VARCHAR2(10)) TABLESPACE test;
ERROR at line 1:
ORA-02195: Attempt to create PERMANENT object in a TEMPORARY tablespace

SQL> ALTER TABLESPACE test PERMANENT;
SQL> CREATE TABLE dummy (dummy VARCHAR2(10)) TABLESPACE test;
SQL> ALTER TABLESPACE test TEMPORARY;
ERROR at line 1:
ORA-01662: tablespace 'TEST' is non-empty and cannot be made temporary




4.3.1 Temporary Segments in Permanent Tablespaces  

A user may be assigned to either a permanent or a temporary tablespace for sorting. Users create temporary segments in a tablespace when a disk sort is required to support their use of select statements containing the group by, order by, distinct, or union clauses, or of the create index statement. If the user is assigned to a permanent tablespace for creating temporary segments, the temporary segment will be created at the time a disk sort is required. When the disk sort is complete, the SMON process drops the temporary segment automatically to free the space for other users.

Because this activity causes high fragmentation, it is advisable to create a separate  temporary tablespace to store the temporary segments for all users. 

4.3.2 Temporary Segments in Temporary Tablespaces 

Temporary space is managed differently in temporary tablespaces. Instead of  allocating temporary segments on-the-fly, only to have them be dropped later by  SMON, the Oracle instance allocates one sort segment for the first statement requiring  a disk sort. All subsequent users requiring disk sorts can share that segment. The sort  segment is released at instance shutdown. Management of temporary segments in this  manner improves performance in two ways. 

First, Oracle saves time by assigning transactions to temporary segments that have  been preallocated. 

Second, Oracle does not deallocate the primary sort segment once the sorting operation is complete. Rather, Oracle simply releases the extents and keeps the primary segment available for later reuse. Space allocation for the sort segment in a temporary tablespace is handled in a new area of the SGA called the sort extent pool. A process needing space for disk sorts can allocate extents based on information in this area.
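Sort segment activity can be observed in the V$SORT_SEGMENT dynamic performance view (suitable privileges are assumed):

SQL> SELECT tablespace_name, current_users,
            total_extents, used_extents
     FROM v$sort_segment;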


4.4 Changing Tablespace Status 

Any tablespace is, by default, online and available for use as soon as it is created. We can alter the tablespace using the ALTER TABLESPACE command. To change the status of a tablespace, we use the ALTER TABLESPACE command along with the OFFLINE and ONLINE keywords.

For Ex. 

SQL> ALTER TABLESPACE tablespace_name OFFLINE;

SQL> ALTER TABLESPACE tablespace_name ONLINE;



Individual datafiles can be taken offline as well, using the following commands:

SQL> ALTER DATABASE DATAFILE 'filename' ONLINE;

SQL> ALTER DATABASE DATAFILE 'filename' OFFLINE;



A tablespace can be taken offline with one of several priorities, including normal, temporary, and immediate. Depending on the priority used to take the tablespace offline, media recovery on that tablespace may be required. A tablespace taken offline with normal priority will not require media recovery, but a tablespace taken offline with immediate priority will. A tablespace taken offline with temporary priority will not require media recovery if none of its datafiles were already offline. If, however, a datafile was already offline due to read or write errors before the tablespace was taken offline temporarily, then media recovery will be required to bring the tablespace back online.

The following code block demonstrates taking tablespace offline with each of the three  possible priorities. Note that if we leave off a priority specification, normal priority is  assumed.

SQL> ALTER TABLESPACE data OFFLINE; 

SQL> ALTER TABLESPACE data OFFLINE NORMAL; 

SQL> ALTER TABLESPACE data OFFLINE IMMEDIATE; 

SQL> ALTER TABLESPACE data OFFLINE TEMPORARY;




On occasion, we may need to make a tablespace read only. The following code block demonstrates both the code required to make a tablespace read only and the code required to make it writable again:

SQL> ALTER TABLESPACE data READ ONLY; 

SQL> ALTER TABLESPACE data READ WRITE;



4.4.1 Changing Tablespace Storage Settings 

The DEFAULT STORAGE parameter lets us specify storage settings at tablespace creation. They have no bearing on the tablespace itself, but rather are used as default settings when users issue create table, create index, or create undo segment statements that have no storage parameter settings explicitly defined. We can change the default settings for a tablespace by issuing the ALTER TABLESPACE command, as shown below:

SQL> ALTER TABLESPACE data  

DEFAULT STORAGE (INITIAL 2M NEXT 1M);



4.4.2 Relocating Tablespace Datafiles: 

Depending on the type of tablespace, the database administrator can move  datafiles using one of two methods: the ALTER TABLESPACE command or the ALTER  DATABASE command. When relocating or renaming the datafiles within a single  tablespace, use the ALTER TABLESPACE command and when relocating the  datafiles for many tablespaces, use the ALTER DATABASE command. In either case,  executing these commands only modifies the pointers to the datafiles as recorded  in the control file. They do not physically rename or move the files in the operating  system. The actual renaming or relocation of the datafiles has to be done on the OS  level.


4.4.2.1 Relocating Datafiles with ALTER DATABASE: 

To relocate datafiles with the ALTER DATABASE command, execute the following steps:

1. Shut down the database.

2. Use an OS command to move the files. 

3. Mount the database. 

4. Execute the ALTER DATABASE RENAME FILE command. 

5. Open the database. 

6. Backup the database and the control file. 

For Ex. 

SQL> shutdown normal

(Using Windows OS commands, copy/move the datafiles to the desired location.)

SQL> STARTUP MOUNT

SQL> ALTER DATABASE RENAME FILE
  'e:\demo\msccs\data.dbf'
  TO
  'd:\msccs\data.dbf';

SQL> ALTER DATABASE OPEN;



4.4.2.2 Relocating Datafiles with ALTER TABLESPACE 

Use the following process to rename a datafile with the ALTER TABLESPACE command:

1. Take the tablespace offline.

2. Use an OS command to move or copy the files. 

3. Execute the ALTER TABLESPACE RENAME DATAFILE command. 

4. Bring the tablespace online. 

5. Backup the database and control file.


For Ex.

SQL> alter tablespace msccssy offline;

(Move the datafiles using OS copy/move commands.)

SQL> ALTER TABLESPACE msccssy
  RENAME DATAFILE 'e:\demo\fy.dbf'
  TO
  'e:\msccs\fy.dbf';




4.5 Oracle Managed Files 

To create Oracle-managed datafiles or tempfiles for tablespaces, including UNDO tablespaces, we need to define the parameter DB_CREATE_FILE_DEST in the init.ora file. This parameter should point to the default location where all Oracle-managed datafiles and tempfiles are to be stored. The following is an example of defining this parameter on Windows:

DB_CREATE_FILE_DEST=e:\DEMO\oradata



4.5.1 Creating OMF Datafiles for Tablespaces: 

OMF datafiles can be created for regular tablespaces and UNDO tablespaces. When creating a tablespace, the datafile clause is optional. If we include the datafile clause, then the datafile name is optional. If the datafile clause is omitted entirely, or does not include a filename, then the datafile is created in the location specified by the DB_CREATE_FILE_DEST parameter. The following code block contains an example of how OMF datafiles can be created:

SQL> show parameter db_create_file_dest; 

SQL> create tablespace data datafile size 10M;



The previous example shows the creation of a tablespace with no filename or location specified, only a size. When we drop a tablespace containing Oracle-managed datafiles, the datafiles are automatically removed from the underlying operating system.
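As a sketch of that cleanup behavior (assuming the DATA tablespace created above is Oracle managed and no longer needed), the drop itself is all that is required; no separate OS-level file removal is needed:

SQL> DROP TABLESPACE data INCLUDING CONTENTS;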

4.5.2 Creating OMF Tempfiles for Temporary Tablespaces: 

When creating a temporary tablespace with OMF, the tempfile clause is optional. If we omit the tempfile clause, or omit its filename, then the tempfile will be created at the location specified by the DB_CREATE_FILE_DEST parameter. The following code block contains an example of how OMF tempfiles can be created:

SQL> Create temporary tablespace temptbs;




4.6 Optimal Flexible Architecture (OFA) 

OFA is an industry standard that defines how to set up Oracle software and databases. OFA provides maximum flexibility in supporting multiple versions of Oracle software, and a single listener process can support Oracle instances that may be running under different versions of Oracle software. This brings us to a concept known as ORACLE_HOME. ORACLE_HOME corresponds to the environment in which Oracle products run.

Some benefits of using OFA: 

1. OFA is designed to organize large amounts of database datafiles and Oracle  software on disks to improve the performance of the database and minimize I/O  contention among many disks that house the databases. 

2. OFA is designed to be flexible enough to facilitate the growth of the database.

3. OFA encourages consistent database file-naming conventions. This enables the DBA to clearly distinguish the datafiles, control files, and other files that belong to one database from those of another. OFA also helps in matching datafiles to their corresponding tablespaces.

4. By keeping the contents of the tablespace separate, OFA minimizes  fragmentation and I/O contention. For example the separation of tables and  indexes in different tablespaces. This gives the flexibility to move the  tablespaces to different disk drives in the event that I/O contention goes up. 

5. Oracle supports multiple ORACLE_HOME locations. This enables us to execute multiple releases of Oracle concurrently. For example, we could have one database instance running on 8.1.7 while at the same time having another database instance running on 9.0.1, without either causing any disruption to the other.

6. OFA enables us to have one listener spawning connections to databases of  multiple Oracle software home directories. 

7. OFA keeps the administration information of each database separate.


For example, an OFA-compliant layout spreads each database's files across multiple mount points:

/u02/
    oradata/
        db01/
            system01.dbf
            control01.ctl
            redo0101.log
        db02/
            system01.dbf
            control01.ctl
            redo0101.log
        .....
/u03/
    oradata/
        db01/
            user01.dbf
            control01.ctl
            redo0101.log
        db02/
            tools01.dbf
            control01.ctl
            redo0101.log


4.7 Different Segment types & their usages 

For a quick review, the logical structure of our database consists of the following components: tablespaces, segments, and extents. Tablespaces are logical structures that house Oracle database objects and are made up of one or more datafiles. Segments are collections of physical data blocks that are used for housing the data in database objects.

When Oracle runs out of room in the segment used for housing an object's data, Oracle acquires another set of physical data blocks to house the data being added. This next set of physical data blocks is called an extent.

Different types of objects need different types of tablespaces to store them. By default, however, Oracle creates a database with only one tablespace: the SYSTEM tablespace. This tablespace should only be used for housing the Oracle data dictionary and the SYSTEM undo segment. Oracle9i permits us to create undo and temporary tablespaces when we create the database, so at a minimum, in addition to the SYSTEM tablespace, we will have separate tablespaces for our tables, indexes, undo segments, and temporary segments.

In order to understand the different types of tablespaces, we must understand the different types of objects that a tablespace may store. Every database object, such as a table or an undo segment, ultimately consists of segments and extents.

4.7.1 Table Segments & their Usage 

The first type of segment is the table segment. Each table segment contains data blocks that store the rows of that table. The rate at which the table fills and grows is determined by the type of activity the table will support. For example, if a table supports an application component that accepts large volumes of data insertions, the segments that comprise that table will fill at a steady pace and rarely, if ever, reduce in size. Therefore, the DBA managing the tablespace that stores that segment will want to plan for regular growth. If, however, the table is designed for storing a small amount of validation data, the size requirements of the table may be a bit more static.


In this case the DBA may want to focus more on ensuring that the entire table fits  comfortably into one segment, reducing the potential fragmentation that extent  allocation could cause. Still another factor to consider when planning table segments  is whether we intend to use parallel processing on our database. Under those  circumstances, we would actually want our table divided into several segments and  extents or even to use partitioning. 

4.7.2 Index Segments & their usage 

Another type of segment is the index segment. As with table segments, index segment growth is determined by the type of role the index supports in the database. If the table with which the index is associated is designed for large-volume transactions, the index should also be planned for growth. However, the index will almost invariably be smaller than its table, because it only houses one or a few columns from the table in an easy-to-search format, along with the ROWID information for the associated rows. An index consists of a list of entries for a particular column (the indexed column) that can be easily searched for the values stored in that column. Corresponding to each value is the ROWID of the table row that contains that column value. The principle behind index growth is the same as the growth of the corresponding table.

If an index is associated with a table that rarely changes, the size of the index may be relatively static. However, if the index is associated with a table that experiences high insert activity, then plan the index for growth as well. Again, if we plan to use parallel processing in our database, we might actually want our index data stored in a few segments, or even use partitioning.


4.7.3 Undo segments & their usage 

Undo segments are different from table and index segments. Undo segments store data changes from transactions to provide read consistency and transaction concurrency. The segments used to store data for tables and indexes are generally for ongoing use, meaning that once data is added to a table or index segment, it generally stays there for a while. Undo segments aren't like that.

Instead, once a user process has made its database changes and commits the  transaction, the space in the undo segments that held that user’s data is released for  reuse in support of another user’s transaction. Oracle’s undo segments architecture is  designed to allow the undo segments to reuse that space. Usually, an undo segment  has some extents allocated to it at all times to store uncommitted transaction  information. 

As the number of uncommitted transactions rises and falls, so, too, does the amount of space used in the undo segment. Where possible, an undo segment will try to place uncommitted transaction data into an extent that is already allocated to it. For example, if an undo segment consists of five extents and the entire first extent contains only old data from committed transactions, the undo segment will reuse that extent to store data from new or existing uncommitted transactions once it fills the fifth extent.

However, if the undo segment fills an extent with data from a long uncommitted transaction, and the first extent still has data from an uncommitted transaction in it, the undo segment will need to allocate a new extent.

Various long- and short-running transactions on our Oracle database can cause an undo segment to allocate and deallocate dozens of extents over and over again throughout the day, which can adversely affect the growth of other database objects because of tablespace fragmentation. Thus, it is wise to keep undo segments by themselves in their own undo tablespace.
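As an illustrative sketch (the tablespace name, path, and size are assumptions), such a dedicated undo tablespace can be created with the CREATE UNDO TABLESPACE command available in Oracle9i:

SQL> CREATE UNDO TABLESPACE undotbs02
  DATAFILE 'e:\demo\data\undotbs02.dbf' SIZE 100M;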


4.7.4 Temporary Segments & their usage 

A temporary segment is allocated to store temporary data for a user transaction that cannot all be stored in memory. One popular use for temporary segments is sorting data into a requested order. These segments are allocated on the fly and dismissed when their services are no longer required.

Their space utilization is marked by short periods of high storage need followed by periods of no storage need. Because we have no idea when a temporary segment could come in and use all the available space in a tablespace, we can't make an adequate plan to accommodate the growth of other database objects in the same tablespace.

We need to keep temporary segments in their own tablespace as separate from other  database objects as possible.


4.8 Block space utilization 

In addition to overall storage allocation for extents on objects in the database, Oracle enables us to manage how objects use the space inside each data block they are given. Space usage is determined at the block level with the pctfree and pctused options. By controlling space allocation at the block level, we can manage how a database object utilizes the space allocated to it more effectively, according to the type of data change activity the object is likely to be subjected to.

Database objects are subjected to data changes in different ways by different applications. For example, a database that supports high online transaction processing (OLTP) activity, where later data changes expand the size of existing rows by populating columns initially set to NULL when the row was added, needs space preallocated at the block level to let the rows grow. In contrast, a decision support system (DSS), such as a data warehouse, will likely not see its data changed once rows are loaded for query access.

4.8.1 Leaving Extra Space for Row Growth: pctfree 

The pctfree clause is specified at the database object level. It tells Oracle how much  free space to leave in a block when that block initially gets populated with row data.  This excess space remains free in each block to accommodate the growth of existing  rows in the block. 

For example, if a table's pctfree is specified as 10 percent, Oracle will stop adding new rows to a block when 10 percent free space is left in the block. That remaining space is used by existing rows when users set NULL columns to non-NULL values.

We should use the following general approach when deciding how to set pctfree. If rows added to a table will be updated often, and each update will increase the size of the row in bytes, then set pctfree to a high value. Setting pctfree high prevents performance killers such as row migration, where Oracle moves an entire row to another block because the original block no longer has room to store it.


Conversely, if the rows in the block will not be updated frequently, or if the updates that do occur will not affect the size of each row, set the value for pctfree low on that database object. A high value for pctfree is about 20 to 25, meaning that 20 to 25 percent of the space in a block is left free for later updates that might increase the size of an existing row in the block. Conversely, a low value for pctfree would be 4 to 5, which we use for static or read-only tables.
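Following this guidance, pctfree can also be adjusted on an existing table with ALTER TABLE; the new value applies to blocks used for future inserts. A minimal sketch, assuming hypothetical EMP and STATE_CODES tables:

```sql
-- EMP is a hypothetical update-heavy table; leave 25% of each block
-- free so existing rows can grow without migrating to another block.
ALTER TABLE emp PCTFREE 25;

-- A static lookup table can pack its blocks nearly full instead.
ALTER TABLE state_codes PCTFREE 5;
```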

4.8.2 Ongoing management of free space in blocks: pctused 

The other option for managing free space in blocks is pctused. It is also defined at the database object level. The pctused option specifies the percent-usage threshold by which Oracle determines whether it is acceptable to add new rows to a block.

Consider the situation where data is added to a table. As new rows are added, Oracle fills the block with inserted rows until reaching the cutoff set by pctfree. Later, as data is deleted from the table, that table's space utilization at the block level falls. When the space used in a data block falls below the threshold set by pctused, Oracle adds the block to a freelist maintained for that table.

A freelist is a list of data blocks that are currently accepting new data rows. When setting pctused, be mindful that Oracle incurs some performance overhead by making a block free and adding it to a freelist for that database object. Thus, there is a trade-off inherent in specifying pctused.

To prevent the block from making its way to the freelist when only one or two rows can be added to the block, we should set the pctused option relatively low.

4.8.3 Setting actual values for pctfree & pctused

These two options are defined at the database object level, and the values assigned must always be considered together. We can specify a value for each option between 0 and 100. However, when determining values for pctfree & pctused, do not assign values for these space utilization options that exceed 100 when added together.

In fact, we should not set values for these options that even approach 90 when combined, because this causes Oracle to spend more time managing space utilization than is necessary. Following each value pair below is a description of the scenario in Oracle where the setting might


be appropriate, along with an indication of whether it is considered a high or low value for either option:

pctfree 5, pctused 40: Good for static or read-only tables, such as those loaded  periodically for data warehouse and/or query-only applications. 

pctfree 10, pctused 40: Good for all-around OLTP situations, especially when existing rows won't be increased by update activity after the row is inserted.

pctfree 20, pctused 40: Good for OLTP activity where existing rows will be increased by updates after the row is inserted (20 is a high value for pctfree).

4.8.4 Setting pctfree & pctused: An example 

To see pctfree & pctused in action, the following code block contains a create table statement with pctfree & pctused specified. Notice that these clauses are not defined as part of the storage clause; they are their own clauses in a create table command:

SQL> create table family
     (name varchar2(10) primary key,
      relationship varchar2(10))
     storage (initial 100k next 100k pctincrease 0
              minextents 1 maxextents 200)
     pctfree 20 pctused 40;




4.9 Viewing Tablespaces in Database 

To view information about the tablespaces available in the database, we can query one of the following tables/views as SYSDBA:

V$TABLESPACE: Name and number of all tablespaces, from the control file.

DBA_TABLESPACES: Descriptions of all tablespaces.

DBA_SEGMENTS: Information about segments within all tablespaces.

DBA_EXTENTS: Information about data extents within all tablespaces.

DBA_FREE_SPACE: Information about free extents within all tablespaces.

V$DATAFILE: Information about all datafiles, including the tablespace number of the owning tablespace.

V$TEMPFILE: Information about all tempfiles, including the tablespace number of the owning tablespace.

DBA_DATA_FILES: Shows the datafiles belonging to tablespaces.

DBA_TEMP_FILES: Shows the tempfiles belonging to temporary tablespaces.

V$TEMP_EXTENT_MAP: Information for all extents in all locally managed temporary tablespaces.

V$TEMP_EXTENT_POOL: For locally managed temporary tablespaces: the state of temporary space cached and used by each instance.

V$TEMP_SPACE_HEADER: Shows space used/free for each tempfile.

DBA_USERS: Default and temporary tablespaces for all users.

DBA_TS_QUOTAS: Lists tablespace quotas for all users.

V$SORT_SEGMENT: Information about every sort segment in a given instance. The view is only updated when the tablespace is of the TEMPORARY type.

V$TEMPSEG_USAGE: Describes temporary (sort) segment usage by user for temporary or permanent tablespaces.
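As a sketch of how these views are typically combined, the following queries list each tablespace's attributes and its current free space. The column names are standard for these dictionary views, but the output will of course vary by database:

```sql
-- Overall tablespace attributes
SELECT tablespace_name, status, contents, extent_management
FROM   dba_tablespaces;

-- Free space remaining per tablespace, in megabytes
SELECT tablespace_name, SUM(bytes)/1024/1024 AS free_mb
FROM   dba_free_space
GROUP  BY tablespace_name;
```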




UNIT VI 

Database Security and Auditing & Database Tuning 

6.1 Security Capabilities 

Oracle makes several levels of security available to the DBA:

Account security for validation of user. 

Access security for database objects. 

System-level security for managing global privileges. 

6.1.1 Account Security 

In order to access data in an Oracle database, one must have access to an account in that database. This access can be direct (via user connections into the database) or indirect. Indirect connections include access via preset authorization within database links. Each account must have a password associated with it. Passwords are set for a user when the user's account is created and may be altered after the account is created. The database stores an encrypted version of the password in a data dictionary table.

As of Oracle8, passwords can expire, and the DBA can establish the conditions under  which a password can be reused. Also, we can use profiles to enforce standards for  the passwords (such as minimum length), & we can automatically lock accounts if  there are multiple consecutive failures to connect to the account. 

6.1.2 System-Level Roles and Privileges 

A system privilege is the right to perform a particular action or to perform an action  on any object of a particular type. Objects include tables, views, materialized views,  synonyms, indexes, sequences, cache groups, replication schemes and PL/SQL  functions, procedures and packages. Only the instance administrator or a user with  ADMIN privilege can grant or revoke system privileges. 

We can use roles to manage the system-level privileges available to users. These privileges authorize commands such as CREATE TABLE and ALTER TABLE. Actions against each type of database object are authorized via separate privileges.


CONNECT and RESOURCE roles are useful for the basic system privileges required by  end users and developers, respectively. 
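Granting these roles is then a single GRANT per user; for example (JACK and DEV1 are hypothetical accounts):

```sql
-- End user: basic login and session-level privileges
GRANT CONNECT TO jack;

-- Developer: adds object-creation privileges such as CREATE TABLE
GRANT CONNECT, RESOURCE TO dev1;
```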

6.1.3 Object Privileges 

Access to objects within a database is enabled via object privileges. These allow specific database commands to be used against specific database objects via the grant command.  

For example, if the user JACK owns a table called EMPLOYEE and executes the command

SQL> GRANT SELECT ON EMPLOYEE TO PUBLIC;



Then all users (PUBLIC) will be able to select records from the EMPLOYEE table. We can create ROLES, named groups of privileges, to simplify the administration of privileges. For applications with a large number of users, roles greatly reduce the number of GRANT commands needed. Since roles can be password protected and can be dynamically enabled or disabled, they add an additional layer of security to the database.
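As a sketch of this pattern (the CLERK role and the grantee are hypothetical; EMPLOYEE is the table from the example above):

```sql
-- Bundle the object privileges once in a role...
CREATE ROLE clerk;
GRANT SELECT, INSERT, UPDATE ON employee TO clerk;

-- ...then a single GRANT per user replaces many object grants
GRANT clerk TO jack;
```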


6.2 Implementing Security 

The security capabilities in Oracle include ROLES, PROFILES and direct GRANTS of  privileges. 

Operating System Security: We cannot access a database unless we can first access, either directly or indirectly, the server on which the database is running. Oracle uses a number of files that its users do not require direct access to. For example, datafiles and the online redo log files are written and read via Oracle's background processes. Only DBAs who will be creating and dropping these files require direct access to them at the OS level. Export dump files and other backup files must also be secured.

6.2.1 Creating Users 

When creating user, our goal is to establish a secure, useful account that has  adequate privileges and proper default settings.  

We can use the CREATE USER command to create a new database account. When the account is created, it will not have any capabilities; the user will not even be able to log in until the CREATE SESSION privilege is granted.

All the necessary settings for a user account can be specified within a single CREATE USER command. These settings include values for all the parameters listed in the following table.

username: Name of the schema/account.

password: Password for the account; may also be tied directly to the OS host account name or authenticated via a network authentication service. For OS (host) authentication, use IDENTIFIED EXTERNALLY; for global (directory-based) authentication, use IDENTIFIED GLOBALLY AS.

DEFAULT TABLESPACE: The default tablespace in which objects created in this schema will be stored. This setting does not give the user rights to create objects; it only sets a default value.

TEMPORARY TABLESPACE: The tablespace in which temporary segments used during sorting transactions will be stored.

QUOTA [ON tablespace]: Allows the user to store objects in the specified tablespace, up to the total size specified as the quota.

PROFILE: Assigns a profile to the user. If none is specified, the default profile is used. Profiles are used to restrict the usage of system resources and to enforce password management rules.

PASSWORD EXPIRE: Preexpires the password.

ACCOUNT: Sets the account to either LOCKED or UNLOCKED.

DEFAULT ROLE: Sets the default roles to be enabled for the user.



The following command shows a sample CREATE USER command. In this example, the user JACK is created with the password SPAROW:

SQL> CREATE USER JACK 

IDENTIFIED BY SPAROW 

DEFAULT TABLESPACE USERS 

TEMPORARY TABLESPACE TEMP;



Since no profile is specified, the default profile for the database will be used. This is an actual profile named DEFAULT; its initial settings leave all resource-consumption limits set to UNLIMITED. Since no quotas are specified, the user cannot create objects in the database. To grant resource quotas, use the QUOTA parameter of the CREATE USER or ALTER USER command. For example, the following grants user JACK a quota of 100MB in the USERS tablespace:

ALTER USER JACK 

QUOTA 100M ON USERS;




The JACK user can now create up to 100MB worth of segments in the USERS  tablespace. 

6.2.2 Dropping Users 

We can drop a user from the database via the DROP USER command. The DROP USER command has one parameter, CASCADE, which drops all objects in the user's schema before dropping the user. If the user owns objects, we must specify CASCADE in order to drop the user. A sample DROP USER command is shown below.

SQL> DROP USER JACK CASCADE;



Any views, synonyms, procedures, functions, or packages that reference objects in  the schema of the dropped user will be marked as INVALID. If another user with the  same name is created at a later date, there will be nothing for the new user to inherit  from the previous user with that name.  
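We can list the objects invalidated by such a drop by querying DBA_OBJECTS; for example:

```sql
-- Objects marked INVALID (e.g. views or procedures that referenced
-- the dropped user's schema)
SELECT owner, object_name, object_type
FROM   dba_objects
WHERE  status = 'INVALID';
```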

6.2.3 System-Level Privileges (Roles) 

We can use system-level roles to distribute the availability of system-level commands  used to manage the database. We can either create customized system-level roles or  use the roles that come with the database. We can use WITH ADMIN OPTION clause  of the GRANT command to pass along to the grantee the ability to grant the  privilege to other users. The following table lists system-level roles provided with  Oracle. Using these roles allows us to limit the system-level privileges granted to  database management roles.

CONNECT: ALTER SESSION, CREATE CLUSTER, CREATE DATABASE LINK, CREATE SEQUENCE, CREATE SESSION, CREATE SYNONYM, CREATE TABLE, CREATE VIEW.

RESOURCE: CREATE CLUSTER, CREATE PROCEDURE, CREATE SEQUENCE, CREATE TABLE, CREATE TRIGGER.

DBA: All system privileges WITH ADMIN OPTION.

EXP_FULL_DATABASE: SELECT ANY TABLE, BACKUP ANY TABLE; INSERT, DELETE, and UPDATE on the tables SYS.INCVID, SYS.INCFIL, and SYS.INCEXP.

IMP_FULL_DATABASE: BECOME USER.

DELETE_CATALOG_ROLE: DELETE on all dictionary packages.

EXECUTE_CATALOG_ROLE: EXECUTE on all dictionary packages.

SELECT_CATALOG_ROLE: SELECT on all catalog tables and views.

CREATE TYPE: CREATE TYPE, EXECUTE, EXECUTE ANY TYPE, ADMIN OPTION, GRANT OPTION.



The CONNECT role is typically granted to end users. Although it does have some object-creation abilities (including the CREATE TABLE privilege), it does not give users any quota on any tablespace. Since users will not have tablespace quotas unless we grant them, they will not be able to create tables.

The RESOURCE Role is granted to developers. The RESOURCE Role gives developers  the most used application development privileges.  

The DBA role includes all 124 of the system-level privileges, with the option to grant  those privileges to other users. 

The IMP_FULL_DATABASE and EXP_FULL_DATABASE roles are used during Import and Export, respectively, when we perform a full database import or export. These roles are part of the DBA role; we can use them to grant users limited database management privileges.

The SELECT_CATALOG_ROLE and EXECUTE_CATALOG_ROLE roles grant users privileges to select from data dictionary views such as DBA_TABLES or to execute data dictionary packages.


The CREATE TYPE role is enabled if we use the objects option. Users who have the  CREATE TYPE role enabled can create new abstract data types. 

6.2.4 User Profiles 

We can use Profiles to place limits on the amount of system and database resources  available to a user and to manage password restrictions.  

If no profiles are created in a database, then the default profile, which specifies  unlimited resources for all users, will be used. 

The resources that can be limited via profiles are given below:

SESSIONS_PER_USER: The number of concurrent sessions a user can have in an instance.

CPU_PER_SESSION: The CPU time, in hundredths of seconds, that a session can use.

CPU_PER_CALL: The CPU time, in hundredths of seconds, that a parse, execute, or fetch can use.

CONNECT_TIME: The number of minutes a session can be connected to the database.

IDLE_TIME: The number of minutes a session can be connected to the database without being actively used.

LOGICAL_READS_PER_SESSION: The number of database blocks that can be read in a session.

LOGICAL_READS_PER_CALL: The number of database blocks that can be read during a parse, execute, or fetch.

FAILED_LOGIN_ATTEMPTS: The number of consecutive failed login attempts that will cause an account to be locked.

PASSWORD_LIFE_TIME: The number of days a password can be used before it expires.

PASSWORD_REUSE_TIME: The number of days that must pass before a password can be reused.

PASSWORD_REUSE_MAX: The number of times a password must be changed before a password can be reused.

PASSWORD_LOCK_TIME: The number of days an account will be locked if the FAILED_LOGIN_ATTEMPTS setting is exceeded.

PASSWORD_GRACE_TIME: The length, in days, of the grace period during which a password can still be changed after it has reached its PASSWORD_LIFE_TIME setting.

PASSWORD_VERIFY_FUNCTION: The name of a function used to evaluate the complexity of a password; Oracle provides one that we can edit.



PASSWORD_REUSE_MAX & PASSWORD_REUSE_TIME are mutually exclusive: if one of these resources is set to a value, the other must be set to UNLIMITED. Profiles are created via the CREATE PROFILE command; the ALTER PROFILE command is used to modify existing profiles.

For example:

SQL> CREATE PROFILE PRACTICE
     LIMIT
     CONNECT_TIME 300
     IDLE_TIME 5
     FAILED_LOGIN_ATTEMPTS 3
     PASSWORD_LOCK_TIME 1
     PASSWORD_LIFE_TIME 30
     PASSWORD_GRACE_TIME 10
     PASSWORD_REUSE_TIME 80
     PASSWORD_REUSE_MAX UNLIMITED;




6.2.5 Password Management 

We can use profiles to manage the expiration, reuse, and complexity of passwords. For example, we can limit the lifetime of a password and lock an account whose password is too old. We can also force passwords to be at least moderately complex and lock any account that has repeated failed login attempts. For example, if we set the FAILED_LOGIN_ATTEMPTS resource of the user's profile to 4, then four consecutive failed login attempts will be allowed for the account; the fifth will cause the account to be locked. If the correct password is supplied on the fourth attempt, the failed-login-attempt count is reset to 0, allowing four more consecutive unsuccessful login attempts before the account is locked. In the following listing, the PRACTICE_1 profile is created for use by the user JACK:

SQL> CREATE PROFILE PRACTICE_1
     LIMIT
     CONNECT_TIME 300
     IDLE_TIME 5
     FAILED_LOGIN_ATTEMPTS 3
     PASSWORD_LOCK_TIME 1;



SQL> CREATE USER JACK IDENTIFIED BY SPAROW 

PROFILE PRACTICE_1; 

SQL> GRANT CREATE SESSION TO JACK;



If there are three consecutive failed connects to the JACK account, the account will be automatically locked by Oracle for 1 day. If we then attempt to connect with the correct password while the account is locked, we will still receive an error:

Connect JACK/SPAROW

ERROR: ORA-28000: the account is locked



To unlock the account, use the ACCOUNT UNLOCK clause of the ALTER USER  command (from DBA account), as shown below:


ALTER USER JACK ACCOUNT UNLOCK;



Following the unlocking of the account, connections to the JACK account will once  again be allowed.  

We can manually lock an account via the ACCOUNT LOCK clause of the ALTER USER  command: 

ALTER USER JACK ACCOUNT LOCK;



If an account becomes locked due to repeated connection failures, it will  automatically become unlocked when its profile’s PASSWORD_LOCK_TIME value is  exceeded. 

For example, if PASSWORD_LOCK_TIME is set to 1, the JACK account in the last example would be locked for one day, at which point the account would be unlocked. We can establish a maximum lifetime for a password via the PASSWORD_LIFE_TIME resource within profiles. For example, we could force users of the PRACTICE_1 profile to change their passwords every 30 days:

ALTER PROFILE PRACTICE_1  

LIMIT  

PASSWORD_LIFE_TIME 30;



In the above example, the ALTER PROFILE command is used to modify the PRACTICE_1 profile. The PASSWORD_LIFE_TIME value is set to 30, so each account that uses that profile will have its password expire after 30 days. When the password expires, we must change it the next time we log in, unless the profile specifies a grace period for expired passwords. The grace-period parameter is called PASSWORD_GRACE_TIME. If the password is not changed within the grace period, the account expires.

An expired account is different from a locked account. A locked account may be automatically unlocked by the passage of time. An expired account, however, requires manual involvement by the DBA to be released. A DBA can also manually expire an account with the ALTER USER command, as shown below.

ALTER USER JACK PASSWORD EXPIRE;




In the above example, the DBA first expires the account manually. The next time user JACK attempts to log in, after he specifies the password, Oracle will immediately prompt for a new password for the account:

Connect JACK/SPAROW

ERROR: ORA-28001: the password has expired

Changing password for JACK

Old password:

New password:

Retype new password:

Password changed

Connected

SQL>



We can also force users to change their passwords when they first access their accounts, via the PASSWORD EXPIRE clause of the CREATE USER command. The CREATE USER command does not, however, allow us to set an expiration date for the new password set by the user; to do that, we must use the PASSWORD_LIFE_TIME profile parameter.

To see the password expiration date of any account, query the EXPIRY_DATE column  of the DBA_USERS data dictionary view. Users who wish to see the expiration date  for their accounts can query the EXPIRY_DATE column of the USER_USERS data  dictionary view.
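For example (JACK is the account created earlier):

```sql
-- DBA view: expiration dates for any account
SELECT username, expiry_date FROM dba_users WHERE username = 'JACK';

-- User view: a user checks his or her own account
SELECT username, expiry_date FROM user_users;
```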


6.2.6 Preventing Password Reuse 

To prevent a password from being reused, we can use one of two profile parameters: PASSWORD_REUSE_MAX & PASSWORD_REUSE_TIME. These two parameters are mutually exclusive; if we set a value for one of them, the other must be set to UNLIMITED. The PASSWORD_REUSE_TIME parameter specifies the number of days that must pass before a password can be reused. For example, if we set PASSWORD_REUSE_TIME to 60, then we cannot reuse the same password within 60 days.
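For instance, to enforce the 60-day rule on the PRACTICE_1 profile while leaving the other parameter unlimited, as the mutual-exclusion rule requires:

```sql
ALTER PROFILE practice_1 LIMIT
  PASSWORD_REUSE_TIME 60          -- days before a password may return
  PASSWORD_REUSE_MAX  UNLIMITED;  -- must be UNLIMITED when REUSE_TIME is set
```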

The PASSWORD_REUSE_MAX parameter specifies the number of password changes that must occur before a password can be reused. If we attempt to reuse a password before the limit is reached, Oracle will reject the password change.

6.2.7 Setting Password Complexity

We can force users' passwords to meet standards for complexity. For example, we can require that they be of at least a minimum length, that they not be simple words, and that they contain at least one number or punctuation mark.

The PASSWORD_VERIFY_FUNCTION parameter of the CREATE PROFILE and ALTER PROFILE commands specifies the name of the function that will evaluate passwords. If a user's password does not meet the criteria, it is not accepted. To simplify the process of enforcing password complexity, Oracle provides a function called VERIFY_FUNCTION. By default, this function is not created; it is only created if we run the utlpwdmg.sql script located in the ORACLE_HOME/rdbms/admin directory.
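A sketch of enabling the supplied function after running the script (PRACTICE_1 is the profile used in the earlier examples):

```sql
-- Run as SYS first: @?/rdbms/admin/utlpwdmg.sql
-- Then attach the complexity check to a profile:
ALTER PROFILE practice_1 LIMIT
  PASSWORD_VERIFY_FUNCTION verify_function;
```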


6.2.8 Using Password File for Authentication 

To create a password file, follow these steps:

1. Create the password file, using the ORAPWD utility. 

ORAPWD FILE=filename PASSWORD=password ENTRIES=max_users



ORAPWD is an Oracle utility that generates the password file. When we execute ORAPWD, we specify the name of the password file to be created, along with the password for SYS access. The ENTRIES parameter tells Oracle how many entries we will be creating in the password file. We cannot expand the file at a later date, so set the ENTRIES value high. If the password file entries limit is exceeded, we will receive an ORA-1996 error. When we re-create the password file, we will need to re-grant the SYSDBA and SYSOPER privileges.

2. Grant the SYSOPER and SYSDBA privileges to each user who needs to perform database administration. SYSDBA gives the user DBA authority; SYSOPER lets the user perform support activities for database operations. In order to grant a user the SYSOPER or SYSDBA privilege, we must be connected as SYSDBA. Privileged users should then be able to connect to the database by using a command similar to the one shown below:

SQL> CONNECT COCSIT/LATUR AS SYSDBA



We can use the revoke command to revoke SYSDBA or SYSOPER system privilege  from a user, as shown in the following example:  

SQL> REVOKE SYSDBA FROM COCSIT;



To see which users have the SYSDBA and SYSOPER system privileges, query V$PWFILE_USERS. V$PWFILE_USERS will have a value of TRUE in its SYSDBA column if the user has the SYSDBA privilege, and a value of TRUE in its SYSOPER column if the user has the SYSOPER privilege.


6.2.9 Password Protection 

Both accounts and roles can be protected via passwords. Passwords for both are set when they are created and may be modified by the ALTER USER and ALTER ROLE commands. The initial password for an account is set via the CREATE USER command, as shown in the following example.

SQL> CREATE USER THOMAS IDENTIFIED BY CAR;



Passwords for accounts should be changed via the ALTER USER command, as shown below:

SQL> ALTER USER THOMAS IDENTIFIED BY BIKE;



We can also use the SQL*Plus PASSWORD command to change a user's password. The PASSWORD command prompts for the old password, the new password, and a verification of the new password. The values entered are not echoed to the screen. To change another user's password, use the PASSWORD command followed by the username:

PASSWORD JACK



Oracle prompts for JACK's new password and verification. The PASSWORD command is very useful for end users, since it greatly simplifies the command they need to use when changing passwords. If they do not use the PASSWORD command, they will need to use the following command instead:

ALTER USER USERNAME IDENTIFIED BY NEWPASSWORD;



Passwords for roles are set at the time the role is created, via the CREATE ROLE command. We do not need to set a password for a role; if one is specified, the password must be entered when the role is enabled by the user.

CREATE ROLE ACCOUNT_CREATOR IDENTIFIED BY BANK;



We can use the ALTER ROLE command to change the password associated with a role. Like user passwords, roles can also be identified externally, thereby enforcing a link between the host account name and the role name. Unlike user accounts, it is possible to have roles with no passwords (the default). We can remove the password from a role via the NOT IDENTIFIED clause, as shown below.


ALTER ROLE ACCOUNT_CREATOR NOT IDENTIFIED;



After the execution of the above command, the Role ACCOUNT_CREATOR will not be  password protected. 

Roles can be tied to OS privileges. If this capability is available on the OS, we can invoke it by using the IDENTIFIED EXTERNALLY clause of the ALTER ROLE command. When the role is enabled, Oracle will check the OS to verify access. Altering a role to use this security feature is shown below:

ALTER ROLE MANAGER IDENTIFIED EXTERNALLY;



6.3 Auditing 

The database has the ability to audit all actions that take place within it. Audit records may be written either to the SYS.AUD$ table or to the OS's audit trail; the ability to use the OS's audit trail is OS dependent. Three different types of actions may be audited: login attempts, object accesses, and database actions. When performing audits, the database's default behavior is to record both successful and unsuccessful commands; this may be modified when each audit type is set up. To enable auditing in a database, the init.ora file for the database must contain an entry for the AUDIT_TRAIL parameter. The AUDIT_TRAIL values are as follows:

NONE: Disables auditing.

DB: Enables auditing, writing to the SYS.AUD$ table.

OS: Enables auditing, writing to the OS's audit trail.

The AUDIT commands that we issue will not be activated until we set the AUDIT_TRAIL parameter in the init.ora file. If we choose to store the audit records in the SYS.AUD$ table, then that table's records should be periodically archived, and the table should then be truncated. Since it is in the data dictionary, this table is in the SYSTEM tablespace and may cause space problems if its records are not periodically cleaned out. We can grant DELETE_CATALOG_ROLE to a user to give that user the ability to delete from the SYS.AUD$ table.
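The archive-then-truncate maintenance described above can be sketched as follows (AUD_ARCHIVE is a hypothetical holding table, and this would be run as a suitably privileged user):

```sql
-- Copy the current audit records to an archive table...
CREATE TABLE aud_archive AS SELECT * FROM sys.aud$;

-- ...then empty the audit trail to free space in SYSTEM
TRUNCATE TABLE sys.aud$;
```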


6.3.1 Login Audits 

Every attempt to connect to the database can be audited. The command to begin auditing of login attempts is:

SQL> AUDIT SESSION;



To audit only those connection attempts that result in successes or failures, use one  of the commands shown in the following: 

AUDIT SESSION WHENEVER SUCCESSFUL;

AUDIT SESSION WHENEVER NOT SUCCESSFUL;



If the audit records are stored in the SYS.AUD$ table, then they may be viewed via the DBA_AUDIT_SESSION data dictionary view of that table. The query shown in the following listing retrieves login audit records from the DBA_AUDIT_SESSION view. It lists the OS account that was used (OS_Username), the Oracle account name (Username), and the terminal ID that was used (Terminal). The Returncode column is evaluated; if it is 0, the connection attempt succeeded; otherwise, two common error numbers are checked to determine the cause of the failure. The timestamp and logoff times are also displayed.

SQL> select
       OS_Username,
       Username,
       Terminal,
       DECODE(Returncode, '0', 'Connected',
                          '1005', 'FailedNull',
                          '1017', 'Failed', Returncode),
       TO_CHAR(Timestamp, 'DD-MON-YY HH24:MI:SS'),
       TO_CHAR(Logoff_Time, 'DD-MON-YY HH24:MI:SS')
     from DBA_AUDIT_SESSION;



The error numbers that are checked are ORA-1005 and ORA-1017. These two error codes cover most of the login errors that occur. ORA-1005 is returned when a user enters a name but no password. ORA-1017 is returned when a user enters an invalid password. To disable session auditing, use the NOAUDIT command, as shown below:


NOAUDIT SESSION;



6.3.2 Action Audits

Any action affecting a database object (such as a table, database link, tablespace, synonym, rollback segment, user, or index) can be audited. The possible actions, such as CREATE, ALTER, and DROP, that can affect those objects can be grouped together during auditing. This grouping of commands reduces the amount of administrative effort necessary to establish and maintain the audit settings. All of the system-level commands can be audited, and groups of commands are provided. For example, to audit all commands that affect roles, enter the following command.

AUDIT ROLE;



To disable this setting, enter the following command 

NOAUDIT ROLE;



The AUDIT ROLE command audits CREATE ROLE, ALTER ROLE, and SET ROLE commands. Oracle also provides the following groups of statement options:

CONNECT: Audits Oracle logons and logoffs.

DBA: Audits commands that require DBA authority, such as GRANT, REVOKE, AUDIT, NOAUDIT, CREATE or ALTER TABLESPACE, and CREATE or DROP PUBLIC SYNONYM.

RESOURCE: Audits CREATE and DROP for tables, clusters, views, indexes, tablespaces, types, and synonyms.

ALL: Audits all of these commands.

ALL PRIVILEGES: Audits all of the preceding commands plus deletes, inserts, updates, and several other commands.



Each action that can be audited is assigned a numeric code within the database.  These codes are accessible via the AUDIT_ACTIONS view. The following query will  display the available action codes for our database.


SELECT
  Action,  /* action code */
  Name     /* name of the action, such as ALTER USER */
FROM AUDIT_ACTIONS;



Once the action code is known, we can use the DBA_AUDIT_OBJECT view to determine how an object was affected by the action. The query shown in the following listing retrieves audit records from the DBA_AUDIT_OBJECT view. It lists the OS account that was used (OS_Username), the Oracle account name (Username), and the terminal ID that was used (Terminal). The object owner (Owner) and name (Obj_Name) are selected, along with the name of the action performed (Action_Name). The Returncode column is evaluated: if it is 0, then the action succeeded; otherwise, the error number is reported. The timestamp of the action is also displayed.

SELECT
  OS_Username,
  Username,
  Terminal,
  Owner,
  Obj_Name,
  Action_Name,
  DECODE(Returncode, '0', 'Success', Returncode),
  TO_CHAR(Timestamp, 'DD-MON-YYYY HH24:MI:SS')
FROM DBA_AUDIT_OBJECT;



We can also audit the actions of particular users, using the BY username clause of the AUDIT command. In the following command, all UPDATE actions against tables by the user COCSIT will be audited.

AUDIT UPDATE TABLE BY COCSIT;




6.3.3 Object Audits 

In addition to system-level actions on objects, data manipulation actions against objects can be audited. These include auditing SELECT, INSERT, UPDATE, and DELETE operations against tables. Actions of this type are audited in a manner that is very similar to action audits. The only difference is the addition of a new clause in the AUDIT command.

6.3.3.1 Audit by Session or by Access 

The additional clause for object audits is BY SESSION or BY ACCESS. This clause specifies whether an audit record should be written once for each session (BY SESSION) or once for each time an object is accessed (BY ACCESS). For example, if a user executed four different UPDATE statements against the same table, auditing BY ACCESS would result in four audit records being written, one for each table access.

Auditing the same situation BY SESSION would result in only one audit record being written. Auditing BY ACCESS can therefore dramatically increase the rate at which audit records are written. The BY ACCESS option is generally used on a limited basis to determine the number of separate actions taking place during a specific time interval; when that testing is done, the auditing should be reverted to BY SESSION status.

Examples of these options are shown in the following listing. In the first command, INSERT commands against the EMPLOYEE table are audited. In the second command, every command that affects the TIME_CARDS table is audited. In the third command, all DELETE operations against the DEPARTMENT table are audited on a per-session basis.

SQL>AUDIT INSERT ON COCSIT.EMPLOYEE; 

SQL>AUDIT ALL ON COCSIT.TIME_CARDS; 

SQL>AUDIT DELETE ON COCSIT.DEPARTMENT BY SESSION;



The resulting audit records can be viewed via queries against the DBA_AUDIT_OBJECT view.


6.3.3.2 Fine grained Object Auditing 

One failing of object auditing is that although we can see that an object was accessed and see who accessed it, we cannot see which values were changed or what the old values were. Oracle9i provides a PL/SQL package to enable fine-grained object auditing, which helps us track what information in our database has been accessed and how that information has been modified. To enable fine-grained object auditing, create a SQL predicate to describe the conditions under which an audit record should be logged. The SQL predicate defines the data access conditions that should trigger an audit event. We use the PL/SQL package DBMS_FGA to administer the fine-grained audit policies. The actions we can take using DBMS_FGA are:

Procedure        Description
ADD_POLICY       Adds a fine-grained auditing policy to a table or view
DROP_POLICY      Drops a fine-grained auditing policy from a table or view
ENABLE_POLICY    Enables an auditing policy for a table or view
DISABLE_POLICY   Disables an auditing policy for a table or view
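As a sketch of how a policy might be defined, consider the following block. The schema, table, column, and policy names here are hypothetical, and DBMS_FGA.ADD_POLICY accepts additional parameters not shown:

```sql
-- Sketch: audit any query that reads high salaries.
-- Schema, table, column, and policy names are hypothetical.
BEGIN
  DBMS_FGA.ADD_POLICY(
    object_schema   => 'COCSIT',
    object_name     => 'EMPLOYEE',
    policy_name     => 'SAL_ACCESS_POLICY',
    audit_condition => 'SAL > 10000',   -- the SQL predicate
    audit_column    => 'SAL');          -- audit only when SAL is selected
END;
/

-- The policy can later be removed with:
-- EXEC DBMS_FGA.DROP_POLICY('COCSIT', 'EMPLOYEE', 'SAL_ACCESS_POLICY');
```

The resulting fine-grained audit records can be queried via the DBA_FGA_AUDIT_TRAIL data dictionary view.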



6.3.4 Protecting Audit Trail 

Since the database audit trail table, SYS.AUD$, is stored within the database, any audit records that are written there must be protected. Otherwise, a user may attempt to delete his or her audit trail records after attempting unauthorized actions within the database. The ability to write audit records to the OS audit trail helps to get around this problem by storing the records external to the database. However, this option is not available for all operating systems. If we are storing the audit trail information in SYS.AUD$, then we must protect that table. Start by auditing actions against the table via the following command:

AUDIT ALL ON SYS.AUD$ BY ACCESS;



If any actions are made against the SYS.AUD$ table (inserts generated via audits of other tables don't count), then those actions will be recorded in the audit trail. Not only that, but records in SYS.AUD$ can only be deleted by users who have the ability to CONNECT <username/password> AS SYSDBA.


Any actions made while connected AS SYSDBA are automatically written to the audit trail. Whenever possible, coordinate database auditing and OS auditing; this will make it easier to track problems and coordinate security policies across the two environments. Since the system managers will most likely not want to wade through insignificant records, decide which actions are the most critical to audit. Our aim should be an audit trail in which every record is significant. If it is not, then modify the auditing options to reflect the true actions of interest.
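A minimal sketch of archiving and then purging the audit trail while connected AS SYSDBA follows (the archive table name is hypothetical):

```sql
-- Connect with SYSDBA privileges before maintaining the audit trail.
-- CONNECT sys AS SYSDBA

-- Archive the current audit records to a holding table (hypothetical name).
CREATE TABLE audit_trail_archive AS
  SELECT * FROM SYS.AUD$;

-- Purge the online audit trail once the archive has been verified.
DELETE FROM SYS.AUD$;
COMMIT;
```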


6.4 Tuning Databases 

Performance tuning is a part of the life of every database application. Tuning is the final step in a four-step process; planning, implementing, and monitoring must precede it. We will see tuning activities for the following areas:

1. Application Design 

2. SQL 

3. Memory Usage 

4. Data Storage 

5. Data Manipulation 

6. Physical Storage 

7. Logical Storage 

8. Network Traffic 

6.4.1 Tuning Application Design 

In designing an application, we can take several steps to make effective and proper  use of the available technology as shown below. 

6.4.1.1 Effective Table Design 

No matter how well designed our database is, poor table design will lead to poor performance. Not only that, but overly rigid adherence to relational table designs will lead to poor performance. This is because, while fully relational designs are logically desirable, they are often physically undesirable. The problem with such designs is that although they accurately reflect the ways in which an application's data is related to other data, they do not reflect the normal access paths that users will employ to access that data. Once the users' access requirements are evaluated, the fully relational table design may become unworkable for many large queries.


Typically, the first problems will occur with queries that return a large number of columns. These columns are usually scattered among several tables, forcing the tables to be joined together during the query. If one of the joined tables is large, then the performance of the whole query may suffer. In designing the tables for an application, developers should therefore consider denormalizing data, for example, by creating small summary tables from large, static tables. If the users frequently request the data, and the data is largely unchanging, then it makes sense to periodically store that data in the format in which the users will ask for it.

For example, some applications may store historical data and current data (temporal data) in the same table. Each record may have a timestamp column, so the current record in a set is the one with the most recent timestamp.

Every time a user queries the table for a current record, the user will need to perform a subquery, such as: where timestamp_col = (select max(timestamp_col) from emp where emp_name = 'John').

If two such tables are joined, there will be two subqueries. In a small database, this may not present a performance problem, but as the number of tables and rows increases, performance problems will follow.
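The current-record lookup described above might look like the following (a sketch; the emp table and its columns are hypothetical):

```sql
-- Retrieve the current (most recent) record for one employee.
-- Table and column names are hypothetical.
SELECT *
  FROM emp e
 WHERE e.emp_name = 'John'
   AND e.timestamp_col = (SELECT MAX(e2.timestamp_col)
                            FROM emp e2
                           WHERE e2.emp_name = 'John');
```

Joining two such tables doubles the subquery work, which is why separating the historical data pays off as the data volume grows.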

Partitioning the historical data away from the current data, or storing the historical data in a separate table, will involve more work for DBAs and developers but should improve the long-term performance of the application.


6.4.1.2 Distribution of CPU Requirements 

The limiting factor of our database's performance may be the availability of CPU resources. Short of purchasing additional CPU power for the available servers, we have several options for managing the CPU resources.

1. The CPU load should be scheduled: time long-running batch queries or update programs to run at off-peak hours. Rather than run them at a lower OS priority while online users are performing transactions, run them at normal OS priority at an appropriate time. Maintaining their normal priority level and scheduling the jobs appropriately will minimize potential locking, rollback, and CPU conflicts.

2. Take advantage of the opportunity to physically shift CPU requirements from one  server to another. Whenever possible, isolate the database server from the  application’s CPU requirements.  

3. Consider using Oracle's Real Application Clusters (RAC) technology to spread the database access requirements for a single database across multiple instances.

4. Use the Database Resource Management features introduced in Oracle8i. We can use the Database Resource Manager to establish resource allocation plans and resource consumer groups.

5. Use the Parallel Query Option (PQO) to distribute the processing requirements of SQL statements among multiple CPUs. Parallelism can be used by almost every SQL command, including select, create table as select, create index, recover, and the SQL*Loader Direct Path loading option.

We can use the PARALLEL_ADAPTIVE_MULTI_USER initialization parameter to limit  the parallelism of operations in a multiuser environment.  

The PARALLEL_ADAPTIVE_MULTI_USER feature is automatically turned on if we set the PARALLEL_AUTOMATIC_TUNING initialization parameter to TRUE.
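For example, a statement can request parallelism via a hint (a sketch; the table, alias, and degree of parallelism are illustrative):

```sql
-- Ask the optimizer to scan EMPLOYEE with a degree of parallelism of 4.
SELECT /*+ PARALLEL(e, 4) */ COUNT(*)
  FROM employee e;
```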


6.4.1.3 Effective Application Design 

First, applications should minimize the number of times they request data from the database. Options include the use of sequences, PL/SQL blocks, and denormalization of tables. We can use distributed database objects such as snapshots and materialized views to help reduce the number of times a database is queried.

Second, different users of the same application should query the database in a very  similar fashion. Consistent access paths increase the likelihood that requests may be  resolved by information i.e. already available in the SGA. 

The sharing of data includes not only the tables and rows retrieved but also the queries themselves: a reexecuted query may already exist in the shared SQL area (see V$SGASTAT), reducing the amount of time needed to process it. As of Oracle9i, new cursor-sharing enhancements in the optimizer increase the likelihood of statement reuse within the shared pool, but the application needs to be designed with statement reuse in mind.

Third, we should restrict the use of dynamic SQL. Dynamic SQL that uses the DBMS_SQL package is always reparsed, even if an identical query exists in the Shared Pool. Dynamic SQL is a useful feature, but it should not be used for the majority of an application's database accesses.

Stored procedures are available for use in application development. When they are used, the same code may be executed multiple times, thus taking advantage of the Shared Pool. We can also manually compile procedures, functions, and packages to avoid run-time compilation. When we create a procedure, Oracle automatically compiles it.

If the procedure later becomes invalid, the database must recompile it before executing it. To avoid incurring this compilation cost at run time, use the ALTER PROCEDURE command shown in the following listing:

ALTER PROCEDURE MY_RAISE COMPILE;




6.4.2 Tuning SQL 

As with application design, the tuning of SQL statements seems far removed from a DBA's duties. However, DBAs should be involved in reviewing the SQL that is written as part of the application.

A well-designed application may still experience performance problems if the SQL it uses is poorly tuned. Application design and SQL problems cause most of the performance problems in properly designed databases.

The key to tuning SQL is to minimize the search path that the database uses to find the data. In most Oracle tables, each row has a RowID associated with it. The RowID contains information about the physical location of the row.

When a query without a where clause is executed, the database will usually perform  a full table scan, reading every block from the table. During a full table scan, the  database locates the first block of the table and then reads sequentially through all  other blocks in the table. For large tables, full table scans can be very time  consuming. Running the query in parallel reduces that time.  

When specific rows are queried, the database may use an index to help speed the retrieval of the desired rows. An index maps logical values in a table to their RowIDs, which in turn map them to a specific physical location. Indexes may be either unique, in which case there is no more than one occurrence for each value, or nonunique. Indexes only store RowIDs for column values in the indexed columns; nothing is stored in the index if the column is null for a particular row.

We may index several columns together. This is called a concatenated index, and it will be used if its leading column is used in the query's where clause. As of Oracle9i, the optimizer can also use a skip-scan approach, in which a concatenated index is used even if its leading column is not in the query's where clause.

Skip-scan approaches are not as efficient as those that use the leading column of the index. Indexes must be tailored to the access path needed. Consider the case of a three-column concatenated index. As shown in the following example, it is created on the City, State, and Zip columns of the Employee table.


CREATE INDEX CITY_ST_ZIP_NDX
  ON EMPLOYEE(CITY, STATE, ZIP)
  TABLESPACE INDEXES;



If a query of the form shown below is executed, the index will not be used in Oracle8i, because its leading column (City) is not used in the where clause. In Oracle9i, the optimizer's skip-scan feature allows the CITY_ST_ZIP_NDX index to be used to satisfy this query.

If users will frequently run this type of query, then the index's columns should be reordered with State first, in order to reflect the actual usage pattern.

SELECT * FROM EMPLOYEE 

WHERE STATE=‘MH’;
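To verify which access path the optimizer chooses, we can examine the execution plan. A sketch follows (DBMS_XPLAN is available as of Oracle9i Release 2; earlier releases can query PLAN_TABLE directly):

```sql
-- Generate and then display the execution plan for the query.
EXPLAIN PLAN FOR
  SELECT * FROM EMPLOYEE WHERE STATE = 'MH';

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```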



It is critical that the table's data be as ordered as possible. If users are frequently executing range queries, selecting those values that are within a specified range, then having the data ordered may require fewer data blocks to be read while resolving the query, thus improving performance.

The ordered entries in the index will point to a set of neighboring blocks in the table  rather than blocks that are scattered throughout the datafile.  

For example, a range query of the type shown in this listing will require fewer data blocks to be read if the physical records in the EMPLOYEE table are ordered by the Sal column.

SELECT * FROM EMPLOYEE 

WHERE SAL BETWEEN 10000 AND 100000;



This should improve the performance of the query.

To guarantee that the rows are properly ordered in the table, extract the records to a flat file, sort the records in the file, and then delete the old records and reload them from the sorted file. As an alternative to extracting data to a flat file, we can use Oracle's internal sorting procedures to sort the data. Ideally, we could reorder the rows for a table by creating a second table via the create table as select command.


Prior to Oracle9i, the create table as select and insert as select commands did not allow an order by clause. To circumvent this limitation, create a view on the base table. Views support the order by clause, so we can create a view and select from it to populate an ordered table.

CREATE OR REPLACE VIEW EMP_VIEW AS
  SELECT * FROM EMPLOYEE
  ORDER BY EMPNO;



We can now create a table selecting from EMP_VIEW; the effect will be that a  duplicate copy of EMPLOYEE will be created, with the rows properly sorted.
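Creating the ordered copy from the view might look like this (a sketch; the EMPLOYEE_SORTED name is hypothetical):

```sql
-- Populate a new table from the ordered view; rows are inserted
-- in EMPNO order. The new table's name is hypothetical.
CREATE TABLE EMPLOYEE_SORTED AS
  SELECT * FROM EMP_VIEW;
```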


6.4.3 Tuning Memory Usage 

We can use the STATSPACK utilities and queries of the data dictionary tables to identify problem areas in the database's memory allocation.

The Data Block Buffer Cache and the Shared Pool are managed via a Least Recently Used (LRU) algorithm. A preset area is set aside to hold values; when it fills, the least recently used data is eliminated from memory and written back to disk. An adequately sized memory area keeps the most frequently accessed data in memory; accessing less frequently used data requires physical reads. The hit ratio is a measure of how well the data buffer cache is handling requests for data. In its general form, it is calculated as

HIT RATIO= (LOGICAL READS –PHYSICAL READS) /LOGICAL READS



Thus, a perfect hit ratio would have a value of 1.00. In that case, all requests for database blocks (logical reads) would be fulfilled without requesting any data from the datafiles (physical reads); all requests would be handled by data that is already in memory.

The overall hit ratio for an application will be lowered by its batch activity. Note that  the hit ratio in a database is cumulative, reflecting all of the processing performed  since the last time the database was started.  

A single poorly written query or long running batch job will adversely affect the hit  ratio long after it completes. To avoid this impact, we can measure the hit ratio  during specific time intervals. 
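The cumulative system-wide hit ratio can be computed from V$SYSSTAT, for example as follows (logical reads are the sum of the 'db block gets' and 'consistent gets' statistics):

```sql
-- Compute the data block buffer cache hit ratio, as a percentage,
-- from the cumulative statistics in V$SYSSTAT.
SELECT ROUND(100 * (cur.value + con.value - phy.value)
             / (cur.value + con.value), 2) hit_ratio
  FROM v$sysstat cur, v$sysstat con, v$sysstat phy
 WHERE cur.name = 'db block gets'
   AND con.name = 'consistent gets'
   AND phy.name = 'physical reads';
```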

We can see the queries performing the logical and physical reads in the database via the V$SQL view. V$SQL reports the cumulative number of logical and physical reads performed for each query in the Shared Pool, as well as the number of times each query was executed. The script in the following listing will show the SQL text for the queries in the Shared Pool, with the most I/O-intensive queries listed first. The query also displays the number of logical reads per execution.


SQL> SELECT
       Buffer_Gets,
       Disk_Reads,
       Executions,
       Buffer_Gets/Executions B_E,
       SQL_Text
  FROM V$SQL
 ORDER BY Disk_Reads DESC;



If the Shared Pool has been flushed, queries executed prior to the flush will no  longer be accessible via V$SQL. However, the impact of those queries can still be  seen, provided the users are still logged in. The V$SESS_IO view records the  cumulative logical reads and physical reads performed for each user’s session. We  can query V$SESS_IO for each session’s hit ratio, as shown below. 

SELECT
  SESS.Username,
  SESS_IO.Block_Gets,
  SESS_IO.Consistent_Gets,
  SESS_IO.Physical_Reads,
  ROUND(100 * (SESS_IO.Consistent_Gets + SESS_IO.Block_Gets
               - SESS_IO.Physical_Reads)
        / DECODE(SESS_IO.Consistent_Gets, 0, 1,
                 SESS_IO.Consistent_Gets + SESS_IO.Block_Gets),
        2) Session_Hit_Ratio
  FROM V$SESS_IO SESS_IO, V$SESSION SESS
 WHERE SESS.SID = SESS_IO.SID
   AND SESS.Username IS NOT NULL
 ORDER BY Username;



We can manipulate the actions of the LRU algorithm in the data block buffer cache via the cache option. Normally, when a full table scan is executed, the table's blocks are placed on the least recently used end of the LRU list so that they will be overwritten first.


When the cache option is used, Oracle will place these blocks at the most recently used end of the LRU list. That table's data will still be subject to the LRU algorithms that manage the SGA caches, but it will stay in the SGA longer than if it had been treated normally. The cache option can be specified at the table level via the create table and alter table commands, and can also be specified via query hints. The cache option is most useful for frequently accessed tables that change infrequently. We can then run queries that will reload the most-used tables into the SGA caches each time the database is restarted. To see the objects whose blocks are currently in the data block buffer cache, query the X$BH table in SYS's schema, as shown in the following query. In the following listing, the SYS and SYSTEM objects are excluded from the output so the DBA can focus on the application tables and indexes present in the SGA.

SELECT
  Object_Name,
  Object_Type,
  COUNT(*) Num_Buff
  FROM SYS.X$BH A, SYS.DBA_OBJECTS B
 WHERE A.OBJ = B.Object_ID
   AND Owner NOT IN ('SYS', 'SYSTEM')
 GROUP BY Object_Name, Object_Type;
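Marking a table for caching might look like the following (a sketch; the table name is hypothetical):

```sql
-- Keep a small, frequently read lookup table at the most recently
-- used end of the LRU list. The table name is hypothetical.
ALTER TABLE cocsit.state_codes CACHE;

-- The same behavior can be requested for a single query via hints.
SELECT /*+ FULL(s) CACHE(s) */ *
  FROM cocsit.state_codes s;
```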



With all of the areas of the SGA (the data block buffers, the dictionary cache, and the Shared Pool), the emphasis should be on sharing data among users. Each of these areas should be large enough to hold the most commonly requested data from the database. We can create a Large Pool within the SGA to be used when Oracle requests large contiguous areas of memory within the Shared Pool. To create the Large Pool, set a value in bytes for the LARGE_POOL_SIZE initialization parameter. By default, the Large Pool is not created.


Setting some parameters, such as PARALLEL_AUTOMATIC_TUNING=TRUE, will also set the LARGE_POOL_SIZE parameter to a nonzero value. We can reserve an area within the Shared Pool for large objects via the SHARED_POOL_RESERVED_SIZE initialization parameter. The "reserved size" is set aside for the Shared Pool entries of large objects.

6.4.3.1 Specifying the size of the SGA 

To create the SGA, we should specify the values for the following initialization  parameters:

Initialization Parameter   Description
SGA_MAX_SIZE               As of Oracle9i, we can specify the maximum size to
                           which the SGA can grow. The sizes of the Shared Pool
                           and the data block buffer cache can be changed
                           dynamically.
SHARED_POOL_SIZE           The size of the Shared Pool.
DB_BLOCK_SIZE              The default database block size for the database, as
                           established during database creation.
DB_CACHE_SIZE              This parameter replaces the DB_BLOCK_BUFFERS
                           parameter used in earlier versions of the Oracle
                           RDBMS. The cache size is specified in bytes rather
                           than in blocks. Oracle rounds the size to units of
                           4MB if SGA_MAX_SIZE < 128MB; otherwise, to units of
                           16MB.
DB_nK_CACHE_SIZE           To use multiple database block sizes within a single
                           database, we must specify a DB_CACHE_SIZE parameter
                           value and at least one DB_nK_CACHE_SIZE parameter
                           value. For example, if the standard database block
                           size is 4KB, we can also specify a cache for the 8KB
                           block size tablespaces via the DB_8K_CACHE_SIZE
                           parameter.




For example, we may specify:

SGA_MAX_SIZE=500M
SHARED_POOL_SIZE=80M
DB_BLOCK_SIZE=8192   (8KB)
DB_CACHE_SIZE=160M
DB_4K_CACHE_SIZE=4M



Within the SGA, 4MB will be available for data queried from objects in tablespaces with 4KB block sizes. Objects using the standard 8KB block size will use the 160MB cache. While the database is open, we can change the SHARED_POOL_SIZE and DB_CACHE_SIZE parameter values via the alter system command.
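A sketch of such dynamic changes follows (the values are illustrative, and the new totals must fit within SGA_MAX_SIZE):

```sql
-- Grow the Shared Pool and the default buffer cache while the
-- database remains open. Values are illustrative.
ALTER SYSTEM SET SHARED_POOL_SIZE = 96M;
ALTER SYSTEM SET DB_CACHE_SIZE = 192M;
```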


6.4.4 Tuning Data Storage 

How the database actually stores data also has an effect on the performance of queries. If the data is fragmented into multiple extents, then resolving a query may cause the database to look in several physical locations for related rows. Free space fragmentation may slow performance when storing new records. If the free space in a tablespace is fragmented, then the database may have to dynamically combine neighboring free extents to create a single extent that is large enough to handle the new space requirements. Tuning the data storage thus involves tuning both used space and free space. Most enterprise database platforms use RAID technologies to improve the system's I/O performance.

6.4.4.1 Defragmentation of Segments 

When a database object is created it is assigned to a tablespace via user defaults or  specific instructions. A segment is created in that tablespace to hold the data  associated with that object. The space that is allocated to the segment is never  released until the segment is dropped or truncated. 

A segment is made up of sections called extents. The extents themselves are contiguous sets of Oracle blocks. Once the existing extents can no longer hold new data, the segment will obtain another extent. This extension process will continue until no more free space is available in the tablespace's datafiles, or until an internal maximum number of extents per segment is reached.

To simplify the management of segments, we should use a consistent set of extent sizes. The sizes chosen should be multiples of the OS I/O size and should be multiples of each other. For example, we may create all of our small tables with 1MB extent sizes, medium-sized tables with 4MB extent sizes, and large tables with 16MB extent sizes.

If our extents are properly sized, then table lookups will not be impacted by the number of extents in the table. Oracle supports two types of internal space management: dictionary managed tablespaces and locally managed tablespaces.


In dictionary managed tablespaces, the space management data is stored in the data dictionary, in tables called SYS.UET$ and SYS.FET$. In locally managed tablespaces, the space management data is stored in a bitmap within the tablespace's datafiles.

When an object allocates an extent in a dictionary managed tablespace, Oracle updates the entries in the used extents table, SYS.UET$. At the same time, it updates the entries in the free extents table, SYS.FET$.

The SYS.UET$ table has one record for every used extent in the database, and SYS.FET$ has one row for every free extent in the database. If we have a large number of extents in a table or index, then DDL commands that update SYS.UET$ and SYS.FET$ may impact the performance of the database. For example, consider a table that has 10,000 extents. When we drop that table, Oracle will need to perform 10,000 updates of SYS.UET$ (since the data in SYS.UET$ must always be consistent for everyone in the database).

At the same time, Oracle must update SYS.FET$ and the other data dictionary tables  used for object maintenance. SYS.UET$ is not tuned to support the drops of tables  with thousands of extents.  

In a test environment, a drop of a 5,000-extent table took two minutes to complete. In the same environment, a drop of a 10,000-extent table took ten minutes to complete. As more extents were added, the time required to drop the table grew exponentially worse.

When considering the impact of the number of extents on our DDL command’s  performance, we should consider not only the tables but also the indexes, partitions,  and index partitions that will be dropped along with the table.  

If we have a table with 100 partitions, and each of the partitions has 300 extents,  and each of the partitions has a local index that in turn has 300 extents, then  dropping the table will require dropping 60,000 extents and that may cause  performance problems. 


To resolve this problem Oracle8i introduced Locally Managed Tablespaces (LMTs) in  which the extent usage information is stored in a bitmap in the file header rather  than in the data dictionary.  

The following query will retrieve the tablespace name, owner, segment name, and  segment type for each segment in the database along with the segment’s number of extents and blocks used. 

SELECT 

TABLESPACE_NAME, /* TABLESPACE NAME*/ 

OWNER, /*OWNER OF THE SEGMENT*/ 

SEGMENT_NAME, /*NAME OF SEGMENT*/ 

SEGMENT_TYPE, /*TYPE OF SEGMENT*/ 

EXTENTS, /*NUMBER OF EXTENTS IN SEGMENT*/ 

BLOCKS, /* NUMBER OF BLOCKS IN SEGMENT*/ 

BYTES /* NUMBER OF BYTES IN SEGMENT*/ 

FROM DBA_SEGMENTS;



Segment types include TABLE, TABLE PARTITION, INDEX, INDEX PARTITION, LOBINDEX, LOBSEGMENT, NESTED TABLE, CLUSTER, ROLLBACK, TEMPORARY, DEFERRED ROLLBACK, and TYPE2 UNDO.

The DBA_SEGMENTS view does not list the size of the individual extents in a  segment. To see the size of each extent, query the DBA_EXTENTS view, as shown  below.


SELECT
  Tablespace_Name, /* tablespace name */
  Owner,           /* owner of the segment */
  Segment_Name,    /* name of the segment */
  Segment_Type,    /* type of the segment */
  Extent_ID,       /* extent number within the segment */
  Block_ID,        /* starting block number for the extent */
  Bytes,           /* number of bytes in the extent */
  Blocks           /* number of blocks in the extent */
  FROM DBA_EXTENTS
 WHERE Segment_Name = 'SEGMENT_NAME'
 ORDER BY Extent_ID;



The query in the preceding listing selects the extent information for a single segment. It returns the storage information associated with the segment's extents, including the size and location of each data extent.


6.4.4.2 Locally Managed Tablespaces 

Locally managed tablespaces are available to handle extent management within the tablespaces themselves. In LMTs, the tablespace manages its own space by maintaining a bitmap in each datafile of the free and used blocks, or sets of blocks, in the datafiles. Each time an extent is allocated or freed for reuse, Oracle updates the bitmap to show the new status. When we use locally managed tablespaces, the data dictionary is not updated and rollback activity is not generated. LMTs automatically track adjacent free space, so there is no need to coalesce extents. Within an LMT, all extents can have the same size, or the system can automatically determine the size of extents. To use local space management, we simply specify the local option for the extent management clause in the create tablespace command. This is the default type of extent management in Oracle9i. An example of the create tablespace command declaring a locally managed tablespace is shown here:

CREATE TABLESPACE CODES_TABLES
  DATAFILE 'E:\MSC\DATA\CODES_TABLES.DBF' SIZE 10M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 256K;




6.4.5 Tuning Data manipulation 

Several data manipulation tasks, usually involving large quantities of data, can be tuned. We can improve the performance of database writes by creating multiple DBWR processes. Creating multiple DBWR processes will prevent access requests against multiple disks from causing performance bottlenecks. The number of DBWR processes that should be created for an instance is set via the DB_WRITER_PROCESSES parameter in the database's initialization parameter file. If we use a single DBWR process, we can create multiple I/O slaves for it via the DBWR_IO_SLAVES parameter. Use DB_WRITER_PROCESSES if the system supports asynchronous I/O, and DBWR_IO_SLAVES if it does not. In addition to creating I/O slaves for DBWR, we can create I/O slaves for the LGWR and ARCH processes in Oracle8 only.

The LGWR_IO_SLAVES & ARCH_IO_SLAVES parameters are desupported as of Oracle8i & are no longer used. Starting in Oracle8i, we can use the LOG_ARCHIVE_MAX_PROCESSES initialization parameter to set the number of ARCH processes initiated.
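As a sketch, the parameters discussed above might appear in the initialization parameter file as follows; the values shown are illustrative, not recommendations:

```
# init.ora fragment -- values are illustrative only
DB_WRITER_PROCESSES = 4        # use when the OS supports asynchronous I/O
# DBWR_IO_SLAVES = 4           # alternative when async I/O is not available
LOG_ARCHIVE_MAX_PROCESSES = 2  # number of ARCH processes (Oracle8i & later)
```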

6.4.5.1 Bulk Inserts: Common Traps & Successful Tricks 

The fastest way to move data in the database is to move it from one table to another without going out to the operating system.

When moving data from one table to another, there are four common methods for  improving the performance of the data migration: 

Tuning the Structures: removing indexes & triggers 

Disabling constraints during the data migration  

Using hints & options to improve the transaction performance

Isolating the rollback segments for the large transaction 

The first of the four tips, tuning structures, involves disabling any triggers or indexes  that are on the table into which data is being loaded. 

For example, if we have a row-level trigger on the target table, that trigger will be executed for every row inserted into the table. If possible, disable the triggers prior to the data load.


If the trigger should be executed for every inserted row, then we may be able to perform a bulk operation once the rows have been inserted, rather than a repeated operation during each insert. If properly tuned, the bulk operation will complete faster than the repeated trigger executions. In addition to disabling triggers, we should drop the indexes on the target table prior to starting the data load. If the indexes are left on the table, Oracle will dynamically manage the indexes as each row is inserted. Rather than continuously manage the index, drop it prior to the data load & recreate it when the load has completed.

In addition to disabling indexes, we should consider disabling constraints on the table. If the source data is already in a table in the database, we can check that data for its adherence to the constraints prior to loading it into the target table. Once the data has been loaded, re-enable the constraints. If none of these options gives adequate performance, we should investigate the options Oracle has introduced for data migration tuning.
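The structure-tuning steps above can be sketched as follows; the table, trigger, index, & constraint names here are hypothetical:

```sql
-- Before the load: disable the trigger, drop the index,
-- & disable the constraint (all names are hypothetical)
ALTER TRIGGER emp_audit_trg DISABLE;
DROP INDEX emp_name_idx;
ALTER TABLE employee DISABLE CONSTRAINT emp_dept_fk;

-- ... perform the bulk data load here ...

-- After the load: recreate & re-enable the structures
CREATE INDEX emp_name_idx ON employee(last_name);
ALTER TABLE employee ENABLE CONSTRAINT emp_dept_fk;
ALTER TRIGGER emp_audit_trg ENABLE;
```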

The APPEND hint for insert commands: The APPEND hint loads blocks of data into a table, starting at the high-water mark for the table. Use of the APPEND hint may increase space usage. 

The nologging option: While performing a create table as select command, use the nologging option to avoid writing to the redo logs during the operation. 

The parallel options: The Parallel Query Option uses multiple processes to accomplish a single task. For a create table as select, we can parallelize both the create table portion and the query. If we use the parallel options, we should also use the nologging option; otherwise parallel operations will have to wait due to serialized writes to the online redo log files.
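As a sketch, the three options above could be combined as follows; the source & target table names are hypothetical, and the degree of parallelism shown is illustrative:

```sql
-- APPEND hint: load blocks above the table's high-water mark
-- (table names are hypothetical)
INSERT /*+ APPEND */ INTO employee_hist
SELECT * FROM employee;

-- CREATE TABLE AS SELECT with the nologging & parallel options;
-- nologging avoids serialized redo writes during the parallel create
CREATE TABLE employee_copy
NOLOGGING
PARALLEL (DEGREE 4)
AS SELECT * FROM employee;
```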

The fourth tip for improving performance, isolating the rollback segment activity for the transaction, may require creation of a new tablespace. For example, we can create a new tablespace for rollback segments & create one or more large rollback segments within it. The datafiles associated with that tablespace should be placed on disks that are isolated from the rest of the database.


Create one rollback segment for each of the concurrent data migration transactions. We can then use the set transaction use rollback segment command to force the  transaction to use the new rollback segment. If we cannot use this command, we  may need to take the rest of the rollback segments offline before starting the large  data migration transactions. To minimize the size of the rollback segment required,  perform commits frequently during the transaction. 
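The set transaction command described above can be sketched as follows; the rollback segment & table names are hypothetical, and the command must be the first statement of the transaction:

```sql
-- Direct the large data migration transaction to a dedicated
-- rollback segment (segment & table names are hypothetical)
SET TRANSACTION USE ROLLBACK SEGMENT rbs_large;

INSERT INTO employee_hist
SELECT * FROM employee;

COMMIT;
```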

6.4.5.2 Bulk deletes: The truncate command 

Occasionally, users attempt to delete all of the records from a table at once. When they encounter an error during this process, they complain that the rollback segments are too small, when in fact their transaction is too large. A second problem occurs once the records have all been deleted: even though the segment no longer has any records in it, it still maintains all of the space that was allocated to it. The truncate command resolves both of these problems. It is a DDL command, not a DML command, so it cannot be rolled back.

Once we have used the truncate command on a table, its records are gone, and none of its delete triggers are executed in the process. However, the table retains all of its dependent objects, such as grants, indexes, & constraints. The truncate command is the fastest way to delete large volumes of data. Since it will delete all of the records in a table, this may force us to alter the application design so that no protected records are stored in the same table as the records to be deleted. A sample truncate command for a table is shown here:

TRUNCATE TABLE EMPLOYEE DROP STORAGE;



The drop storage clause is used to deallocate the non-initial space from the table. The truncate command also works for clusters. In the next example, the reuse storage option is used to leave all allocated space empty within the segment that acquired it.

TRUNCATE CLUSTER EMP_DEPT REUSE STORAGE;




6.4.6 Tuning Physical Storage 

The physical I/O for databases must be evenly distributed & handled correctly. Planning file distributions involves understanding the interactions of the DBWR, LGWR, & ARCH background processes. In addition to that level of physical storage tuning, several other factors should be considered. The following sections address factors that are external to the database but may have a profound impact on its ability to access data quickly.

6.4.6.1 Using Raw Devices 

Raw devices are available with some UNIX operating systems. When they are used, the DBWR process bypasses the UNIX buffer cache & eliminates the file system overhead. For I/O-intensive applications, the use of raw devices may result in a performance improvement of around 2 percent over traditional file systems. Recent file system enhancements have largely overcome this performance difference, & the overhead of maintaining raw devices is commonly believed to outweigh any performance benefit.

Raw devices cannot be managed with the same commands as file systems. For example, the tar command cannot be used to back up individual files; instead, the dd command must be used. This is a much less flexible command to use & it limits our recovery capabilities. Raw devices are now commonly used in environments supporting Oracle Real Application Clusters, but with advances in disk access technologies, those clustered environments may not require raw devices in the near future.


6.4.7 Tuning Logical Storage 

From a logical standpoint, like objects should be stored together. Objects should be grouped based on their space usage & user interaction characteristics. Based on these groupings, tablespaces should be created that cater to specific types of objects. Materialized views can be used to aggregate data & improve query performance. A materialized view is identical in structure to a snapshot: it is a physical table that holds data that would usually be read via a view. When we create a materialized view, we specify the view's base query as well as a schedule for the refreshes of its data.

We can then index the materialized view to enhance the performance of queries  against it. As a result we can provide data to users in the format they need, indexed  appropriately.
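A sketch of such a materialized view, with a base query, a refresh schedule, & an index on it, is shown here; the table, column, & view names are hypothetical:

```sql
-- Aggregating materialized view with a daily complete refresh
-- (table, column, & view names are hypothetical)
CREATE MATERIALIZED VIEW dept_salary_mv
REFRESH COMPLETE START WITH SYSDATE NEXT SYSDATE + 1
AS SELECT dept_no, SUM(salary) AS sum_salary
     FROM employee
    GROUP BY dept_no;

-- Index the materialized view to speed queries against it
CREATE INDEX dept_salary_mv_idx ON dept_salary_mv(dept_no);
```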


6.4.8 Reducing Network Traffic 

As databases & the applications that use them become more distributed, the network that supports the servers may become a bottleneck in the process of delivering data to the users.

It is important to use the database’s capabilities to reduce the number of network  packets that are required for the data to be delivered. Reducing network traffic will  reduce dependence on the network, & thus eliminate a potential cause of  performance problems. 

6.4.8.1 Replication of Data 

We can manipulate & query data from remote databases. However, it is not  desirable to have large volumes of data constantly sent from one database to  another. To reduce the amount of data being sent across the network, different  replication options should be considered. 

In a purely distributed environment, each data element exists in one place, as shown in Fig 6.1 below. When data is required, it is accessed from remote databases via database links.

In the example shown in Fig 6.1, the EMPLOYEE data is queried from the MASTER1 database, & the DEPT data is queried from the REMOTE1 database. Both databases are accessible via database links created within the REMOTE2 database.

[Figure: database REMOTE2 issues "select * from employee@master1_link" against the EMPLOYEE table in database MASTER1, & a similar select through its remote1_link database link against the DEPT table in database REMOTE1.] 

Fig 6.1 Sample distributed environment 
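The database links used in Fig 6.1 could be sketched as follows; the link name, connected account, password, & service name are hypothetical:

```sql
-- Within REMOTE2: create a link to the MASTER1 database
-- (link name, account, password, & service name are hypothetical)
CREATE DATABASE LINK master1_link
CONNECT TO msc IDENTIFIED BY cocsitltr
USING 'master1';

-- Query remote data through the link
SELECT * FROM employee@master1_link;
```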


Modifying the application's tables to improve data retrieval performance involves denormalizing the data. The denormalization process deliberately stores redundant data in order to shorten users' access paths to the data.

In a distributed environment, replicating data accomplishes this goal. Rather than force queries to cross the network to resolve user requests, selected data from the remote servers is replicated to the local server. This can be accomplished via a number of means, as described in the following sections.

6.4.8.1.1 Using the COPY Command to Replicate Data 

In the first option, the data can be periodically copied to the local server. This is best accomplished via the SQL*Plus COPY command. The COPY command allows selected columns & rows to be replicated to each server. This is illustrated in Fig 6.2.

For example, the remote server may have a table called EMPLOYEE. The local server  would be able to replicate the data that it needs by using the COPY command, to  select records from the remote EMPLOYEE table.  

We can use the COPY command to store those selected records in a table in the local  database. The COPY command includes a query clause; thus, it is possible to return  only those records that meet the specified criteria. 

[Figure: the COPY command shown below pulls selected rows of the EMPLOYEE table from database LOC into an EMPLOYEE table in database REMOTE2.] 

Fig 6.2 Data Replication using COPY command 


In this example, a portion of the EMPLOYEE table is copied down from the headquarters database to a local database. A WHERE clause is used to restrict which records are selected.

COPY FROM MSC/COCSITLTR@LOC -
CREATE EMPLOYEE USING -
SELECT * FROM EMPLOYEE WHERE STATE = 'MH'



The copy from clause in this example specifies the name of the remote database. In this case, the query is told to use the database identified by the service name loc. During the connection, a session is started using the MSC account, with the password cocsitltr.
