
Recompile Database Objects

Identifying Invalid Objects

The DBA_OBJECTS view can be used to identify invalid objects using the following query:
COLUMN object_name FORMAT A30

SELECT owner,
       object_type,
       object_name,
       status
FROM   dba_objects
WHERE  status = 'INVALID'
ORDER BY owner, object_type, object_name;
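
A quick summary can also help when choosing a recompilation method. The following query, a simple variation on the one above, counts the invalid objects per owner and object type:

SELECT owner,
       object_type,
       COUNT(*) AS invalid_count
FROM   dba_objects
WHERE  status = 'INVALID'
GROUP BY owner, object_type
ORDER BY owner, object_type;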

With this information you can decide which of the following recompilation methods is suitable for you.

The Manual Approach

For small numbers of objects you may decide that a manual recompilation is sufficient. The following example shows the compile syntax for several object types:

ALTER PACKAGE my_package COMPILE;
ALTER PACKAGE my_package COMPILE BODY;
ALTER PROCEDURE my_procedure COMPILE;
ALTER FUNCTION my_function COMPILE;
ALTER TRIGGER my_trigger COMPILE;
ALTER VIEW my_view COMPILE;

Notice that the package body is compiled in the same way as the package specification, with the addition of the word “BODY” at the end of the command.
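
If an object remains invalid after a manual compile, the underlying compilation errors can be inspected in the DBA_ERRORS view (or with the SHOW ERRORS command in SQL*Plus). For example, using the same illustrative names as below (MY_SCHEMA and MY_PACKAGE):

COLUMN text FORMAT A60

SELECT line,
       position,
       text
FROM   dba_errors
WHERE  owner = 'MY_SCHEMA'
AND    name  = 'MY_PACKAGE'
ORDER BY sequence;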

An alternative approach is to use the DBMS_DDL package to perform the recompilations:

EXEC DBMS_DDL.alter_compile('PACKAGE', 'MY_SCHEMA', 'MY_PACKAGE');
EXEC DBMS_DDL.alter_compile('PACKAGE BODY', 'MY_SCHEMA', 'MY_PACKAGE');
EXEC DBMS_DDL.alter_compile('PROCEDURE', 'MY_SCHEMA', 'MY_PROCEDURE');
EXEC DBMS_DDL.alter_compile('FUNCTION', 'MY_SCHEMA', 'MY_FUNCTION');
EXEC DBMS_DDL.alter_compile('TRIGGER', 'MY_SCHEMA', 'MY_TRIGGER');

This method is limited to PL/SQL objects, so it is not applicable for views.

Custom Script

In some situations you may have to compile many invalid objects in one go. One approach is to write a custom script to identify and compile the invalid objects. The following example identifies and recompiles invalid packages and package bodies.

SET SERVEROUTPUT ON SIZE 1000000
BEGIN
  FOR cur_rec IN (SELECT owner,
                         object_name,
                         object_type,
                         DECODE(object_type, 'PACKAGE', 1,
                                             'PACKAGE BODY', 2, 2) AS recompile_order
                  FROM   dba_objects
                  WHERE  object_type IN ('PACKAGE', 'PACKAGE BODY')
                  AND    status != 'VALID'
                  ORDER BY 4)
  LOOP
    BEGIN
      IF cur_rec.object_type = 'PACKAGE' THEN
        EXECUTE IMMEDIATE 'ALTER ' || cur_rec.object_type ||
          ' "' || cur_rec.owner || '"."' || cur_rec.object_name || '" COMPILE';
      ELSE
        EXECUTE IMMEDIATE 'ALTER PACKAGE "' || cur_rec.owner ||
          '"."' || cur_rec.object_name || '" COMPILE BODY';
      END IF;
    EXCEPTION
      WHEN OTHERS THEN
        DBMS_OUTPUT.put_line(cur_rec.object_type || ' : ' || cur_rec.owner ||
                             ' : ' || cur_rec.object_name);
    END;
  END LOOP;
END;
/

This approach is fine if you have a specific task in mind, but be aware that you may end up compiling some objects multiple times depending on the order they are compiled in. It is probably a better idea to use one of the methods provided by Oracle since they take the code dependencies into account.

DBMS_UTILITY.compile_schema

The COMPILE_SCHEMA procedure in the DBMS_UTILITY package compiles all procedures, functions, packages, and triggers in the specified schema. The example below shows how it is called from SQL*Plus:

EXEC DBMS_UTILITY.compile_schema(schema => 'SCOTT');
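
By default COMPILE_SCHEMA recompiles every object in the schema. In more recent Oracle versions the procedure also accepts a COMPILE_ALL parameter; setting it to FALSE limits the run to invalid objects only. A minimal sketch, assuming a version where this parameter is available:

EXEC DBMS_UTILITY.compile_schema(schema => 'SCOTT', compile_all => FALSE);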

UTL_RECOMP

The UTL_RECOMP package contains two procedures used to recompile invalid objects. As the names suggest, the RECOMP_SERIAL procedure recompiles all the invalid objects one at a time, while the RECOMP_PARALLEL procedure performs the same task in parallel using the specified number of threads. Their definitions are listed below:

PROCEDURE RECOMP_SERIAL(
  schema  IN VARCHAR2    DEFAULT NULL,
  flags   IN PLS_INTEGER DEFAULT 0);

PROCEDURE RECOMP_PARALLEL(
  threads IN PLS_INTEGER DEFAULT NULL,
  schema  IN VARCHAR2    DEFAULT NULL,
  flags   IN PLS_INTEGER DEFAULT 0);

The usage notes for the parameters are listed below:

schema – The schema whose invalid objects are to be recompiled. If NULL, all invalid objects in the database are recompiled.
threads – The number of threads used in a parallel operation. If NULL, the value of the "job_queue_processes" parameter is used. Matching the number of available CPUs is generally a good starting point for this value.
flags – Used for internal diagnostics and testing only.

The following examples show how these procedures are used:

-- Schema level.
EXEC UTL_RECOMP.recomp_serial('SCOTT');
EXEC UTL_RECOMP.recomp_parallel(4, 'SCOTT');

-- Database level.
EXEC UTL_RECOMP.recomp_serial();
EXEC UTL_RECOMP.recomp_parallel(4);

-- Using job_queue_processes value.
EXEC UTL_RECOMP.recomp_parallel();
EXEC UTL_RECOMP.recomp_parallel(NULL, 'SCOTT');

There are a number of restrictions associated with the use of this package, including:

Parallel execution is performed using the job queue. All existing jobs are marked as disabled until the operation is complete.

The package must be run from SQL*Plus as the SYS user, or another user with SYSDBA privileges.

The package expects the STANDARD, DBMS_STANDARD, DBMS_JOB and DBMS_RANDOM packages to be present and valid.

Running DDL operations at the same time as this package may result in deadlocks.

utlrp.sql and utlprp.sql

The utlrp.sql and utlprp.sql scripts are provided by Oracle to recompile all invalid objects in the database. They are typically run after major database changes such as upgrades or patches. They are located in the $ORACLE_HOME/rdbms/admin directory and provide a wrapper around the UTL_RECOMP package. The utlrp.sql script simply calls the utlprp.sql script with a command line parameter of "0". The utlprp.sql script accepts a single integer parameter that indicates the level of parallelism as follows:
0 – The level of parallelism is derived based on the CPU_COUNT parameter.
1 – The recompilation is run serially, one object at a time.
N – The recompilation is run in parallel with “N” number of threads.

Both scripts must be run as the SYS user, or another user with SYSDBA privileges, to work correctly.
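
For example, a typical run from SQL*Plus after a patch or upgrade looks like this (using the derived level of parallelism):

CONN / AS SYSDBA
@?/rdbms/admin/utlrp.sql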

For further information see:
DBMS_UTILITY.compile_schema
UTL_RECOMP

http://www.oracle-base.com/articles/misc/RecompilingInvalidSchemaObjects.php


Drop and Recreate PK Index

Disable PK constraint.
alter table TBL1 disable constraint PK_TBL1 ;

Drop PK index.
drop index PK_TBL1 ;

Create PK index.
create unique index "PK_TBL1" on "TBL1" ("INSPECTORID", "DUTYID", "INSPID")
tablespace "TBLSPCINDX"
pctfree 10 initrans 2 maxtrans 255
storage
(
  initial 64K
  next 0K
  minextents 1
  maxextents 2147483645
  pctincrease 0
  freelists 1
  freelist groups 1
)
nologging;

Enable PK constraint.
alter table "TBL1" enable constraint "PK_TBL1" ;
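
As a quick sanity check, the constraint status and its backing index can then be confirmed from the data dictionary, for example:

SELECT constraint_name,
       status,
       index_name
FROM   user_constraints
WHERE  table_name = 'TBL1'
AND    constraint_type = 'P';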


Oracle Architecture

The Oracle Instance

The instance is the set of memory structures and server processes that do the work in the database.

The System Global Area is a shared block of memory for Oracle’s use. At a minimum it contains:

– Redo Log buffer: short term storage for redo information so it can be written to the redo logs.

– Shared Pool: further broken down into the library cache, which holds recently parsed and executed code, and the data dictionary cache, which stores recently used object definitions.

– Database buffer cache: Oracle’s work area for executing SQL.

The instance also houses the background processes:

– System Monitor: opens the database and maintains the connection between the instance and the database.

– Database Writer: writes to the database files (writes as little as possible, minimizing disk I/O for performance).

– Process Monitor: Monitors user sessions

– Log Writer: writes to the redo logs (writes as close to real time as possible, so that ideally all changes are saved).

– Checkpoint: ensures the instance is synchronized with the database from time to time.

– Archiver: writes archived redo logs

The Oracle Database

The database refers to the physical files on the OS that contain the data and the data dictionary. At a minimum the database requires datafiles, control files, and redo logs.

Parameter File: Holds the parameters to start the instance

Password File: Encrypted file that holds the SYSDBA password. Allows SYS to log on regardless of the state of the database.

Datafiles: Core of the database, the files that hold the data.

Control Files: Hold all the parameters used to connect the instance and the database. For example, pointers to the rest of the database (redo logs, datafiles…) and various data to maintain database integrity (SCN and timestamp). Often multiplexed to allow recovery from file corruption.

Redo Log: maintains all changes made to the data over a given period of time or until the log is full. Often multiplexed to allow recovery from file corruption.

Archived Redo Logs: Copies of filled redo logs kept for recovery purposes.
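
As an illustrative check, the physical files that make up the database, and the size of the SGA, can be listed from the dynamic performance views when connected as a privileged user:

-- Control files, datafiles and redo log members.
SELECT name FROM v$controlfile;
SELECT name FROM v$datafile;
SELECT member FROM v$logfile;

-- SGA component sizes.
SELECT * FROM v$sga;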

(Special thanks to “Josh” for this information)


Example Creating Common Role

1. Log into the Oracle instance as the SYSTEM user, using SQL*Plus or another Oracle scripting tool.

2. Create roles SCHEMA_DEVELOPER and SCHEMA_USER.

CREATE ROLE "SCHEMA_DEVELOPER" NOT IDENTIFIED;

GRANT CREATE SESSION TO "SCHEMA_DEVELOPER";
GRANT SELECT ANY DICTIONARY TO "SCHEMA_DEVELOPER";
GRANT ALTER SESSION TO "SCHEMA_DEVELOPER";
GRANT CREATE CLUSTER TO "SCHEMA_DEVELOPER";
GRANT CREATE DATABASE LINK TO "SCHEMA_DEVELOPER";
GRANT CREATE PROCEDURE TO "SCHEMA_DEVELOPER";
GRANT CREATE PUBLIC SYNONYM TO "SCHEMA_DEVELOPER";
GRANT CREATE SEQUENCE TO "SCHEMA_DEVELOPER";
GRANT CREATE TABLE TO "SCHEMA_DEVELOPER";
GRANT CREATE TRIGGER TO "SCHEMA_DEVELOPER";
GRANT CREATE VIEW TO "SCHEMA_DEVELOPER";
GRANT DROP PUBLIC SYNONYM TO "SCHEMA_DEVELOPER";

CREATE ROLE "SCHEMA_USER" NOT IDENTIFIED;

GRANT CREATE SESSION TO "SCHEMA_USER";
GRANT SELECT ANY DICTIONARY TO "SCHEMA_USER";

3. Create the tablespaces (make sure to update the filepath for the datafile according to the server you are working with). Variables are enclosed in curly braces {}. Use this script as a guide:

CREATE TABLESPACE "COMMONDATA"
LOGGING
DATAFILE '{PATH_TO_DATAFOLDER}COMMONDATA01.DBF'
SIZE 10M REUSE
AUTOEXTEND ON
NEXT 50M MAXSIZE UNLIMITED
EXTENT MANAGEMENT LOCAL;

CREATE TABLESPACE "COMMONINDX"
NOLOGGING
DATAFILE '{PATH_TO_DATAFOLDER}COMMONINDX01.DBF'
SIZE 5M REUSE
AUTOEXTEND ON
NEXT 10M MAXSIZE UNLIMITED
EXTENT MANAGEMENT LOCAL;

4. Create the user and role using this script as a guide. (Note: if you are using Oracle 10g or higher, you will need to remove any lines that set quota unlimited on temp.)

CREATE USER "COMMON"
profile "DEFAULT"
identified by "{COMMON_PASSWORD}"
default tablespace "COMMONDATA"
temporary tablespace "TEMP"
quota unlimited on COMMONDATA
quota unlimited on COMMONINDX
account UNLOCK;

CREATE ROLE "COMMON_TABLE_USER" NOT IDENTIFIED;

5. Open a command prompt window and import a copy of the common schema from a .dmp file. If you don't have a .dmp file, find another instance with the common schema and export it. Use these scripts as a guide:

For importing the schema:

imp common/{COMMON_PASSWORD}@{INSTANCE_NAME} file=common.dmp log=commonimp.log fromuser=common touser=common

For exporting the schema:

exp common/{COMMON_PASSWORD}@{INSTANCE_NAME} file=common.dmp log=commonexp.log
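
On Oracle 10g and later, the Data Pump utilities can be used instead of the classic exp/imp tools. A rough equivalent, assuming the default DATA_PUMP_DIR directory object is available on the server:

expdp common/{COMMON_PASSWORD}@{INSTANCE_NAME} schemas=COMMON directory=DATA_PUMP_DIR dumpfile=common.dmp logfile=commonexp.log

impdp common/{COMMON_PASSWORD}@{INSTANCE_NAME} schemas=COMMON directory=DATA_PUMP_DIR dumpfile=common.dmp logfile=commonimp.log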

7. Run the following scripts one at a time to generate grant scripts for the roles needed.
Select the statements that were created by these scripts and run them in order to grant the correct privileges:

SELECT 'grant SELECT, REFERENCES on "COMMON".' || table_name "Grant Privileges", 'TO ' || role || ';' "To Role"
FROM dba_tables
   , dba_roles
WHERE owner = 'COMMON'
AND table_name = any (select table_name from dba_tables where table_name like 'CL_%')
AND role = any (select role from dba_roles where role like 'COMMON%')
ORDER BY role, table_name;

col "Grant Privileges" format a50
SELECT 'grant SELECT, INSERT, UPDATE on "COMMON".' || table_name "Grant Privileges", 'TO ' || role || ';' "To Role"
FROM dba_tables
   , dba_roles
WHERE owner = 'COMMON'
AND table_name = any (select table_name from dba_tables where table_name like 'CT_%')
AND role = any (select role from dba_roles where role like 'COMMON%')
ORDER BY role, table_name;
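
Rather than copying the output by hand, the generated statements can be captured with SQL*Plus spooling and then executed as a script. A minimal sketch, where the file name common_grants.sql is only an example:

SET PAGESIZE 0 LINESIZE 200 FEEDBACK OFF
SPOOL common_grants.sql

-- Run the two grant-generation queries from step 7 here.

SPOOL OFF
@common_grants.sql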

8. Grant schema role permissions:

GRANT SCHEMA_DEVELOPER TO COMMON;
GRANT COMMON_TABLE_USER TO SCHEMA_DEVELOPER;
GRANT COMMON_TABLE_USER TO SCHEMA_USER;


Scripted Backups Example 2

USE master
GO

DECLARE @theday char(1)
, @file varchar(128)

SET @theday = datepart(dw, getdate())

ALTER DATABASE dbname SET RECOVERY SIMPLE;

SET @file = 'D:\BACKUPS\dbname_' + @theday + '.bak';

BACKUP DATABASE dbname TO DISK = @file WITH INIT;

BACKUP LOG dbname WITH TRUNCATE_ONLY;

GO


This does a 7-day "rolling backup", overwriting the backup taken on the same day last week. Set it up as a nightly job.
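
A hedged sketch of scheduling this as a nightly SQL Server Agent job follows; the job name and the 23:00 start time are only examples, and the backup script above should be pasted into the @command parameter:

USE msdb
GO

-- Create the job and a single T-SQL step.
EXEC dbo.sp_add_job @job_name = N'Nightly dbname backup';

EXEC dbo.sp_add_jobstep
     @job_name  = N'Nightly dbname backup',
     @step_name = N'Backup dbname',
     @subsystem = N'TSQL',
     @command   = N'/* backup script from above goes here */';

-- Run the job every day at 23:00.
EXEC dbo.sp_add_jobschedule
     @job_name = N'Nightly dbname backup',
     @name     = N'Nightly at 23:00',
     @freq_type = 4,            -- daily
     @freq_interval = 1,
     @active_start_time = 230000;

-- Target the local server.
EXEC dbo.sp_add_jobserver @job_name = N'Nightly dbname backup';
GO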


Scripted Backups Example 1

USE master
GO
DECLARE @dbs AS TABLE ( id int identity(1,1), dbname sysname )

DECLARE @id int
, @dbname sysname
, @path varchar(128)
, @file nvarchar(255)
, @theday char(1)

SET @path = 'D:\DEV_BACKUPS'
SET @theday = datepart(dw, CURRENT_TIMESTAMP)

INSERT INTO @dbs ( dbname )
SELECT name
FROM sys.databases
WHERE database_id > 4 --not system dbs
AND state = 0 --online
ORDER BY name

SELECT @id = max(id) FROM @dbs

WHILE @id > 0
BEGIN
  SELECT @dbname = dbname FROM @dbs WHERE id = @id
  SET @file = @path + '\' + @dbname + '_BAK' + @theday + '.bak'
  BACKUP DATABASE @dbname TO DISK = @file WITH INIT;
  BACKUP LOG @dbname WITH TRUNCATE_ONLY;
  SET @id = @id - 1
END
GO


Default data file location

To prevent the C: drive from filling up, it is a good idea to set the database default location. To do this:

1. Open SQL Server Management Studio
2. Right click the server instance
3. Select “Properties”
4. In the Server Properties window, select “Database Settings”
5. Under “Database default locations”, specify the path for “Data:” and “Log:”, for example: “D:\SQLDATA”

Additionally, if space on the C: drive is limited, check the properties of the TEMPDB.

This can be found under the “Databases” –> “System Databases” branches in the server’s tree-view.

First, since the tempdb does not autoshrink, you can manually shrink it by right-clicking tempdb and selecting “Tasks” -> “Shrink” -> “Database”.

Next, right-click the tempdb database and select “Properties”. Then select “Files”. You can set the “tempdev.mdf” file to be restricted growth and add an additional database file that is unrestricted on another drive.
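
The same adjustments can also be scripted. A rough sketch, where the growth limit, the new file name (tempdev2) and the E:\SQLDATA path are only illustrative values:

-- Shrink tempdb manually (it does not autoshrink).
DBCC SHRINKDATABASE (tempdb);

-- Cap growth of the primary tempdb data file (example limit only).
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, MAXSIZE = 2048MB);

-- Add an additional, unrestricted data file on another drive (example path only).
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2,
          FILENAME = 'E:\SQLDATA\tempdev2.ndf',
          SIZE = 512MB,
          FILEGROWTH = 256MB);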