Thursday, December 23, 2010

Leveraging Metadata in Informatica Workflow-Session/Analysis

We can leverage the metadata collected in the Informatica repository for many interesting analyses; a few scenarios where I have used this metadata are described below.
These SQL queries can be executed against an Oracle repository database without changes and require only minor modifications for other databases.
Failed Sessions
The following query lists the sessions that failed in the last day. To make it work for the last 'n' days, replace SYSDATE-1 with SYSDATE-n.
QUERY:
SELECT SUBJECT_AREA AS FOLDER_NAME,
SESSION_NAME,
LAST_ERROR AS ERROR_MESSAGE,
DECODE (RUN_STATUS_CODE,3,'Failed',4,'Stopped',5,'Aborted') AS STATUS,
ACTUAL_START AS START_TIME,
SESSION_TIMESTAMP
FROM REP_SESS_LOG
WHERE RUN_STATUS_CODE != 1
AND TRUNC(ACTUAL_START) BETWEEN TRUNC(SYSDATE - 1) AND TRUNC(SYSDATE)
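If you want a quick failure trend rather than individual failed runs, the same view can be aggregated per folder. A minimal sketch, assuming the same REP_SESS_LOG columns used above and an Oracle repository (here over the last 7 days):
SELECT SUBJECT_AREA AS FOLDER_NAME,
COUNT(*) AS FAILED_RUNS
FROM REP_SESS_LOG
WHERE RUN_STATUS_CODE != 1
AND TRUNC(ACTUAL_START) BETWEEN TRUNC(SYSDATE - 7) AND TRUNC(SYSDATE)
GROUP BY SUBJECT_AREA
ORDER BY FAILED_RUNS DESC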
Long running Sessions
The following query lists sessions that ran longer than 10 minutes (the 10/(24*60) fraction of a day in the WHERE clause) during the last day. To make it work for the last 'n' days, replace SYSDATE-1 with SYSDATE-n.
QUERY:
SELECT SUBJECT_AREA AS FOLDER_NAME,
SESSION_NAME,
SUCCESSFUL_SOURCE_ROWS AS SOURCE_ROWS,
SUCCESSFUL_ROWS AS TARGET_ROWS,
ACTUAL_START AS START_TIME,
SESSION_TIMESTAMP
FROM REP_SESS_LOG
WHERE RUN_STATUS_CODE = 1
AND TRUNC(ACTUAL_START) BETWEEN TRUNC(SYSDATE - 1) AND TRUNC(SYSDATE)
AND (SESSION_TIMESTAMP - ACTUAL_START) > (10/(24*60))
ORDER BY SESSION_TIMESTAMP
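To see how long each successful session actually ran, the same two timestamp columns can be turned into a duration. A minimal sketch against the same REP_SESS_LOG view (Oracle DATE arithmetic assumed, as in the query above):
SELECT SUBJECT_AREA AS FOLDER_NAME,
SESSION_NAME,
ROUND((SESSION_TIMESTAMP - ACTUAL_START) * 24 * 60, 1) AS RUN_MINUTES,
SUCCESSFUL_ROWS AS TARGET_ROWS
FROM REP_SESS_LOG
WHERE RUN_STATUS_CODE = 1
AND TRUNC(ACTUAL_START) BETWEEN TRUNC(SYSDATE - 1) AND TRUNC(SYSDATE)
ORDER BY RUN_MINUTES DESC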
Invalid Tasks
The following query lists the folder name, task type, task name, version number, and last-saved time for all invalid tasks.
QUERY:
SELECT SUBJECT_AREA AS FOLDER_NAME,
DECODE(IS_REUSABLE,1,'Reusable',' ') || ' ' || TASK_TYPE_NAME AS TASK_TYPE,
TASK_NAME AS OBJECT_NAME,
VERSION_NUMBER,
LAST_SAVED
FROM REP_ALL_TASKS
WHERE IS_VALID=0
AND IS_ENABLED=1
ORDER BY SUBJECT_AREA,TASK_NAME
Thanks for reading. Do you have other scenarios where workflow metadata has been effective? Wishing you a very happy new year 2011.

Friday, December 17, 2010

The Central Management Server (CMS) Repository


The content of the Business Objects Enterprise (BOE) system consists of the physical files and the metadata information about the physical files.
For a Crystal Report, the physical file as well as the metadata about the file should exist in the BOE system. The Crystal Report is stored as a file on the File Repository Server (FRS) with an extension of .rpt. The metadata information, such as report name, type, report ID and path, is stored as an InfoObject in the CMS repository.
I will discuss the CMS repository in this article.
The CMS Repository Database Tables
The CMS metadata is physically stored in a database as InfoObjects. There are six tables; their purposes are given below.
  1. CMS_VersionInfo – Contains the current version of BOE.
  2. CMS_InfoObjects6 – Each row in this table stores a single InfoObject. This is the main table in the repository.
  3. CMS_Aliases6 – Maps the user alias(es) to the corresponding user ID. For example, a user may have both a Win NT alias and an LDAP alias. Regardless of the number of aliases a user may have, each user has only one user ID in the Business Intelligence Platform. The map is stored in a separate table to enable fast logins.
  4. CMS_IdNumbers6 – The CMS uses this table to generate unique Object IDs and Type IDs. It has only two rows: an Object ID row and a Type ID row. The CMSs in a cluster use this table when generating unique ID numbers.
  5. CMS_Relationships6 – Relationship tables are used to store the relations between InfoObjects. Each row in the table stores one edge in the relation. For example, the relation between a Web Intelligence document and a universe would be stored in a row of the WebI–Universe relation table. Each relationship table has these columns: Parent Object ID, Child Object ID, Relationship InfoObject ID, member, version, ordinal, data.
  6. CMS_LOCKS6 – This is an auxiliary table of CMS_RELATIONS6.

The Central Management Server (CMS) repository tables cannot be queried directly. Query Builder is the tool to use to retrieve BusinessObjects metadata information through virtual tables such as CI_SYSTEMOBJECTS, CI_INFOOBJECTS and CI_APPOBJECTS.
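As a quick taste of the SQL-like syntax Query Builder accepts (a sketch only; the available object kinds and SI_ properties can vary by BOE version):
SELECT SI_ID, SI_NAME, SI_KIND
FROM CI_INFOOBJECTS
WHERE SI_KIND = 'CrystalReport'
This would list the Crystal Reports known to the CMS, using the same metadata (name, type, ID) that is stored as InfoObjects.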
In forthcoming posts, I will discuss Query Builder and the File Repository Server.

Thursday, December 9, 2010

Building dashboards using Xcelsius with SharePoint data


Do you know – Xcelsius dashboards can be built using SharePoint data?

Figure 1: Xcelsius dashboard built using SharePoint data
Pre-requisites
  • Xcelsius for SharePoint server (XSP) installs a web part container into the Sharepoint web part library.
  • After installation and configuration, the web part container can be added to the Web Part Gallery within SharePoint and any user with the appropriate rights can add the web part container to a page within the site.
Xcelsius Web connectivity Parameters to use
  • SharePoint Parameter: Used to define parameters or properties in Crystal Xcelsius models that a user can change in the SharePoint environment
  • SharePoint Provider: Used to give data to another Web Part published in SharePoint
  • SharePoint Consumer: Used to get data from another published Web Part in SharePoint (a SharePoint List or another Crystal Xcelsius model)
  • Use the File -> Export -> SharePoint option to generate the .swf file with SharePoint functionality.
  • Once generated, you will be able to add and configure the SharePoint Web Part with the .swf file to publish the model
Building the Dashboard
  • Add the Xcelsius document to the SharePoint document list
  • Copy the shortcut to the uploaded document
  • Go to the SharePoint page
  • Click on Modify Shared web part
  • Add a web part
    • From the web part-gallery list select Crystal Xcelsius web-part
    • Use Modify Shared Web Part option
    • In the Xcelsius Visualization source, specify URL (paste the shortcut you copied from the document list) and apply
  • It is possible to customize models in real time (change the chart type, title, etc.) when they are defined as SharePoint Parameters
    • Use Modify Shared web part at run time
  • Add a SharePoint List, say "Sample", as a web part on the same page
  • In the Xcelsius web part configuration menu, the "Connections" option is enabled once a list is added to the page
  • In the connections, specify that the web part consumes the list from "Sample"
  • All SharePoint lists and Providers on the same page will be listed under the Connections option.
  • In order for an Xcelsius model to consume data from a SharePoint Provider, two models should be deployed on the same page – one for the provider and one for the consumer
  • The visuals in the .swf file with the consumer will change based on the data from the provider once the connection is set

Monday, November 1, 2010

Something I learned about Oracle Database 11g RMAN restore


Last weekend (Saturday night), I needed to restore a development database from an old backup. I had never done an RMAN restore before last Saturday. As the saying goes, "necessity is the mother of invention". Though it is not really an invention (RMAN has been around for a long time), I finally got to learn RMAN restore last week.
Our DBA was not available on Saturday. I needed to test a few things on the development database, and for that I had to restore a backup taken a couple of months earlier. So I followed the procedure below to restore the database using RMAN.
This may be a basic thing that all DBAs know about, but it is not something I do every day, so it was new to me.
First, I ran the "shutdown immediate" command to shut down my development database. Then I followed these steps to restore the database from an older backup taken by RMAN. The database was running on a Red Hat Enterprise Linux machine and the database version was 11.1.1.6.0.
$ rman

RMAN> list backup;
List of Backup Sets
===================

…….
I got the TAG details from here.
……
RMAN> restore datafile '/u02/oradata/OIMTST/system01.dbf' from tag = 'BEFORERECON';

RMAN> restore datafile '/u02/oradata/OIMTST/sysaux01.dbf' from tag = 'BEFORERECON';
….
RMAN> restore datafile '/u02/oradata/OIMTST/undotbs01.dbf' from tag = 'BEFORERECON';

RMAN> restore datafile '/u02/oradata/OIMTST/users01.dbf' from tag = 'BEFORERECON';

RMAN> restore datafile '/u02/oradata/OIMTST/oimtst4_tspace_01.dbf' from tag = 'BEFORERECON';

RMAN> list backup of controlfile;

RMAN> restore controlfile to '/u02/oradata/OIMTST/control01a.ctl' from tag = 'TAG20100820T112653';

RMAN> quit
Recovery Manager complete.

$
Copying the Control Files:
============================
cd /u02/oradata/OIMTST  # The control files are located here.
cp control01a.ctl control01.ctl
cp control01a.ctl control02.ctl
cp control01a.ctl control03.ctl

$ sqlplus / as sysdba….
SQL> startup
ORACLE instance started.
Total System Global Area 1073131520 bytes
Fixed Size                  2151248 bytes
Variable Size             264244400 bytes
Database Buffers          801112064 bytes
Redo Buffers                5623808 bytes
Database mounted.
ORA-01589: must use RESETLOGS or NORESETLOGS option for database open

SQL>
SQL> alter database open resetlogs;

Database altered.
SQL>
Hurray!!!! It is a success!!!
This was my first restore using RMAN. I knew the concepts earlier, but I had never actually restored a database like this before. I thought of sharing this knowledge.
We will meet in another post. Until then

Tuesday, October 26, 2010

Impact Analysis on Source & Target Definition Changes


Changes to source and target definitions impact the current state of Informatica mappings, and this article lists the possible changes at the source and the target along with their impact.


Updating Source Definitions: When we update a source definition, the Designer propagates the changes to all mappings using that source. Some changes to source definitions can invalidate mappings.
The following table describes how mappings are impacted when the source definition is edited:
  • Add a column – Mappings are not invalidated.
  • Change a column data type – Mappings may be invalidated. If the column is connected to an input port that uses a data type incompatible with the new one, the mapping is invalidated.
  • Change a column name – Mappings may be invalidated. If you change the column name of a column you just added, the mapping remains valid. If you change the column name of an existing column, the mapping is invalidated.
  • Delete a column – Mappings can be invalidated if the mapping uses values from the deleted column.

Adding a new column in the existing source definition:

  • When we add a new column to a source in the Source Analyzer, all mappings using the source definition remain valid.
  • However, when we add a new column and change some of its properties, the Designer invalidates mappings using the source definition.
  • We can change the following properties for a newly added source column without invalidating a mapping:
    1. Name
    2. Data type
    3. Format
    4. Usage
    5. Redefines
    6. Occurs
    7. Key type
If the changes invalidate the mapping, we must open and edit the mapping. Then click Repository > Save to save the changes to the repository. If the invalidated mapping is used in a session, we must validate the session.
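If many objects are affected, the repository metadata itself can tell you what got invalidated. A hedged sketch, leaning on the same MX views used in the earlier metadata queries (exact view and column names can vary slightly by PowerCenter version):
SELECT SUBJECT_AREA AS FOLDER_NAME,
MAPPING_NAME,
LAST_SAVED
FROM REP_ALL_MAPPINGS
WHERE IS_VALID = 0
ORDER BY SUBJECT_AREA, MAPPING_NAME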
Updating Target Definitions:
When we change a target definition, the Designer propagates the changes to any mapping using that target. Some changes to target definitions can invalidate mappings.
The following table describes how mappings are impacted when we edit target definitions:
  • Add a column – Mappings are not invalidated.
  • Change a column data type – Mappings may be invalidated. If the column is connected to an input port that uses a data type incompatible with the new one (for example, Decimal to Date), the mapping is invalid.
  • Change a column name – Mappings may be invalidated. If you change the column name of a column you just added, the mapping remains valid. If you change the column name of an existing column, the mapping is invalidated.
  • Delete a column – Mappings may be invalidated if the mapping uses values from the deleted column.
  • Change the target definition type – Mappings are not invalidated.

Adding a new column in the existing target definition:

  • When we add a new column to a target in the Target Designer, all mappings using the target definition remain valid.
  • However, when you add a new column and change some of its properties, the Designer invalidates mappings using the target definition.
  • We can change the following properties for a newly added target column without invalidating a mapping:
1. Name
2. Data type
3. Format
If the changes invalidate the mapping, validate the mapping and any session using the mapping. We can validate objects from the Query Results or View Dependencies window or from the Repository Navigator. We can validate multiple objects from these locations without opening them in the workspace. If we cannot validate the mapping or session from one of these locations, open the object in the workspace and edit it.

Re-importing a Relational Target Definition:
If a target table changes, such as when we change a column data type, we can edit the definition or we can re-import the target definition. When we re-import the target, we can either replace the existing target definition or rename the new target definition to avoid a naming conflict with the existing target definition.

To re-import a target definition:
  • In the Target Designer, follow the same steps used to import the target definition, and select the target to import. The Designer notifies us that a target definition with that name already exists in the repository. If we have multiple tables to import and replace, select Apply to All Tables.
  • Click Rename, Replace, Skip, or Compare.
  • If we click Rename, enter the name of the target definition and click OK.
  • If we have a relational target definition and click Replace, specify whether we want to retain primary key-foreign key information and target descriptions
The following table describes the options available in the Table Exists dialog box when re-importing and replacing a relational target definition:
  • Apply to All Tables – Select this option to apply the rename, replace, or skip action to all tables in the folder.
  • Retain User-Defined PK-FK Relationships – Select this option to keep the primary key-foreign key relationships in the target definition being replaced. This option is disabled when the target definition is non-relational.
  • Retain User-Defined Descriptions – Select this option to retain the target description and the column and port descriptions of the target definition being replaced.

Thursday, October 14, 2010

Output Files in Informatica


The Integration Service process generates output files when we run workflows and sessions. By default, the Integration Service logs status and error messages to log event files.

Log event files are binary files that the Log Manager uses to display log events. When we run each session, the Integration Service also creates a reject file. Depending on transformation cache settings and target types, the Integration Service may create additional files as well.

The Integration Service creates the following output files:
Session Details/logs:
  • When we run a session, the Integration Service creates a session log file with the load statistics, table names, error information, threads created, etc., based on the tracing level set in the session properties.
  • We can monitor session details in the session run properties while the session is running, failed, or succeeded.
Workflow Log:
  • Workflow log is available in Workflow Monitor.
  • The Integration Service process creates a workflow log for each workflow it runs.
  • It writes information in the workflow log such as
    • Initialization of processes,
    • Workflow task run information,
    • Errors encountered and
    • Workflows run summary.
  • The Integration Service can also be configured to suppress writing messages to the workflow log file.
  • As with Integration Service logs and session logs, the Integration Service process enters a code number into the workflow log file message along with message text.
Performance Detail File:
  • The Integration Service process generates performance details for session runs.
  • Through the performance details file we can determine where session performance can be improved.
  • Performance details provide transformation-by-transformation information on the flow of data through the session.
Reject Files:
  • By default, the Integration Service process creates a reject file for each target in the session. The reject file contains rows of data that the writer does not write to targets.
  • The writer may reject a row in the following circumstances:
    • It is flagged for reject by an Update Strategy or Custom transformation.
    • It violates a database constraint, such as a primary key constraint
    • A field in the row was truncated or overflowed
    • The target database is configured to reject truncated or overflowed data.
Note: By default, the Integration Service process saves the reject file in the directory entered for the service process variable $PMBadFileDir in the Workflow Manager, and names the reject file target_table_name.bad. We can view this file name at the session level.
  • Open the session, select any of the targets, and view the options:
    • Reject File directory.
    • Reject file name.
  • If you enable row error logging, the Integration Service process does not create a reject file.
Row Error Logs:
  • When we configure a session, we can choose to log row errors in a central location.
  • When a row error occurs, the Integration Service process logs error information that allows us to determine the cause and source of the error.
  • The Integration Service process logs information such as source name, row ID, current row data, transformation, timestamp, error code, error message, repository name, folder name, session name, and mapping information.
  • If we enable flat file logging, by default the Integration Service process saves the file in the directory entered for the service process variable $PMBadFileDir in the Workflow Manager.
Recovery Tables Files:
  • The Integration Service process creates recovery tables on the target database system when it runs a session enabled for recovery.
  • When you run a session in recovery mode, the Integration Service process uses information in the recovery tables to complete the session.
  • When the Integration Service process performs recovery, it restores the state of operations to recover the workflow from the point of interruption.
  • The workflow state of operations includes information such as active service requests, completed and running status, workflow variable values, running workflows and sessions, and workflow schedules.
Control File:
  • When we run a session that uses an external loader, the Integration Service process creates a control file and a target flat file.
  • The control file contains information about the target flat file such as data format and loading instructions for the external loader.
  • The control file has an extension of .ctl. The Integration Service process creates the control file and the target flat file in the Integration Service variable directory, $PMTargetFileDir, by default.
Email:
  • We can compose and send email messages by creating an Email task in the Workflow Designer or Task Developer; the Email task can be placed in a workflow or associated with a session.
  • The Email task allows us to automatically communicate information about a workflow or session run to designated recipients.
  • Email tasks in the workflow send email depending on the conditional links connected to the task. For post-session email, we can create two different messages, one to be sent if the session completes successfully, the other if the session fails.
  • We can also use variables to generate information about the session name, status, and total rows loaded.
Indicator File:
  • If we use a flat file as a target, we can configure the Integration Service to create an indicator file for target row type information.
  • For each target row, the indicator file contains a number to indicate whether the row was marked for insert, update, delete, or reject.
  • The Integration Service process names this file target_name.ind and stores it in the Integration Service variable directory, $PMTargetFileDir, by default.
Target or Output File:
  • If the session writes to a target file, the Integration Service process creates the target file based on a file target definition.
  • By default, the Integration Service process names the target file based on the target definition name.
  • If a mapping contains multiple instances of the same target, the Integration Service process names the target files based on the target instance name.
  • The Integration Service process creates this file in the Integration Service variable directory, $PMTargetFileDir, by default.
Cache Files:
  • When the Integration Service process creates memory cache, it also creates cache files. The Integration Service process creates cache files for the following mapping objects:
    • Aggregator transformation
    • Joiner transformation
    • Rank transformation
    • Lookup transformation
    • Sorter transformation
    • XML target
  • By default, the DTM creates the index and data files for Aggregator, Rank, Joiner, and Lookup transformations and XML targets in the directory configured for the $PMCacheDir service process variable.

Wednesday, September 29, 2010

Checking on Oracle Fusion Applications


Sun blogger Vijay Tatkar wrote in his blog about the eight technology innovations mentioned by Larry Ellison during his Oracle OpenWorld keynote speech last week. Nearly half of them were Sun hardware related (such as Exadata, ExaLogic, Sun UltraSPARC T3, etc.). Here is the list:

  1. Fusion Apps
  2. Unbreakable Linux Kernel
  3. Solaris Express 11
  4. UltraSPARC T3 chip
  5. MySQL 5.5
  6. Exadata
  7. Exalogic
  8. Java 7 and 8
Since the beginning, I have always been interested in knowing more about Fusion Apps, mainly out of curiosity. Oracle Fusion Applications were formally introduced at Oracle OpenWorld last week (OpenWorld 2010). As per Oracle's release, this was one of the major innovations, or the next big thing, for Oracle. As you are aware, the Fusion Middleware products were released already. Now it is time to talk about the Fusion Applications.
You may already be aware that I started my IT career as a web developer in a small web hosting company. I used to write Perl CGI code and host it on Linux servers running the Apache web server and a MySQL database. I got bored (or I wanted a change, I am not sure!) with that job and then moved into Unix system administration. I worked as a Sun Solaris admin and HP-UX admin for some time. Then I worked in PeopleSoft system administration for nearly 7 years, and I have been working on an Oracle Identity Management (part of the Oracle Fusion Middleware products) project for nearly the past year.
So, the question is “now what?” And how can we develop the knowledge for Fusion Apps Administration.
I am not sure when Fusion Apps will be deployed full-fledged in place of other ERP applications like PeopleSoft. I don't think it will be any time soon, but maybe after a few years Oracle may win over customers who are going to do a new implementation of an ERP application.
You know what, Fusion Middleware is to Fusion Apps what PeopleTools is to the PeopleSoft applications. PeopleTools is the abstraction layer on top of which all the PeopleSoft applications run. PeopleTools was originally built in C and C++ and eventually evolved into a Java technology, although I still feel some of it is C++ code. While Fusion Middleware is mostly Java and J2EE, I believe Fusion Apps will be more of a J2EE application than PeopleSoft is. I need to spend a little more time implementing Fusion Middleware and an application. As of now, I have only worked on the Identity Management product suite and a little bit of Oracle Portal technologies.
For an IT Infrastructure Administrator like me (who mainly works on Oracle Server Technologies), I think understanding the Fusion Middleware Stack will be important.
Talk to you later. Until then

Tuesday, September 28, 2010

Informatica Power Center performance – Concurrent Workflow Execution


What is concurrent work flow?
A concurrent workflow is a workflow that can run as multiple instances concurrently.

What is workflow instance?
A workflow instance is a representation of a workflow.

How to configure concurrent workflow?

1) Allow concurrent workflows with the same instance name:
Configure one workflow instance to run multiple times concurrently. Each instance has the same source, target, and variable and parameter values.
Eg: Create a workflow that reads data from a message queue that determines the source data and targets. You can run the instance multiple times concurrently and pass different connection parameters to the workflow instances from the message queue.

2) Configure unique workflow instances to run concurrently:
Define each workflow instance name and configure a workflow parameter file for the instance. You can define different sources, targets, and variables in the parameter file.
Eg: Configure workflow instances to run a workflow with different sources and targets. For example, your organization receives sales data from three divisions. You create a workflow that reads the sales data and writes it to the database. You configure three instances of the workflow. Each instance has a different workflow parameter file that defines which sales file to process. You can run all instances of the workflow concurrently.

How concurrent workflow Works?

A concurrent workflow groups logical sessions and tasks together, like a sequential workflow, but runs all the tasks at one time.
Advantages of Concurrent workflow?
This can reduce the load times into the warehouse, taking advantage of hardware platforms’ Symmetric Multi-Processing (SMP) architecture.
LOAD SCENARIO:
Source table records count:  150,622,276

Monday, August 30, 2010

Informatica Development Best Practice – Workflow


Workflow Manager default properties can be modified to improve overall performance, and a few of them are listed below. These properties can directly impact the ETL runtime and need to be configured based on:

i)  Source Database
ii) Target Database
iii) Data Volume


Session Properties
  • While loading staging tables for FULL loads, the "Truncate target table" option should be checked. Based on the target database and whether a primary key is defined, the Integration Service fires a TRUNCATE or DELETE statement:
    Database      Primary Key Defined        No Primary Key
    DB2           TRUNCATE                   TRUNCATE
    INFORMIX      DELETE                     DELETE
    ODBC          DELETE                     DELETE
    ORACLE        DELETE UNRECOVERABLE       TRUNCATE
    MSSQL         DELETE                     TRUNCATE
    SYBASE        TRUNCATE                   TRUNCATE
  • The workflow property "Commit Interval" (default value: 10,000) should be increased for volumes of more than 1 million records. The database rollback segment size should also be updated when increasing the commit interval.
  • Insert/Update/Delete options should be set as determined by the target population method:
    Target Option            Integration Service
    Insert                   Uses the target update option
                             (Update as Update / Update as Insert / Update else Insert)
    Update as Update         Updates all rows as updates
    Update as Insert         Inserts all rows
    Update else Insert       Updates existing rows, else inserts
Partition
  • The maximum number of partitions for a session should be 1.5 times the number of processors on the Informatica server, e.g. 1.5 x 4 processors = 6 partitions.
  • Key-value partitions should be used only when an even distribution of data can be obtained. In other cases, pass-through partitions should be used.
  • A source filter should be added to evenly distribute the data between pass-through partitions. The key value should have ONLY numeric values: MOD(NVL(<numeric key value>,0), <number of partitions defined>). Ex: MOD(NVL(product_sys_no,0),6)
  • If a session contains "N" partitions, increase the DTM buffer size to at least "N" times the value used for the session with one partition.
  • If the source or target database is an MPP (Massively Parallel Processing) database, enable Pushdown Optimization. When enabled, the Integration Service pushes as much transformation logic as possible to the source database, the target database, or both (FULL), based on the settings. This property can be ignored for conventional databases.

Thursday, August 19, 2010

Informatica Development Best Practices – Mapping


The following are generally accepted best practices for Informatica PowerCenter ETL development; if implemented, they can significantly improve overall performance.


Source Extracts
  • Loading data from fixed-width files takes less time than from delimited files, since delimited files require extra parsing. With fixed-width files, the Integration Service knows the start and end position of each column upfront, which reduces processing time. (Performance improvement)
  • Flat files located on the server machine load faster than a database located on the server machine. (Performance improvement)
Mapping Designer
  • There should be a placeholder transformation (Expression) immediately after the source and one before the target. Data type and data width changes are bound to happen during the development phase, and these placeholder transformations preserve the port links between transformations. (Best practice)
  • Connect only the ports that are required in the targets to subsequent transformations. Also, active transformations that reduce the number of records should be used as early as possible in the mapping. (Code optimization)
  • If a join must be used in the mapping, select the appropriate driving/master table; the table with the smaller number of rows should be the driving/master table. (Performance improvement)
Transformations
  • If there are multiple lookup conditions, place the condition with the "=" sign first to optimize lookup performance. Also, indexes on the database table should include every column used in the lookup condition. (Code optimization)
  • Persistent caches should be used if the lookup data is not expected to change often. These cache files are saved and can be reused for subsequent runs, eliminating the need to query the database. (Performance improvement)
  • The Integration Service processes numeric operations faster than string operations. For example, if a lookup is done on a large amount of data on two columns, EMPLOYEE_NAME and EMPLOYEE_ID, configuring the lookup on EMPLOYEE_ID improves performance. (Code optimization)
  • Replace complex filter expressions with a flag (Y/N). Complex logic should be moved to an Expression transformation and the result stored in a port; the Filter transformation takes less time to evaluate this port than to execute the entire logic itself. (Best practice)
  • The PowerCenter server automatically converts between compatible data types, which can slow down performance considerably. For example, if a mapping moves data from an Integer port to a Decimal port, then back to an Integer port, the conversion may be unnecessary. (Performance improvement)
  • Assigning default values to a port and transformation errors written to the session log will always slow down session performance. Try removing default values and eliminating transformation errors. (Performance improvement)
  • Complex joins in Source Qualifiers should be replaced with database views. There won't be any performance gain, but it improves readability a lot. Also, any new conditions can be evaluated easily by just changing the database view's WHERE clause. (Best practice)

Thursday, August 12, 2010

Change Data Capture in Informatica


Change data capture (CDC) is an approach or technique for identifying changes, and only changes, in the source. I have seen applications that were built without CDC and later mandated to implement CDC at a higher cost. Building an ETL application without CDC is a costly miss and usually a backtracking step. In this article we discuss different methods of implementing CDC.


Scenario #01: Change detection using timestamp on source rows
In this typical scenario the source rows have two extra columns, say row_created_time and last_modified_time. Row_created_time is the time at which the record was first created; last_modified_time is the time at which the record was last modified.
  1. In the mapping create a mapping variable $$LAST_ETL_RUN_TIME of datetime data type
  2. Evaluate the condition SetMaxVariable ($$LAST_ETL_RUN_TIME, SessionStartTime); this step stores the time at which the session was started into $$LAST_ETL_RUN_TIME
  3. Use $$LAST_ETL_RUN_TIME in the 'where' clause of the source SQL (see the sketch after this list). During the first run or initial seed the mapping variable has a default value and pulls all the records from the source, like: select * from employee where last_modified_date > '01/01/1900 00:00:000'
  4. Now let us assume the session is run on '01/01/2010 00:00:000' for the initial seed
  5. When the session is executed on '02/01/2010 00:00:000', the SQL would be: select * from employee where last_modified_date > '01/01/2010 00:00:000', thereby pulling only the records that changed between successive runs
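A minimal sketch of what such a source-qualifier SQL override could look like (the employee table and last_modified_date column are the illustrative names used above; the TO_DATE format is an assumption and must match how the mapping variable value is actually formatted):
SELECT *
FROM employee
WHERE last_modified_date > TO_DATE('$$LAST_ETL_RUN_TIME', 'MM/DD/YYYY HH24:MI:SS')
-- $$LAST_ETL_RUN_TIME is expanded by the Integration Service before the query is sent to the database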
Scenario #02: Change detection using load_id or Run_id
Under this scenario the source rows have a column, say load_id, a positive running number. The load_id is updated as and when the record is updated.
  1. In the mapping create a mapping variable $$LAST_READ_LOAD_ID of integer data type
  2. Evaluate the condition SetMaxVariable ($$LAST_READ_LOAD_ID, load_id); the maximum load_id is stored into the mapping variable
  3. Use $$LAST_READ_LOAD_ID in the 'where' clause of the source SQL (a sketch follows this list). During the first run or initial seed the mapping variable has a default value and pulls all the records from the source, like: select * from employee where load_id > 0; assuming all records during the initial seed have load_id = 1, the mapping variable stores '1' into the repository.
  4. Now let us assume the session is run after five loads into the source; the SQL would be: select * from employee where load_id > 1, thereby limiting the source read to only the records that have changed after the initial seed
  5. Consecutive runs take care of updating the load_id and pulling the delta in sequence
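Again, a hedged sketch of the corresponding source-qualifier override, using the same illustrative employee/load_id names:
SELECT *
FROM employee
WHERE load_id > $$LAST_READ_LOAD_ID
-- expands to the last value persisted in the repository for this mapping variable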
In the next blog we can see how to implement CDC when reading from Salesforce.com

Peoplesoft Connectors for Oracle Identity Manager – Part I


Introduction

A couple of weeks ago, I attended an Oracle webcast titled "Introducing Oracle Identity Management 11g". That webcast introduced the remaining components of the Oracle Identity Management product suite, which is part of Oracle Fusion Middleware 11g (we can call it a second set of product releases!).

During the first-phase release of the Oracle Fusion Middleware components, Oracle released a few components such as Oracle Internet Directory (OID), Oracle Virtual Directory (OVD), etc. Along with a couple of other components, the following are the major software releases (as part of the second release) of the new Oracle Identity Management 11g product suite:
  • Oracle Identity Manager
  • Oracle Access Manager
  • Oracle Identity Analytics
  • … and few others …
The Oracle Identity Management 11g product suite provides Identity and Access Management (IAM) functions along with compliance- and security-related solutions. As usual, more features have been added in 11g, such as a security development platform and integration with Fusion Middleware.
In this blog series, I am going to talk more about the Oracle Identity Manager (OIM) product. Let us first understand the Oracle Identity Manager product and its features, and then we will talk about the various options available for integrating PeopleSoft systems with it. I used my personal experience with the product and referred to the Oracle Identity Manager 11g Release 1 documentation, which consists of various guides available as part of the Oracle Fusion Middleware documentation. If you need in-depth knowledge about this product, you should refer to these manuals. Let's understand the OIM product first.

About Oracle Identity Manager

One of the major and most important Oracle Identity Management components is Oracle Identity Manager (OIM). Earlier, this product was called Xellerate Provisioning (from a company called Thor Technologies). The OIM product provides a central repository to store user and group information for any organization. One of the important features of OIM is that it can integrate with the various target systems available (such as PeopleSoft HRMS, SAP, Active Directory, Siebel, etc). Various other Oracle products, such as JD Edwards, EBS and Oracle Retail, have connectors as well.
I like the OIM Connectors page at the Oracle website; you should visit it once. There are connectors for most commonly used products in the market (such as Sun Java Directory, Novell eDirectory, SAP products, databases, Siebel, etc). In this post, I want to explore the PeopleSoft connectors and how we can deploy them in an enterprise implementing OIM. I am going to provide a conceptual understanding only; for more details on the connectors, you should refer to the connector documentation (search for "oracle identity manager connector documentation" to visit the connector documentation page). Also, other products (that have no connectors) can be integrated with OIM using Generic Technology Connectors (GTC), which are delivered as part of the OIM product. We will talk more about GTC in later posts.

Integrating Peoplesoft HRMS system with OIM

PeopleSoft HRM (or HRMS) systems are ERP systems deployed in many enterprises across the world; Hexaware implements and supports many such PeopleSoft HRMS systems across the globe. There are two PeopleSoft connectors available for the OIM product. They are:
  • PSFT Employee Reconciliation Connector
  • PSFT User Management Connector
These two connectors are used for different purposes in a Peoplesoft based environment. Let’s explore the use of these connectors using an Architecture diagram. I created the following diagram to show the integration and the use of PSFT connectors.
In this High-level Architecture, I used an existing Peoplesoft HRMS System as a trusted source for OIM. OIM will play a role of central repository to store user and group information. The User Provisioning will be happening to multiple target systems mentioned in the diagram.
PSFT Employee Reconciliation Connector is used to perform trusted source reconciliation with Peoplesoft HRMS system. In this scenario, Peoplesoft HRMS system is the source for all the user or employee related information during the entire user management lifecycle (user add, user delete, user modification etc). There are two versions of the PSFT Employee Recon Connector.
  • Version 9.0.4.x
  • Version 9.1.x
If you are on PeopleTools 8.48 or an earlier release, then you should opt for 9.0.4. For a detailed list of supported releases, you can refer to the connector documentation.
Both Version 9.0.4.x and Version 9.1.x use the Integration Broker architecture for integrating with OIM. As you are aware, the IB architecture changed considerably starting with PeopleTools 8.48, and new features were added in PeopleTools 8.49. For Version 9.1.x, the supported PeopleSoft HRMS releases are 8.9, 9.0 and 9.1 with PeopleTools 8.49 and 8.50.
Let's explore these two PeopleSoft connectors for OIM in future posts. I would really like to share and learn more about these connectors, mainly for two reasons: I worked as a PeopleSoft admin for many years, and I recently learned the basics of OIM. Let's meet in the next post. Until then

Thursday, July 8, 2010

Marketing Automation in B2B – Separating the Wheat from the Chaff


The B2B landscape in its inherent form is a complex jar of beans primarily because the initial connection needs lots of nurturing with the right mix of appropriate communication to ensure the “best weather” for sales interaction. Marketers not only have to measure outcomes right up to revenue but also find the “sweet spot” for marketing and sales to drum up the right notes.
The year 2009 and the first half of 2010 saw a marked shift towards marketing automation worldwide. All of this has helped channel information and reach out to prospects better, yet it is pertinent to note that today's internet-savvy prospect is also armed with qualifying information about your brand, your products and your competitors as never before. To get inside the mind of the B2B buyer, marketers not only need to understand his intent from his digital body language but also ensure that the automated lead generation processes in place scale up in terms of the following pertinent factors at any point of time.
  • Are lead recycling programs in place for not-sales-ready leads?
  • Has social media, inbound marketing and marketing automation been integrated seamlessly?
  • Is your marketing communication supported by buyer-centric collaterals that help the buyer decide in your favor?
  • Has your data been data washed and scrubbed clean?
  • Do your web metrics provide actionable information for user profiling and conversion?
  • Are sales and marketing on the same latitude to provide your prospect the best buying experience?
  • Does your Social media spin influence the markets conversation about your brand effectively?
  • Is your opt-in list getting fresh brew in the form of persuasive communication and supporting newsletter value?
  • Is your marketing funnel measurable and process definitions flexible to innovation?
  • How effective is your conversation model, does it ensure that you are at top of mind when prospects decide to bite the bait?
  • Are you able to capitalize on marketing automation’s great benefit – reporting effectively and use it as a strategic tool?
  • Do your data-centric marketing plans lead the way for greater customer intelligence, since the value of data will not be a constant?
The above are just a few important cogs that can make or break your lead generation wheel. As marketers brace themselves to capitalize on marketing automation to enhance pipeline opportunities, trends all point to an explosive growth in marketing automation adoption. It is highly imperative that automation vendors provide more sophisticated reporting, better sales engagement processes and social media integration.
To help marketing efficiently separate the wheat from the chaff, marketing automation should not just serve as a driver of operational efficiency but more importantly enhance continuity of dialogue with prospects throughout the decision making/buying cycle at all relevant touch points.
Ultimately it is all about the harvest – the pipeline and revenue; the executive leadership will not mind how you do it.

Tuesday, July 6, 2010

Oracle Internet Directory LDAP Replica States in Fusion Middleware 11g


Oracle Internet Directory LDAP Replica States in Fusion Middleware 11g (11.1.1)

In the Oracle Fusion Middleware 11g documentation (I think I was referring to version 11.1.1 of the doco), you can find the OID Administrator's Guide. As the name suggests, this is the most important and valuable guide for Oracle Internet Directory administrators. I think I have read most of this guide already, yet I still refer to it, since it contains a lot of information (and it is a reference guide too).
I want to write about the LDAP replica states mentioned in Appendix D (How Replication Works) of this guide. In Fusion Middleware, Oracle provides a lot of detail about Oracle Internet Directory replication. Earlier this information was scattered around the Oracle Support website and was difficult to find; now, I think, Oracle has collected most of it in this guide.
If you are working on, supporting, or planning to implement an OID replication high-availability environment, then you should be familiar with this section of the guide. The replica state information is useful if you are running an LDAP-based replica (just to refresh your memory, there are two types of replication possible, ASR based and LDAP based – ASR is based on database links, while LDAP-based replication uses an LDAP client process).

orclReplicaState Attribute

The orclReplicaState attribute stores the replication state for an LDAP-based replica. You can check the current replica state of the OID using the ldapsearch command. (In a live system that uses LDAP-based replication, it will be set to the numeric value of 1, which means it is in the online state.)
You need to run the following ldapsearch and check the orclreplicastate attribute as shown below. Please make sure to replace the argument values with ones specific to your site; I just gave an example.
ldapsearch -h localhost -p 389 -D cn=orcladmin -w password -b "orclreplicaid=local_replica_ID, cn=replication configuration" -s sub objectclass=*
You need to check the value of orclreplicastate in the output. Alternatively, you can fetch the orclreplicastate attribute value directly, as in the example below:
ldapsearch -h localhost -p 389 -D cn=orcladmin -w password -b "orclreplicaid=local_replica_ID, cn=replication configuration" -s sub objectclass=* orclreplicastate
The local_replica_ID is specific to your installation; normally it is machine_database. You can check the value using an ldapsearch query.
Ldapsearch argument description:
  • -h – Hostname or IP address of the LDAP directory server. I used localhost since I am running this command on the same server where OID is running.
  • -p – Port number of the LDAP directory; the default LDAP port is 389, and the LDAPS port is 636. If you use port 636, then you should also pass the -U argument.
  • -D – Bind DN; the LDAP DN used for connecting to the LDAP directory.
  • -w – Password for the Bind DN; it is site specific.
  • -b – Base DN for the search; here it starts from the top.
  • -s base – Search scope is base (other values are sub and one).

orclReplicaState possible values in 11g

There are 9 LDAP replica states mentioned in this guide (in 10g OID, there were only 7; it looks like Oracle added two more LDAP replica states in 11.1.1). As I mentioned earlier, in a normal production system that uses LDAP-based replication, orclreplicastate is set to the value 1 automatically when the replication server starts for the first time.
Let’s list the LDAP replica states:
  • 0 – Bootstrap. This is one of the important values: you can set up a new LDAP-based consumer replica using this value. Let's talk about it in the next blog.
  • 1 – Online; used for regular replication processing.
  • 2 – Offline
  • 3 – Bootstrap in progress
  • 4 – Bootstrap in progress + cn=orclcontext completed
  • 5 – Bootstrap completed with failures
  • 6 – Database based
  • 7 – Sync schema only (not data)
  • 8 – Bootstrap without schema sync (only data)
In an LDAP replication high-availability environment, it is a must that you understand these values and their significance. Let's talk about these values and how we can exploit this attribute in the coming blogs. Until then

Wednesday, June 2, 2010

Features and Enhancements in SAP TAO 2.0 from SAP TAO 1.0

The SAP Test Acceleration and Optimization™ (SAP TAO™) software streamlines the creation and maintenance of ERP business process testing. It helps quality analysts break down an application into components and assemble test cases through a simple drag-and-drop interface in Quality Center. Test scripts can be parameterized for flexible reuse and maintained easily and inexpensively, even when screens, flows, or service packs change.
Features of SAP TAO ™ 1.0 version released in 2007:
Inspect: Captures the data in a screen or transaction and determines its validity. It enables you to create and maintain a list of transactions and screens.
Import/Export: Primarily runs in background mode to export and import data from the SAP Test Acceleration and Optimization™ client to the SAP Quality Center.
Consolidator: Gathers all the objects and data in an SAP Quality Center test script and creates a single component.
Connect: Connection settings for SAP and Quality Center
Enhancements in SAP TAO ™ 2.0 versions released in 2009:
PFA (Process flow Analyzer): It records user interactions and the sequence of screens to execute a business process, in the SAP TAO ™ repository. It automates inspection and creation of the test components and a parameterized draft transition test case. It automatically creates the data table spreadsheet with the DT columns and values used during the recording process.
Repository: The SAP Test Acceleration and Optimization™ repository is part of the SAP Solution Manager system and is used to store:
User interaction and sequence of the screens in a business process.
Information specific to SAP Test Acceleration and Optimization™ that cannot be retrieved by other tools.
Change Analyzer: It helps you to analyze the impact of changes due to upgrades, SAP patches or Custom development on a test, components or consolidated component.

Thursday, May 27, 2010

Some myths and challenges faced by BPT Methodology

Is designing test for an entire business scenario a time-consuming process?
Business Process (BP) – a BP should be well documented, written at a level that can be scripted. The BP should be understandable by people not familiar with the business, and it should be sufficiently detailed. If possible we can have Mercury Screen Recordings (MSR), navigational flows, etc.

Data – The input data required for executing the BP should be reusable. If it is not reusable, a sufficient set of data should be provided for automating the BP. For most finance modules the data is not reusable.

The Business Process should be complete, correct and accurate. Verifying BP completeness and correctness takes a lot of time, so it is better to start with these steps in mind: verification time should be used to ensure the quality of the scripts instead of verifying the quality of the BP. Don't assume anything; design the test cases so that any person can run or execute them, and so that any person can understand which steps are to be validated.

Generally speaking, the more user involvement a BP requires, the less desirable it becomes for automation. Though scripts built during the development process execute successfully, they are liable to fail during regression execution. The common errors and solutions can be summarized as:

Wrong data is being used: Most of the time the script fails due to a mismatch between the data displayed in the application and the data stored in the data table. The solution is to change the incorrect value in the data table.

Script flow has changed due to a new build or new data: If this is the case we need to modify the existing script to add or remove the necessary steps. This is undesirable because we are automating scripts against the same process, and it may delay our delivery date.

The script is running with a different user: The solution is to use the same user that we used during the development phase, or to give the new user the same privileges that the development user had.

Data is burned: If this is the case we need to get or create new data in order to execute the script.

Are your scripts backward compatible i.e., can they run in lower versions?
Script execution in lower versions of QTP: The other major problem is that scripts are not backward compatible, i.e., scripts built in an upgraded version of QuickTest Professional such as 9.2 cannot be executed in 9.0 or lower versions. This problem can be overcome by using the BCIE (Business Component Import and Export) tool. Using this tool we can import a reusable component from a higher version of QC and export it to a lower version of QC. Scripts are then compatible with lower versions.

Does Subject Matter Experts require technical knowledge?
Even though creating Business Process test scripts using accelerators might seem like a simple task, there are multiple factors and caveats that need to be taken into account.
1. The initial component base might prove complete for some BPs, but there will surely be a need to create new components in order to complete all the scripts.
2. Though the UI Scanner automatically generates the required components, it might be necessary to manually modify the QTP code, or even to manually create a whole component.
3. VBScript knowledge is welcome, since sometimes there might even be a need to create new functions in the libraries, or modify the existing ones. It is always better to have a technical architect in the project dedicated to handling libraries, tool installation, debugging and troubleshooting. A technical architect should have considerable knowledge of the library architecture and an understanding of descriptive programming, and should understand the relationship between accelerator libraries and Business Process Testing. The responsibilities of the technical architect would be to provide the team with wrappers and

Are accelerators only for ERP applications?
Accelerator projects are not restricted to ERP applications like SAP, PeopleSoft, Siebel and Oracle. They can be customized for applications like Metavance and web-based applications like the SAP web portal by our technically expert development team.

When to automate?

  • Business scenarios that will run with each new version of your application
  • Business scenarios that use multiple data values for the same operation
  • Business scenarios that create data for additional business scenarios
  • Business scenarios that require little end-user decision making
  • Complex or lengthy business scenarios that are often run during the business day

Monday, May 17, 2010

Fusion Middleware: New features in Oracle Internet Directory


Going forward, I am planning to write more about Fusion Middleware 10g, Fusion Middleware 11g and Oracle Database 11g. These are the areas in which I am developing more interest nowadays. I am currently working on Fusion Middleware 10g. First of all, I am learning this new software, and when I write here I feel my knowledge level increases. The first Fusion Middleware components I am going to write about are Oracle Directory Server and Oracle Internet Directory. Both are LDAP directories from Oracle and part of Fusion Middleware (why two LDAP directories as part of Fusion Middleware? – think about it).
I have worked with multiple LDAP directories over the last few years. An LDAP directory is software that stores information, entries, or data in a tree-like format for easy access, and it is based on a standard. In my experience with LDAP directories, these are the major ones:
  • Oracle Directory Server (earlier Sun Java/Iplanet Directory)
  • Novell’s eDirectory (earlier NDS)
  • Microsoft’s Active Directory (AD)
  • Oracle Internet Directory (OID)
  • openLDAP
Among these, I like Sun Java Directory (now Oracle Directory Server) the most. That is because I worked on it first, it came from Sun, it is standards based, and it works well in heterogeneous environments. There are various other reasons, but we will talk more about Oracle Internet Directory here.

What is Oracle Internet Directory?

Oracle Internet Directory is a LDAP Version 3 Compliant Directory Server from Oracle Corporation. Oracle Internet Directory (OID) is used in most of the Oracle Components (such as Oracle Single Sign On) and is one of the primary components delivered as part of the Fusion Middleware.
OID is used to integrate Oracle middleware and applications and is mainly used with Oracle applications. Oracle Internet Directory stores its data in an Oracle Database; the directory store is an Oracle Database, which is a required component to run Oracle Internet Directory. This is one of the major differences from the other four major LDAP directory servers.

New Features of OID in Fusion Middleware 11g

OID is delivered by Oracle for use with Oracle Identity Management. It was part of the Oracle Application Server "Application Infrastructure" component, so Oracle Internet Directory is not a new component delivered as part of Fusion Middleware; it was already there in version 10g as well.
I am currently working on Oracle Internet Directory version 10.1.4.2.0. The Fusion Middleware version of Oracle Internet Directory is called 11.1.1. There are a few improvements between these two versions, and I noticed that they lie along the following lines.

1. Manageability Features

Oracle Directory Services Manager and integration with Weblogic Admin Server are the major changes in the OID Version 11.1.1.  Fusion Middleware is Weblogic-Centric. So it is time to learn Weblogic again. Oracle Process Manager and Notification Server (OPMN) is still used in Fusion Middleware for managing OID, as well as other components.
ODSM (Oracle Directory Services Manager) is replacing Oracle Directory Manager (oidadmin). ODSM is a new web-based management tool for managing Oracle Internet Directory in Fusion Middleware 11g.

2. Replication Features

One of the important features is that you can now set up multi-master replication using the LDAP-based replica model. In earlier versions this was not possible; you needed to use ASR-based replication to set up multi-master replication.

3. Instance Configuration

There are changes in the configset information. Now every instance can have separate rootDSE information. This was one of the major issues in the earlier version. I need to explore this option more and will write about it later.
One last important note: why is Oracle delivering two separate LDAP directories as part of Fusion Middleware 11g, or as part of its Directory Services offerings? Why does Oracle support both Oracle Directory Server and Oracle Internet Directory? This is because Oracle applications are tightly integrated with Oracle Internet Directory; for example, Oracle Single Sign-On needs Oracle Internet Directory. This is one of the reasons Oracle is unable to move to Oracle Directory Server. Let's hope this will change soon.
Let’s talk more about OID in coming weeks.  Until then


Thursday, April 29, 2010

Moving to Oracle Server Technologies


Believe me, life is not easy when you work for a vendor company such as Hexaware Technologies, where I work (Hexaware is an Oracle Platinum Partner as well). I have to learn new things in the little time I get; sometimes you have to learn a lot of things in less than a few hours. For a person like me, this is exactly what I want and like to do. Learn new things all the time!!! That is my motto!

One thing I like the most here is that I have the freedom to move to other IT technologies in which I have little or no experience. However, that is not easy for a person like me, or anybody for that matter. You have to keep learning and understanding the new things that come up.

As you are already aware (or if you are reading my blog for the first time), I started my IT career as a web developer doing Apache and Perl CGI development (really old technologies!!). After a couple of years, I got bored with web development. Then I moved into Unix system administration, working mainly on Solaris and HP-UX and related hardware and software. And again, I got bored with UNIX administration and moved to PeopleSoft infrastructure and administrator positions.

I was a happy person (I am still happy!!) for almost 7 years working on PeopleSoft infrastructure for many clients. Now, I have got an opportunity to work on Oracle Server Technologies here, especially Oracle Database, Oracle Identity Management and Oracle Fusion Middleware technologies.

If you are in the IT industry, you have to know one thing for sure: keep learning. We have to develop the mentality that kids have; they are curious to learn new things all the time. This is an important quality you have to develop if you want to excel in an IT technical career. You have to develop curiosity to learn new things (from the internet, from other blogs, from colleagues, from peers, from managers and almost everywhere!).

I started working on Oracle Server Technologies (Oracle Database, Oracle Application Server, Oracle Fusion Middleware, Enterprise Manager, etc.) less than a year ago. However, before starting, I had a fundamental understanding of what they are and why we need them. You cannot build this in one day; you should be aware of other technologies. One major thing that helped me was my UNIX skills. I am able to solve almost any problem if it runs on UNIX.

There are two things you have to understand in the UNIX world: everything is handled as a file, and everything that runs on the server is a process. If you internalize these two simple facts, then I am sure you will be able to fix any server, anything that runs on UNIX/Linux.
Okay, I think we are going off topic. Other than books and internet, I use two simple ways of learning.

a) Blogging
b) Teaching/Mentoring

Both of these are not easy for me. I have to develop mastery to some level before I start teaching someone. Believe me, it is not easy to teach, especially in the IT industry, where new things pop up almost every second. That is why I wanted to start blogging more often and conduct more mentoring classes at Hexaware.

And now, within the last one year, I have gained quite a bit of expertise in Oracle Server Technologies. During this time, my experience with UNIX, web development and PeopleSoft really helped a lot in understanding the architecture of the Oracle Server Technologies. I am still learning new things every day (that is why I want to write here; at least I can use them later!).

I want to use this new blog site to start sharing knowledge, write about errors or failures and how we handle them (lessons learned) etc. I will start with a new topic here soon. Until then.