Thursday, February 18, 2010

PeopleSoft HCM 9.1 Succession Planning


How important is it to an organization?

The recession storm seems to have settled, and IT companies are on a hiring spree, much like the trend seen in Formula One racing with Schumi’s re-entry and a good show by the Force India team and Adrian. It all seems to drill down to the race line, everyone eagerly waiting to hit the gas pedal in this 2010 season.

Yes, the software industry too is looking forward to a jump start, and many IT organizations are preparing themselves for the race, the race that will win them new clients in new areas as the world economy gets back on its once-fruitful track.

As many of you are aware, Oracle has come up with PeopleSoft 9.1 with some cool features. Let’s see why one of the new features of PeopleSoft 9.1 seems interesting, as it does justice to HR operations with its functionality. One of the reasons Indian IT organizations spend so much effort and cost on hiring new employees, especially for high-profile positions, seems to be the lack of succession planning and career planning.

Succession planning enables an employer to organize its talent pools based on the employee’s person profile, job code, position and a plan that is unique to it. Initial and continuous mapping of an employee’s competency, performance and interests to the employer’s goals (positions, job codes) makes it easier for the organization to plan and sustain its growth when it wins new projects. As stated in
Learning Management System drives a company’s growth
PeopleSoft 9.1 brings in easier integration options with the Enterprise Learning Management (ELM) module, thereby providing consistent career paths for employees to pursue in order to accomplish succession planning. Both HR administrators and managers are well equipped with the Visualization (user-friendly reports) and Self Service layers to accomplish these tasks.

Features like tracking an employee’s successor in terms of number of years make the job even easier for HR to identify, or mine through, the organization’s talent pool.
Let’s see how quickly IT organizations implement the succession planning feature of PeopleSoft 9.1, and let’s assure them that it is a vital part of a growing organization.

PeopleSoft 9.1 HCM Compensation and Performance Appraisal Cycle


It’s appraisal season. Like the leaves that change color in fall and drop off their trees, the IT sector may witness a lot of attrition during and after the appraisal cycle, the reason being the outcome of the appraisal process.

It is vital for a growing organization to streamline its appraisal process by planning its funding channels, organizing the pay components and administering them.

Existing PeopleSoft features such as Tree Manager, Approval Workflow Engine (AWE), the HCM Delegation Framework and pagelet wizards are interwoven into a streamlined process to accomplish what Oracle calls PeopleSoft 9.1 Compensation. Interestingly, the interoperability with ePerformance, Core HR and variable compensation adds more value and increased ROI for organizations that implement PeopleSoft 9.1 HCM Compensation.

The first thing an organization eyes is its budget and funds for an appraisal/compensation cycle, followed by the planning and allocation of collective compensation aspects. Embedded analytics and a user-friendly interface enable compensation administrators and managers to build what PeopleSoft terms a “Compensation Cycle”. Initial setup involves defining proration rules, rounding rules, salary plans, action reasons and so on, and configuring the compensation matrix.

Example Appraisal – Compensation configurable matrix:

Rating   Funding Pct   Min Percent   Max Percent
------   -----------   -----------   -----------
6        42            42            42
5        30            30            30
4        19            18            19
3        8             8             8
2        2             2             2
1        2             1             2


For a compensation cycle, the funding overview for salary plans summarizes the total headcount involved in the appraisal cycle along with the calculated total salaries, calculated amount, calculated percent, qualified headcount, qualified salaries, funded amount, funded percent, proposed amount and proposed percent.
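As an illustration only (this is not PeopleSoft code; the names and the reading of “Funding Pct” as a per-employee percentage of salary are our own assumptions), a small Python sketch of how a funding overview could be derived from a configurable matrix like the one above:

```python
# Hypothetical sketch of a funding overview computed from a rating-based
# compensation matrix. Matrix rows: rating -> (funding %, min %, max %),
# mirroring the example matrix in the post.
MATRIX = {
    6: (42, 42, 42),
    5: (30, 30, 30),
    4: (19, 18, 19),
    3: (8, 8, 8),
    2: (2, 2, 2),
    1: (2, 1, 2),
}

def funding_overview(employees):
    """employees: list of (rating, annual_salary) tuples.
    Returns headcount, total salaries, and the funded amount implied
    by applying each employee's funding percentage to their salary."""
    headcount = len(employees)
    total_salary = sum(sal for _, sal in employees)
    funded = sum(sal * MATRIX[rating][0] / 100 for rating, sal in employees)
    return headcount, total_salary, round(funded, 2)

print(funding_overview([(6, 100000), (3, 50000)]))  # (2, 150000, 46000.0)
```

In the real product the administrator would configure this matrix on the setup pages and the system would produce the overview; the sketch only shows the arithmetic involved.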

Once the compensation cycle (01-JAN-2009 to 31-DEC-2009) is defined, the appraisal team works with the variable compensation plans and compensation rules to manage the available funding for the cycle. The payout periods and payout types are also defined, using cash or available stock options; this is one of the features through which many employers provide stock instead of a cash hike. After the compensation team, managers use PeopleSoft Manager Self Service to update the appraisal information of an employee or a group (direct or indirect reports) along with their review ratings. The approval process plays a vital role in approving the planned and updated compensation details using the delivered roles of “submitter”, “reviewer” and “confirmer”.

On the other hand, the administrator is equipped with the compensation dashboard, which constitutes the process flow Build -> Open -> Load -> Close and a status history that manages the managers’ access rules (notification period, default, review period, update period).
PeopleSoft 9.1 Compensation also allows employers to handle exceptional cases during an appraisal cycle by incorporating “Key Resource Bonuses” in the award plans present in the compensation module.
In total, PeopleSoft 9.1 HCM Compensation seems quite promising in bringing transparency and a streamlined appraisal cycle to a growing organization.

Know More About: PeopleSoft 9.1 HCM

Wednesday, January 20, 2010

Rising SaaS OnBoarding Solution and Dominant PeopleSoft HR


2010 seems to mark a new beginning in both IT services and F1 racing. Cool, we are awaiting growth in SaaS, SOA and more, and some great adrenaline rush as Schumacher returns. For people who still believe that PeopleSoft is leading the HR services market, let’s remind them of the advent of Kronos Workforce (Application Service Provider), KMS XpressHR (Software as a Service, SaaS, solution) and the Open Text Recruiting Management Solution for Microsoft SharePoint 10.0, driven by enterprise content management solutions like Livelink and Document Center respectively. Clients are really happy to pay for what they use, unlike the age-old model of fixed pricing.

Let’s see how the KMS XpressHR SaaS onboarding solution attracts clients with its functions when compared to PeopleSoft:
  • In PeopleSoft, the SSN entered while adding a new person to the system is not validated or verified with the Department of Homeland Security or the Social Security Administration (SSA). This has led clients to incorporate the XpressHR onboarding product, which includes the e-Verify functionality.
  • In PeopleSoft, the unavailability of features like e-signature and digital content management in HR has enabled XpressHR to provide 100% compliance with the state and federal data related to the pre-hire process within an organization.
  • A SaaS-based onboarding process and Document Center model would enable clients to leverage increased ROI.
  • Implementation effort seems to be less with the use of web services.
It’s these few aspects that drive present-day software investments. As posted in Service Oriented Computing Platform for Shared Services Model, the service providers whose vertical solutions (XpressHR) are built on horizontal solutions (ECM) are undoubtedly growing.

Monday, December 21, 2009

Living with the Enemy

Just imagine two people who hate each other to death being forced to live in the same house, continuing to inflict pain on each other in every possible way. Do you get the picture? That is what is happening between Oracle and SAP. To Larry Ellison, there is no enemy worse than SAP, and for SAP (even though they downplay it) the number one on their enemy list is Oracle.

Even though SAP hates Oracle this much, SAP cannot stop being a major sales channel for Oracle’s database. Oracle is still the number one database in the SAP customer base. It’s also no secret that many SAP customers would love to stop paying a premium price for a database that is functionally underused by the SAP product line.
But now there is hope for SAP, a new reason for SAP to get rid of the traditional relationship between its software and database leader Oracle. If SAP can develop its applications to be less dependent on disk-based databases, that would be the first step in reducing the dependency.
Speed is Money
That was what Mr. Plattner proclaimed in his keynote address at the Sapphire conference. He said the new world of in-memory computing is the next in-thing in enterprise software. An in-memory database system (IMDS) is designed explicitly for real-time applications and for embedded systems such as set-top boxes, telecom equipment, consumer electronics and other connected gear. The in-memory database minimizes RAM and CPU demands and offers unmatched performance, reliability and development flexibility.
Enterprise software companies could learn from the techniques used in game software development, where in-memory database usage is already making a big impact in getting the maximum output from multi-core CPUs. Mr. Plattner did not promise that SAP is developing the in-memory concept into an SAP product, but he made it very clear that it is the way forward.
Oracle Killer
The desire to kill Oracle is not newfound for SAP. As early as 2005, Shai Agassi, the then president of the product technology group and a member of the SAP executive board, elaborated on the company’s programs to improve the in-memory capability of its software. In-memory capability is a new way to manage data. The largest databases in the world are data warehouses, and these databases get the most complicated queries, which they need to process as fast as possible. This requires an enormous amount of CPU power. The testing ground for SAP’s new database strategy can be found in a product the company released a few years back: the BI Accelerator, or BIA. Among its many attributes, BIA is a hardware appliance that runs SAP Analytics incredibly quickly. The potential “Oracle killer” part of BIA comes from its in-memory database functionality, which processes database queries in RAM with no need to access disk-based storage, or even have a relational database at all, which means no costly database license, no massive data-storage farm, no expensive DBAs, and so on.
The idea of in-memory query management at SAP is hardly new. Back in the late 1990s, SAP unveiled LiveCache, an in-memory processor for what was then called the Business Warehouse. LiveCache was a little ahead of its time for lots of reasons, starting with the fact that CPU and memory costs were still relatively high for what SAP had in mind. In the end, LiveCache failed to live up to expectations. But it still survived as the in-memory query engine for SAP’s Advanced Planner and Optimizer (APO) supply-chain product.
LiveCache made history in SAP benchmarking, giving an indication of the response times that are possible using an in-memory database engine. Based on SAP’s own benchmarking standard — the SAP Standard Applications Benchmark — SAP’s hardware partners have had a glorious time leapfrogging each other in recent years to see which could achieve the best response times with LiveCache.
So it is more a question of when the killer will arrive. Someday soon, we will have a choice between the status quo and a radical new database approach. What will you choose if the newer approach is cheaper, faster and effective?
Read More about  IMDS

Monday, December 14, 2009

Service Oriented Computing Platform for Shared Services Model


It is the need of the hour for business entities to revisit their growth plans and to perform consistent checks on their performance and operational utilization. One of the driving factors in the transformation of Application Service Providers (http://blogs.hexaware.com/pitstop/application-service-provider-vs-software-as-a-service-asp-vs-saas.html) to Software as a Service is the change in information technology trends.

Human-computer interaction and interoperable applications seem to be the cloud where future software investments will be. Let’s look into the evolving service-oriented computing platform, which answers a lot of present-day business needs.

Product-based companies were quick to incorporate service orientation into their product designs, mostly following the agile software development methodology, whereas service-based companies were quick to incorporate shared resource pools to evolve Shared Services Support.

Initially, the shared services model was spotted within organizations as specialized groups of system- and vendor-specific administrators, and it later evolved into ESSA (http://www.hexaware.com/shared-services.htm). The proposed service-oriented computing platform for such a shared services model would further enhance customer satisfaction, as the service design principles are followed.

Going by the tag “If resources are not retainable, let’s retain their knowledge within the organization”, the service-oriented computing platform goes one step further than traditional vertical solutions (CRM, TAM, etc.) by providing an SOA-based solution utilizing present-day horizontal solutions (Livelink ECM). This open-technology solution is vendor independent and can accommodate the existing applications of Oracle (PeopleSoft, Siebel), Microsoft (SharePoint), SAP, Kronos Workforce Timekeeper and others by establishing a metadata-driven knowledge solution for the shared services resource pool to work on.

A few key advantages of this service-oriented solution for the shared services model are:

a. Reusability – the redundant support (technical/functional) effort is minimized.

b. Service governance.

c. Discoverability.

d. Less time consumed in performing a support activity.

e. Knowledge service composition – enabling the maintenance of organization-wide service data irrespective of its source department or technology.

Like the PeopleSoft Person Model, which was built on SOA, a service-oriented solution design for the shared services model would make a huge difference among the corporate cube warriors. Let’s wait for more such innovative solutions to surface soon.

Monday, September 14, 2009

Merge Rows as Columns / Transpose records


Requirement: Converting rows to columns


Input:

Customer   Product   Cost
Cust1      P1        10
Cust1      P2        20
Cust1      P3        30
Cust2      ABC       10
Cust2      P2        25
Cust2      Def       10

Output:

Customer   Product1   Cost1   Product2   Cost2   Product3   Cost3
Cust1      P1         10      P2         20      P3         30
Cust2      ABC        10      P2         25      Def        10

The above illustration would help in understanding the requirement. We had to merge multiple records into one record based on certain criteria. The design had to be reusable since each dimension within the data mart required this flattening logic.

1. Approach:
An Aggregator transformation would group the records by a key, but retrieving the values of a particular column as individual columns is a challenge, hence we designed a component, ‘Flattener’, based on an Expression transformation.
Flattener is a reusable component, a mapplet that performs the function of flattening records.
Flattener consists of an Expression and a Filter transformation. The Expression is used to club each incoming record based on certain logic; the decision to write the record to the target is taken by the Filter transformation.

2. Design:
The mapplet can receive up to five inputs, of the following data types:
i_Col1 (string), e.g. Customer
i_Col2 (string), e.g. Product
i_Col3 (decimal), e.g. Cost
i_Col4 (decimal) and
i_Col5 (date/time)
We have kept the names generic to accept different data types, so that the mapplet can be used in any scenario where records need flattening.
The mapplet gives out 15 sets of five outputs each, in the following manner:
o_F1_1 (string), Customer
o_F2_1 (string), Product1
o_F3_1 (decimal), Cost1
o_F4_1 (decimal) and
o_F5_1 (date/time)
o_F1_2 (string), Customer
o_F2_2 (string), Product2
o_F3_2 (decimal), Cost2
o_F4_2 (decimal) and
o_F5_2 (date/time)
… … and so on
The output record will have repetitive sets of five columns each (each set refers to one incoming row). Based on the requirement, the number of occurrences of these sets can be increased, and only the required fields need be used/mapped. For the above example we use just two strings and one decimal, for Customer, Product and Cost.
The mapplet receives records from its parent mapping. The Expression first saves each incoming value to a variable and compares it with its counterpart that came in earlier, which is held in the cache as long as the condition to flatten is satisfied.
Syntax to store current and previous values (port name, datatype, port type, expression):
i_Col2      string   i
prv_Col2    string   v   curr_Col2
curr_Col2   string   v   i_Col2
The condition/logic to flatten records is parameterized and decided before the mapping is called, thus increasing the code’s scalability. The parameterized logic is passed to the Expression transformation via a mapplet parameter. The value is used as an expression to perform the evaluation, and the result is a flag value of either ‘1’ or ‘2’.
Syntax for port “flag”:
flag   integer   v   $$Expr_compare
An example parameterized expression:
$$Expr_compare = iif(curr_Col1 = prv_Col1 AND curr_Col2 != prv_Col2, 1, iif(curr_Col1 != prv_Col1, 2))
A variable port named “rec_count” is incremented based on the flag.
Syntax for port “rec_count”:
rec_count   integer   v   iif(flag=2, 0, iif(flag=1, rec_count + 1, rec_count))
The Expression transformation now uses the values in ports “flag” and “rec_count” to decide the placeholder for each incoming input value, i.e. the column in the target table into which this data would ultimately move. This process is iterative and goes on until the comparison logic ($$Expr_compare) holds good, i.e. until all records are flattened per the logic. An example of the placeholder expression is shown below:
v_Field1   data type   v   iif(flag=2 AND rec_count=0, curr_Col1, v_Field1)
Port “write_flag_1” is set to 1 when the comparison logic fails (meaning flattening is complete). The Filter transformation passes the record through once it is completely transposed.
Filter condition:
write_flag_1   integer   v   iif(flag=2 AND write_flag > 1, 1, 0)
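The compare-flag-accumulate-emit logic described above can be sketched in Python. This is an illustrative equivalent of what the mapplet does, not Informatica code; it assumes the input is sorted by the grouping key and has three columns, as in the example:

```python
# Sketch of the Flattener idea: compare each row's key with the previous
# one, accumulate product/cost pairs while the key repeats (flag = 1),
# and emit the flattened record when the key changes (flag = 2, which is
# what the Filter transformation's write flag detects).

def flatten(rows):
    """rows: list of (customer, product, cost), sorted by customer.
    Returns one tuple per customer with product/cost pairs side by side."""
    out, current = [], None
    for customer, product, cost in rows:
        if current is None or customer != current[0]:  # new key: emit previous
            if current is not None:
                out.append(tuple(current))
            current = [customer]
        current.extend([product, cost])                # same key: accumulate
    if current is not None:                            # emit the last group
        out.append(tuple(current))
    return out

rows = [("Cust1", "P1", 10), ("Cust1", "P2", 20), ("Cust1", "P3", 30),
        ("Cust2", "ABC", 10), ("Cust2", "P2", 25), ("Cust2", "Def", 10)]
print(flatten(rows))
# [('Cust1', 'P1', 10, 'P2', 20, 'P3', 30), ('Cust2', 'ABC', 10, 'P2', 25, 'Def', 10)]
```

In the mapplet the comparison itself is a parameter ($$Expr_compare), whereas this sketch hard-codes a simple key equality; that is the main simplification.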

3. Outcome:
After developing and implementing the code we found it to be a useful utility, so we thought of sharing it. We would like to hear suggestions from readers on performing the same function in a different way; please do share your views.

Wednesday, September 2, 2009

Process Control / Audit of Workflows in Informatica


1. Process Control – Definition
Process control, or auditing, of a workflow in Informatica means capturing job information such as start time, end time, read count, insert count, update count and delete count. This information is captured and written into a table as the workflow executes.

2. Structure of Process Control/Audit table
The structure of the process control table is given below.
Table 1: Process Control structure

Column           Type         Length  Description
PROCESS_RUN_ID   Number(p,s)  11      A unique number used to identify a specific process run.
PROCESS_NME      Varchar2     120     The name of the process (populated with the names of the Informatica mappings).
START_TMST       Date         19      The date/time when the process started.
END_TMST         Date         19      The date/time when the process ended.
ROW_READ_CNT     Number(p,s)  16      The number of rows read by the process.
ROW_INSERT_CNT   Number(p,s)  16      The number of rows inserted by the process.
ROW_UPDATE_CNT   Number(p,s)  16      The number of rows updated by the process.
ROW_DELETE_CNT   Number(p,s)  16      The number of rows deleted by the process.
ROW_REJECT_CNT   Number(p,s)  16      The number of rows rejected by the process.
USER_ID          Varchar2     32      The ETL user identifier associated with the process.
3.  Mapping Logic and Build Steps
The process control flow has two data flows: an insert flow and an update flow. The insert flow runs before the main mapping and the update flow runs after it; this option is chosen in the “Target Load Plan”. The source for both flows can be a dummy source that returns one record as output, for example select ‘process’ from dual or select count(1) from Table_A. The following mapping variables are to be created.
Table 2: Mapping Parameter and variables
$$PROCESS_ID
$$PROCESS_NAME
$$INSERT_COUNT
$$UPDATE_COUNT
$$DELETE_COUNT
$$REJECT_COUNT
Steps to create the insert flow:
  1. Have “select ‘process’ from dual” as the SQL in the Source Qualifier
  2. Have a Sequence Generator to create running process_run_id’s
  3. In an Expression, SetVariable($$PROCESS_RUN_ID, NEXTVAL) and assign $$PROCESS_NAME to o_process_name, an output-only field
  4. In an Expression, assign $$SessionStartTime to o_starttime, an output-only field
  5. In an Expression, accept the sequence id from the Sequence Generator
  6. Insert into the target process control table with all the above three values
Table 3: Process Control Image after Insert flow

PROCESS_RUN_ID   1
PROCESS_NME      VENDOR_DIM_LOAD
START_TMST       8/23/2009 12:23
END_TMST
ROW_READ_CNT
ROW_INSERT_CNT
ROW_UPDATE_CNT
ROW_DELETE_CNT
ROW_REJECT_CNT
USER_ID          INFA8USER
Steps in the main mapping:
  1. After the Source Qualifier, increment the read count in a variable (v_read_count) in an Expression for each record read, and SetMaxVariable($$READ_COUNT, v_read_count)
  2. Before the Update Strategy of the target instances, do the same for the insert/update/delete counts; all the variables are now set with their respective counts
Steps to create the update flow:
  1. Have “select ‘process’ from dual” as the SQL in the Source Qualifier
  2. Use SetMaxVariable to get the process_run_id created in the insert flow
  3. In an Expression, assign $$INSERT_COUNT to o_insert_count, an output-only field, and assign all the other counts in the same way
  4. In an Expression, assign $$SessionEndTime to o_endtime, an output-only field
  5. Update the target process control table with all the above values where process_run_id equals the process_run_id generated in the insert flow
Table 4: Process Control Image after Update flow
PROCESS_RUN_ID   1
PROCESS_NME      VENDOR_DIM_LOAD
START_TMST       8/23/2009 12:23
END_TMST         8/23/2009 12:30
ROW_READ_CNT     1000
ROW_INSERT_CNT   900
ROW_UPDATE_CNT   60
ROW_DELETE_CNT   40
ROW_REJECT_CNT   0
USER_ID          INFA8USER
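The insert-then-update audit pattern described above can be sketched outside Informatica as well. This minimal Python example uses SQLite as a stand-in for the warehouse database; the table and column names follow Table 1, while the function names and timestamps are illustrative assumptions:

```python
# Sketch of the two audit flows: an insert before the main load stamps the
# run id, process name and start time; an update after the load fills in
# the counts and end time for the same run id.
import sqlite3
from datetime import datetime

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE process_control (
    process_run_id INTEGER PRIMARY KEY,
    process_nme    TEXT,
    start_tmst     TEXT,
    end_tmst       TEXT,
    row_read_cnt   INTEGER,
    row_insert_cnt INTEGER,
    row_update_cnt INTEGER,
    row_delete_cnt INTEGER,
    row_reject_cnt INTEGER,
    user_id        TEXT)""")

def audit_start(run_id, name, user):
    # Insert flow: runs before the main mapping.
    con.execute("INSERT INTO process_control "
                "(process_run_id, process_nme, start_tmst, user_id) "
                "VALUES (?, ?, ?, ?)",
                (run_id, name, datetime.now().isoformat(), user))

def audit_end(run_id, read, ins, upd, dele, rej):
    # Update flow: runs after the main mapping, keyed by the same run id.
    con.execute("""UPDATE process_control
                   SET end_tmst=?, row_read_cnt=?, row_insert_cnt=?,
                       row_update_cnt=?, row_delete_cnt=?, row_reject_cnt=?
                   WHERE process_run_id=?""",
                (datetime.now().isoformat(), read, ins, upd, dele, rej, run_id))

audit_start(1, "VENDOR_DIM_LOAD", "INFA8USER")
audit_end(1, 1000, 900, 60, 40, 0)
row = con.execute("SELECT process_nme, row_read_cnt, row_insert_cnt "
                  "FROM process_control WHERE process_run_id=1").fetchone()
print(row)  # ('VENDOR_DIM_LOAD', 1000, 900)
```

In the Informatica version the counts travel through mapping variables and the dummy `dual` source, but the database-side effect is the same as this sketch.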

4. Merits over Informatica Metadata
This information is also available in the Informatica metadata; however, maintaining it within our own system has the following benefits:
  • No need to write complex queries to bring in the data from the metadata tables
  • Job names need not be mapping names and can be user-friendly names
  • Insert/delete/update counts for all targets, as well as individual targets, can be audited
  • The audit information can be maintained outside the metadata security level and can be used by other mappings in their transformations
  • Can be used by mappings that build parameter files
  • Can be used by mappings that govern data volume
  • Can be used by production support to quickly find the status of a load