Wednesday, December 19, 2007

How to execute OS Commands in SQR?

Most projects involve scenarios such as the following:
  1. FTPing a file from one server location to another
  2. Deleting a file
We can execute such operating system commands in SQR using the CALL SYSTEM built-in. Here is the syntax:
CALL SYSTEM Using $del_file #del_status Wait
if edit(#del_status,'9') = '0'
show 'intfiles.txt was deleted successfully'
end-if
$del_file – Specifies the operating system command to execute. The command can be a quoted string, a string variable, or a column.
#del_status – Holds the exit status of the OS command. This must always be a numeric variable.
UNIX/Linux: Zero (0) indicates success. Any other value is the system error code.
PC/ Microsoft Windows: A value less than 32 indicates an error.
Wait – This flag controls whether SQR waits for the command to finish before continuing.
(Microsoft Windows only): WAIT specifies that SQR suspend its execution until the CALL SYSTEM command has finished processing. NOWAIT specifies that SQR start the CALL SYSTEM command but continue its own processing while that command is in progress.
For Microsoft Windows, the default is NOWAIT. On UNIX/Linux operating systems the behavior is always WAIT.
Note: When executing OS commands, we should prepend the command shell (taken from the environment) to the OS command:
let $comspec = getenv('COMSPEC') /* For Windows */
let $comspec = getenv('SHELL') /* For UNIX/Linux */
let $del_file = $comspec || ' /c del ' || $int_file_name
Note the space after 'del'; without it the shell receives the command and the file name run together.
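For comparison, here is a minimal sketch of the same pattern (look up the shell, build the command, run it, check the exit status) in Python rather than SQR; the file name intfiles.txt simply mirrors the example above:

```python
import os
import subprocess

# Stand-in for $int_file_name in the SQR example above.
int_file_name = "intfiles.txt"
open(int_file_name, "w").close()  # create the file so the delete has work to do

# Equivalent of getenv('COMSPEC') / getenv('SHELL'): pick the platform's shell command.
if os.name == "nt":
    cmd = [os.environ.get("COMSPEC", "cmd.exe"), "/c", "del", int_file_name]
else:
    cmd = ["rm", int_file_name]

# Equivalent of CALL SYSTEM ... WAIT: block until the command finishes.
status = subprocess.run(cmd).returncode

# Like checking #del_status: 0 indicates success on UNIX/Linux.
if status == 0:
    print("intfiles.txt was deleted successfully")
```

As in SQR, the key point is to check the returned status rather than assume the command succeeded.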


Tuesday, December 18, 2007

Data Integration Challenge – Building Dynamic DI Systems – II


The following design aspects help make a DI system dynamic:
  1. Avoiding hard references, usage of parameter variables

  2. Usage of lookup tables for code conversion

  3. Setting and managing threshold value through tables

  4. Segregating data processing logics into common reusable components

  5. Ensuring that the required processes are controllable by the Business team with the required checks built in

We covered the first two aspects in the earlier post; let us look at the scenarios and approaches for the other three.
Setting and managing threshold values through tables
In the data validation process we also verify incoming data in terms of the count or sum of a variable; the derived count or sum is checked against a predefined number usually called the 'Threshold Value'. Some typical validations of this kind are listed below:
  1. The number of new accounts created should not be more than 10% (Threshold Value) of the total records

  2. The number of records received today and the number of records received yesterday cannot vary by more than 250 records

  3. The sum of the credit amount should not be greater than 100,000

This threshold value differs across data sources, but in many cases the metric to be derived is similar across sources. We can store these threshold values in a relational table and integrate this 'threshold' table into the data integration process as a lookup table; this enables the same threshold-based data validation code to be implemented across different data sources while applying each source's specific threshold value.
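As an illustrative sketch of this idea (the source, metric, and table names here are hypothetical, and in practice the lookup would live in a relational table rather than an in-memory dictionary), the same validation code can serve every source while only the threshold value varies:

```python
# Hypothetical threshold lookup table: one row per (source, metric) pair.
THRESHOLDS = {
    ("HR_SYSTEM_A", "new_account_pct"): 10.0,      # max % of new accounts
    ("HR_SYSTEM_A", "daily_count_delta"): 250,     # max day-over-day record variance
    ("HR_SYSTEM_A", "credit_amount_sum"): 100000,  # max sum of the credit amount
}

def validate(source, metric, observed):
    """Shared validation code for every source; only the threshold differs."""
    limit = THRESHOLDS[(source, metric)]
    return observed <= limit

# 9% new accounts against a 10% threshold passes;
# a 300-record day-over-day variance against a 250 limit fails.
print(validate("HR_SYSTEM_A", "new_account_pct", 9.0))
print(validate("HR_SYSTEM_A", "daily_count_delta", 300))
```

Adding a new data source then means inserting its threshold rows, not writing new validation code.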
Segregating Data Processing Logics into Common Reusable Components
Having many reusable components in a system by itself makes a DI system dynamic or adaptable. Reusable components are built on parameterizing the inputs and outputs of an existing process, and parameterization is a key ingredient of a dynamic DI system. Some characteristics of a DI system that would help carve out a reusable component are:
  1. Multiple data sources providing data for a particular subject area like HR data coming from different HR systems

  2. Same set of data being shared with multiple downstream systems or a data hub system

  3. Existence of an industry standard format like SWIFT or HIPAA, either as source or target

  4. Integration with third party systems or their data like D&B, FairIsaac

  5. Changing data layouts of the incoming data structure

  6. Systems that capture survey data

Ensuring that the required processes are controllable by the Business team with the required checks built in
We now often see requirements where the business provides regular inputs to the IT team running the DI systems; in these situations we can design and place portions of the DI system parameters under business control. Typical examples of such scenarios are:
  • In ‘threshold value’ based data validation, these values would be business driven i.e., ‘threshold table’ can be managed by the business team and they would be able to make changes to the threshold table without code changes and without IT support
  • In many scenarios invalid data undergoes multiple passes and needs to be validated by the business at each pass; in terms of starting a BI session, the input from the business could be just starting the process or also providing input data
  • The data to be pulled out from a warehouse based on a feed from an online application; a typical web service problem-solution
The need for the business team to control or feed the DI systems is common in companies that handle more external data, such as market research firms and Software as a Service (SaaS) firms. Web service support from the leading data integration vendors plays a major role in fulfilling these needs.

Monday, December 17, 2007

How to trace AE and determine which AE has trace enabled?

In this blog, I will answer two questions.

  1. How do you trace an AE program?
  2. Which AE programs have trace enabled in their process definitions?
There are a few ways to trace an AE program. You can enable the trace using one of the methods below:
  1. In the process scheduler configuration file psprcs.cfg
  2. In the "Process Definition"
  3. By grabbing the Oracle session information and generating a SQL trace
It is not advisable to enable trace via psprcs.cfg because it enables tracing for ALL AE programs using that scheduler. However, this might be the best choice when you want to trace multiple AE programs or do not have access to modify the process definition. Exercise this option with caution, knowing that it may generate trace files for all AE programs.
To enable trace using the "Process Definition", make the change below. You can use the "-TRACE", "-TOOLSTRACESQL", or "-TOOLSTRACEPC" option depending on what information you require.

Which AE program has trace enabled in its process definition?
The SQL below lists all the AE programs that have trace enabled. It is useful in monitoring scripts if you notice developers enabling trace in process definitions and those definitions getting migrated all the way to production, or if your analysts or DBAs tend to enable trace and forget about it.
SELECT PRCSNAME, PARMLIST
FROM PS_PRCSDEFN
WHERE UPPER(PARMLIST) LIKE '%TRACE%'
AND PRCSTYPE = 'Application Engine';
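A monitoring script built around this check might then scan the fetched PARMLIST values for the specific trace flags. The sketch below is a hypothetical illustration in Python: the rows are sample data standing in for the SQL result set, not real output:

```python
# Hypothetical (PRCSNAME, PARMLIST) tuples, as fetched by the SQL above.
rows = [
    ("MY_AE_PROG", "-TRACE 135 -TOOLSTRACESQL 31"),
    ("OTHER_AE",   ""),
]

# Flag any Application Engine definition whose parameter list enables tracing.
flags = ("-TRACE", "-TOOLSTRACESQL", "-TOOLSTRACEPC")
traced = [name for name, parms in rows
          if any(f in parms.upper() for f in flags)]
print(traced)  # only programs with a trace flag in their parameter list
```

Such a script could run on a schedule and alert when a traced definition reaches production.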


Monday, December 10, 2007

BI Implementation Enabler: Calibration for BI systems


In the last two posts, we looked at how the Agile Framework can be implemented to manage BI systems. The diagram below reiterates the Agile Framework implementation process with its Planning & Execution phases.
Business Implementation Enabler
Having taken care of managing the evolution of enterprise BI systems, the next important aspect of implementation is "How do we measure the evolution?" This brings us to the next important enabler: "Calibrating Business Intelligence systems".
What is Calibration?
Calibration = "Measurement": the alignment of a process to certain calibration factors so that the health of the process can be measured with respect to those factors.
How is Calibration used in the BI context?
  • Strategic tool to prioritize and align the EDW with the corporate vision
  • Measure the evolution of EDW against pre-set goals
  • Mechanism to identify technology pain areas and take appropriate corrective actions
  • Is a way to objectively communicate the progress of EDW to business stakeholders
  • Helps the DW project manager in tactically planning for the immediate future
Hexaware has developed three levels of scorecards that help in measuring the evolution of BI systems implemented using the Agile Framework.
The method of calibration and the usage of the Analytic Hierarchy Process (AHP) are explained in the webinar available at http://www.hexaware.com/webcastarchive1.html. The title of the webinar is "Agile Framework for Calibrating the Enterprise Data Warehouse".


Saturday, December 8, 2007

How to create new Manager Self Service Transaction?

As everyone knows, PeopleSoft itself provides a lot of MSS transactions.
For example:
  1. Transfer Employee
  2. Job Change
  3. Promote Employee
  4. Terminate Employee
Now say, for example, we get a new MSS requirement for a Paygroup Change. How do we accomplish it? Here is an insight.
Broadly speaking, three types of components need to be built to accomplish any MSS transaction:
  1. Transaction-level components
  2. Approval-related components (if required)
  3. Components for viewing transaction/approval workflow status
Each of these components is in turn driven by two types of component pages: one for launching the manager's direct reports, and the other for adding/approving/viewing a (paygroup change) transaction.
To maintain all the transactional and approval workflow status data, we should create two records. As per PeopleSoft convention, one way to maintain the data is to have:
  1. HR_PAYGRPCHG_DAT – holds all the (paygroup change related) transactional data
  2. HR_PAYGRPCHG_STA – holds all the approval workflow status data
Transaction-Level Components
These components are meant purely for transactional changes; in our case, the transactional change is a paygroup change for an employee.
  1. Launching Page:
     The manager's direct reports launching component page is constructed either via Direct Report Setup (using the OPRROWS component to show direct reports) or via Configure Direct Reports UI Setup (using the HR_DR_SELECTION_UI component to show direct reports).
     Navigation: Setup HRMS >> Common Definition >> Direct Reports for Managers
  2. Transaction Page:
     The transactional component page should be built manually as per the requirements. This is where the manager changes the paygroup of an employee, which in turn fires approval workflows (if required) or inserts the data into the Job component directly.
     Once the data is saved on this page, the transaction-related data should be inserted into the DAT record and all the approval workflow data into the STA record.
Approval-Related Components / Components for Viewing Approval Workflow Status
  1. Launching Page:
     The launching page for both the approval and status-viewing components should be designed manually to populate the manager's direct reports.
     Launching pages normally show all of the manager's direct reports, but in this case the population is restricted. For example, if an MSS transaction needs approval, the corresponding employee for that transaction is shown as one of the direct reports on the launching page. Needless to say, which direct reports are shown should be determined by both the transactional (HR_PAYGRPCHG_DAT) and status (HR_PAYGRPCHG_STA) records.
  2. Transaction Page:
     For the approval components, the main approval page should provide two extra buttons to approve or deny the transaction. All other transactional data should be placed on this page in non-editable mode.
     For viewing statuses, the entire page should be in read-only mode and used only to view the transaction and approval workflow status details.
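The filtering rule for the approval launching page (show only direct reports with a transaction pending in both records) can be sketched as follows. This is a hypothetical illustration: the employee IDs, field values, and in-memory dictionaries stand in for rows of the DAT and STA records:

```python
# Hypothetical transactional (DAT) and status (STA) rows, keyed by employee ID.
dat = {"E001": {"paygroup": "PG2"}, "E002": {"paygroup": "PG3"}}
sta = {"E001": "Pending Approval", "E002": "Approved"}

direct_reports = ["E001", "E002", "E003"]

# For the approval launching page, show only the direct reports that have
# a transaction row (DAT) whose workflow status (STA) is awaiting approval.
to_show = [emp for emp in direct_reports
           if emp in dat and sta.get(emp) == "Pending Approval"]
print(to_show)  # ['E001']
```

In the real components this filter would be a join between HR_PAYGRPCHG_DAT and HR_PAYGRPCHG_STA in the launching page's population logic.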


Monday, December 3, 2007

BI Implementation Enabler: Agile Framework for Data Warehousing–Part 2


In my previous post, I established the fact that Agile Framework can be used to manage complexity in enterprise wide Business Intelligence systems.
Two phases to the Agile framework implementation are:
  1. Planning Phase

  2. Execution Phase

Agile Framework – Planning Phase
Planning is typically done at the end of a particular year for the subsequent year, once the business plans & budgets are finalized.
Assumptions / Pre-requisites
  1. Enterprise BI infrastructure is already in place in the organization

  2. Technology Architecture (BI Tools/Technologies) and Process Architecture (Standards, Policies, Procedures) are already defined.

The planning phase steps (start to end), with their activities and deliverables, are:
Create & Prioritize the Stories
  Activities:
  • Conduct JAD sessions and collect user requirements
  • Have stakeholders sign off on the BI vision document
  Deliverable: Functionality list (stories) with approximate effort estimates
Create the Phase Plan
  Activity: Identify the phases for completing the "Story"
  Deliverable: "Story – Phase" mapping document
Identify the "Cycles"
  Activity: Identify the number of development & stabilization cycles required to complete the phase
  Deliverable: "Story – Phase – Cycles" mapping document
Create the Release Plan
  Activity: Identify the cycles (across stories) that can fit into a particular release
  Deliverable: Monthly release plan
Agile Framework – Execution Phase
Execution Phase is for implementing the monthly release. This has the following tasks:
The execution phase steps (start to end), with their activities and deliverables, are:
Execute the Cycles
  Activities:
  • Each cycle has its own specification, design & test plan documents
  • Develop the code to satisfy the requirements of each cycle
  Deliverables: Design document, test plan, test results
Deliver the Release
  Activity: Combine all the cycles into a working release (typically delivered once a month)
  Deliverable: Code release plan
Deliver the Phase
  Activity: When all cycles for a particular phase are completed, perform a regression test on some of the critical cycles
  Deliverable: Phase release plan
Complete the Story
  Activity: When all the phases for a particular story are completed, perform a regression test on some of the critical phases
  Deliverable: Complete documentation of the business functionality
I have seen a lot of benefits in managing BI systems using the Agile framework, especially in enterprise data warehousing situations.
Let me put some disclaimers here:
  1. Agile development, with its specific techniques such as Extreme Programming, pair programming, Scrum, etc., is a vast area with its own body of knowledge & best practices. This post has neither the intention (nor the author the talent!) to be an authoritative guide to Agile methodologies.

  2. This post is aimed at Business Intelligence practitioners to stimulate a new way of thinking around managing complexity in Business Intelligence systems. This is definitely not a “silver bullet” for managing BI projects.

Information Nugget: Came across a wonderful blog titled “Occam’s Razor” by Mr. Avinash Kaushik (www.kaushik.net). Avinash is an authority on Web Analytics and his blogs have some great information for everybody. Happy reading!

Thursday, November 29, 2007

BI Implementation Enabler: Agile Framework for Data Warehousing – Part 1


As part of the Business Intelligence Utopia series, I am going to focus on the implementation enablers in the next few posts. The first implementation enabler is: Agile Framework for Managing Business Intelligence systems
BI systems are complex to manage due to the following reasons:
  • Keeps Evolving over time – Enterprise DW can never be completely built
  • BI drives business decisions – Needs “Total Alignment” with corporate vision
  • Power of BI applications increases exponentially as the number of information consumers increases
  • Data Warehouses need to be measured & calibrated against pre-set goals
  • Development & Support has to be managed concurrently
Standard process methodologies like the Waterfall, Spiral, and Iterative models are not well suited to managing Business Intelligence systems. One methodology I have seen work very well, having used it in multiple projects at Hexaware, is the "Agile Methodology". The philosophy of the Agile framework fits in very nicely to alleviate some of the complex issues in managing BI systems.
Agile Methodology – Definition
Agile development is a software development approach that “cycles” through the development phases, from gathering requirements to delivering functionality into a working release.
Basic Tenets:
  • Shared Vision and Small Teams working on specific functionality
  • Frequent Releases that make business sense
  • Relentlessly manage scope – Managing the scope-time-resources triad effectively
  • Creating a Multi-Release Framework with master plan and supporting architecture
  • Accommodate changes “gracefully”
BI practitioners would appreciate the fact that the basic tenets do provide solutions to some of the critical pain areas when it comes to managing enterprise BI systems.
The ultimate goal of any DW/BI project is to roll out new business functionality on a regular and rapid basis, with a high degree of conformance to what is already there; this fits in well with the "Agile" philosophy.
The next few posts will illustrate the practical application of the Agile Framework to Business Intelligence systems.
BI Information Nugget – One of the recent websites that I really liked is: www.biblogs.com. This is a Business Intelligence blog aggregation site that has blogs written by seasoned BI practitioners. Happy reading!