Monday, July 16, 2012

BusinessObjects Administration – Repository Diagnostic Tool

Hello Booglers,
Repository Diagnostic Tool (RDT) is the topic of discussion today. In any BusinessObjects deployment it is possible for inconsistencies to arise between the CMS database and the File Repository Servers (FRS). This may occur during events such as backup restoration, disaster recovery or a network outage: the CMS system database may be interrupted, leaving the objects in the CMS database inconsistent with the files on the FRS.
This is where the Repository Diagnostic Tool (RDT) comes into the picture. It scans the CMS system database, identifies inconsistencies in InfoObjects, and diagnoses and repairs them. The types of inconsistencies are:
  • Object to file inconsistencies
Inconsistency between an InfoObject in the CMS database and the corresponding file in the FRS.
  • InfoObject metadata inconsistencies
Inconsistency in the metadata of an InfoObject in the CMS database.
Running RDT Tool
You can find the RDT tool here:
In WINDOWS:
The executable is “reposcan.exe” and is available in
“<INSTALLDIR>\BusinessObjects Enterprise 12.0\win32_x86\”
In UNIX:
The Shell script is “boe_reposcan.sh” and is in “<INSTALLDIR>/bobje/”
Sample RDT Usage Pattern (Windows):
Execute the following from the command prompt in reposcan.exe's installation directory. The sample below is for a CMS database hosted on SQL Server.
reposcan.exe
-dbdriver sqlserverdatabasesubsystem
-connect "UID=XXX;PWD=XXX;DSN=XXX"
-inputfrsdir "E:\Program Files (x86)\Business Objects\BusinessObjects Enterprise 12.0\FileStore\Input"
-outputfrsdir "E:\Program Files (x86)\Business Objects\BusinessObjects Enterprise 12.0\FileStore\Output"
-outputdir "E:\Program Files (x86)\Business Objects\BusinessObjects Enterprise 12.0\reposcan"
-scanfrs true
-scancms true
-repair false
Replace XXX with your database user ID, password and DSN. Set -repair to true when you want the tool to repair the inconsistencies it finds rather than only report them.
Viewing Repository Diagnostic Tool’s result set
By default, the output files are generated in the following directory: <BOE install>\BusinessObjects Enterprise 12.0\reposcan. You can change this to a different directory using the -outputdir option when running reposcan.
The output files generated are:
  • The scan output file (scan summary report), named Repo_Scan_YYYY_MM_DD_HH_MM_SS.xml
  • The repair output file (repair report), named Repo_Repair_YYYY_MM_DD_HH_MM_SS.xml
Based on the reports we can proceed further with remedial action.
Hope this blog is interesting.
Thanks for reading.  Read More About  Business Objects Administration

Friday, July 6, 2012

Business Objects Administration – Security Rights Migration from SAP BO 3.x to BI 4.x

Hi Readers,
In this blog we are going to look at security rights migration from SAP BO 3.x to BI 4.x and the challenges we could encounter while migrating BO content from 3.x to 4.x.
This post describes security settings as they correspond to the new interface and functions. The structure of the interface has been redesigned and security settings changed in certain cases. This means that some 3.x security settings are not directly compatible with the new interface. Where equivalents exist, these are used.
This document will guide you in the changes you might need to make when migrating content from 3.x to 4.x. Certain rights have been renamed, others are unaffected, and some rights are not supported in 4.x, and will require unsetting before resaving and migrating those reports.
Below are some of the known general Issues.
  • Rights that exist in 3.x but are not supported in 4.x
  • Rights that exist in 3.x but are renamed in 4.x
Let us see each case in detail.
  • Rights that are not supported in 4.x
Typical examples in this category are the Desktop Intelligence application and the redesigned BI launch pad interface.
  • Desktop Intelligence
As Desktop Intelligence is removed in 4.x, all of its corresponding rights are no longer supported.
  • Interface
In 3.x the toolbar could be hidden based on user rights. In 4.x it is replaced by toolboxes, in which the individual components can be disabled.
Some of the rights come under this category:
Right | Migration status
Enable drill mode | No longer maintained in 4.x
Extend scope of analysis | No longer maintained in 4.x
Interactive: General – Ability to hide / show toolbars | Toolbar is replaced with toolbox
Enable HTML Report Panel | HTML viewer is removed in 4.x
Desktop Intelligence application level rights | Desktop Intelligence application is removed
How to resolve this?
We need to remove these rights in 3.x before migration so that they will not be migrated to the new version.
  • Rights that are renamed in 4.x
The right “View SQL” in 3.x has been renamed to “Query Script – Enable Viewing” in 4.x.
Similarly, below are some of the rights that, to my knowledge, belong to this category.
Rights in 3.x | Rights in 4.x
Create document | Documents – enable creation
Data Tracking: Enable for users | Data – enable data tracking
Data Tracking: Enable format display changes by users | Data – enable formatting of changed data
Edit SQL | Query script – enable editing (SQL, MDX…)
Enable Auto save for this user | Documents – enable auto save
Enable formula and variable creation | Reporting – create formulas and variables
Enable Java Report Panel | Interfaces – enable Rich Internet Application
Enable Publish and Manage Document Content for this user (did not exist) | Documents – enable publish and manage content as web service
Merge dimensions for synchronization | Reporting – enable merged dimensions
View SQL | Query script – enable viewing (SQL, MDX…)
Web Intelligence Rich Client: Save a document locally on the file system | Desktop interface – save documents locally
Web Intelligence Rich Client: Allow local data providers | Desktop interface – enable local data providers
Web Intelligence Rich Client: Export a document | Desktop interface – export documents
Web Intelligence Rich Client: Import a document | Desktop interface – import documents
Web Intelligence Rich Client: Install from InfoView | Desktop interface – install from BI launch pad
Web Intelligence Rich Client: Print a document | Desktop interface – print documents
Web Intelligence Rich Client: Remove document security | Desktop interface – remove document security
Web Intelligence Rich Client: Save a document for all users | Desktop interface – save document for all users
Web Intelligence Rich Client: Send by mail | Desktop interface – send by mail
How to overcome this?
We need to remap these rights in 4.x after migration by comparing them against 3.x rights.
  • Special cases
The rights below were added in XI 3 SP4 and are not present in BI 4.0 releases up to 4.0.3:
i.    Import from BI On Demand
ii.   Export to BI On Demand
Resolution
SAP recommends not migrating from XI 3 SP4 or later to BI 4.0.3.x or an earlier release, as some of the rights added in these versions are not replicated until after BI 4.0.3.x.
Hope the post was useful for those considering 4.x migration.
Thanks for reading.  Read More About  Business Objects Administration

Monday, May 14, 2012

Oracle BI Testing Strategy


The primary focus of DW/BI testing is to ensure that the database structures, ETL processes, front-end access and BI report generation processes fully and correctly support the client requirements.

What is OBIEE?
Oracle Business Intelligence Enterprise Edition (OBIEE) is a comprehensive business intelligence platform that delivers a full range of analytic and reporting capabilities. Designed for scalability, reliability, and performance, Oracle Business Intelligence Enterprise Edition delivers contextual, relevant and actionable insight to everyone in an organization, resulting in improved decision-making, better-informed actions, and more efficient business processes. In our project, we are doing OBIEE report testing.
Prerequisites:

Before going through the sections below, you should have a basic knowledge of OBIEE and its components, such as:
Oracle BI Server
Oracle Business Intelligence Dashboards
Oracle Business Intelligence Answers/Analysis
Oracle Business Intelligence Delivers
Oracle Business Intelligence Briefing Books
Oracle Business Intelligence Publisher

1. Sample report and navigation steps
  • The Oracle Interactive Dashboard page is displayed as designed for the project
  • Click on the Customer subject area (Note: a given set of reports may or may not come under a specific subject area)
2. Checking the source record count and target record count

  • Check that the source record count and the target record count match
  • The record count can be validated while writing the queries
Once the report has been executed, validate the query against the database by confirming that the record counts match (see the SQL sketch below).
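A minimal sketch of such a count check; the table names (customers as the source, W_CUSTOMER_D as the warehouse target) and the region filter are placeholders, not objects from the actual project:

-- Source system count for the business filter used by the report
SELECT COUNT(*) AS source_count FROM customers WHERE region = 'EAST';

-- Warehouse count for the same filter (reuse the physical SQL from the report's session log)
SELECT COUNT(*) AS target_count FROM W_CUSTOMER_D WHERE region = 'EAST';

-- The two counts should match; a difference points to rejected records or extra filters/joins.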
3. Checking the naming conventions: During project setup, and later during maintenance, these conventions help communication about the entities involved: ETL, DWH and repository. A naming convention is only effective if it is used by everyone at the same level of detail (a quick dictionary check is sketched below the list):
  • Fact table (W_xxx_F)
  • Dimension table (W_xxx_D)
  • Dimension hierarchy table (W_xxx_DH)
  • Aggregated fact table (W_xxx_AG)
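As a quick supporting check, the warehouse catalog can be queried for tables that break the agreed suffixes. A sketch against the Oracle data dictionary, assuming all warehouse tables share the W_ prefix (adjust for your own database and conventions):

-- Warehouse tables whose names do not end in one of the agreed suffixes
SELECT table_name
FROM   user_tables
WHERE  table_name LIKE 'W\_%'      ESCAPE '\'
  AND  table_name NOT LIKE '%\_F'  ESCAPE '\'
  AND  table_name NOT LIKE '%\_D'  ESCAPE '\'
  AND  table_name NOT LIKE '%\_DH' ESCAPE '\'
  AND  table_name NOT LIKE '%\_AG' ESCAPE '\';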
4. Authentication: the process used to verify the identity of a user who tries to access the system. The BI Server implements this using either internal authentication or external methods.

Authorization: the process used to verify that a user has been granted sufficient privileges to perform the requested action on a specific object.

5. Report data validation: In the BI testing world this is generally called end-to-end testing, so we need to perform report data validation against the data warehouse (DB) and the source system data (i.e. test data created specifically for the report by the business or front-office team).
Whenever the report is opened, a session log is generated containing the query that identifies the physical data source to which the Oracle BI Server is connecting. Use this SQL to analyze the tables and fields while validating the data in the report.

6. Steps to follow to navigate to the Session log Query:

Step 1: Click Settings > Administration to open the Oracle BI Presentation Services Administration   window
Step 2: In the Session Management window, under Cursor Cache, click the View Log link for the last entry

7. Source to target validation: here the source is represented by the metadata repository, and the target is represented by the OBIEE reports and dashboards and BI Publisher output.

8. Presentation Layer Object Validation: This is the layer of the logical business model that is accessible to the client through Structured Query Language, better known as logical SQL. The presentation layer is the appropriate place to set user permissions and to validate user permissions on reports.

9. Categorizing the metrics: It is important to classify the metrics from multiple perspectives such as, their frequency of use, potential performance impacts, and complexity of calculations involved.  Such a classification helps drive priority of testing.

10. Dashboard charts and filters criteria: User interface testing should encompass tests with multiple options in the available filter criteria.  OBIEE gives enough drilldown features to verify the underlying data on the clickable components of the charts.  Test cases written should be detailed enough to verify data aggregated at various layers.

11. Filter Validation
  • Validate all the filters available on the report
  • Example: for a Performance Measure filter, validate the filter contents against the report requirement and the database
Filter types:

Local filters: filter the records at the report level.
Global filters: filter the records based on the user's selection on the dashboard.

Dashboard validation: when a user selects a particular request, the dashboard must display the corresponding results.

12. Data level security: data level security validation means that a user should be able to see only the data for which permission has been granted (see the sketch after the example).

Example: Both the Eastern and Western region Sales Managers see the same reports, but the data visible to them is the Eastern and Western region sales data respectively.
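One way to check this is to run, for each manager, a query with the expected data-level filter and compare it with what that manager sees on the report. A sketch, where W_SALES_F, region and sales_amount are hypothetical names:

-- Data the Eastern region Sales Manager should see
SELECT region, SUM(sales_amount) AS total_sales
FROM   W_SALES_F
WHERE  region = 'EAST'
GROUP  BY region;
-- Repeat with region = 'WEST' for the Western manager and compare each result
-- with the figures that manager actually sees on the report.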

Object level security: validate whether a particular user is able to access a particular dashboard, folder, etc.

Example: users in a particular department can view only the subject areas that belong to their department.

13. Bursting the reports: Bursting the reports means distributing the reports based on the regions. For example, if there are 4 regional reports, validate that the reports burst correctly (based on the East, West, South and North regions).

14. Buzz Matrix validation: Validate the alerts in the dashboard.

Example: suppose we run a stock trading business and the CEO wants to know whether today's business has reached the level he expects compared with last week. If that level is reached, the dashboard buzz (alert) should be raised; buzz matrix validation checks exactly this.

15. Testing in Levels: In a typical OBIEE project, it is advisable to test in multiple areas rather than attempting to test everything at once.

a) The first set of tests can verify the accuracy of the column to column transport of the data between the source and target.  This verification is typically done using SQL statements on the source and target databases.

b) The next step is to verify the accuracy of the repository (the .RPD file). These tests will include testing with appropriate dimensional filters on the metrics and the formula used to compute those metrics.  Testers can build two sets of comparable queries within the repository interface.

c) The next step in testing will be to verify the dashboard / reports against comparable queries on repository metrics.  In these tests, testers verify dashboard charts / reports against corresponding results from queries they execute on metrics of the repository.

d) Finally, the functional interface tests will cover tests to verify the lookups, performance, ease of use, look and feel etc.

The first three types of tests are performed by testers who can create simple SQL statements.
Structure and organization of test cases: the choice of test case naming convention and structure helps organize the test artifacts and aids a great deal in implementing the overall testing strategy.
For example, if the test cases are grouped by the nature of the tests (source to target verification, RPD metrics tests, functional, security, performance and usability), it is easier to pick and choose tests based on the testing context and tester capabilities.

16. User acceptance criteria: Users typically have an existing legacy mechanism to verify if what is displayed in the new solution makes sense.  Testers should dig into this and understand how the end users built the project acceptance criteria.  Testers should challenge the assumptions made by the business community in deriving the acceptance criteria.  This activity helps get an end user perspective built into the testing efforts from early on.

17. Validating Master Detail Report: Master-detail linking of views allows you to establish a relationship between two or more views such that one view, called the master view, drives data changes in one or more other views, called detail views.

18. Time series functions validation: Time series functions provide the ability to compare business performance with previous time periods, allowing you to analyze data that spans multiple time periods.
Time series functions enable comparisons between current sales and sales a year ago, a month ago, and so on.

a. Ago: with the AGO function we can compare one period to another (see the SQL sketch after this list)
b. To date: the TODATE function accumulates a measure from the beginning of a time period up to the current point, for example year-to-date sales
c. Period rolling: the PERIODROLLING function does not have a time series grain; instead, you specify a start and end period in the function
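An Ago-style measure can also be cross-checked against the warehouse with ordinary SQL; a sketch using a window function, where W_SALES_F, month_key and sales_amount are placeholder names:

-- Current month's sales next to the value one month ago, to compare with the report's Ago measure
SELECT month_key,
       SUM(sales_amount)                                   AS sales,
       LAG(SUM(sales_amount), 1) OVER (ORDER BY month_key) AS sales_month_ago
FROM   W_SALES_F
GROUP  BY month_key
ORDER  BY month_key;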

19. Oracle BI Publisher validation: Oracle BI Publisher, formerly known as XML Publisher, offers an efficient, scalable reporting solution for complex, distributed environments. It provides a central architecture for generating and delivering information to employees, customers and business partners, both securely and in the right format.

Thus this document gives an overview of OBIEE testing and the BI components commonly used during validation.

Thanks For Reading Our Blogs. Please Know More About:: Oracle BI Testing



Thursday, March 29, 2012

Strategies For Testing Data Warehouse Applications


Introduction:

There is an exponentially increasing cost associated with finding software defects later in the development lifecycle. In data warehousing, this is compounded because of the additional business costs of using incorrect data to make critical business decisions. Given the importance of early detection of software defects, let’s first review some general goals of testing an ETL application:

The content below describes the common strategies used to test a data warehouse system:
Data completeness: 

Ensures that all expected data is loaded into the target table.

1. Compare record counts between source and target; check for any rejected records (see the SQL sketch below this list).
2. Check that data is not truncated in any column of the target table.
3. Check that only unique values are loaded into the target; no duplicate records should exist.
4. Check boundary values (e.g., only data for year >= 2008 should be loaded into the target).
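A minimal sketch of these completeness checks; src_orders, w_orders_f, order_id and order_year are hypothetical names standing in for the real source and target objects:

-- 1. Record counts should match between source and target (allowing for known rejects)
SELECT COUNT(*) AS source_count FROM src_orders;
SELECT COUNT(*) AS target_count FROM w_orders_f;

-- 3. No duplicate records on the business key
SELECT order_id, COUNT(*) AS occurrences
FROM   w_orders_f
GROUP  BY order_id
HAVING COUNT(*) > 1;

-- 4. Boundary check: only data for year >= 2008 should have been loaded
SELECT COUNT(*) AS out_of_range FROM w_orders_f WHERE order_year < 2008;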

Data Quality:

1. Number check: if the source stores a value with a prefix (for example xx_30) but the target should contain only 30, validate that the prefix (xx_) is stripped before loading.

2. Date check: dates must follow the agreed date format, and it should be the same across all records (standard format: yyyy-mm-dd, etc.).

3. Precision check: precision values should display as expected in the target table.

Example: the source holds 19.123456, but the target should display 19.123 (or the appropriately rounded value).

4. Data check: based on the business logic, records that do not meet certain criteria should be filtered out.
Example: only records with date_sid >= 2008 and GLAccount != 'CM001' should be loaded into the target table.

5. Null check: certain columns should hold NULL based on the business requirement (see the SQL sketch below).
Example: the Termination Date column should be null unless the Active Status column is 'T' or 'Deceased'.
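The null and precision checks above can be expressed as simple queries that should return a zero count; w_employee_d, w_orders_f and their columns are placeholder names:

-- Null check: termination_date must be null unless the employee is terminated or deceased
SELECT COUNT(*) AS bad_rows
FROM   w_employee_d
WHERE  termination_date IS NOT NULL
  AND  active_status NOT IN ('T', 'Deceased');

-- Precision check: amounts should carry at most three decimal places
SELECT COUNT(*) AS bad_precision
FROM   w_orders_f
WHERE  order_amount <> ROUND(order_amount, 3);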

Note: Data cleanliness rules are decided during the design phase.

Data cleanness:

Unnecessary columns should be deleted before loading into the staging area.

1. Example: If a name column carries extra spaces, the spaces must be trimmed before loading into the staging area, for instance with an Expression transformation.

2. Example: Suppose the telephone number and the STD code are in different columns and the requirement says they should be in one column; with an Expression transformation the values are concatenated into a single column.

Data Transformation: Verify that all the business logic implemented using ETL transformations is reflected in the target data.

Integration testing:

Ensures that the ETL process functions well with other upstream and downstream processes.

Example:
1. Downstream: Suppose you change the precision of a column in one transformation. For example, if EMPNO is a column with a data type of size 16, that precision should be the same in every transformation wherever the EMPNO column is used.

2. Upstream: If the source is SAP BW, data extraction relies on generated ABAP code that acts as the interface between SAP BW and the mapping. When an existing mapping is modified, the ABAP code has to be regenerated in the ETL tool (Informatica); if this is not done, wrong data will be extracted because the ABAP code is not updated.

User-acceptance testing:

Ensures the solution meets users’ current expectations and anticipates their future expectations.
Example: Make sure that no values are hardcoded in the code.

Regression testing:

Ensures existing functionality remains intact each time a new release of code is completed.

Conclusion:

Taking these considerations into account during the design and testing portions of building a data warehouse will ensure that a quality product is produced and prevent costly mistakes from being discovered in production.

BI Testing-SQL Performance tuning


Introduction:
Generally, ETL performance testing is a confirmation test to ensure that an ETL 'system' can handle the load of multiple users and transactions. For any project this primarily means ensuring that the 'system' can easily manage a throughput of millions of transactions.
You can improve your application performance by optimizing the queries you use. The following sections outline techniques you can use to optimize query performance.
Improve Indexes:
  • Creating useful indexes is one of the most important ways to achieve better query performance. Useful indexes help you find data with fewer disk I/O operations and less system resource usage.
  • To create useful indexes, you must understand how the data is used, the types of queries and the frequencies they run, and how the query processor can use indexes to find your data quickly.
  • When you choose what indexes to create, examine your critical queries, the performance of which will affect the user’s experience most. Create indexes to specifically aid these queries. After adding an index, rerun the query to see if performance is improved. If it is not, remove the index.
  • As with most performance optimization techniques, there are tradeoffs. For example, with more indexes, SELECT queries will potentially run faster. However, DML (INSERT, UPDATE, and DELETE) operations will slow down significantly because more indexes must be maintained with each operation. Therefore, if your queries are mostly SELECT statements, more indexes can be helpful. If your application performs many DML operations, you should be conservative with the number of indexes you create.
Choose what to Index:
  • We recommend that you always create indexes on primary keys. It is frequently useful to also create indexes on foreign keys. This is because primary keys and foreign keys are frequently used to join tables. Indexes on these keys let the optimizer consider more efficient index join algorithms. If your query joins tables by using other columns, it is frequently helpful to create indexes on those columns for the same reason.
  • When primary key and foreign key constraints are created, SQL Server Compact 3.5 automatically creates indexes for them and takes advantage of them when optimizing queries. Remember to keep primary keys and foreign keys small. Joins run faster this way.
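As an illustration of this advice, a sketch with hypothetical Customers and Orders tables; the constraint and index names are arbitrary:

-- Primary keys (most engines create a supporting index automatically)
ALTER TABLE Customers ADD CONSTRAINT pk_customers PRIMARY KEY (customer_id);
ALTER TABLE Orders    ADD CONSTRAINT pk_orders    PRIMARY KEY (order_id);

-- Foreign key plus an explicit index on the joining column to help Orders-to-Customers joins
ALTER TABLE Orders ADD CONSTRAINT fk_orders_customer
  FOREIGN KEY (customer_id) REFERENCES Customers (customer_id);
CREATE INDEX ix_orders_customer ON Orders (customer_id);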
Use Indexes with Filter Clauses
  • Indexes can be used to speed up the evaluation of certain types of filter clauses. Although all filter clauses reduce the final result set of a query, some can also help reduce the amount of data that must be scanned.
  • A search argument (SARG) limits a search because it specifies an exact match, a range of values, or a conjunction of two or more items joined by AND, typically comparing a column with a constant or a variable.
Understand Response Time Vs Total Time:
  • Response time is the time it takes for a query to return the first record. Total time is the time it takes for the query to return all records. For an interactive application, response time is important because it is the perceived time for the user to receive visual affirmation that a query is being processed. For a batch application, total time reflects the overall throughput. You have to determine what the performance criteria are for your application and queries, and then design accordingly.
Example:
  • Suppose the query returns 100 records and is used to populate a list with the first five records. In this case, you are not concerned with how long it takes to return all 100 records. Instead, you want the query to return the first few records quickly, so that you can populate the list.
  • Many query operations can be performed without having to store intermediate results. These operations are said to be pipelined. Examples of pipelined operations are projections, selections, and joins. Queries implemented with these operations can return results immediately. Other operations, such as SORT and GROUP-BY, require using all their input before returning results to their parent operations. These operations are said to require materialization. Queries implemented with these operations typically have an initial delay because of materialization. After this initial delay, they typically return records very quickly.
  • Queries with response time requirements should avoid materialization. For example, using an index to implement ORDER-BY, yields better response time than using sorting. The following section describes this in more detail.
Index the ORDER-BY / GROUP-BY / DISTINCT Columns for Better Response Time
  • The ORDER-BY, GROUP-BY, and DISTINCT operations are all types of sorting. The SQL Server Compact 3.5 query processor implements sorting in two ways. If records are already sorted by an index, the processor needs to use only the index. Otherwise, the processor has to use a temporary work table to sort the records first. Such preliminary sorting can cause significant initial delays on devices with lower power CPUs and limited memory, and should be avoided if response time is important.
  • In the context of multiple-column indexes, for ORDER-BY or GROUP-BY to consider a particular index, the ORDER-BY or GROUP-BY columns must match the prefix set of index columns with the exact order. For example, the index CREATE INDEX Emp_Name ON Employees (“Last Name” ASC, “First Name” ASC) can help optimize the following queries:
    • … ORDER BY / GROUP BY “Last Name” …
    • … ORDER BY / GROUP BY “Last Name”, “First Name” …
It will not help optimize:
  • … ORDER BY / GROUP BY “First Name” …
  • … ORDER BY / GROUP BY “First Name”, “Last Name” …
For a DISTINCT operation to consider a multiple-column index, the projection list must match all index columns, although they do not have to be in the exact order. The previous index can help optimize the following queries:
  • … DISTINCT “Last Name”, “First Name” …
  • … DISTINCT “First Name”, “Last Name” …
It will not help optimize:
  • … DISTINCT “First Name” …
  • … DISTINCT “Last Name” …
Rewrite Subqueries to Use JOIN
Sometimes you can rewrite a subquery to use JOIN and achieve better performance. The advantage of creating a JOIN is that you can evaluate tables in a different order from that defined by the query. The advantage of using a subquery is that it is frequently not necessary to scan all rows from the subquery to evaluate the subquery expression. For example, an EXISTS subquery can return TRUE upon seeing the first qualifying row.
Example:
To determine all the orders that have at least one item with a 50 percent discount or more, you can use the following EXISTS subquery:
SELECT "Order ID" FROM Orders O
WHERE EXISTS (SELECT "Order ID"
              FROM "Order Details" OD
              WHERE O."Order ID" = OD."Order ID"
              AND Discount >= 0.50)
You can rewrite this by using JOIN:
SELECT DISTINCT O."Order ID" FROM Orders O INNER JOIN "Order Details" OD
  ON O."Order ID" = OD."Order ID" WHERE Discount >= 0.50
Limit Using Outer JOINs
OUTER JOINs are treated differently from INNER JOINs in the optimizer. It does not try to rearrange the join order of OUTER JOIN tables as it does to INNER JOIN tables. The outer table (the left table in LEFT OUTER JOIN and the right table in RIGHT OUTER JOIN) is accessed first, followed by the inner table. This fixed join order could lead to execution plans that are less than optimal.
Use Parameterized Queries:
  • If your application runs a series of queries that are only different in some constants, you can improve performance by using a parameterized query. For example, to return orders by different customers, you can run the following query:
  • SELECT "Order ID" FROM Orders WHERE "Customer ID" = ?
  • Parameterized queries yield better performance by compiling the query only once and executing the compiled plan multiple times. Programmatically, you must hold on to the command object that contains the cached query plan. Destroying the previous command object and creating a new one destroys the cached plan. This requires the query to be re-compiled. If you must run several parameterized queries in interleaved manner, you can create several command objects, each caching the execution plan for a parameterized query. This way, you effectively avoid re-compilations for all of them.
17 Tips for Avoiding Problematic Queries
1. Avoid Cartesian products
2. Avoid full table scans on large tables
3. Use SQL standards and conventions to reduce parsing
4. Ensure that columns used in the WHERE clause are indexed
5. Avoid joining too many tables
6. Monitor V$SESSION_LONGOPS to detect long running operations
7. Use hints as appropriate
8. Use the CURSOR_SHARING parameter where appropriate
9. Use the rule-based optimizer if it is better than the cost-based optimizer
10. Avoid unnecessary sorting
11. Monitor index browning (due to deletions; rebuild as necessary)
12. Use compound indexes with care (Do not repeat columns)
13. Monitor query statistics
14. Use different tablespaces for tables and indexes (as a general rule; this is old-school somewhat, but the main point is reduce I/O contention)
15. Use table partitioning (and local indexes) when appropriate (partitioning is an extra cost feature)
16. Avoid literals in the WHERE clause (use bind variables)
17. Keep statistics up to date
Conclusion
ETL projects today are designed for correct functionality and adequate performance, i.e., to complete within a time window. However, the task of optimizing ETL designs is left to the experience and intuition of the ETL designers. ETL designs also face objectives beyond performance.

ETL testing Fundamentals


Introduction:
Comprehensive testing of a data warehouse at every point throughout the ETL (extract, transform, and load) process is becoming increasingly important as more data is being collected and used for strategic decision-making. Data warehouse or ETL testing is often initiated as a result of mergers and acquisitions, compliance and regulations, data consolidation, and the increased reliance on data-driven decision making (use of Business Intelligence tools, etc.). ETL testing is commonly implemented either manually or with the help of a tool (functional testing tool, ETL tool, proprietary utilities). Let us understand some of the basic ETL concepts.
BI / data warehousing testing projects can broadly be divided into ETL (Extract – Transform – Load) testing and the subsequent report testing.
Extract Transform Load is the process that enables businesses to consolidate their data while moving it from place to place, i.e. moving data from the source systems into the data warehouse. The data can arrive from any source:
Extract - Extracting the data from numerous heterogeneous systems.
Transform - Applying the business logic, as specified by the business, to the data derived from the sources.
Load - Pumping the data into the final warehouse after completing the above two processes.
The ETL part of the testing mainly deals with how, when, from where and what data we carry into our data warehouse, from which the final reports are generated. Thus ETL testing spreads across every stage of data flow in the warehouse, starting from the source databases and ending at the final target warehouse.
Star Schema
The star schema is perhaps the simplest data warehouse schema. It is called a star schema because the entity-relationship diagram of this schema resembles a star, with points radiating from a central table. The center of the star consists of a large fact table and the points of the star are the dimension tables.
A star schema is characterized by one or more very large fact tables that contain the primary information in the data warehouse, and a number of much smaller dimension tables (or lookup tables), each of which contains information about the entries for a particular attribute in the fact table.
A star query is a join between a fact table and a number of dimension tables. Each dimension table is joined to the fact table using a primary key to foreign key join, but the dimension tables are not joined to each other. The cost-based optimizer recognizes star queries and generates efficient execution plans for them. A typical fact table contains keys and measures. For example, in the sample schema, the fact table sales contains the measures quantity sold, amount and average, and the keys time key, item key, branch key, and location key. The dimension tables are time, branch, item and location.
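A sketch of such a star query against a hypothetical sales fact and its time and item dimensions (table and column names are illustrative only):

-- Fact joined to two dimensions; the dimensions are never joined to each other
SELECT t.calendar_year,
       i.item_name,
       SUM(s.quantity_sold) AS total_quantity,
       SUM(s.amount)        AS total_amount
FROM   sales s
JOIN   times t ON s.time_key = t.time_key
JOIN   items i ON s.item_key = i.item_key
GROUP  BY t.calendar_year, i.item_name;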
Snow-Flake Schema
The snowflake schema is a more complex data warehouse model than a star schema, and is a type of star schema. It is called a snowflake schema because the diagram of the schema resembles a snowflake. Snowflake schemas normalize dimensions to eliminate redundancy. That is, the dimension data has been grouped into multiple tables instead of one large table.
For example, a location dimension table in a star schema might be normalized into a location table and a city table in a snowflake schema. While this saves space, it increases the number of dimension tables and requires more foreign key joins. The result is more complex queries and reduced query performance.
When to use star schema and snowflake schema?
When we refer to star and snowflake schemas, we are talking about a dimensional model for a data warehouse or a data mart. The star schema model gets its name from its appearance: there is one central fact table surrounded by many dimension tables. The relationship between the fact and dimension tables is created by PK -> FK relationships, and the keys are generally surrogates for the natural or business keys of the dimension tables. All data for any given dimension is stored in one dimension table. Thus, the design of the model could potentially look like a star.
On the other hand, the snowflake schema model breaks the dimension data into multiple tables for the purpose of making the data more easily understood or reducing the width of the dimension table. An example of this type of schema might be a dimension holding product data at multiple levels. Each level in the product hierarchy might have multiple attributes that are meaningful only to that level. Thus, one would break the single dimension table into multiple tables in a hierarchical fashion, with the highest level tied to the fact table. Each table in the dimension hierarchy would be tied to the level above by its natural or business key, and the highest level would be tied to the fact table by a surrogate key. As you can imagine, the appearance of this schema design can resemble a snowflake.
Types of Dimensions Tables
Type 1: This is a straightforward refresh. The fields are constantly overwritten and no history is kept for the column. For example, should a description change for a product number, the old value is overwritten by the new value.
Type 2: This is known as a slowly changing dimension, as history can be kept. The column(s) for which history is captured have to be defined. In our example of the product description changing for a product number, if the slowly changing attribute captured is the product description, a new row of data is created showing the new product description; the old description is still contained in the old row (a validation sketch follows this list).
Type 3: This is also a slowly changing dimension. However, instead of a new row, the old product description is moved to an "old value" column in the dimension, while the new description overwrites the existing column. In addition, a date stamp column records when the value was updated. Although there is no full history here, the previous value prior to the update is captured. No new rows are created for history; only the previous value of the slowly changing attribute is kept.
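A Type 2 dimension can be sanity-checked with queries like the following sketch, assuming a hypothetical w_product_d with a surrogate key (dim_key), a current_flag and effective/end dates; both queries should return no rows:

-- Each product number should have exactly one current row
SELECT product_number, COUNT(*) AS current_rows
FROM   w_product_d
WHERE  current_flag = 'Y'
GROUP  BY product_number
HAVING COUNT(*) <> 1;

-- Effective date ranges for the same product number should not overlap
SELECT a.product_number
FROM   w_product_d a
JOIN   w_product_d b
  ON   a.product_number = b.product_number
 AND   a.dim_key       <> b.dim_key
 AND   a.effective_date < b.end_date
 AND   b.effective_date < a.end_date;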
Types of fact tables:
Transactional: Most facts fall into this category. A transactional fact captures transactional data such as sales lines or stock movement lines. The measures for these facts can be summed together.
Snapshot: A snapshot fact captures the current data at a point in time. For example, all the current stock positions, where items are and in which branch, can be captured at the end of a working day.
Snapshot fact measures can be summed for that day, but cannot be summed across multiple snapshot days, as the result would be incorrect.
Accumulative: An accumulative snapshot sums data up for an attribute and is not based on time. For example, to get the accumulative sales quantity for a sale of a particular product, the value for that row is recalculated each night, giving an "accumulative" value.
Key hit-points in ETL testing: There are several levels of testing that can be performed during data warehouse testing, and they should be defined as part of the testing strategy in the different phases (component, assembly, product) of testing. Some examples include:
1. Constraint Testing: During constraint testing, the objective is to validate unique constraints, primary keys, foreign keys, indexes, and relationships. The test script should include these validation points. Some ETL processes can be developed to validate constraints during the loading of the warehouse. If the decision is made to add constraint validation to the ETL process, the ETL code must validate all business rules and relational data requirements. In Automation, it should be ensured that the setup is done correctly and maintained throughout the ever-changing requirements process for effective testing. An alternative to automation is to use manual queries. Queries are written to cover all test scenarios and executed manually.
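For example, when constraint validation is done with manual queries, an orphan check of the kind below can be used; w_orders_f, w_customer_d and customer_key are placeholder names:

-- Fact rows whose customer key has no matching dimension row (should return nothing)
SELECT f.customer_key, COUNT(*) AS orphan_rows
FROM   w_orders_f f
LEFT JOIN w_customer_d d ON f.customer_key = d.customer_key
WHERE  d.customer_key IS NULL
GROUP  BY f.customer_key;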
2. Source to Target Counts: The objective of the count test scripts is to determine if the record counts in the source match the record counts in the target. Some ETL processes are capable of capturing record count information such as records read, records written, records in error, etc. If the ETL process used can capture that level of detail and create a list of the counts, allow it to do so. This will save time during the validation process. It is always a good practice to use queries to double check the source to target counts.
3. Source to Target Data Validation: No ETL process is smart enough to perform source to target field-to-field validation. This piece of the testing cycle is the most labor intensive and requires the most thorough analysis of the data. There are a variety of tests that can be performed during source to target validation. Below is a list of tests that are best practices:
4. Transformation and Business Rules: Tests to verify all possible outcomes of the transformation rules, default values, straight moves and as specified in the Business Specification document. As a special mention, Boundary conditions must be tested on the business rules.
5. Batch Sequence & Dependency Testing: ETL in a DW is essentially a sequence of processes that execute in a particular order. Dependencies exist among the various processes, and they are critical to maintaining the integrity of the data. Executing the sequence in the wrong order might result in inaccurate data in the warehouse. The testing process must include at least two iterations of the end-to-end execution of the whole batch sequence, and data must be checked for integrity during this testing. The most common errors caused by an incorrect sequence are referential integrity failures, incorrect end-dating (if applicable), rejected records, etc.
6. Job restart Testing: In a real production environment, ETL jobs/processes fail for a number of reasons (for example, database failures or connectivity failures), and a job can fail when it is only partly executed. A good design always allows for restartability of the jobs from the point of failure. Although this is more of a design suggestion/approach, it is recommended that every ETL job is built and tested for restart capability.
7. Error Handling: A script may fail during data validation even though process validation confirms that the ETL process is working. During process validation the testing team works to identify additional data cleansing needs, as well as consistent error patterns that could be addressed by modifying the ETL code. It is the responsibility of the validation team to identify any and all records that seem suspect. Once a record has been both data and process validated and the script has passed, the ETL process is functioning correctly. Conversely, if suspect records identified and documented during data validation are not supported through process validation, the ETL process is not functioning correctly.
8. Views: Views created on the tables should be tested to ensure the attributes mentioned in the views are correct and the data loaded in the target table matches what is being reflected in the views.
9. Sampling: Sampling involves creating predictions from a representative portion of the data that is to be loaded into the target table; these predictions are matched against the actual results obtained from the loaded data during business analyst testing. The comparison verifies that the predictions match the data loaded into the target table.
10. Process Testing: The testing of intermediate files and processes to ensure the final outcome is valid and that performance meets the system/business need.
11. Duplicate Testing: Duplicate testing must be performed at each stage of the ETL process and in the final target table. It involves checking for duplicate rows and for multiple rows with the same primary key, neither of which can be allowed (a sample query follows).
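A sketch of such a duplicate check, where w_order_lines_f and its key columns are hypothetical; the query should return no rows at any stage:

-- Rows sharing the same primary/natural key more than once
SELECT order_id, line_number, COUNT(*) AS occurrences
FROM   w_order_lines_f
GROUP  BY order_id, line_number
HAVING COUNT(*) > 1;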
12. Performance: It is the most important aspect after data validation. Performance testing should check if the ETL process is completing within the load window.
13. Volume: Verify that the system can process the maximum expected quantity of data for a given cycle in the time expected.
14. Connectivity Tests: As the name suggests, this involves testing the upstream and downstream interfaces and intra-DW connectivity. It is suggested that the testing reproduces the exact transactions between these interfaces. For example, if the design approach is to extract files from the source system, we should actually test extracting a file out of the system, not just the connectivity.
15. Negative Testing: Negative testing checks whether the application fails where it should, using invalid inputs and out-of-boundary scenarios, and checks how the application behaves in those cases.
16. Operational Readiness Testing (ORT): This is the final phase of testing which focuses on verifying the deployment of software and the operational readiness of the application. The main areas of testing in this phase include:
Deployment Test
1. Tests the deployment of the solution
2. Tests overall technical deployment “checklist” and timeframes
3. Tests the security aspects of the system, including user authentication and authorization, and user-access levels.
Conclusion
Evolving needs of the business and changes in the source systems will drive continuous change in the data warehouse schema and the data being loaded. Hence, it is necessary that development and testing processes are clearly defined, followed by impact-analysis and strong alignment between development, operations and the business.