Friday, August 24, 2012

Emerging DB Technology – Columnar Database


Today’s Top Data-Management Challenge:

Businesses today are challenged by the ongoing explosion of data. Gartner is predicting data growth will exceed 650% over the next five years. Organizations capture, track, analyze and store everything from mass quantities of transactional, online and mobile data, to growing amounts of machine-generated data. In fact, machine-generated data, including sources ranging from web, telecom network and call-detail records, to data from online gaming, social networks, sensors, computer logs, satellites, financial transaction feeds and more, represents the fastest-growing category of Big Data. High volume web sites can generate billions of data entries every month.

As volumes expand into the tens of terabytes and even the petabyte range, IT departments are being pushed by end users to provide enhanced analytics and reporting against these ever-increasing volumes of data. Managers need to be able to quickly understand this information, but, all too often, extracting useful intelligence can be like finding the proverbial ‘needle in a haystack’.

How do columnar databases work?

The defining concept of a column-store is that the values of a table are stored contiguously by column. Thus the classic supplier table from the suppliers-and-parts database would be stored on disk or in memory something like:

S1 S2 S3 S4 S5
20 10 30 20 30
London Paris Paris London Athens
Smith Jones Blake Clark Adams

This is in contrast to a traditional row-store which would store the data more like this:
S1 20 London Smith
S2 10 Paris Jones
S3 30 Paris Blake
S4 20 London Clark
S5 30 Athens Adams
From this simple concept flow all of the fundamental differences in performance, for better or worse, between a column-store and a row-store. For example, a column-store will excel at aggregations like totals and averages, but inserting a single row can be expensive, while the inverse holds true for row-stores. This should be apparent from the layouts above.
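To make the contrast concrete, here is a minimal sketch in Python (our own illustration, not taken from any particular database engine) of the supplier table stored both ways, showing why an aggregate is cheap in the column layout while a single-row insert is cheap in the row layout. The extra supplier "S6" is made up for the example.

# Row-store: each record keeps all of a supplier's fields together.
row_store = [
    ("S1", 20, "London", "Smith"),
    ("S2", 10, "Paris",  "Jones"),
    ("S3", 30, "Paris",  "Blake"),
    ("S4", 20, "London", "Clark"),
    ("S5", 30, "Athens", "Adams"),
]

# Column-store: each column's values are stored contiguously.
column_store = {
    "sno":    ["S1", "S2", "S3", "S4", "S5"],
    "status": [20, 10, 30, 20, 30],
    "city":   ["London", "Paris", "Paris", "London", "Athens"],
    "name":   ["Smith", "Jones", "Blake", "Clark", "Adams"],
}

# An aggregate over one column reads only that column in the column-store...
avg_status_columnar = sum(column_store["status"]) / len(column_store["status"])

# ...but must touch every whole record in the row-store.
avg_status_row = sum(rec[1] for rec in row_store) / len(row_store)

# Inserting one supplier is a single append in the row-store,
# but one append per column in the column-store.
row_store.append(("S6", 40, "Madrid", "Gomez"))
for col, value in zip(column_store, ("S6", 40, "Madrid", "Gomez")):
    column_store[col].append(value)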

The Ubiquity of Thinking in Rows:

Organizing data in rows has been the standard approach for so long that it can seem like the only way to do it. An address list, a customer roster, and inventory information—you can just envision the neat row of fields and data going from left to right on your screen.

Databases such as Oracle, MS SQL Server, DB2 and MySQL are the best known row-based databases.
Row-based databases are ubiquitous because so many of our most important business systems are transactional.
Example data set: 20 columns × 50 million rows.
Row-oriented databases are well suited for transactional environments, such as a call center where a customer’s entire record is required when their profile is retrieved and/or when fields are frequently updated.

Other examples include:
• Mail merging and customized emails
• Inventory transactions
• Billing and invoicing

Where row-based databases run into trouble is when they are used to handle analytic loads against large volumes of data, especially when user queries are dynamic and ad hoc.

To see why, let’s look at a database of sales transactions with 50 days of data and 1 million rows per day. Each row has 30 columns of data, so this database has 30 columns and 50 million rows. Say you want to see how many toasters were sold during the third week of this period. A row-based database would return 7 million rows (1 million for each day of the third week) with 30 columns for each row, or 210 million data elements. That’s a lot of data elements to crunch to find out how many toasters were sold that week. As the data set increases in size, disk I/O becomes a substantial limiting factor, since a row-oriented design forces the database to retrieve all column data for any query.
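As a quick sanity check of the arithmetic above (the variable names are ours, purely for illustration):

# Row-oriented design: every column of every qualifying row is retrieved.
rows_per_day = 1_000_000
days_in_week = 7
columns_per_row = 30

row_store_elements = rows_per_day * days_in_week * columns_per_row
print(f"{row_store_elements:,}")  # 210,000,000 data elements for one week's query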

As we mentioned above, many companies try to solve this I/O problem by creating indices to optimize queries. This may work for routine reports (e.g. you always want to know how many toasters you sold in the third week of a reporting period), but there is a point of diminishing returns, as load speed degrades because indices need to be recreated as data is added. In addition, users are severely limited in their ability to quickly run ad-hoc queries (e.g. how many toasters did we sell through our first Groupon offer? Should we do it again?) that can’t depend on indices to optimize results.


Pivoting Your Perspective: Columnar Technology

Column-oriented databases allow data to be stored column-by-column rather than row-by-row. This simple pivot in perspective—looking down rather than looking across—has profound implications for analytic speed. Column-oriented databases are better suited for analytics where, unlike transactions, only portions of each record are required. By grouping the data together this way, the database only needs to retrieve columns that are relevant to the query, greatly reducing the overall I/O.

Returning to the example in the section above, we see that a columnar database would not only eliminate 43 days of data, it would also eliminate 28 columns of data. Returning only the columns for toasters and units sold, the columnar database would return only 14 million data elements, or 93% less data. By returning so much less data, columnar databases are much faster than row-based databases when analyzing large data sets. In addition, some columnar databases (such as Infobright®) compress data at high rates because each column stores a single data type (as opposed to rows, which typically contain several data types), allowing compression to be optimized for each particular data type. Row-based databases mix multiple data types with a limitless range of values, making compression less efficient overall.
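Here is a small sketch of the same comparison, plus a toy run-length encoder to show why a single-type column compresses so well. This is only an illustration; it is not how Infobright or any other engine actually compresses data.

# Same weekly query against a columnar layout: only the two relevant columns are read.
rows_per_day = 1_000_000
days_in_week = 7
columns_per_row = 30
columns_needed = 2  # product and units sold

row_store_elements = rows_per_day * days_in_week * columns_per_row   # 210,000,000
columnar_elements = rows_per_day * days_in_week * columns_needed     # 14,000,000

savings = 1 - columnar_elements / row_store_elements
print(f"{savings:.1%} less data")  # 93.3% less data

# A column holds a single data type with many repeated values, so even a
# trivial run-length encoding shrinks it; real engines use far better schemes.
def run_length_encode(column):
    encoded, prev, count = [], None, 0
    for value in column:
        if value == prev:
            count += 1
        else:
            if prev is not None:
                encoded.append((prev, count))
            prev, count = value, 1
    if prev is not None:
        encoded.append((prev, count))
    return encoded

print(run_length_encode(["toaster"] * 4 + ["kettle"] * 3))
# [('toaster', 4), ('kettle', 3)]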

Thanks for reading this blog. View more: BI Analytics

Performance Center Best Practices


For performance testing we have started using HP Performance Center because of the many advantages it provides to the testing team. Below are some of the best practices that can be followed when using Performance Center.

Architecture – Best Practices

  • Hardware Considerations
    • CPU, Memory, Disk sized to match the role and usage levels
    • Redundancy added for growth accommodation and fault-tolerance
    • Never install multiple critical components on the same hardware
  • Network Considerations
    • Localization of all PC server traffic - Web to Database, Web to File Server, Web to Utility Server, Web to Controllers, Controller to Database, Controller to File Server, Controller to Utility Server.
    • Separation of operational and virtual user traffic – PC operational traffic should not share same network resources as virtual user traffic – for optimal network performance.
  • Backup and Recovery Considerations
    • Take periodic backups of the Oracle database and the file system (\\<fileserver>\LRFS)
    • Backups of PC servers and hosts are optional
  • Monitoring Considerations
    • Monitoring services (e.g. SiteScope) should be employed to manage the availability and responsiveness of each PC component

Configuration – Best Practice

  • Set the ASP upload buffer to the maximum size of a file that you will permit to be uploaded to the server.
    • Registry key: HKLM\SYSTEM\CurrentControlSet\Services\w3svc\Parameters
  • Modify MaxClientRequestBuffer (create it as a DWORD if it does not exist; a scripted sketch follows this list)
    • Ex. 2097152 is 2 MB
  • Limit access to the PC File System (LRFS) for security
    • Performance Center User (IUSR_METRO) needs “Full Control”
  • We recommend 2 LoadTest Web Servers when
    • Running 3 or more concurrent runs
    • Having 10 plus users viewing tests
  • Load balancing requires an external, web-session-based load balancer
  • In Internet Explorer, set “Check for newer versions of stored pages” to “Every visit to the page”
    • NOTE: This should be done on the client machines that are accessing the Performance Center web sites
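If you prefer to script the MaxClientRequestBuffer change mentioned above, here is a minimal sketch assuming the Performance Center web server runs Windows and you use Python's standard winreg module; run it as an administrator on that server. The key path and value come from the bullets above.

# Sketch only: set MaxClientRequestBuffer as a DWORD under the w3svc parameters key.
import winreg

KEY_PATH = r"SYSTEM\CurrentControlSet\Services\w3svc\Parameters"
VALUE_NAME = "MaxClientRequestBuffer"
BUFFER_BYTES = 2097152  # 2 MB, matching the example above

# Open (or create) the key under HKEY_LOCAL_MACHINE with write access.
with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                        winreg.KEY_SET_VALUE) as key:
    # Create the DWORD value if it does not exist, or overwrite it if it does.
    winreg.SetValueEx(key, VALUE_NAME, 0, winreg.REG_DWORD, BUFFER_BYTES)

print(f"{VALUE_NAME} set to {BUFFER_BYTES} bytes")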

Script Repository – Best Practice

  • Use VuGen integration for direct script upload
  • Ensure dependent files are within zip file
  • Re-configure scripts with optimal run-time settings (RTS)
  • Validate script execution on PC load generators
  • Establish meaningful script naming convention
  • Clean-up script repository regularly

Monitor Profile – Best Practice

  • Avoid information overload
    • Min-Max principle – Minimum metrics for maximum detection
  • Consult performance experts and developers for relevant metrics
    • Standard Process Metrics (CPU, Available Memory, Disk Read/Write Bytes, Network Bandwidth Utilization)
    • Response Times / Durations (Avg. Execution Time)
    • Rates and Frequencies (Gets/sec, Hard Parses/sec)
    • Queue Lengths (Requests Pending)
    • Finite Resource Consumption (JVM Available Heap Size, JDBC Pool’s Active Connections)
    • Error Frequency (Errors During Script Runtime, Errors/sec)

Load Test – Best Practice

  • General
    • Create a new load test for any major change in scheduling logic or script types
    • Use versioning (by naming convention) to track changes
  • Scripts
    • When scripts are updated with new run-logic settings, remove and reinsert the updated script in the load test
  • Scheduling
    • Each ramp-up makes queries to the Licensing (Utility) Server and the LRFS file system, so do not ramp at intervals of less than 5 seconds
    • Configure the ramp-up quantity per interval to match the available load generators
    • Do not run (many/any) users on the Controller

Timeslots – Best Practice

  • Scheduling
    • Always schedule time slots in advance of load test
    • Always schedule extra time (10-30 minutes) for large or critical load tests
    • Allow for gaps between scheduled test runs (in case of emergencies)
  • Host Selection
    • Use automatic host selection whenever possible
    • Reserve manual hosts only when specific hosts are needed (because of runtime configuration requirements)

The practices above will help you use Performance Center without issues and will save you a lot of time by avoiding problems that can arise when they are not followed.

Thanks for reading this blog. Want to know more? Visit: Performance Center Best Practices

Thursday, August 23, 2012

Title: Peoplesoft Tester
Categories:
Grade: G4
Skill: Peoplesoft, HRMS Testing, Payroll
Start Date: 21-08-2012
Location: Chennai
Job Information:
1) 3-5 years of experience in ERP-related product testing.
2) Knowledge of the complete testing life cycle and different testing methodologies.
3) Min. 2-3 years of hands-on experience with PeopleSoft HRMS.
4) Min. 1 year of experience writing test scripts for the PS Payroll module.
5) Good knowledge of HP QC.
6) Strong analytical and troubleshooting skills.
Unit: 10



Title: DataStage Developer
Categories:
Grade: G4
Skill: DataStage with SQL Queries
Start Date: 21-08-2012
Location: Mumbai
Job Information:
1) 3-6 years of experience in DataStage.
2) Strong experience in IBM DataStage 8.5, SQL Server, and SQL queries.
3) Good communication skills.
Unit: 6



Title: SAP BPC Professional
Categories:
Grade: G5/G6
Skill: SAP BPC
Start Date: 21-08-2012
Location: Chennai
Job Information:
1) 5-8 years of overall experience in SAP BPC.
2) Experience with the BPC NetWeaver platform (BPC 7.5).
3) Experience in both the Planning and Consolidation areas of BPC.
4) Specific experience in the following BPC applications (within Planning): HCM application (headcount management) and Finance Planning.
5) Good communication skills.
Unit: 1



Title: Abinitio Developer
Categories:
Grade: G4
Skill: Abinitio with UNIX
Start Date: 21-08-2012
Location: Chennai
Job Information:
1) 3-5 years of overall experience in Ab Initio.
2) Should have 3+ years of hands-on experience with Ab Initio and UNIX.
3) Must have development project experience.
4) Working experience with UNIX shell scripting is a must.
5) Good communication skills.
Unit: 5



Title: Datastage Administrator
Categories:
Grade: G5/G6 (5-7 yrs)
Skill: Datastage admin
Start Date: 21-08-2012
Location: Chennai
Job Information:
1) AIG GIU migration from Sun Solaris to AIX.
2) Install and configure IBM Information Server 8.1 with DataStage on AIX 6.1.
3) Create projects.
4) Create users and roles, administer security, back up and restore IBM Information Server components including DataStage; knowledge of Cognos.
Unit: 1