Showing posts with label Software Quality Assurance. Show all posts

Tuesday, January 31, 2012

CALL WAITING….. ??? – Costly Miss Costly Fix


On January 15, 1990, around 60,000 AT&T long-distance customers tried to place long-distance calls as usual and got nothing. Behind the scenes, the company's 4ESS long-distance switches, all 114 of them, kept rebooting in sequence. AT&T assumed it was being hacked, and for nine hours the company and law enforcement tried to work out what was happening. In the end, AT&T uncovered the culprit: an obscure fault in its own newly updated switching software.

Here’s how the switches were supposed to work: If one switch gets congested, it sends a “do not disturb” message to the next switch, which picks up its traffic. The second switch resets itself to keep from disturbing the first switch. Switch 2 checks back on Switch 1, and if it detects activity, it does another reset to reflect that Switch 1 is back online. So far, so simple.

The month before the crash, AT&T tweaked the code to speed up the process. The trouble was, things were now too fast. The first switch to overload sent two messages, one of which hit the second switch just as it was resetting. The second switch assumed there was a fault in its internal CCS7 logic and reset itself. It put up its own "do not disturb" sign and passed the problem on to a third switch.

The third switch also got overwhelmed and reset itself, and so the problem cascaded through the whole system. All 114 switches in the system kept resetting themselves, until engineers reduced the message load on the whole system and the wave of resets finally broke.
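The failure mode can be sketched as a toy model. This is illustrative Python, not the actual 4ESS code (which was written in C inside the switch firmware); the only detail it tries to capture is how the two-message timing turns one overloaded switch into a system-wide cascade:

```python
def simulate(num_switches: int, messages_per_overload: int) -> int:
    """Return how many switches end up knocked over by the cascade."""
    failed = set()
    frontier = 0  # the switch currently overloading its neighbor
    while frontier < num_switches:
        neighbor = (frontier + 1) % num_switches
        if messages_per_overload < 2:
            # Pre-tweak timing: a single "do not disturb" message; the
            # neighbor absorbs the traffic and the cascade stops here.
            break
        # Post-tweak timing: the second message lands while the neighbor
        # is mid-reset; the neighbor misreads it as an internal fault,
        # resets itself, and overloads the next switch in line.
        failed.add(neighbor)
        frontier += 1
    return len(failed)

print(simulate(114, messages_per_overload=1))  # prints 0
print(simulate(114, messages_per_overload=2))  # prints 114
```

With healthy timing the fault is contained at the first hop; with the rapid double message, all 114 switches go down, which is exactly the shape of the real outage.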

In the meantime, AT&T lost an estimated $60 million in long-distance charges from calls that didn't go through. The company took a further financial hit a few weeks later when it knocked a third off its regular long-distance rates on Valentine's Day to make amends with customers.

Wednesday, January 18, 2012

Test Consulting: How to Improve a QA Area


Is your client having difficulty measuring QA performance? Is your client undecided on what test strategy to follow for a new application implementation? Is your client looking for testing tool recommendations, or does it need to improve the usage of existing ones?
Hexaware has a dedicated Test Consulting practice in which testing experts provide test consulting services to clients. During consulting engagements, Hexaware assesses and evaluates the Approach, People and Technology, and fills in the gaps by bringing in our domain experience, best practices, frameworks, tools experience, etc. The consulting services also include tool selection, tool optimization, and TCoE creation and/or optimization.
If you want to learn more about test strategy, the following information will help you execute a test strategy engagement.
Test Strategy Approach
The first step is to identify the problem(s) the client is facing and define the strategy objectives. After that, I recommend following this approach to execute a test strategy project:
Assessment Areas
As part of the information gathering phase, we leverage our proprietary APT™ (Approach, People, and Technology) methodology to focus on the right areas and meet the strategy objectives. Hexaware's APT™ methodology is the foundation of all of our QA service offerings; below is a description of each component:
• The Approach component lays the foundation for the processes each client uses as part of testing.
• The People component covers the part of the IT organization focused on testing; Hexaware analyzes the groups, roles and responsibilities involved in QA and testing.
• The Technology component covers the use of QA and test automation tools to optimize technology and lower costs.

At the end of the project, we provide the client with the following deliverables:
• Current State: An analysis of the current state with regard to the testing objectives
• Gaps: The gaps found between the current state, best practices and the desired state
• Recommendations: Our recommendations to close the gaps and meet the organization's objectives
• Implementation Road Map: A recommended path to follow in order to implement the recommendations
As a result of the analysis phase, we show our clients the current state in a quantitative graph. This graph evaluates all relevant aspects of an IT organization and prioritizes each category according to the testing objectives. One example of this graph is shown below.

This was an assessment provided as part of a test strategy we created for a leading bank in Mexico for a T24 product implementation. The benefits targeted in this strategy were to reduce testing cycles by 30%, automate at least 50% of the manual test cases, and achieve a defect-free implementation using robust and repeatable testing processes.
Other examples of metrics commonly used as strategy objectives are increasing automation coverage by 30%, increasing productivity by 25%, reducing overall testing cost by 15%, etc.
A test consulting practice is an area full of innovation, industry best practices and shared experiences. Now, with Hexaware blogs, all of us will be able to formally share our experiences, and our colleagues can leverage them for future assignments.


Tuesday, January 17, 2012

Good Attributes to be kept in mind for designing Automated Test Cases


Programming remains the biggest and most critical component of test case automation. Hence the design and coding of test cases is extremely important if their execution and maintenance are to be effective.

Some fundamental attributes of good automated test cases are:
  • Simple
  • Modular
  • Robust
  • Reusable
  • Maintainable
  • Documented
  • Independent
1) Simplicity: The test case should have a single objective; multi-objective test cases are difficult to understand and design. There should not be more than 10-15 test steps per test case. Depending on the process, the number of steps may increase, but staying within 10 to 15 steps keeps a test case clear. Multipurpose test cases are likely to break or give misleading results, and if the execution of a complex test leads to a system failure, it is difficult to isolate the cause of the failure.

2) Modularity: Each test case should have a setup phase before, and a cleanup phase after, the execution of its test steps. The setup phase ensures that the initial conditions are met before the test steps start; the cleanup phase puts the system back in its initial state, that is, the state prior to setup. Each test step should be small and precise. Not every test case will need both a setup and a cleanup process; this varies case by case. The test steps are building blocks from reusable libraries that are put together to form multi-step test cases.
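A minimal sketch of the setup/cleanup pattern, here using Python's unittest framework as a stand-in (the in-memory "cart" system and the test names are invented for illustration):

```python
import unittest

class ShoppingCartTest(unittest.TestCase):
    """Hypothetical test case; the in-memory cart stands in for a real system."""

    def setUp(self):
        # Setup phase: establish the initial conditions before the test steps.
        self.cart = []

    def tearDown(self):
        # Cleanup phase: return the system to its pre-setup state.
        self.cart.clear()

    def test_add_single_item(self):
        # Small, precise test steps built from simple, reusable actions.
        self.cart.append("item-1")
        self.assertEqual(len(self.cart), 1)

# Run the suite programmatically so the verdict is inspectable.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ShoppingCartTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Most commercial automation tools of this era (QTP, Silk, etc.) offer the same setup/teardown hooks under different names.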

3) Robustness and Reliability: A test case verdict (pass or fail) should be assigned in a way that is unambiguous and understandable. Robust test cases can ignore trivial failures such as a one-pixel mismatch in a graphical display. Care should be taken to minimize false test results. Test cases must have built-in mechanisms to detect and recover from errors; for example, a test case need not wait indefinitely if the software under test has crashed. Rather, it can wait for a while and terminate an indefinite wait by using a timer mechanism.
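The timer mechanism described above can be sketched as a simple polling helper. This is a generic Python sketch; the condition being waited on is a placeholder for a real "application responded" check:

```python
import time

def wait_for(condition, timeout_s: float = 5.0, poll_s: float = 0.1) -> bool:
    """Poll a condition instead of waiting indefinitely.

    Returns an unambiguous verdict: True if the condition was met
    within the timeout, False if the wait timed out.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(poll_s)
    return False

# Placeholder condition: a real test would check the software under test.
responded = wait_for(lambda: True, timeout_s=1.0)
print("PASS" if responded else "FAIL: timed out")
```

Because the helper always returns within the timeout, a crashed application produces a clean FAIL verdict rather than a hung test run.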

4) Reusability: Test steps are built to be configurable; that is, values should not be hard-coded but taken from a single configuration file (data tables). Attention should be given while coding test steps to ensure that a single global variable is used instead of multiple, decentralized, hard-coded variables. Test steps should be made as independent of test environments as possible. Automated test cases should be categorized into groups so that subsets of test steps and test cases can be extracted and reused for other platforms and/or configurations. Finally, in GUI automation, hard-coded screen locations must be avoided.
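A minimal sketch of the data-table idea in Python (the CSV content and field names are invented; in practice the table would live in an external file maintained separately from the test code):

```python
import csv
import io

# Hypothetical data table; a real suite would load this from a file.
DATA_TABLE = """username,expected_role
alice,admin
bob,viewer
"""

def load_test_data(source: str):
    """Read test inputs from a data table so that nothing is
    hard-coded inside the test steps themselves."""
    return list(csv.DictReader(io.StringIO(source)))

for row in load_test_data(DATA_TABLE):
    # The same generic test step runs once per row of the table.
    print(row["username"], row["expected_role"])
```

Adding a new test scenario then means adding a row to the table, not editing the test script.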

5) Maintainability: Any change to the software under test will have an impact on the automated test cases and may require changes to the affected test cases. Therefore, it is necessary to assess which test cases need to be modified before a change to the system is approved. The test suite should be organized and categorized so that the affected test cases are easily identified. If a particular test case is data driven, it is recommended that the input test data be stored separately from the test case and accessed by the test procedure as needed. Test cases must comply with coding standards, and all test cases should be kept under a version control system.

6) Documented: Test cases and test steps must be well documented. Each test case gets a unique identifier, and its purpose should be clear and understandable. The author's name, date of creation, and date of last modification must be recorded. There should be a traceability matrix linking each test case to the features and requirements it checks. Any situation in which the test case cannot be used should be clearly described, the environment requirements clearly stated along with the source of input test data (if applicable), and the pass/fail evaluation criteria clearly defined.
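One way to keep these documentation fields uniform is a metadata header attached to every test case. A sketch in Python, with entirely illustrative field names and values:

```python
# Hypothetical metadata header convention; every field name and value
# below is illustrative, not from any particular tool.
TEST_CASE = {
    "id": "TC-LOGIN-001",              # unique identifier
    "purpose": "Verify login rejects an empty password",
    "author": "jdoe",
    "created": "2012-01-17",
    "last_modified": "2012-01-17",
    "requirements": ["REQ-AUTH-4"],    # traceability to requirements
    "not_applicable_when": "SSO is enabled",
    "environment": "QA server, test data set D3",
    "pass_criteria": "Login page shows a validation error",
}

# A suite-level check can then enforce that every header is complete.
REQUIRED_FIELDS = {"id", "purpose", "author", "requirements", "pass_criteria"}
missing = REQUIRED_FIELDS - TEST_CASE.keys()
print("header complete" if not missing else f"missing: {missing}")
```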

7) Independent and Self-sufficient: Each test case is designed as a cohesive entity, and test cases should be largely independent of each other. Each test case consists of test steps, which are naturally linked together. The predecessor and successor of a test step within a test case should be clearly understood.


Wednesday, February 20, 2008

Issues and workarounds for the Siebel DAC compare and Merge strategy


The Siebel-provided DAC compare and merge strategy is a nice tool that helps us identify differences between a source DAC repository and a target DAC repository. It also helps figure out which objects are to be moved to the target.
From there onwards, however, the merge process is manual and a big waste of time. One shortcut used by many administrators is to simply copy the repository from the lower environment to the higher environment. This works, but it has one major drawback: the last ETL process id is inherited from the lower environment.
The implication is that the run ids stored in the Run Stats table have no relation to the run number stored in the W_PARAM_G table, and it becomes a major challenge to figure out which ETL run did what. One then has to meddle with the DAC repository tables to get the last ETL process stored in the DAC repository in sync with the numbers stored in W_PARAM_G. This is a risky proposition, especially if one has not understood the relationships between the repository tables and the significance of all the keys Siebel stores in the repository to maintain those relationships.
So, here is a solution which we have tried out (with success) to move a DAC repository from a lower environment to a higher environment without making a hash of the run history. The basic idea is to move only a few selected tables from the source environment to the target instead of doing a complete repository overwrite.
The steps are as below:
Step 1: Take a backup (export) of the target DAC repository into a separate folder (just for backup).
Step 2: Import the schema DAC repository into the target DAC repository.
Step 3: Export the tables listed below from the source DAC repository into a separate folder that contains only these 20 tables.
Sr. No | Entity | Main Table
1 | Database Connections | W_ETL_DBCONN
2 | Database Indices | W_ETL_DB_INDEX
3 | Database Tables | W_ETL_TABLE_DT
4 | Execution Plan | W_ETL_DEFN
5 | Execution Plan — Database Connections | W_ETL_DEFN_DB
6 | Execution Plan — Subject Area | W_ETL_DEFN_SA
7 | Execution Plan — Pre-Post Steps | W_ETL_DEFN_STEP
8 | Group | W_ETL_GROUP
9 | Group Table | W_ETL_GRP_TBL
10 | Index Columns | W_ETL_INDEX_COL
11 | Indices | W_ETL_INDEX
12 | Informatica Folder | W_ETL_FOLDER
13 | Tables | W_ETL_TABLE
14 | Tasks | W_ETL_STEP
15 | Task Dependencies | W_ETL_STEP_DEP
16 | Task Phase | W_ETL_PHASE
17 | Task Tables | W_ETL_STEP_TBL
18 | Subject Area | W_ETL_SA
19 | Subject Area/Group | W_ETL_SA_GROUP
20 | System Properties | W_ETL_SYSPROP
Step 4: Import these tables into the target DAC repository.
Step 5: After importing, make the necessary changes in the Setup tab: the database connections and the Informatica server setup in the target DAC repository.
Step 6: Refresh all dates (for a full load).
The steps to refresh are: Tools –> ETL Management –> Reset Data Warehouse
Step 7: Before running the full load, truncate the tables mentioned below; please take a backup before truncating.
Tables:
  1. S_ETL_RUN from OLTP
  2. W_ETL_RUN_S from OLAP
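The backup-then-truncate step can be sketched as follows. This uses an in-memory SQLite database purely as a stand-in so the pattern is runnable; a real Siebel warehouse runs on Oracle, DB2 or SQL Server, where you would use TRUNCATE TABLE instead of DELETE, and the column list here is invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Stand-in for the OLAP run table; the real schema differs.
conn.execute("CREATE TABLE W_ETL_RUN_S (run_id INTEGER, status TEXT)")
conn.execute("INSERT INTO W_ETL_RUN_S VALUES (1, 'COMPLETED')")

# Step 7a: back up the table before touching it.
conn.execute("CREATE TABLE W_ETL_RUN_S_BAK AS SELECT * FROM W_ETL_RUN_S")

# Step 7b: empty the table ahead of the full load
# (DELETE here; TRUNCATE TABLE on a real RDBMS).
conn.execute("DELETE FROM W_ETL_RUN_S")

backup_rows = conn.execute("SELECT COUNT(*) FROM W_ETL_RUN_S_BAK").fetchone()[0]
live_rows = conn.execute("SELECT COUNT(*) FROM W_ETL_RUN_S").fetchone()[0]
print(backup_rows, live_rows)  # prints: 1 0
```

The same pattern applies to S_ETL_RUN on the OLTP side.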

Any better ideas out there?
Inputs from Raghunatha Yadav & Sanjay Rao


Thursday, February 7, 2008

Limitation of Scripting in Siebel


Siebel CRM is one of the most effective CRM solutions available to date. Clients with complex business requirements opt for Siebel to help them identify, acquire and retain their customers. As Siebel professionals we use Configuration, Scripting and EAI to build an application.
There are nearly 200 different types of objects that can be configured in Siebel to reflect the changes in the User Interface Layer, Business Objects Layer, and Data Objects Layer. Also, the Siebel objects have intertwined multiple relationships between them.
We need to write scripts for complex requirements (according to Siebel best practices, scripting should be the last resort, used only where configuration alone cannot accomplish the business and technical requirements).
There are primarily four programming languages used when customizing Siebel: Siebel Visual Basic (Siebel VB), Siebel eScript, JavaScript (available as of Siebel 7) and Java (through Java Business Services).
But there are some limitations of scripting in Siebel which are cited below:
  • Siebel eScript is case sensitive.
  • Siebel eScript code is executed one statement at a time, in the order in which it is read.
  • If objects are not instantiated properly, memory leaks can result.
  • Extra care is needed when declaring variables at the application level.
  • Scripted code is difficult to maintain.
Some of the biggest demerits of Scripting in Siebel are:
  1. May not be fully upgradable. In Siebel upgrades, scripts undergo extensive retrofitting

  2. One object can be scripted in only one programming language. For example, if you are writing a business service in eScript and want to access a COM/ActiveX object, in most cases you need to switch to a different object written in VBScript. If multiple scripting languages could be used within one object, there would be no need to switch between objects.

  3. A single method of an object can contain only 16 KB of code, which sometimes becomes critical, especially when one wants to modify existing functionality.

  4. A script written on the browser side can easily call server-side script and get it executed, but the reverse is not true.

There are numerous benefits of scripting in Siebel in comparison to its minuscule limitations. However, it is vital that one is aware of the limitations too.

Tuesday, February 5, 2008

Unleashing the Giant – Siebel CRM


The first “C” program that I wrote printed “Hello World”. On similar lines, in my first blog post I would like to say “Welcome, and explore the Siebel street”. Siebel has been the most active promoter of CRM for more than a decade. How many customers have been better served because Tom Siebel decided to help popularize Customer Relationship Management? The answer: innumerable.
Siebel CRM has been a consistent market leader in the CRM software arena. Best-in-class customization, a top-notch user interface, strong integration tools, diversified industry-wide applications, easy upgrades and reliable customer support are a few key attributes of this giant. Siebel CRM caters to clients in verticals such as finance, pharma, telecom, hi-tech manufacturing, insurance, sales and marketing.
Customizations in Siebel applications are implemented using Configuration, Scripting and Workflows. Siebel EAI integrates third-party applications and objects with Siebel, and Siebel EIM populates external data in interface tables and user data in base tables. Siebel Analytics serves the needs of report generation, analysis of business trends, logistics, etc. This giant's robustness lies in the sheer depth and breadth of its functionality. Siebel products are consistently rated at the apex in sales, marketing, customer service and other CRM areas.
Following Oracle's big-ticket acquisition, Siebel CRM has been the lustrous feather in Oracle's cap. Oracle already has PeopleSoft, JD Edwards and Oracle Apps under its belt, and now the bellwether, Siebel CRM. With so many CRM solution providers in its kitty, Oracle plans to roll out Oracle Fusion in the near future.
Moving forward, Oracle plans to make Siebel the centerpiece of its CRM product while bringing together the functionality of all its acquisitions under Fusion. So if I were to choose between a newcomer, a struggling cameo and a strong veteran with no signs of fading, the choice would be clear: Siebel CRM.
Watch this space for more on various Siebel CRM topics, techno-functional issues, problems and resolutions, experiences, etc.