Wednesday, January 18, 2012

Test Consulting: How to Improve a QA Area


Is your client having difficulty measuring QA performance? Is your client undecided on which test strategy to follow for a new application implementation? Is your client looking for testing tool recommendations, or needing to improve the use of existing tools?
Hexaware has a dedicated Test Consulting practice where testing experts provide test consulting services to clients. During consulting engagements, Hexaware assesses and evaluates the Approach, People, and Technology, and fills in the gaps by bringing in our domain experience, best practices, frameworks, and tools experience. The consulting services also include tool selection, tool optimization, and TCoE creation and/or optimization.
If you want to learn more about test strategy, the following information will help you execute a test strategy engagement.
Test Strategy Approach
The first step is to identify the problem(s) that the client is facing and define the strategy objectives. After that, I recommend following this approach to execute a test strategy project:
Assessment Areas
As part of the information-gathering phase, we leverage Hexaware's proprietary APT™ (Approach, People, and Technology) methodology to focus on the right areas and meet the strategy objectives. Hexaware's APT™ Methodology is the foundation of all of our QA service offerings; each component is described below:
• Approach: This component lays the foundation for the processes each client uses as part of testing
• People: This component covers the part of the IT organization focused on testing; Hexaware analyzes the groups, roles, and responsibilities involved in QA and testing
• Technology: This component covers the use of QA and test automation tools to improve efficiency, optimize technology, and lower costs.

At the end of the project, we provide the client with the following deliverables:
• Current State: An analysis of the current state with regard to the testing objectives
• Gaps: The gaps found between the current state, best practices, and the desired state
• Recommendations: Our recommendations to close the gaps and meet the organization's objectives
• Implementation Road Map: A recommended path to follow in order to implement the recommendations.
As a result of the analysis phase, we present the current state to our clients in a quantitative graph. This graph evaluates all relevant aspects of an IT organization and prioritizes each category according to the testing objectives. One example of this kind of graph is illustrated below.
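As a rough illustration, a graph of this kind can be generated directly from assessment scores. The sketch below is a minimal example only; the categories, current scores, and desired scores are hypothetical, not actual client data.

```python
# Minimal sketch of a QA assessment graph; categories and scores are
# hypothetical examples, not actual client data.
import matplotlib.pyplot as plt

categories = ["Test Process", "Test Organization", "Automation", "Tools", "Metrics"]
current = [2.5, 3.0, 1.5, 2.0, 1.0]   # current-state maturity (scale 0-5)
desired = [4.0, 4.0, 3.5, 3.5, 3.0]   # desired-state maturity (scale 0-5)

fig, ax = plt.subplots()
ax.barh(categories, desired, color="lightgray", label="Desired state")
ax.barh(categories, current, color="steelblue", label="Current state")
ax.set_xlabel("Maturity score (0-5)")
ax.set_title("QA Assessment: Current vs. Desired State")
ax.legend()
plt.tight_layout()
plt.show()
```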

One such assessment was provided as part of a test strategy we created for a leading bank in Mexico for a T24 product implementation. The benefits outlined in this strategy were to reduce testing cycles by 30%, automate at least 50% of the manual test cases, and achieve a defect-free implementation using robust and repeatable testing processes.
Other examples of metrics commonly used as strategy objectives include increasing automation coverage by 30%, increasing productivity by 25%, and reducing overall testing cost by 15%.
A test consulting practice is an area full of innovation, industry best practices, and shared experiences. Now, with Hexaware blogs, all of us can formally share our experience, and our colleagues can leverage it for future assignments.

Tuesday, January 17, 2012

Good Attributes to Keep in Mind When Designing Automated Test Cases


Programming remains the biggest and most critical component of test case automation. Hence, the design and coding of test cases is extremely important if their execution and maintenance are to be effective.

Some fundamental attributes of good automated test cases are:
  • Simple
  • Modular
  • Robust
  • Reusable
  • Maintainable
  • Documented
  • Independent
1) Simplicity: The test case should have a single objective. Multi-objective test cases are difficult to understand and design. A test case should generally contain no more than 10-15 test steps; this may grow depending on the process being tested, but staying within 10-15 steps keeps the test case clear. Multipurpose test cases are likely to break or give misleading results, and if the execution of a complex test leads to a system failure, it is difficult to isolate the cause of the failure.

2) Modularity: Each test case should have a setup phase before and a cleanup phase after the execution of the test steps. The setup phase ensures that the initial conditions are met before the start of the test steps. Similarly, the cleanup phase puts the system back in its initial state, that is, the state prior to setup. Each test step should be small and precise. Not every test case will need both setup and cleanup; this varies from case to case. The test steps are building blocks from reusable libraries that are put together to form multi-step test cases.
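As a minimal sketch of this setup/cleanup structure, assuming pytest as the test runner (the Session class and its methods are hypothetical stand-ins for the application under test):

```python
# Minimal pytest sketch of a test case with setup and cleanup phases.
# "Session" is a hypothetical stand-in for the application under test.
import pytest

class Session:
    def __init__(self):
        self.logged_in = False
    def login(self):
        self.logged_in = True
    def logout(self):
        self.logged_in = False

@pytest.fixture
def session():
    s = Session()
    s.login()        # setup phase: establish the initial conditions
    yield s          # the test steps execute here
    s.logout()       # cleanup phase: restore the initial state

def test_login_state(session):
    # A small, precise test step with a single objective
    assert session.logged_in
```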

3) Robustness and Reliability: A test case verdict (pass or fail) should be assigned unambiguously and be easy to understand. Robust test cases can ignore trivial failures, such as a one-pixel mismatch in a graphical display. Care should be taken to minimize false test results. Test cases must have built-in mechanisms to detect and recover from errors. For example, a test case need not wait indefinitely if the software under test has crashed; instead, it can wait for a while and terminate an indefinite wait by using a timer mechanism.
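A minimal sketch of such a timer mechanism, assuming a polling-based check (the responsiveness probe is a hypothetical placeholder):

```python
# Minimal sketch of a timer mechanism that prevents an indefinite wait.
# "app_is_responsive" is a hypothetical probe of the software under test.
import time

def wait_until(condition, timeout=30.0, poll=0.5):
    """Poll `condition` until it returns True or `timeout` seconds pass."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(poll)
    return False  # an unambiguous failure verdict instead of a hang

def test_app_responds():
    app_is_responsive = lambda: True  # hypothetical placeholder probe
    assert wait_until(app_is_responsive, timeout=10.0)
```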

4) Reusability: Test steps should be built to be configurable; that is, variables should not be hard-coded. They can take their values from a single configuration file or data table. Attention should be given while coding test steps to ensure that a single, central source of configuration values is used instead of multiple, decentralized, hard-coded variables. Test steps should be made as independent of the test environment as possible. The automated test cases should be categorized into different groups so that subsets of test steps and test cases can be extracted and reused for other platforms and/or configurations. Finally, in GUI automation, hard-coded screen locations must be avoided.
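A minimal sketch of configuration-driven test steps using Python's standard configparser; the file name, section, and keys are hypothetical, and the browser object is assumed to expose a get() method:

```python
# Minimal sketch of configuration-driven test steps: no hard-coded values.
# "config.ini", the "env" section, and its keys are hypothetical; the
# browser object is assumed to expose a get() method.
import configparser

config = configparser.ConfigParser()
config.read("config.ini")  # a single, central source of test values

def open_login_page(browser):
    # The URL comes from configuration, not from the test step itself
    browser.get(config["env"]["base_url"] + "/login")
```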

5) Maintainability: Any change to the software under test will have an impact on the automated test cases and may require changes to the affected test cases. Therefore, an assessment of the test cases that need to be modified should be conducted before a change to the system is approved. The test suite should be organized and categorized in such a way that the affected test cases are easily identified. If a particular test case is data-driven, it is recommended that the input test data be stored separately from the test case and accessed by the test procedure as needed. Test cases must comply with coding standards. Finally, all test cases should be kept under a version control system.
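A minimal sketch of the data-driven pattern, assuming pytest; the CSV file name and its columns are hypothetical examples:

```python
# Minimal pytest sketch of a data-driven test: input data is stored
# separately from the test case. "login_data.csv" and its columns are
# hypothetical examples.
import csv
import pytest

def load_rows(path="login_data.csv"):
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

@pytest.mark.parametrize("row", load_rows())
def test_login(row):
    # Changing the data requires no change to the test code itself
    assert row["username"]
    assert row["expected_result"] in ("pass", "fail")
```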

6) Documented: The test cases and test steps must be well documented. Each test case gets a unique identifier, and its purpose is clear and understandable. The author's name, the date of creation, and the date of last modification must be documented. There should be a traceability matrix linking the test case to the features and requirements it checks. Any situation under which the test case cannot be used is clearly described. The environment requirements are clearly stated, along with the source of the input test data (if applicable). Finally, the pass/fail evaluation criteria are clearly described.
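A minimal sketch of what such documentation might look like when embedded in the test itself; the ID, author, dates, and requirement reference are hypothetical examples:

```python
# Minimal sketch of an embedded test-case record; the ID, author, dates,
# and requirement reference are hypothetical examples.
def test_tc_001_valid_user_can_log_in():
    """
    Test Case ID:  TC-001
    Purpose:       Verify that a valid user can log in.
    Traceability:  REQ-AUTH-01
    Author:        J. Tester
    Created:       2012-01-10   Last modified: 2012-01-15
    Preconditions: Test user exists; environment URL is configured.
    Not usable:    When single sign-on is enforced.
    Pass criteria: Login succeeds and the dashboard is displayed.
    """
    ...  # test steps go here
```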

7) Independent and Self-sufficient: Each test case is designed as a cohesive entity, and test cases should be largely independent of each other. Each test case consists of test steps that are naturally linked together, and the predecessor and successor of each test step within a test case should be clearly understood.


Monday, January 16, 2012

Manual Testing vs. Automation Testing


As the world moves toward a new era in technology and business with respect to testing, let us walk through Manual Testing and Automation Testing.

1. Manual Testing is Boring
• No one wants to keep filling in the same forms
• There is nothing new to learn when one tests manually
• People tend to neglect running manual tests
• No one maintains a list of the tests required to be run if they are manual tests

Automated tests, on the other hand, are code
• They are fun and challenging to write
• One has to carefully think of design for reusability and coverage
• They require analytical and reasoning skills
• They represent a contribution that remains usable in the future

2. Manual Testing is not reusable
• The effort required is the same each time
• One cannot reuse a Manual Test

Automated Tests are completely reusable
• Once written, automated tests form part of the codebase
• They can be reused without any additional effort for the lifetime of the project

3. Manual Testing has a high risk of missing out on something
• Each time a developer runs manual tests, it is likely they will miss an important test case
• New developers may have no clue about the battery of tests to be run
Automated Tests have zero risk of missing a pre-decided test
• Once a test becomes part of Continuous Integration, it will run without someone having to remember to run it

4. Manual Tests do not provide a safety-net
• Manual tests are run post-facto and hence only drive bug-patching
Automated Tests provide a safety-net for refactoring / additions
• Even new developers who have never touched the code can be confident about making changes

5. Manual Tests have no training value
• Manual tests are a mechanical process and have no defined training value
Automated Tests act as documentation
• Reading a set of Unit Tests clarifies the purpose of a codebase
• They provide a clear contract and define the requirement
• They provide visibility into different use cases and expected results
• A new developer can understand a piece of code much more by looking at Unit Tests than by looking at the code
• Unit Tests define the expected behavior of the code, as the sketch below illustrates
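As a minimal sketch (apply_discount is a hypothetical function, not from any real codebase), a unit test can read like a specification:

```python
# Minimal sketch: a unit test that reads like a specification.
# "apply_discount" is a hypothetical function, not from a real codebase.
import unittest

def apply_discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_ten_percent_off(self):
        # Documents the contract: 10% off 100.00 is 90.00
        self.assertEqual(apply_discount(100.00, 10), 90.00)

    def test_invalid_percent_is_rejected(self):
        # Documents the edge case without reading the implementation
        with self.assertRaises(ValueError):
            apply_discount(100.00, 150)

if __name__ == "__main__":
    unittest.main()
```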


Wednesday, January 11, 2012

MoSCoW Principle – Key to the Success of Any Project


Problems hindering a successful project:

To make any project successful, requirements and project objectives should be prioritised properly. Although a few methods have been followed for prioritisation, the results have often proved flawed.

Problem 1: Prioritizing the Project Requirements

An important factor for any project to be successful is to ensure that the requirements are prioritised. Sometimes it is the customer's fault for wanting the entire system delivered now; sometimes it is the project manager's fault for not discussing priorities with the customer.

However, prioritising is not an easy process. When using a number system, people find it troublesome to prioritise requirements as 1, 2, 3, and so on; after all, who wants their requirement to be a "2" or even a "3"? As a result, all requirements become a "1", which is useless. So effective prioritisation is important, but how can it be done if number systems are not effective?

Problem 2: Prioritizing the Project Objectives

Once a set of requirements has been prioritised, the key project objectives, such as scope, quality, timescale, and resources, should be prioritised as well.
If nearly all requirements are prioritised as "Must", then there is not much flexibility in the scope of a project. However, many studies have shown that it is better for a project to be delivered on time, even with fewer features, than to be delivered late with a full set of features.
If quality is sacrificed, faults will occur in the software. One way around this is to train the users of the new system so that they use it only in the proper fashion and know how to work around any bugs that are discovered.

Finally, all systems must be produced to a budget, and a business does not have unlimited resources to put into a project. Moreover, the business case normally assumes a rate of return, which will be considerably reduced if the resources on a project are increased significantly. Therefore, resources have a strong case for being the most important factor.

Regardless, you cannot "have it all and have it now"; a balanced and planned prioritisation of these factors must take place if a project is to have a chance of delivering business value. If it does not, then the fifth factor, risk, goes sky-high, ceasing to be a risk and becoming an inevitability.

MoSCoW Principle

To overcome the above problems, there is a useful method called MoSCoW, first developed by Dai Clegg of Oracle UK Consulting.

It stands for:

M – "MUST" be implemented. Requirements in this category must be implemented and must not be left out for any reason. If they are not delivered, the project is a failure.
S – "SHOULD" be implemented. These requirements should be implemented in the project if possible. If one proves difficult or impossible to implement, an alternative solution is also acceptable.
C – "COULD" be implemented. These requirements are nice to have rather than mandatory. They are desirable if time and resources permit.
W – "WON'T" be implemented. These requirements won't be implemented this time, but are likely to be implemented in the future.
The two lower-case "o"s were added to make the word easier to pronounce.
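As a minimal sketch of how these categories might be applied in practice (the requirements listed are hypothetical examples):

```python
# Minimal sketch of tagging requirements with MoSCoW priorities.
# The requirements listed are hypothetical examples.
from enum import Enum

class MoSCoW(Enum):
    MUST = "Must"
    SHOULD = "Should"
    COULD = "Could"
    WONT = "Won't"

backlog = [
    ("User login and authentication", MoSCoW.MUST),
    ("Password reset by email", MoSCoW.SHOULD),
    ("Dark-mode user interface", MoSCoW.COULD),
    ("Mobile offline mode", MoSCoW.WONT),
]

# Everything except "Won't" is in scope; "Must" items are non-negotiable.
in_scope = [name for name, p in backlog if p is not MoSCoW.WONT]
musts = [name for name, p in backlog if p is MoSCoW.MUST]
print("In scope:", in_scope)
print("Non-negotiable:", musts)
```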

Conclusion

To deliver business value and be successful, a project requires prioritisation of:
  • The requirements.
  • The main project objectives: scope, quality, timescale and resources.

Monday, January 2, 2012

How to Achieve the Full Potential of Test Automation?


Why test automation?

Software testing is an art aimed at evaluating an attribute or capability of a program or system and determining whether it meets the expected outcomes. Software testing can be very costly and time consuming.

Hence automation is a good way to cut down time and cost.

Benefits of test automation:

Test automation provides numerous benefits to any organization. Some of them are listed below:
  • Reduced test execution time and cost
  • Increased test coverage on each testing cycle
  • Increased value of manual testing effort
  • Reduced manual work
Reasons for failure of test automation:

There are a number of reasons why test automation efforts are unproductive. Some of the most common include:
  • Poor quality of tests being automated
  • Lack of good test automation framework and process
  • Inability to adapt to changes in the system under test
Five Steps for Successful Test Automation

To avoid the above reasons for failure and to achieve the full potential of test automation, emphasis has to be given to the steps below:
  • Planning
  • Preparation
  • Proof of concept
  • Implementation
  • Maintenance
Planning

Each phase in the project must be planned with a clear view of the current testing methods and knowledge of the infrastructure. Gather information from key people who have experience in test automation.

Preparation

During the preparation phase, a pilot project should be defined and the test cases that need to be automated should be selected. Define the roles and responsibilities and prepare the input test data.
Proof of Concept

The test automation tool must be configured to be compatible with the application under test (AUT). The tool must be able to capture user actions. Since more and more application types are in use, configuration is not always easy; it is possible that the test tool will not recognize the application out of the box, which means you will need to configure the tool.
The team then needs to automate a limited number of test cases to prove that the application can be automated. Select an easy, a normal, and a complex test case, and demonstrate the execution and reporting of the automated test cases to the decision makers.

Implementation

If the proof of concept is successful, automation of the selected test cases (the scope defined in the preparation step) can start. Analyze the selected tests and think about data separation, functional decomposition, and the reusability of certain business components. Modularize your scripts into clear-cut building blocks. During this phase, you can also set up a test automation framework, which can vary from documentation on tool usage to a full-scale framework based on a spreadsheet or database layer.
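A minimal sketch of this decomposition, with hypothetical building blocks and a hypothetical data file:

```python
# Minimal sketch of functional decomposition with data separation.
# The function names and "orders.json" are hypothetical building blocks.
import json

def load_test_data(path="orders.json"):
    """Data separation: test inputs live outside the scripts."""
    with open(path) as f:
        return json.load(f)

def login(session, user):
    """Reusable business component (placeholder body)."""
    ...

def place_order(session, order):
    """Reusable business component (placeholder body)."""
    ...

def test_order_flow():
    # Clear-cut building blocks composed into one business flow
    data = load_test_data()
    session = object()  # placeholder for the application session
    login(session, data["user"])
    place_order(session, data["order"])
```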

Maintenance

A test automation project grows together with the application under test. A new release may offer new functionality that needs to be automated. The existing automated scripts need to be maintained, new automated tests need to be added, and another organization may take over the automation project in the future. The team has to prepare for these events by providing documentation for each script, by continually allocating the proper resources to the project, and by training newcomers in the tool, the scripts, and the execution.

Conclusion

Test automation in today's competitive environment is needed more than it is merely desired, in order to keep operating efficiently and to considerably cut costs and effort without compromising quality or security. However, you must adopt a well-planned, structured approach to automation to ensure a higher return on investment. It is suggested that, prior to opting for automation, QA teams perform an exhaustive automation assessment to identify the right set of automation methods, tools, and techniques that will complement their QA needs.

HP Sprinter – New Era in Manual Testing


Problems faced in manual testing:


Manual software testing is the oldest mode of testing software and is still in practice. But all these years of practice do not make it the routine one would expect. We first need to accept the facts below.
  • Manual testing can often be very tedious and time consuming. The productivity of a tester depends on multiple sources (test scripts, test data, tracked defects, the application under test).
  • In today's world, where software needs to operate across multiple operating system environments and web browsers, manually testing software adds a large amount of time to releasing an application.
  • Manual testing is often error prone. Test steps are easily missed, test data is often entered incorrectly, and defects are often captured incorrectly, thereby decreasing the overall quality of the application, increasing the risk the application poses, and increasing costs due to the associated rework.
What is HP Sprinter?


HP Sprinter is an easy-to-use solution from HP that delivers accurate and efficient manual software testing, fully integrated with HP Application Lifecycle Management and HP Quality Center.

How does HP Sprinter ease manual testing?


With this new era of HP Sprinter, manual software testing does not have to be tedious, error prone, or time consuming anymore. Some of the features are:
  • HP Sprinter dramatically reduces time needed to perform manual software tests and increases their accuracy and effectiveness.
  • Manual tests are launched from HP Application Lifecycle Management or HP Quality Center into HP Sprinter, where the tester carries out the test.
  • The actions and results of the test are recorded, and the results are saved within HP Application Lifecycle Management or HP Quality Center.
  • Defects can also be logged directly within HP Application Lifecycle Management or HP Quality Center without leaving HP Sprinter, helping bridge the gap that exists with developers.
  • HP Sprinter also handles automated injection of data into fields under test, increasing the speed and accuracy with which a test can be executed.
  • HP Sprinter allows screen capturing, screen annotation, and movie recording. It can also be used to automatically record and log a tester's activities and actions when executing exploratory testing without pre-defined steps.
  • HP Sprinter’s mirror testing capabilities allow users’ actions to be automatically replicated across multiple systems hosting multiple environment configurations.
Conclusion:


HP Sprinter thus helps streamline manual testing and improve collaboration and communication. It increases the speed of execution, improves productivity, reduces costs, and accelerates application delivery.
