Wednesday, February 29, 2012

Business Objects Content Management planning

Hi BOOglers,
Let us resume the Administration track with content management planning in Business Objects. I hope this will be interesting and helpful for administrators who are starting out.
A content management plan is a collection of procedures used to manage Business Objects workflows in a collaborative and manageable environment. It establishes who needs access to what. Appropriate content management planning is an important factor when planning a Business Objects Enterprise system. Because of a lack of expertise and time, we often end up with a BOE environment that is not structured properly; as a result, the BOE environment becomes more difficult to manage and maintain.
Below are simple measures to consider in order to plan our BI content.
  • A hierarchy that is easy to understand and secure to implement
  • Users access only the documents they are interested in and authorized to view
  • An efficient structure so users can easily find the information they need
  • Easy access to information in the system increases its effectiveness
Points to consider for content management planning
1. Creating a folder structure and organizing objects
As a first step we need to segregate the BI content according to the users who will consume the information. This will enable us to decide the folder hierarchy of the system.
For example, for a set of reports belonging to the Marketing department, we manage them under a Marketing parent folder. Subfolders such as Marketing-Americas or Marketing-Asia Pacific can then separate the reports further.
2. Organize users by creating a Group/User structure
Now we need to organize a user/group structure that controls access to the BI content. This will mirror the folder structure; in our example we will have a Marketing group, further categorized by region.
3. Set access levels for folders and objects
Next we need to establish the security access levels for folders and objects contained in our group/user structure. This is extremely critical, as we risk setting inappropriate security access levels for our users. Determining the needs of our users will help us establish who needs access to which folders and objects within the system. For example, only users of the Marketing group can access the Marketing folder, based on their functional roles.
Custom access levels created for each functional user group are depicted below.
And finally, this is how security is applied on a folder at the group level.
4. Creating corporate categories and assigning objects
The advantage of categories is that they help users search for and access the reports that are appropriate to them. Categories can be configured according to user requirements.
For example, suppose a Marketing department user searches for reports that are specific to vendor evaluation.
Even if the vendor-related reports are managed across different folders, we can create a separate category called Vendor Management and group the corresponding reports under it. As a result, the user does not need to visit every folder searching for the required report; they can simply access the category they are authorized to use.
In the screen below, though the reports Marketing Dashboard and Sales Dashboard physically exist in the Marketing and Sales folders respectively, they can still be accessed from the Vendor Management category.
The above are the initial steps we can take to create a well-planned content management system.
I look forward to your inputs and feedback. Thanks for reading!

Database Tuning


Why Database Tuning?

It is a primary responsibility of a performance engineer to provide tuning recommendations for a database server when it fails to respond as expected.
Performance tuning can be applied in the following scenarios:
  • If the real user waiting time is high
  • To keep your database updated on par with your business
  • If you want to use your hardware optimally
General Tuning Recommendations
  • SQL Query Optimization
Poorly written SQL queries can push any database to respond badly. It is widely accepted that around 75% of performance issues arise from poorly written SQL. Manually tuning SQL queries is practically difficult, but several SQL profiling tools are available in the market.

The following guidelines are suggested when writing SQL queries:

✔  Avoid table scan queries – especially long table scans.
✔  Utilize the indexes promptly.
✔  Avoid complex joins wherever possible – especially unions.
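To make the table-scan point concrete, here is a minimal sketch using SQLite's `EXPLAIN QUERY PLAN` (the `orders` table and index name are invented for illustration; plan syntax and output wording vary by database engine and version):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer_id INTEGER, total REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(i, i % 100, i * 1.5) for i in range(1000)])

# Without an index, the optimizer has no choice but a full table scan.
plan_before = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 42").fetchall()
print(plan_before[0][-1])  # e.g. "SCAN orders" (wording varies by version)

# Index the filter column; the plan switches to an index search.
conn.execute("CREATE INDEX idx_customer ON orders (customer_id)")
plan_after = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 42").fetchall()
print(plan_after[0][-1])  # e.g. "SEARCH orders USING INDEX idx_customer"
```

The same before/after check with the native `EXPLAIN` facility is a quick way to confirm that an index you added is actually being used.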
  • Memory Tuning
An Oracle database operates most efficiently when it can use its memory structures instead of performing disk I/O.

The theme behind this is that a read from (or write to) memory is much faster than a read from (or write to) disk.


For efficient memory use:
✔  Ensure that the buffer hit ratio, shared pool (library and dictionary hit ratio) meet the recommended levels.
✔  Properly size all other buffers including the redo log buffer, PGA, java pool, etc.
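The buffer cache hit ratio mentioned above can be computed from the statistics Oracle exposes (e.g. in V$SYSSTAT) using the classic formula. A minimal sketch with made-up sample values; acceptable thresholds vary by workload:

```python
def buffer_cache_hit_ratio(physical_reads, db_block_gets, consistent_gets):
    """Classic Oracle formula:
    1 - physical_reads / (db_block_gets + consistent_gets)."""
    logical_reads = db_block_gets + consistent_gets
    return 1.0 - physical_reads / logical_reads

# Hypothetical values, as they might be read from V$SYSSTAT.
ratio = buffer_cache_hit_ratio(physical_reads=9_000,
                               db_block_gets=120_000,
                               consistent_gets=380_000)
print(f"Buffer cache hit ratio: {ratio:.2%}")  # 98.20%
```

A low ratio suggests the buffer cache may be undersized, though a high ratio alone does not prove the database is healthy.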
  • Disk IO Tuning
It is crucial that there be no IO contention on the physical disk devices. It is therefore important to spread IO’s across many devices.

Spreading data across disks to avoid I/O contention

✔  You can avoid bottlenecks by spreading data storage across multiple disks and multiple disk controllers.
✔  Put databases with critical performance requirements on separate devices. If possible, also use separate controllers from those used by other databases. Use segments as needed for critical tables and partitions as needed for parallel queries.
✔  Put heavily used tables on separate disks.
✔  Put frequently joined tables on separate disks.
✔  Use segments to place tables and indexes on their own disks.
✔  Ensure that indexes are properly tuned.
  • Sorting Tuning
Ensure that your sort area size is large enough so that most of your sorts are done in memory; not on disk.
  • Operating System Concerns
Ensure that no memory paging or swapping is occurring.
  • Lock Contention
Performance can be devastated if a user is waiting for another user to free up a resource.
  • Wait Events
Wait events help to locate where the database issues may be.

Conclusion

In actual load testing, it is essential to simulate the database as it is in production. As a result, many database issues may be encountered. The solutions given above are general tuning mechanisms for any database and address most common database performance issues.

Thanks for reading this blog.

Tuesday, February 28, 2012

Performance Testing in the Cloud


What is Cloud?

A cloud consists of three components: one or more datacenters, a network, and a "zillion" devices. That's what it's all about: the components and their interaction. Ultimately the user is interested in the end-to-end experience, regardless of the components.

A cloud is classified as private or public based on the location of the data center where the services are virtualized. In general, a public cloud is an environment that exists outside the company firewall and may be a service or technology offered by a third-party vendor, while a private cloud sits behind the firewall for the exclusive benefit of an organization and its customers.


Source : HP

Testing and the Cloud
While many companies are approaching cloud computing with cautious optimism, testing appears to be one area where they are willing to be more adventurous. There are several factors that account for this openness toward testing in the cloud:

Testing is a periodic activity and requires new environments to be set up for each project. Test labs in companies typically sit idle for long periods, consuming capital, power and space. Approximately 50% to 70% of the technology infrastructure earmarked for testing is underutilized, according to both anecdotal and published reports.

Testing is considered an important but non-business-critical activity. Moving testing to the cloud is seen as a safe bet because it doesn’t include sensitive corporate data and has minimal impact on the organization’s business-as-usual activities.

Applications are increasingly becoming dynamic, complex, distributed and component- based, creating a multiplicity of new challenges for testing teams. 
For instance, mobile and Web applications must be tested across multiple operating systems and updates, multiple browser platforms and versions, different types of hardware and large numbers of concurrent users to understand their real-time performance. The conventional approach of manually creating in-house testing environments that fully mirror these complexities consumes huge capital and resources.

Why Opt for Cloud Computing as a Source of Performance Testing
Many companies use their Web sites for sales and marketing purposes. In fact, a company can spend millions of pounds creating engaging content and running promotional campaigns to draw users to its site. Unfortunately, if the site crashes or response times crawl, all that time, energy and money could be wasted.
A first case is when demand for a service varies with time. Provisioning a data center for the peak load it must sustain a few days per month leads to underutilization at other times, for example. Instead, Cloud Computing lets an organization pay by the hour for computing resources, potentially leading to cost savings even if the hourly rate to rent a machine from a cloud provider is higher than the rate to own one. A second case is when demand is unknown in advance.

For example, a web startup will need to support a spike in demand when it becomes popular, followed potentially by a reduction once some of the visitors turn away. Finally, organizations that perform batch analytics can use the "cost associativity" of cloud computing to finish computations faster: using 1000 EC2 machines for 1 hour costs the same as using 1 machine for 1000 hours. For the first case, a web business with demand varying over time and revenue proportional to user hours, the tradeoff can be captured as follows: the left-hand side multiplies the net revenue per user-hour by the number of user-hours, giving the expected profit from using Cloud Computing; the right-hand side performs the same calculation for a fixed-capacity datacenter by factoring in the average utilization, including nonpeak workloads, of the datacenter. Whichever side is greater represents the opportunity for higher profit.
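The tradeoff described above can be sketched numerically. All figures below (net revenue per user-hour, hourly cloud cost, datacenter cost and utilization) are invented purely for illustration:

```python
def cloud_profit(user_hours, revenue_per_hour, cost_cloud_per_hour):
    # Pay-per-use cloud: cost scales with actual usage.
    return user_hours * (revenue_per_hour - cost_cloud_per_hour)

def datacenter_profit(user_hours, revenue_per_hour,
                      cost_dc_per_hour, utilization):
    # Fixed-capacity datacenter: the effective hourly cost is inflated
    # by average utilization, since idle capacity is still paid for.
    return user_hours * (revenue_per_hour - cost_dc_per_hour / utilization)

user_hours = 10_000
revenue = 1.00  # hypothetical net revenue per user-hour

cloud = cloud_profit(user_hours, revenue, cost_cloud_per_hour=0.12)
dc = datacenter_profit(user_hours, revenue,
                       cost_dc_per_hour=0.08, utilization=0.40)
print(cloud, dc)  # whichever is greater is the more profitable option
```

With these sample numbers the cloud side wins even though its hourly rate is higher, because the datacenter's 40% average utilization inflates its effective cost.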

Cloud Areas and their Differentiators:
  • Private cloud is on premise, dedicated to one organization, and often located in the same datacenter(s) as the legacy applications; it addresses the needs of mission-critical applications and applications with access to large amounts of confidential data.
  • Hosted private cloud is really a variant of the previous category, located either in the client's datacenter or the hosting provider's, but managed by a hosting provider. It addresses the same needs as the private cloud.
  • Virtual private cloud is a single or multi-tenant environment, located in a clearly defined geography, with documented and auditable security processes & procedures, clear SLA’s, addressing the needs of business critical applications. Formal contracts are established, payment is by consumption, but with regular invoices. Access to the environment can be over the internet, VPN or leased lines. HP calls this an Enterprise Class Cloud.
  • Public Cloud is a multi-tenant environment providing IaaS, PaaS and/or SaaS services on a pay-per-use basis without formal contracts. Payment is typically via credit card. Such environments address the needs of web applications, test and development, and large-scale processing (provided not too much data needs to be transferred).


Source: HP

The first two categories are asset intensive and typically do not lend themselves to a pay-per-use model; the latter two are pay-per-use. The first two are typically single-tenant, meaning they are used by one organization; the latter two are most often multi-tenant environments where multiple companies share the same assets.

Performance Testing in Cloud – Benefits & Limitations:
• Fast provisioning using preconfigured images. You can set up the infrastructure you need in minutes.

• Simplified security. All required protections are set up by default, including firewall, certificates, and encryption.

• Improved scalability. Leading load testing solution providers have negotiated with cloud providers to allow users of their software to employ more virtual machines (for the purpose of load testing) than are allowed by default.

• A unified interface for multiple cloud providers. Load testing solutions can hide provisioning and billing details, so you can take maximum advantage of the cloud in a minimum of time.

• Advanced test launching. You can save time and effort by defining and launching load generators in the cloud directly from the load testing interface.

• Advanced results reporting. Distinct results from each geographic region involved in the test are available for analysis.

Limitations:
Although testing from the cloud is, in many cases, more realistic than testing in the lab, simply moving to the cloud is not enough to ensure the most realistic tests. Real users often have access to less bandwidth than a load generator in a cloud data center. With a slower connection, the real user will have to wait longer than the load generator to download all the data needed for a web page or application. This has two major implications:

• Response times measured as-is from the cloud with virtually unlimited bandwidth are better than for real users. This can lead test engineers to draw the wrong conclusions, thinking that users will see an acceptable response time when in reality they will not.

• The total number of connections established with the server will increase, because on average, connections for real users will be open longer than connections for the load generator. This can lead to a situation in which the server unexpectedly refuses additional connections under load.

Conclusion: Companies running performance tests cannot ignore the benefits of cloud computing as they strive to overcome the constraints of their current IT hardware to simulate realistic environments, while struggling to justify the cost of investing in major upgrades.

Protocol – In Performance Testing View


In a performance testing context, "protocol" means the communication protocol used between the physical systems involved: load generators, application servers and web servers.

The key elements of a protocol are:

Syntax: includes data formats and signal levels.
Semantics: includes control information and error handling.
Timing: includes speed matching and sequencing.

A communication protocol is a set of rules that determines how data is transmitted between systems; in other words, it is a system of digital message formats and rules for exchanging those messages in or between computing systems.

Protocols include sending, receiving, authentication and error detection and correction capabilities. Protocols are used for communications between entities in a system. Entities use protocols in order to implement their service definitions.

Multiple protocols can be used in different circumstances, and communication protocols are often used as a suite, or in layers. For example, the Internet protocol suite consists of the application, transport, internet and network interface layers.
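To make the layering concrete, here is a minimal Python sketch: the application layer is an HTTP request we format by hand, riding on the transport layer (a plain TCP socket), with the lower layers handled by the operating system. The tiny local server exists only so the example is self-contained:

```python
import socket
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")
    def log_message(self, *args):  # silence per-request logging
        pass

# Throwaway HTTP server on a free local port (illustrative only).
server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
host, port = server.server_address

# Application layer: a hand-formatted HTTP/1.1 request...
request = f"GET / HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"

# ...carried by the transport layer: a plain TCP socket.
with socket.create_connection((host, port)) as sock:
    sock.sendall(request.encode("ascii"))
    response = b""
    while chunk := sock.recv(4096):
        response += chunk

status_line = response.split(b"\r\n", 1)[0]
print(status_line)  # the HTTP status line, e.g. ... 200 OK
server.shutdown()
```

A load testing tool's "Web – HTTP/HTTPS" protocol choice essentially automates this: it records and replays the application-layer messages while the tool manages the transport connections.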

The communication protocols listed below can be used for performance testing by the Hexaware Performance Testing CoE.
  • Web – HTTP/HTTPS
  • J2EE
  • Citrix
  • .NET (Client & Web)
  • ERP – PeopleSoft, SAP, Siebel, Oracle Apps
  • Web Services
  • SQL
  • Client – Server (COM/DCOM)
  • Mobile
  • Action Message Format (AMF)
  • AJAX (Click and Script)
Classification schemes for protocols usually focus on domain of use and function. The communication protocol can be selected based on the application, or a performance testing tool adviser can be used to finalize the protocol.
From the above synopsis we can understand what a protocol is, how it works, an example of protocol layering/suites, and which protocols are used by the Hexaware Performance Testing CoE. The final paragraph explains how to identify the protocol for a particular domain, application or business function.

Thanks for reading this blog.

Monday, February 27, 2012

Tips for Handling Recording Problems In LoadRunner


While using LoadRunner for our Business Process Testing engagements, we encountered certain recording problems. We have listed some of the most common problems and the steps to troubleshoot them.

Problem 1: NOT PROXIED error in the recording events log.

This error occurs when there is some sort of spyware software installed on the system.

To troubleshoot this issue follow the below steps:

1. Use process explorer to get the list of dlls getting hooked into Internet Explorer.

2. Compare the list with the dlls from a machine where you can record.

3. Run Ad-Aware software. To download this, visit http://www.lavasoftusa.com/software/adaware/. This usually eliminates the spyware programs.

4. Make sure to check carefully the processes running on the machine. If you find any suspicious DLL\exe, just google the name and see if it’s a known spyware. If so, uninstall the software related to this DLL\executable.

5. Sort the list of DLLs by ‘Path Name’ in Process Explorer. The DLLs to scrutinize carefully are the ones which are not located in LoadRunner\bin and c:\windows\system32 or c:\winnt\system32.

Problem 2: “Cannot connect to remote server” in the recording log

This error occurs when communication with the server on that particular port is being filtered.

To troubleshoot this issue follow the below steps:

Try port mapping if it is a HTTPS site.
Go to Recording Options > Port Mapping and add the Target Server’s IP address, Port Number as 443, Service ID as HTTP, Connection Type as SSL. Make sure “Test SSL” returns success. Change the SSL Ciphers and SSL Version to achieve success for “Test SSL”.

Problem 3: “Connection prematurely aborted by the client” error in the recording log.

This error occurs when communication with the server on that particular port is being filtered.

To troubleshoot this issue follow the below steps:

Try port mapping with Direct Trapping. Here’s how we can do direct trapping
a. Enable the Direct trapping option:
[HKEY_CURRENT_USER\Software\Mercury Interactive\LoadRunner\Protocols\WPlus\Analyzer]
“EnableWSPDebugging”=dword:00000001
“ShowWSPDirectTrapOption”=dword:00000001
“EnableWSPDirectTrapping”=dword:00000001

b. Before Recording Multi-Web, add port mapping entries for HTTP/TCP (and NOT SSL) connections.
Set their ‘Record Type’ from ‘Proxy’ to ‘Direct’.
c. Start recording.

Problem 4: Application sends a “Client Side” certificate to the server and waits for server authentication.

This error occurs because LoadRunner sits between the client and server; it takes the certificate information from the client and passes it on to the server. So you need to have the certificate in .pem format.

To troubleshoot this issue follow the below steps:

Use openSSL to convert the client side certificate to .pem format and add it in the port mapping in recording options

In Internet Explorer:

1. Choose Tools > Internet Options. Select the Content tab and click Certificates.
2. Select a certificate from the list and click Export.
3. Click Next several times until you are prompted for a password.
4. Enter a password and click Next.
5. Enter a file name and click Next.
6. Click Finish

In Netscape:

1. Choose Communicator > Tools > Security Info.
2. Click on the Yours link in the Certificates category in the left pane.
3. Select a certificate from the list in the right pane, and click Export
4. Enter a password and click OK.
5. Enter a filename and save the information.

The resulting certificate file is in PKCS12 format. To convert the file to PEM format, use the openssl.exe utility located in the LoadRunner bin directory. To run the utility:
Open a DOS command window.

Set the current directory to the LoadRunner bin directory.
Type openssl pkcs12 -in <input_file> -out <output file.pem>
Enter the password you chose in the export process.
Enter a new password (it can be the same as before).
Enter the password again for verification.

In Recording Options > Port Mapping > check the option for “Use specified client side certificate” and point to the saved .pem file.

Problem 5: Server sends a certificate to the client for authorization and only when authorized by the client a connection can be established between the client and server. This is mostly true for Java based applications. The error we get is “No trusted certificate found”

This error occurs because the recorder sits between the client and server, so the server must be made to trust the recorder's certificate. Therefore, wplusca.crt should be in the server's certificate repository.

To troubleshoot this issue follow the below steps:

Copy wplusca.crt file into the certificate repository of the server.

1. Locate keytool.exe executable (usually under JRE_ROOT\bin directory).
2. Add its directory to the PATH environment variable.
3. Locate cacerts file (with no extension) under JRE_ROOT\lib\security directory.
4. Copy the attached cacert.pem to this directory.
5. Make this to be the current directory in the command prompt.
6. Execute:
keytool -list -keystore cacerts
7. It will ask you for the password, reply:
“changeit”
8. It will list all the CA certificates installed (usually 25).
Notice the number of certificates.
9. Execute:
keytool -import -file cacert.pem -keystore cacerts -alias mercuryTrickyCA
10. It will ask you for the password, reply:
“changeit”
11. It will ask you if you are sure you want to add the certificate to the store, reply:
“yes”
12. Execute:
keytool -list -keystore cacerts
13. It will ask you for the password, reply:
“changeit”
14. It will list all the CA certificates installed.
The number of certificates should be bigger by 1

Problem 6: The recording log does not indicate any error, but the connection between the client and server ends abruptly. This can happen because of the 'WPLUSCA' certificate with a private key.

This error occurs because the recorder sends the 'WPLUSCA' certificate with a private key to the server for authentication while recording.

To troubleshoot this issue follow the below steps:

1. Go to LoadRunner\bin\certs and make a copy of the WPlusCA.crt file.
2. Open it in a notepad
3. Delete the portion between “Begin RSA Private Key” and “End RSA Private Key”
4. Save the file.
5. Now right click the file and choose “Install certificate” and follow the on screen instructions.
This will now send a certificate without the private key and if the server accepts the communication will continue.

Problem 7: “Failed to resolve hostname” error in the recording log

If the VuGen machine is unable to resolve the server name given in the URL, it throws this error in the recording log and then crashes IE.

To troubleshoot this issue follow the below steps:

Modify the hosts file on the VuGen machine to map the server name to its IP address.
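Before editing the hosts file, you can confirm the resolution failure from the VuGen machine itself. A small sketch (the failing hostname is a placeholder):

```python
import socket

def can_resolve(hostname):
    """Return True if the OS resolver (which consults the hosts file
    as well as DNS) can map the hostname to an IP address."""
    try:
        socket.gethostbyname(hostname)
        return True
    except socket.gaierror:
        return False

print(can_resolve("localhost"))  # True on a normally configured machine
print(can_resolve("app-server-under-test"))  # False until the name resolves
```

Once the hosts file entry is added, the same check should return True and recording can be retried.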
The above are some of the most common problems encountered when using LoadRunner for performance testing. The troubleshooting steps described will help you fix these issues should they occur during your performance testing assignments.

Thanks for reading this blog.

Thursday, February 23, 2012

Capacity Planning in Performance Testing


What is Capacity Planning

Capacity Planning is the process of determining what type of hardware and software configuration is required to meet application needs. Capacity planning, performance benchmarks and validation testing are essential components of successful enterprise implementations. Capacity planning is an iterative process. A good capacity management plan is based on monitoring and measuring load data over time and implementing flexible solutions to handle variances without impacting performance.

The goal of capacity planning is to identify the right amount of resources required to meet service demands now and in the future. It is a proactive discipline with far-reaching impact, supporting:
• IT and business alignment, helping to show the cost and business need for infrastructure upgrades

• Hexaware's consolidation and virtualization strategies, ensuring that consolidated real and virtual system configurations will meet service levels

Capacity Planning Approach

Capacity planning means planning for efficient resource use by applications during development and deployment, but also once they are operational. It involves considering how different resources can be accessed simultaneously by different applications, and knowing when this is done in an optimal way. Large organizations and operational environments have high expectations in terms of capacity planning.
One must consider which types of configurations (clustered, unclustered, etc.) to use, which types of applications can run on the same server or cluster, and also what should be avoided when planning for capacity.

Challenges Faced

Capacity planning should be conducted when:

• Designing a new system
• Migrating from one solution to another
• Business processes and models have changed, thus requiring an update to the application architecture
• End user community has significantly changed in number, location, or function

Typical objective of capacity planning is to estimate:

• Number and speed of CPU Cores
• Required Network bandwidth
• Memory size
• Storage type and size

Key items influencing capacity:

• Number of concurrent users
• User workflows
• Architecture
• Tuning and implementation of best practices

Capacity planning is about how many resources an application uses, which implies knowing the system's profile. For instance, suppose you have two applications, A and B, each known to use certain amounts of CPU (Central Processing Unit), memory, disk and network resources when running as the only application on a machine, but you have only one machine. If application A uses only a little of one resource while application B uses much of the same resource, this is a simple case of capacity planning. Bear in mind that when the applications run in parallel on the machine, the total resource usage is not a simple sum of their standalone usage: memory regions could overlap, for instance, which would make parallel execution impossible without rewriting the code.

The process of determining what type of hardware and software configuration is required to adequately meet application needs is called capacity planning.
Because the needs of an application are determined, among other things, by the number of users, in other words the number of parallel accesses, capacity planning can also be defined as: how many users can a system handle before changes need to be made? Thus, when an application is deployed, one should consider not only how large it will be at first, but also how fast the number of users/servers/machines will grow, so that enough margin is left and the application need not be wholly redesigned because of, say, the addition of a single user.
To perform capacity planning, essential data is collected and analyzed to determine usage patterns and to project capacity requirements and performance characteristics. Tools are used to determine optimum hardware/software configurations.
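As a toy illustration of projecting capacity from usage data, the sketch below estimates how many servers a workload needs, given measured per-server throughput and a growth forecast. All figures, and the 30% headroom rule of thumb, are invented for illustration:

```python
import math

def servers_needed(concurrent_users, requests_per_user_per_sec,
                   server_capacity_rps, headroom=0.3):
    """Estimate server count, reserving `headroom` spare capacity
    for peaks (a common rule of thumb, not a fixed standard)."""
    demand_rps = concurrent_users * requests_per_user_per_sec
    usable_rps = server_capacity_rps * (1.0 - headroom)
    return math.ceil(demand_rps / usable_rps)

# Hypothetical measurements: 2,000 concurrent users, each issuing
# 0.5 requests/sec, against servers measured at 400 requests/sec.
today = servers_needed(2_000, 0.5, 400)
# Forecast: 50% user growth over the next year.
next_year = servers_needed(int(2_000 * 1.5), 0.5, 400)
print(today, next_year)
```

Real capacity plans substitute monitored workload data for these constants and repeat the calculation per resource (CPU, memory, network, storage), but the shape of the projection is the same.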

Proposed Solution(s)

Bottlenecks, or areas of marked performance degradation, should be addressed while developing your capacity management plan. The objective of identifying bottlenecks is to meet your performance goals, not eliminate all bottlenecks. Resources within a system are finite. By definition, at least one resource (CPU, memory, or I/O) can be a bottleneck in the system. Planning for anticipated peak usage, for example, may help minimize the impact of bottlenecks on your performance objectives.
There are several ways to address system bottlenecks. Some common solutions include:

• Using Clustered Configurations
• Using Connection Pooling
• Setting the Max Heap Size on JVM
• Increasing Memory or CPU
• Segregation of Network Traffic



Using Clustered Configurations
Clustered configurations distribute workloads among multiple identical cluster member instances. This effectively multiplies the amount of resources available to the distributed process, and provides seamless failover for high availability.

Using Connection Pooling
To improve the performance by using existing database connections, you can limit the number of connections, timing of the sessions and other parameters by modifying the connection strings.
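The idea behind connection pooling can be sketched in a few lines: a fixed set of connections is created once and reused, instead of opening a new connection per request. This sketch uses SQLite purely as a stand-in for a real database; the pool class is illustrative, not a production implementation:

```python
import sqlite3
import queue

class ConnectionPool:
    """A minimal fixed-size pool: borrow a connection, then return it."""
    def __init__(self, size, factory):
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):
            self._pool.put(factory())

    def acquire(self, timeout=5.0):
        # Blocks until a pooled connection is free, which also
        # bounds the number of simultaneous database sessions.
        return self._pool.get(timeout=timeout)

    def release(self, conn):
        self._pool.put(conn)

pool = ConnectionPool(size=3, factory=lambda: sqlite3.connect(":memory:"))
conn = pool.acquire()
result = conn.execute("SELECT 1 + 1").fetchone()[0]
pool.release(conn)
print(result)
```

Production pools (in application servers or libraries) add the session timeouts and connection-string parameters mentioned above, but the acquire/release cycle is the core of the technique.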
Setting the Max Heap Size on the JVM (Java Virtual Machine)
This is an application-specific tunable that enables a tradeoff between garbage collection times and the number of JVMs that can run on the same hardware. Large heaps are used more efficiently and often result in fewer garbage collections; more JVM processes offer more failover points.

Increasing Memory or CPU
Aggregating more memory and/or CPU on a single hardware resource allows localized communication between the instances sharing the same hardware. More physical memory and processing power on a single machine enables the JVMs to scale and run much larger and more powerful instances, especially
64-bit JVMs. Large JVMs tend to use the memory more efficiently, and Garbage Collections tend to occur less frequently. In some cases, adding more CPU means that the machine can have more instruction and data cache available to the processing units, which means even higher processing efficiency.
Segregation of Network Traffic
Network-intensive applications can introduce significant performance issues for other applications using the same network. Segregating the network traffic of time-critical applications from network-intensive applications, so that they are routed to different network interfaces, may reduce the performance impact. It is also possible to assign different routing priorities to the traffic originating from different network interfaces.

Business benefits-

• Increase revenue through maximum availability, decreased downtime, improved response times, greater productivity, greater responsiveness to market dynamics, greater return on existing IT investment

• Decrease costs through higher capacity utilization, more efficient processes, just-in-time upgrades, greater cost control

Future Direction/Long Term Focus

The capacity planning process produces a forecast or plan for the organization's future. Capacity planning is a process for determining the optimal way to satisfy business requirements, such as forecasted increases in the amount of work to be done, while at the same time meeting service level requirements. Future processing requirements can come from a variety of sources. Inputs from management may include expected growth in the business, requirements for implementing new applications, IT budget limitations and requests for consolidation of IT resources.

Recommendations

The basic steps involved in developing a capacity plan are:

1. To determine service level requirements
a. Define workloads
b. Determine the unit of work
c. Identify service levels for each workload

2. To analyze current system capacity
a. Measure service levels and compare to objectives
b. Measure overall resource usage
c. Measure resource usage by workload
d. Identify components of response time

3. Plan for the future
a. Determine future processing requirements
b. Plan future system configuration

By following these steps, you can help ensure that your organization is prepared for the future and that service level requirements will be met using an optimal configuration.

Tuesday, February 21, 2012

The Grinder – An open source Performance testing alternative


Owing to cut-throat competition, IT companies are striving to stay one step ahead of their competitors to woo prospective clients. Cutting costs without compromising on quality has become an effective strategy. Open source tools not only promise to cut costs drastically, but are also more flexible and provide unique features of their own. The huge expense involved in procuring performance testing tools has urged the testing community to look for an open source alternative that goes easy on the budget.

The Grinder is an open source performance testing tool originally developed by Paco Gomez and Peter Zadrozny. It is a Java load testing framework that makes it easy to run a distributed test using many load injector machines.

1.1          Why Grinder?

  • The Grinder can be a viable open source option for performance testing. It is freely available under a BSD-style open-source license and can be downloaded from SourceForge.net: http://www.sourceforge.net/projects/grinder
  • Test scripts are written in Jython, a simple and flexible language that makes The Grinder very powerful. As Jython is an implementation of the Python programming language written in Java, all the advantages of Java are inherited as well. No separate plug-ins or libraries are required
  • The Grinder makes use of a powerful distributed Java load testing framework that allows simulation of multiple user loads across different “agents” which can be managed by a centralized controller or “console”. From this console you can edit the test scripts and distribute them to the worker processes as per the requirement
  • It is a surprisingly lightweight tool, fairly easy to set up and run, and it takes almost no time to get started. Installation simply involves downloading and configuring the recorder, console and agent batch files. The Grinder 3 is distributed as two zip files. The grinder.properties file can be customized to suit our requirements each time we execute a test
  • From a developer’s point of view, The Grinder is the preferred load testing tool when developers opt to test their own application, i.e. programmers get to test the interior tiers of their own application
  • The Grinder has a strong support base in the form of mailing lists hosted at SourceForge (http://sourceforge.net/)
  • The Grinder has excellent compatibility with Grinder Analyzer, which is also available under an open source license. The analyzer extracts data from the Grinder logs and generates reports and graphs covering response time, transactions per second, and network bandwidth used
  • In addition to HTTP and HTTPS, The Grinder supports internet protocols such as POP3, SMTP, FTP, and LDAP
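To give a feel for the Jython scripting mentioned above, a minimal Grinder 3 HTTP test script might look like the following. This is a hedged sketch along the lines of the examples shipped with The Grinder; the target URL is a placeholder.

```jython
# A minimal Grinder 3 HTTP test script (Jython). The Grinder wraps the
# request in a numbered Test so that timing statistics are recorded for it.
from net.grinder.script.Grinder import grinder
from net.grinder.script import Test
from net.grinder.plugin.http import HTTPRequest

test1 = Test(1, "Home page")          # test number and description for the logs
request = test1.wrap(HTTPRequest())   # wrapped so each call is timed

class TestRunner:
    # The Grinder creates one TestRunner instance per simulated user
    # (worker thread) and calls it once per run.
    def __call__(self):
        result = request.GET("http://localhost:8080/")  # placeholder URL
        grinder.logger.info("Status: %s" % result.statusCode)
```

The console distributes this script to the agents, each of which runs it in the configured number of worker processes and threads.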

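The grinder.properties customization mentioned above is an ordinary Java properties file read by each agent at start-up. Property names below are per The Grinder's documentation; the values are purely illustrative.

```properties
# grinder.properties - one per agent machine
grinder.script = http_test.py   # Jython test script to run
grinder.processes = 2           # worker processes per agent
grinder.threads = 10            # simulated users per worker process
grinder.runs = 0                # 0 = run until the console stops the test
```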
1.2          Difficulties that should be considered before opting for the Grinder

  • No user-friendly interface is provided for coding, scripting (parameterization, correlation, inserting custom functions, etc.) and other enhancements at the test script preparation level
  • Annoying syntax errors tend to creep into the code because of the rigid syntax and indentation that Jython scripting requires
  • A lot depends on the tester’s ability to understand the interior tiers of the application, unlike with commercial tools, where the tester can follow standard procedures without any insight into the intricate complexity of the application and still complete the job successfully
  • It is dependent on Grinder Analyzer for analysis, report generation etc
  • The protocols supported by The Grinder are limited, whereas commercial tools such as LoadRunner and Silk Performer provide support for all the web-based protocols. This is a major limitation, as web-based applications these days use multiple protocols for communication
  • Unlike LoadRunner and other vendor-licensed tools, it does not offer effective monitoring solutions or in-depth diagnostic capabilities. There is also no separate user-friendly interface component dedicated to analyzing test results
  • More support is required in the form of forums, blogs and user communities
In short, Tom Braverman sums it up brilliantly in a post to the Grinder users mailing list:
“I don’t try and persuade my clients that The Grinder is a replacement for LoadRunner, etc. I tell them that The Grinder is for use by the developers and that they’ll still want the QA team to generate scalability metrics using LoadRunner or some other tool approved for the purpose by management”

For an open source testing tool, it has to be admitted that The Grinder has the capabilities, feature-wise, to make a stand amidst the commercial alternatives.

Thanks for reading this blog. Know more: Performance Testing & Quality Assurance

Tuesday, February 14, 2012

Business Objects Mobile – Configuration

Hi BOOglers,
This is a continuation of my previous blog, Business Objects Architecture and Deployment. Hope you are all eager to know about the BO Mobile configuration. Let us proceed.
Environment & Product versions used:
  • Operating System: Windows Server 2008 R2
  • BOE XI4.0 with BOBJ MOBILE XI4.0
  • BlackBerry Email & MDS Simulator
  • BlackBerry Device simulator
The BO Mobile server was installed on the same machine that hosted the BOE server.
The BlackBerry Email MDS Simulator and BlackBerry device simulator were also installed on the same machine. This architecture is depicted below.
Installed Components and CCM:
Once we are done with all of the above, we can start the Mobile Server configuration.
Mobile Server configuration
Launch the Mobile Server Configuration Tool from the Start menu. Select BlackBerry BES Deployment, as we are going to configure with the BES simulator.
Application Installation in Mobile Client
Go to the URL below, provided you have already deployed the MobileOTA14.war file in the Web Application Server. Please note that "BI" should be replaced with your CMS server name. You will see a screen like the one below.
http://BI:8080/MobileOTA14/OTADeploy
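As a quick sanity check before moving to the device, you can build the OTA URL for your own host. This is only a hedged sketch: port 8080 is the default used in this deployment, and `ota_deploy_url` is a hypothetical helper, not part of any Business Objects tooling.

```python
# Hypothetical helper: build the OTA deployment URL for a given CMS host.
# Adjust the port if your web application server listens elsewhere.
def ota_deploy_url(cms_host, port=8080):
    return "http://%s:%d/MobileOTA14/OTADeploy" % (cms_host, port)

print(ota_deploy_url("BI"))  # the host name used in this walkthrough
# From the server itself you could then open this URL (e.g. with
# urllib.request.urlopen) to confirm the page is reachable before
# trying it from the device simulator.
```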
Now try to access the URL from Device simulator.
Once the application is downloaded, configure it with your credentials and access it from the mobile device.
Finally, we have it on the mobile device. Let us compare this with the desktop-based GUI.
Mobile Vs Web browser comparison
That is all about the BO Mobile configuration. Hope you found it easy to deploy and configure.
If you have any questions, or if you want to share your experience, feel free to leave a comment!
Happy Blogging!