
An overview of Performance Testing on Desktop solutions

Sorin Lungoci
Tester
@ISDC




Application performance can make or break a business, given its direct impact on revenue, customer satisfaction and brand reputation. Praised or criticized, the performance of business-critical applications ranks among the top three issues that affect a business. The pressure on performance is currently skyrocketing in a marketplace where the demands of application users are becoming more varied, more complex and, increasingly, real-time.

I have chosen to talk about performance testing on desktop solutions because the information available on the subject is rather limited, yet critical to success. My story draws not only on my own experience with applications from industries such as finance and e-learning, but also on the wisdom of others, expressed as ideas, guidelines, concerns and risk warnings. I hope it can make your life somewhat easier and, definitely, more enjoyable when you're asked to perform such a task.

If we speak about the performance of a system, it is important to start from the same definition of what "performance" is. Could it be responsiveness? Resource consumption? Something else?

In the context of desktop applications, "performance" can have different meanings. I will explain them below.

Architecture

From an architectural point of view, there are several types of desktop applications. The layers used are much the same, but their placement and the interaction between them lead to different architecture types. Among the most common layers are: UI (User Interface), BL (Business Layer), TL (Transfer Layer) and DB (Database Layer).

Please remember that these don't cover all combinations of architecture styles and types:

  1. 100% pure desktop - the installed application, with the user interface, business layer and database all on the same machine, without the need for a network connection. As an example, think of Microsoft Money (personal finance management software). This is a single-tier application that runs on a single system, driven by a single user.
  2. A second option is a Client/Server solution with a thin client that does little more than take input from the user and display information, while the application itself is installed and runs on the server side. It is widely used in universities, factories, or by staff within an intranet infrastructure. An example is a student with a CITRIX client installed on a local PC, running an internal application on one of the university servers.
  3. The Client/Server style using a rich client and a server is the most common solution on the desktop platform. This type of architecture is found mostly on intranet networks. It is a 2-tier application that runs on two or more systems with a limited number of users. The connection exists until logout, and the application is menu-driven. Think of Microsoft Outlook or any other desktop e-mail program: the program resides on a local PC and connects momentarily to the mail server to send and receive mail. Of course, Outlook also works offline, but you cannot complete the job without connecting to the server.

Approach

Testing client/server applications requires some additional techniques to handle the effects introduced by the client/server architecture. For example, separating computations might improve reliability, but it can also increase network traffic and the vulnerability to certain types of security attacks.

Testing client/server systems is definitely different - but it's not the "from another planet" type.

The key to understanding how to test these systems is to understand exactly how and why each type of potential performance problem might arise. With this insight, the testing solution will usually be obvious. We should take a look at how things work and, from that, we'll develop the testing techniques we need.

Testing on the client side is often perceived as closer to functional testing, because the application is designed to handle requests coming from a single user. It isn't appropriate to load a desktop application with, let's say, 100 users in order to test the server response: if you could do that, you would be testing the local machine's hardware and software (which would doubtless become a bottleneck), not the server or the application's overall speed. Client performance testing should be done considering the following risks (a sketch of how to measure them follows this list):

  • Impact on user actions - how fast the application handles requests from that user;
  • Impact on the user's system - how light the application is on the user's system (from how fast the application opens to the memory and other resources consumed while running).
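As a minimal sketch of how these client-side measurements could be collected, here is a Python example; the application path is hypothetical, and the third-party psutil package is assumed to be installed:

```python
import subprocess
import time

import psutil  # third-party: pip install psutil

APP_PATH = r"C:\Program Files\MyApp\MyApp.exe"  # hypothetical application path


def measure_startup_and_memory(settle_seconds: float = 5.0) -> None:
    """Launch the desktop application, time its startup and sample its resources."""
    started = time.perf_counter()
    launched = subprocess.Popen([APP_PATH])
    proc = psutil.Process(launched.pid)

    # Wait a fixed interval for the application to initialize; a real test
    # would detect the main window instead of sleeping.
    time.sleep(settle_seconds)
    elapsed = time.perf_counter() - started

    rss_mb = proc.memory_info().rss / (1024 * 1024)
    cpu = proc.cpu_percent(interval=1.0)
    print(f"Startup window: {elapsed:.1f}s (includes {settle_seconds:.0f}s settle time)")
    print(f"Resident memory: {rss_mb:.1f} MB, CPU: {cpu:.1f}%")

    launched.terminate()


if __name__ == "__main__":
    measure_startup_and_memory()
```

In a real test, the fixed settle time would be replaced by detecting the application's main window, for example with a UI automation library.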

The server part of the client-server system is usually designed to be performance-tested using much the same approach as web testing: record the transactions/requests sent to the server, then create multiple virtual users to replay those flows/requests. The server must be able to process concurrent requests from different clients.
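A minimal sketch of this virtual-user idea, assuming the recorded traffic can be reduced to plain HTTP requests against a hypothetical endpoint (a proprietary client/server protocol would need a protocol-specific library instead):

```python
import threading
import time
import urllib.request

SERVER_URL = "http://testserver.local/api/orders"  # hypothetical endpoint
VIRTUAL_USERS = 25
REQUESTS_PER_USER = 10

results = []          # (user_id, response_time, success) tuples
results_lock = threading.Lock()


def virtual_user(user_id: int) -> None:
    """Replay the recorded request flow and record each response time."""
    for _ in range(REQUESTS_PER_USER):
        started = time.perf_counter()
        try:
            with urllib.request.urlopen(SERVER_URL, timeout=30) as response:
                response.read()
            ok = True
        except OSError:
            ok = False
        with results_lock:
            results.append((user_id, time.perf_counter() - started, ok))


threads = [threading.Thread(target=virtual_user, args=(i,)) for i in range(VIRTUAL_USERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

times = [elapsed for _, elapsed, ok in results if ok]
if times:
    print(f"{len(times)} successful requests, "
          f"avg {sum(times) / len(times):.3f}s, max {max(times):.3f}s")
```

Each thread plays one virtual user, and the collected response times feed the usual statistics (average, maximum, percentiles).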

In various examples of setting up test environments for Client/Server applications, many teams set up multiple workstations (from two to five) as clients for both functional and performance testing, with each workstation simulating a specific load profile.

Tools

If we compare the tools available for testing the performance of a desktop application with those for the web, the truth is that there is an imbalance: fewer tools in the first category, and even fewer able to cover multiple platforms such as desktop, web and mobile.

During my investigation of this topic, I came across some interesting tools described in various articles, forums and presentations.

  • Apache JMeter is most commonly used to test backend applications (e.g. servers, databases and services). JMeter does not drive the GUI elements of a desktop application (e.g. simulate pressing a button or scrolling a page), so it is not a good option for testing desktop applications at the UI layer (e.g. MS Word). JMeter is meant to test the load on systems using multiple threads, or users; since a client application typically has a single user at a time, it makes more sense to test the database response independently of the Windows application.
  • Telerik Test Studio runs functional tests as performance tests, offers in-depth result analysis, a historical view and test comparison. It is designed for Web and Windows WPF only; Windows Forms applications are not supported.
  • Infragistics TestAdvantage for Windows Forms supports testing the user interface controls of Windows Forms- or WPF-powered applications.
  • WCFStorm - a simple, easy-to-use test workbench for WCF services. It supports all bindings (except webHttp), including netTcpBinding, wsHttpBinding and namedPipesBinding, to name a few. It also lets you create functional and performance test cases.

Due to time constraints, the following tools were not investigated but might also help you test desktop applications: Microsoft Visual Studio, Borland Silk Performer, Seapine Resource Thief, Quotium Qtest Windows Robot (WR), LoginVSI, TestComplete with AQtime, WCF Load Test, Grinder or LoadRunner.

In classic client-server systems, the client part is an application that must be installed and used by a human user. That means that, in most cases, the client application is not expected to execute a lot of concurrent jobs, but it must respond promptly to the user's actions for the current task and provide the user with visual information without big delays. The client application performance is usually measured using a profiling tool.

Profilers combined with performance counters, even at the SQL level, can be a powerful way to find out what happens on the local machine or on the server side.

You might consider using the built-in profiler of Visual Studio. It allows you to measure how long a method takes and how many times it's called. For memory profiling, CLR Profiler shows how much memory the application uses and which objects are created by which methods.
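To illustrate the kind of figures a profiler reports - call counts and time per method - here is a small sketch using Python's built-in cProfile; the profiled functions are invented stand-ins for the methods of a real client application:

```python
import cProfile
import pstats


def load_customer_list():
    """Stand-in for a data-access method under test."""
    return sorted(str(i) for i in range(100_000))


def refresh_screen():
    """Stand-in for a UI-layer method that calls into the data layer."""
    for _ in range(5):
        load_customer_list()


profiler = cProfile.Profile()
profiler.enable()
refresh_screen()
profiler.disable()

# Report call counts and cumulative time per function - the same figures
# the Visual Studio profiler shows for .NET methods.
pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)
```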

Most UI test tools can be used to record a script and play it back on a few machines.

Collective findings around performance tests

Below is an overview of useful findings on desktop application performance, as experienced by myself and other testers:

  • Many instances of badly designed SQL were subsequently optimized
  • Several statements taking minutes were improved to sub-second
  • Several incorrect views were identified
  • Some table indexes that were not set up were also identified and corrected
  • Too much system memory consumed by the desktop application
  • Program crashes often occur when repeated use of specific features within the application causes counters or internal array bounds to be exceeded.
  • Reduced performance due to excessive late binding and inefficient object creation and destruction
  • Memory leaks identified when the application was opened and left running for a longer period of time (a few hours); see the sketch below.
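The memory-leak finding above came from simply leaving the application open for hours. A sketch of how that observation can be automated, again assuming the third-party psutil package and a hypothetical process name:

```python
import time

import psutil  # third-party: pip install psutil

PROCESS_NAME = "MyApp.exe"  # hypothetical process to watch
SAMPLE_EVERY = 60           # seconds between samples
SAMPLES = 180               # roughly three hours of soak monitoring


def find_process(name: str) -> psutil.Process:
    for proc in psutil.process_iter(["name"]):
        if proc.info["name"] == name:
            return proc
    raise RuntimeError(f"{name} is not running")


proc = find_process(PROCESS_NAME)
baseline = proc.memory_info().rss

for i in range(SAMPLES):
    time.sleep(SAMPLE_EVERY)
    rss = proc.memory_info().rss
    growth = (rss - baseline) / (1024 * 1024)
    print(f"sample {i + 1}: {rss / (1024 * 1024):.1f} MB (+{growth:.1f} MB since start)")
```

A resident-memory figure that keeps growing and never plateaus is the classic leak signature.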

Risks

The most frequently encountered problems relate to the software and the environment. The predominant concern for the performance tester is stability, because the tester often has to work with software that is imperfect or unfinished.

Here are some of the risks directly related to performance-testing desktop applications:

  • A quite frequent problem when scripting and running the tests for the first time is resource usage on the client side leading to a failure (usually because memory runs out). Some applications crash when repeated use of specific features causes counters or internal array bounds to be exceeded. Of course, those problems will be fixed, but they affect the schedule, because the scripts involved have to be postponed until the fix is delivered.
  • Building a performance test database involves generating a lot of rows in selected tables. There are two risks involved in this activity (a sketch of the first one follows this list):
    • The first is that, in creating the invented data in the database tables, the referential integrity of the database is not maintained.
    • The second is that business rules, for example the reconciliation of financial fields across different tables, are not adhered to. In both cases the load simulation itself may not be compromised, but the application may not be able to handle such inconsistencies and therefore fails. It helps if the person preparing the test database understands the database design, the business rules and the application.
  • Underestimating the effort required to prepare and conduct a performance test can lead to problems. Performance testing a Client/Server system is a complex activity, mostly because of the environment and infrastructure simulation.
  • Over-ambition, at least early in the project, is common. People involved often assume that databases have to be populated with valid data, that every transaction must be incorporated into the load and that every response time must be measured. As usual, the 80/20 rule applies: 80% of the database volume will be taken up by 20% of the system tables; 80% of the system load will be generated by 20% of the system transactions; only 20% of system transactions need to be measured. Experienced testers would probably assume a 90/10 rule. Inexperienced managers seem to mix up the 90 and the 10.
  • Tools to execute automated tests do not require highly specialized skills but, as with most software development and testing activities, there are principles which, if followed, should allow reasonably competent testers to build a performance test. It is common for managers or testers with no test automation experience to assume that the test process consists of only two stages: test scripting and test running. In reality there is more to it, and on top of that the testers may have to build or customize the tools they use.
  • When software developers who have designed, coded and functionally tested an application are asked to build an automated test suite for a performance test, their main difficulty is their lack of testing experience. Experienced testers who have no experience with the SUT, on the other hand, usually need a period of familiarization with the system to be tested.
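To make the referential-integrity risk above concrete, here is a minimal sketch using Python and SQLite; the customers/orders schema is invented. Parent rows are generated first, and child rows draw their foreign keys only from existing parent keys, so the generated volume cannot break the relationship:

```python
import random
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # have SQLite enforce the relationship
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("""CREATE TABLE orders (
    id INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customers(id),
    amount REAL NOT NULL)""")

# Parents first, so every generated child row can point at a real key.
customer_ids = list(range(1, 1001))
conn.executemany(
    "INSERT INTO customers (id, name) VALUES (?, ?)",
    [(i, f"Customer {i}") for i in customer_ids],
)

# Children draw their foreign keys from the existing parent set only.
conn.executemany(
    "INSERT INTO orders (customer_id, amount) VALUES (?, ?)",
    [(random.choice(customer_ids), round(random.uniform(1, 500), 2))
     for _ in range(100_000)],
)
conn.commit()
print(conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0], "orders generated")
```

Business rules, such as reconciling financial totals across tables, would need similar generator-side logic, because the database itself will not enforce them.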

Conclusions

In summary, some practical conclusions can be drawn and applied in your own work:

  • Tools are great and essential, but the problem isn't only about tools. The real challenge is to identify the scope of the performance test. What are the business, infrastructure and end-user concerns? Among the contractually-bound usage scenarios, also identify the most common, business-critical and performance-intensive usage scenarios from the technical, stakeholder and application points of view.
  • Usually the risks trace back to the infrastructure and the architecture, not the user interface. For this reason, during the planning and design phase you have to have a clear overview of the concern-test relationship. Don't waste time designing ineffective tests; each test should address a specific problem or concern.
  • Desktop application performance testing is very close to test automation, and to writing code in general. There is a visible trend of people developing their own automation/performance tools in .NET, Java, Python, Perl or other languages.
  • It's difficult to find a tool that can record most of an application at the UI level and then play it back with multiple users or threads. The performance focus for desktop solutions seems to have moved towards the API/service layer.
  • For some performance testing (such as testing the client side) you don't need a specific tool - only a well-designed test case set, a group of test suites with some variables, and that's it!
  • Factors such as firewalls, anti-virus software, networks and other running programs all affect client performance, as do operating systems and service packs. It is a complicated set of variables and must be taken into consideration.
  • Database, system and network administrators cannot create their own tests, so they should be intimately involved in the staging of all tests to maximize the value of the testing.
  • There are logistical, organizational and technical problems with performance testing - many issues can be avoided if principles like the ones shared below are recognized and followed:
    • The approach to testing 2-tier and 3-tier systems is similar, although the architectures differ in their complexity.
    • Proprietary test tools help, but improvisation and innovation are often required to make a test "happen".
  • Please consider the technology when choosing a tool, because some tools, like Telerik Test Studio, can only record applications built with WPF (Windows Presentation Foundation), while others support only Windows Forms.
  • Other limitations observed during testing or investigation:
    • Many tools require the application to be already running
    • Some of the tools stop the playback if a pop-up is involved (like QA Wizard Pro)
    • Others cannot browse for a file on the local hard drive, or even select a value from a menu (Open or Save)
    • Other challenges related to performance testing on desktop solutions include the multitude of operating systems and versions, hardware configurations, and simulating real environments.

Enjoy testing!
