TheSemanticBlog

The Semantic Web, Semantic Web Services and more…


Installing AutoPerf and Load Testing Web applications

Posted by Aditya Thatte on November 15, 2010

Here I will discuss AutoPerf [1] briefly, walk through the installation procedure, and load test a simple Web service. AutoPerf is an automated load generator and profiler for Web applications. Its two modules, the Master and the Profiler, work together to generate load and profile applications, collecting performance metrics from a minimal input given via an XML file. When you want to load test a Web service residing on a remote machine, you install the Master module on a client machine and the prof agent on the server machine where the Web service is deployed.

AutoPerf currently works only on Linux, so here we will install it on Ubuntu 10.04 (Lucid).

To begin with, on the client machine where you will run the Master module, you need Java installed and the CLASSPATH set in your ~/.bashrc file to something like this:

export CLASSPATH=/home/aditya/Desktop/AutoPerf-Shrirang/AutoPerf-Master/code/jar/log4j-1.2.13.jar:/home/aditya/Desktop/AutoPerf-Shrirang/AutoPerf-Master/code/class

The AutoPerf-Master folder structure should look like this:

The input.xml file is the one we will pass to the load generator as input. Ensure the XML file is consistent in terms of tags and parameters. input.xml specifies the following:

– Transaction name (i.e. the operation / Web service to be invoked)

– Target URL address of the Web service

– Number of concurrent users

– Think time in milliseconds

– IP address of the server machine

– Port address at which the prof agent is running

A sample input.xml file looks like this:
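Since the original sample file is not reproduced here, the following is a hypothetical input.xml covering the parameters listed above. The tag names, URL, and IP address are illustrative assumptions, not AutoPerf's actual schema — consult the AutoPerf distribution for the exact format:

```xml
<!-- Hypothetical input.xml; tag names and values are illustrative only -->
<autoperf>
  <transaction name="ComplexAddition">
    <!-- Target URL of the Web service under test -->
    <url>http://192.168.1.5:8080/axis2/services/ComplexAddition</url>
    <!-- Number of concurrent users -->
    <users>10</users>
    <!-- Think time in milliseconds -->
    <thinktime>1000</thinktime>
  </transaction>
  <!-- Server machine running the prof agent -->
  <server ip="192.168.1.5" port="2011"/>
</autoperf>
```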

 

Now, let's take a look at the server machine. To set up the Profiler agent on the server, you must begin by installing the standard package and libraries of 'gcc' (libgcc) from the Synaptic Package Manager; otherwise you will run into errors while initializing the prof agent.

The folder structure for the Linux Profiler on the server should look like this :

Once the packages have been installed on both the client and server sides, you are ready to start load testing applications, provided your Web services are up and running. Also check that the input.xml file is not missing any tags; otherwise you will run into parsing errors.

To start the load generator using the Master, you first need to initialize the prof agent on the server side. This can be done using the command 'sudo ./prof -d 2011', executed from inside the LinuxProfiler directory shown above. Executing this command starts the prof agent on port 2011. The agent will then pick up incoming requests at that port address and profile the Web service hosted at the given web address.

Next, start the Master component from the client machine using the command 'sudo java Master input.xml'. Executing this command parses input.xml and starts the load generation (load testing) based on the parameters in the input.xml file (number of users, etc.).

Let's now take a look at the profiling of a sample Complex Addition Web service deployed on the server side.
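For context, the core logic of a complex-addition service of this kind can be sketched as a plain Java class. This is an illustrative reimplementation only — the class and method names are assumptions, not the actual service deployed in this experiment:

```java
// Illustrative sketch of a complex-addition Web service's core logic.
// Names here are assumptions; the post's actual service is not shown.
public class ComplexAdditionService {

    // Adds two complex numbers (re1 + i*im1) and (re2 + i*im2),
    // returning {realPart, imaginaryPart}.
    public double[] add(double re1, double im1, double re2, double im2) {
        return new double[] { re1 + re2, im1 + im2 };
    }

    public static void main(String[] args) {
        double[] sum = new ComplexAdditionService().add(1.5, 2.0, 3.5, -1.0);
        System.out.println(sum[0] + " + " + sum[1] + "i"); // prints "5.0 + 1.0i"
    }
}
```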

Make sure prof is running on the server. This can be verified using the 'ps' command, and the output will look like this:

At the client side, start the load generator; you will see output like this:

Once you get such an output, it means the prof agent has captured all the performance metrics and sent them to the Master module. When the 'java Master input.xml' command is issued at the client machine, the Web service at the specified URL is invoked by 10 concurrent users with a think time of 1000 ms, so the output generated corresponds to that load level.
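The load pattern described above — N concurrent users, each pausing for a fixed think time between requests — can be sketched in plain Java. This is a minimal illustration of the closed-loop load model only, not AutoPerf's actual Master implementation:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Minimal closed-loop load generator sketch: each "user" is a thread that
// issues requests in a loop, sleeping for a think time between them.
public class LoadSketch {

    // Runs `users` concurrent users, `requestsPerUser` requests each, with
    // `thinkTimeMs` between requests; returns per-request latencies in ns.
    static List<Long> run(int users, int requestsPerUser, long thinkTimeMs,
                          Runnable request) throws InterruptedException {
        List<Long> latencies = Collections.synchronizedList(new ArrayList<>());
        List<Thread> threads = new ArrayList<>();
        for (int u = 0; u < users; u++) {
            Thread t = new Thread(() -> {
                try {
                    for (int i = 0; i < requestsPerUser; i++) {
                        long start = System.nanoTime();
                        request.run();                 // invoke the Web service
                        latencies.add(System.nanoTime() - start);
                        Thread.sleep(thinkTimeMs);     // think time between requests
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            threads.add(t);
            t.start();
        }
        for (Thread t : threads) t.join();  // wait for all users to finish
        return latencies;
    }

    public static void main(String[] args) throws InterruptedException {
        // 10 users, 2 requests each, 100 ms think time, dummy request body.
        List<Long> lat = run(10, 2, 100, () -> { /* call the service here */ });
        System.out.println("requests issued: " + lat.size());
    }
}
```

In this closed-loop model, each user waits for a response before thinking and sending the next request, which is the usual assumption behind "N concurrent users with think time Z".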

[1] AutoPerf: An Automated Load Generator and Performance Measurement Tool for Multi-tier Software Systems


Posted in Capacity Planning, Web services

QoS based Web service discovery

Posted by Aditya Thatte on September 12, 2009

With the increasing importance of Quality of Service (QoS) in computer science and IT, well performing services have become essential because of their distributed nature. The performance characteristics of such Web services become the key factor in deciding which one to use (bind to) at runtime. Here we consider both enterprise services and those exposed over the Web. Performance (response time) is one of the most critical parameters in determining the QoS of any software component and in maintaining Service Level Agreements (SLAs) between consumers and providers, especially in mission-critical service composition scenarios. Most software components come with no QoS specification (e.g. response time, CPU utilization, availability), which makes it hard to determine their performance. These QoS characteristics and specifications form an essential part of systems that invoke and compose software services (components) dynamically, on the fly.
Many research groups have contributed, and still are contributing, to the notion of QoS-based Web service discovery, which attempts to discover services based not only on IOPEs but on QoS specifications as well. Using semantics, we can describe QoS parameters within Web service descriptions, which are then useful for dynamic discovery and invocation based on those parameters. So if a requestor wishes to bind to a service with a response time under 20 ms, the semantic matchmaker can apply matchmaking that fits that criterion. Essentially, this QoS information can form part of the ontology for Web services (OWL-S / WSMO), and the matchmaker can consult these ontologies during discovery and enable subsequent invocation of the service. This approach can prove indispensable in enabling mission-critical, performant systems. One thing to keep in mind, however: the QoS specifications are local to that provider, since the service operates within those limits in that particular target environment, and they will change when the service is hosted in a new operating environment.
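The response-time criterion described above can be sketched as a simple filtering step over advertised QoS values. This is a toy illustration of the matchmaking idea only — real semantic matchmakers reason over OWL-S/WSMO ontologies rather than flat records, and all names here are assumptions:

```java
import java.util.ArrayList;
import java.util.List;

// Toy QoS-based matchmaking step: keep only candidate services whose
// advertised response time meets the requestor's threshold.
public class QosMatchmaker {

    // A candidate service with its provider-advertised response time.
    record ServiceOffer(String name, double responseTimeMs) {}

    static List<ServiceOffer> match(List<ServiceOffer> candidates,
                                    double maxResponseTimeMs) {
        List<ServiceOffer> result = new ArrayList<>();
        for (ServiceOffer s : candidates) {
            if (s.responseTimeMs() < maxResponseTimeMs) {
                result.add(s);  // meets the requestor's QoS criterion
            }
        }
        return result;
    }

    public static void main(String[] args) {
        List<ServiceOffer> offers = List.of(
            new ServiceOffer("FastAdder", 12.0),
            new ServiceOffer("SlowAdder", 45.0));
        // Requestor wants a response time under 20 ms: only FastAdder qualifies.
        System.out.println(match(offers, 20.0));
    }
}
```

Note that, per the caveat above, the advertised response times are only meaningful for the provider's own hosting environment, so a real matchmaker would need to refresh them when a service is redeployed.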

Posted in Semantic Web, Web services