TheSemanticBlog

The Semantic Web, Semantic Web Services and more…

Archive for the ‘Capacity Planning’ Category

Understanding The Virtualization Lifecycle In The Context Of Cloud Computing: A Beginner’s Approach

Posted by Aditya Thatte on June 9, 2011

The building blocks of Cloud Computing are Operating Systems, Virtualization, Principles of Networking, Information & Network Security, and Storage. Combined, these areas form the basis of the Infrastructure as a Service (IaaS) model of Cloud Computing. In this article we shall take a brief look at the virtualization lifecycle in the context of Cloud IaaS.

Virtualization is one of the key enablers of cloud computing. Virtualizing hardware and applications, and consolidating them, aims to reduce IT infrastructure costs (including purchasing and maintaining hardware) and to allow easier management of resources in the data center. The IaaS model of the cloud deals with provisioning of compute capacity, storage and networking resources. Infrastructure costs are reduced by virtualizing hardware, thereby avoiding under-utilization of resources: multiple applications can be virtualized over the same physical hardware, ensuring optimal usage of resources.

Two other terms we often hear with respect to cloud computing are ‘on-demand’ and ‘elasticity’, and they go hand in hand. On-demand refers to a ‘pay-as-you-go’ model: you pay only for the resources you use. Elasticity refers to scaling resources up or down at will.

In the context of cloud computing, the virtualization lifecycle comprises a set of technical assessment activities governed by business and operational decisions. Technical assessment of virtualization candidates revolves around meeting end-user Service Level Agreements (SLAs), reducing IT costs, and designing an optimized data center. Every phase in the virtualization lifecycle for cloud computing is highly challenging, with a wide variety of complex and open problems currently being tackled.

Analysis & Discovery: When migrating from physical to virtualized environments (P2V), in-depth analysis of the virtualization candidates must be performed. This stage involves discovering the data center entities (servers, networks, storage devices) and collecting utilization profile data of applications along the different dimensions (CPU, memory, network I/O, disk I/O). The main theme of P2V is to move applications from an under-utilized bare-metal environment to a virtualized / hypervisor environment to enable optimal utilization of hardware. In addition to discovering the heavy artillery in the physical environment, it is important to assess the applications deployed on it: OS characteristics and application performance footprints play a vital role in determining capacity in a virtualized environment. On completion of these assessments, capacity management models need to be developed for the virtual environments.
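
As a rough illustration of collecting such a utilization profile, here is a minimal Python sketch using the psutil library. The sampling interval, sample count and record format are my own assumptions, not part of any particular discovery tool:

import psutil

# Minimal utilization-profile collector sketch: samples CPU, memory,
# disk I/O and network I/O once per interval (requires psutil).
def collect_profile(samples=60, interval=1):
    profile = []
    for _ in range(samples):
        cpu = psutil.cpu_percent(interval=interval)  # % CPU over the interval
        mem = psutil.virtual_memory().percent        # % memory in use
        disk = psutil.disk_io_counters()             # cumulative disk byte counters
        net = psutil.net_io_counters()               # cumulative network byte counters
        profile.append({
            "cpu_pct": cpu,
            "mem_pct": mem,
            "disk_read": disk.read_bytes,
            "disk_write": disk.write_bytes,
            "net_sent": net.bytes_sent,
            "net_recv": net.bytes_recv,
        })
    return profile

# Example: a short 5-second profile of the local machine
print(collect_profile(samples=5)[-1])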

Implementing Capacity Models: Developing capacity models for a virtual environment is a tricky task, since it is governed by other business and operational factors. Target SLAs (performance, availability) and power consumption levels must be considered, along with the possible side effects of virtualization (hardware normalization, hypervisor overheads, I/O interference, etc.). The idea is to come up with a ‘pre-VM-placement’ strategy that describes the ‘footprints’ of VMs. Capacity planning for virtualized data centers in the light of cloud computing has become a highly sought-after topic. Determining the capacity size of virtual machines prior to migration is an extremely critical step: done accurately, it results in optimal allocation and usage of resources; over-provisioned, it leads to resource wastage; under-provisioned, it results in poor performance and violation of SLAs. There are many useful P2V, V2V and capacity analysis tools that can help you achieve this, viz. PlateSpin Recon, Microsoft SCVMM and VMware P2V Assistant, to name a few. Researchers are also exploring intelligent ways of doing capacity sizing in virtual environments.
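
To make the over- versus under-provisioning trade-off concrete, here is a toy percentile-based sizing rule in Python. The 95th-percentile target and 20% headroom are illustrative assumptions, not parameters of any tool named above:

import math

# Toy VM sizing sketch: size a VM's CPU allocation from an observed
# utilization profile, using a high percentile plus headroom rather than
# the peak (which over-provisions) or the mean (which risks SLA violations).
def size_vcpus(cpu_samples_pct, host_cores, percentile=95, headroom=0.2):
    ordered = sorted(cpu_samples_pct)
    rank = max(1, math.ceil(len(ordered) * percentile / 100))  # nearest-rank percentile
    p = ordered[rank - 1]                                      # e.g. 95th-percentile CPU %
    demand_cores = host_cores * p / 100                        # cores needed at that level
    return demand_cores * (1 + headroom)                       # plus burst headroom

# Example: an 8-core server mostly at 20-30% CPU with one spike to 60%
samples = [22, 25, 19, 30, 28, 24, 21, 27, 26, 60]
print(round(size_vcpus(samples, host_cores=8), 2))  # -> 5.76, i.e. ~6 vCPUs instead of 8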

VM Placement & Management: This is the most critical process from a data center administrator's point of view. Academic and industrial groups are grappling with identifying ‘best-fit’ placement strategies to enable highly optimized virtual environments. This refers to the concept of ‘packing’ VMs appropriately: VMs need to be packed in such a way that the performance of isolated (individual) VMs is not hampered by interference, while avoiding fragmentation in the data center. The on-demand provisioning of virtual servers will eventually lead to server sprawl, complicating the management of virtual servers. Hence, efficient techniques for placement and management hold the key to a greener and well-maintained data center. Other issues include cross-data-center migration, synchronization between different servers, and enabling and managing hybrid clouds (a combination of public and private / in-house environments).
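
As a toy illustration of the ‘packing’ idea, here is a first-fit-decreasing bin-packing sketch in Python. It deliberately models CPU demand only; real placement engines must also weigh memory, I/O interference and affinity constraints:

# First-fit-decreasing VM placement sketch: pack VM CPU demands onto hosts
# of fixed capacity, opening a new host only when no existing host has room.
def place_vms(vm_demands, host_capacity):
    hosts = []  # each entry is the list of demands placed on that host
    for demand in sorted(vm_demands, reverse=True):  # largest VMs first
        for host in hosts:
            if sum(host) + demand <= host_capacity:
                host.append(demand)  # first host with room wins
                break
        else:
            hosts.append([demand])   # no host fits: open a new one
    return hosts

# Example: pack 7 VMs (vCPU demands) onto 8-vCPU hosts
print(place_vms([4, 3, 3, 2, 2, 1, 1], host_capacity=8))
# -> [[4, 3, 1], [3, 2, 2, 1]]  (2 hosts instead of 7 physical servers)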

Thus, the virtualization lifecycle poses many challenges in different areas. Leading cloud providers and academics are busy solving these problems, and we hope to see greener data centers soon!

Posted in Capacity Planning, Cloud Computing, IaaS, Virtualization

Installing AutoPerf and Load Testing Web Applications

Posted by Aditya Thatte on November 15, 2010

Here I will discuss AutoPerf [1] in brief, go through the installation procedure, and load test a simple Web service. AutoPerf is an automated load generator and profiler for Web applications. Its two modules, the Master and the Profiler, work together to generate load and profile applications, collecting performance metrics based on minimal input given via an XML file. When you want to load test a Web service residing on a remote machine, you install the Master module on a client machine and the ‘prof’ agent on the server machine where the Web service is deployed.

AutoPerf currently works only on Linux, so here we will install it on Ubuntu 10.04 (Lucid).

To begin with, on the client machine where you will run the Master module, you need Java installed, with the CLASSPATH set in the ~/.bashrc file to something like this:

export CLASSPATH=/home/aditya/Desktop/AutoPerf-Shrirang/AutoPerf-Master/code/jar/log4j-1.2.13.jar:/home/aditya/Desktop/AutoPerf-Shrirang/AutoPerf-Master/code/class

The AutoPerf-Master folder structure should look like this:
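
Reconstructed from the CLASSPATH above (your checkout may contain additional files), the layout is roughly:

AutoPerf-Master/
└── code/
    ├── jar/
    │   └── log4j-1.2.13.jar   (logging library on the CLASSPATH)
    └── class/                 (compiled classes, including Master)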

The input.xml file is what we will use as input to the load generator. Also ensure that the XML file is consistent in terms of tags and parameters. It specifies the following:

– Transaction name (i.e. the operation / Web service to be invoked)

– Target URL of the Web service

– Number of concurrent users

– Think time in milliseconds

– IP address of the server machine

– Port at which the prof agent is running

A sample input.xml file looks like this:
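
(A minimal sketch: the tag names below are assumptions reconstructed from the parameter list above, not AutoPerf's exact schema; the service name and addresses are hypothetical.)

<?xml version="1.0"?>
<!-- Illustrative only: tag names are assumptions, not AutoPerf's exact schema -->
<loadtest>
  <transaction>ComplexAddition</transaction>  <!-- operation / Web service to invoke -->
  <url>http://192.168.1.5:8080/services/ComplexAddition</url>  <!-- target URL (hypothetical) -->
  <users>10</users>                           <!-- number of concurrent users -->
  <thinktime>1000</thinktime>                 <!-- think time in milliseconds -->
  <serverip>192.168.1.5</serverip>            <!-- server machine (hypothetical address) -->
  <profport>2011</profport>                   <!-- port where the prof agent listens -->
</loadtest>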


Now, let's take a look at the server machine. To set up the Profiler agent on the server, begin by installing the standard gcc package and libraries (libgcc) from the Synaptic package manager; otherwise you will run into errors while initializing the prof agent.

The folder structure for the Linux Profiler on the server should look like this:
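
Reconstructed from the commands below (your build may contain additional files), roughly:

LinuxProfiler/
└── prof   (the profiler agent executable)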

Once the packages have been installed on both the client and server sides, you are ready to start load testing applications, provided your Web services are up and running. Also check that the input.xml file is not missing any tags, or you will run into parsing errors.

Before starting the load generator from the Master, you first need to initialize the prof agent on the server side. This can be done with the command ‘sudo ./prof -d 2011’, executed from inside the LinuxProfiler directory shown above. This starts the prof agent on port 2011, meaning the agent will pick up incoming requests at that port and profile the Web service hosted at the specified address.

Next, start the Master component from the client machine using the command ‘sudo java Master input.xml’. This parses input.xml and starts the load generation (load testing) based on its parameters (number of users, and so on).

Let's now take a look at the profiling of a sample Complex Addition Web service deployed on the server side.

Make sure prof is running on the server. This can be verified using the ‘ps’ command:
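
For example (illustrative output; the PID, timestamps and memory figures will differ on your machine):

$ ps aux | grep prof
root      1234  0.0  0.1   2476   604 pts/0   S+   10:02   0:00 ./prof -d 2011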

On the client side, start the load generator; once the run completes, the Master prints its output.

This output means that the prof agent has captured all the performance metrics and sent them to the Master module. When the ‘java Master input.xml’ command is issued on the client machine, the Web service at the specified URL is invoked by 10 concurrent users with a think time of 1000 ms, so the output generated is for that load level.

[1] AutoPerf: An Automated Load Generator and Performance Measurement Tool for Multi-tier Software Systems


Posted in Capacity Planning, Web services

Research papers on ‘Modeling/Provisioning/Profiling Virtual machine resources in Virtualized Environments’

Posted by Aditya Thatte on August 27, 2010

Hi, here I will point you to some important literature related to dynamic provisioning of VM resources, profiling VMs, modeling virtual environments, capacity planning, and so on.

Performance Models / Modeling

1. Performance Models for Virtualized Applications

2. Profiling and modeling resource usage of virtualized applications

3. Black-box performance models for virtualized web service applications

4. Probabilistic performance modeling of virtualized resource allocation

5. Automatic virtual machine configuration for database workloads

6. Towards Modeling & Analysis of Consolidated CMP Servers

7. Modeling Virtual Machine Performance

Provisioning

1. Autonomic virtual resource management for service hosting platforms

2. Virtual Putty

3. Efficient resource provisioning in compute clouds via VM multiplexing

4. On Dynamic Resource Provisioning for Consolidated Servers in Virtualized Data Centers

5. Resource Provisioning with Budget Constraints for Adaptive Applications in Cloud Environments

6. Utility Analysis of Internet Oriented Server Consolidation in VM Based Data Centers

Profiling / Interference

1. XenMon: QoS Monitoring and Performance Profiling Tool

2. An Analysis of Performance Interference Effects in Virtual Environments

3. VrtProf

Posted in Capacity Planning, Cloud Computing, IaaS, Virtualization

Capacity Planning for Virtual Environments: Part 1

Posted by Aditya Thatte on August 18, 2010

Capacity planning for virtualized data centers in the light of cloud computing has become a highly sought-after topic. Capacity planning for traditional data centers involves developing performance models of stand-alone applications residing on bare-metal architectures, as opposed to a hypervisor-based environment that hosts multiple applications across a shared resource pool in an isolated fashion. Sizing capacity for virtualized environments adds new dimensions in terms of constraint variables and dependencies that must be considered while developing models.

Since, as we all know by now, the motivation behind virtualizing applications is to ‘do more with less’ (increase ROI, reduce TCO, create a greener environment, and so on), planning the size of the virtual machines hosting these applications becomes a key aspect. Server consolidation is a means to achieve higher utilization of servers that may be under-utilized in a dedicated physical environment. Placing multiple VMs across a shared resource pool is governed by target SLAs, power consumption, optimal sharing of physical resources, and the workload type of each application (database, web server). Sizing and managing the capacity of these virtual entities is an important factor during the virtualization lifecycle in the context of cloud computing. Understanding VM interference (cache interference, I/O interference) and hypervisor overheads helps in sizing VMs.
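
As a back-of-the-envelope illustration of ‘doing more with less’, here is a naive consolidation estimate in Python. It deliberately ignores interference, peak correlation and hypervisor overhead (all discussed above), and the 70% target utilization is an assumption:

import math

# Naive consolidation estimate: how many virtualized hosts are needed to
# absorb N under-utilized physical servers at a target utilization level?
# Assumes identically sized servers and ignores interference and peak overlap.
def hosts_needed(server_utilizations_pct, target_utilization_pct=70):
    total_demand = sum(server_utilizations_pct)  # demand in "server percent" units
    return math.ceil(total_demand / target_utilization_pct)

# Example: ten dedicated servers averaging 12% utilization
print(hosts_needed([12] * 10))  # -> 2 hosts at a <=70% utilization target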

Analysis can be done for P2V and V2V migrations, estimating VM size and adapting to existing (current) bottlenecks and future trends. There are many useful P2V tools and capacity analyzers made available by vendors:

– PlateSpin Recon

– Microsoft SCVMM

– Oracle VM Manager

– VMware P2V Assistant

– HP Capacity Advisor

– Vkernel Capacity Optimization

In this article we have just scratched the surface of ‘Capacity Planning for Virtual Environments’. In the next part we shall look at detailed aspects of interference and application performance in a hypervisor-based setup. Here's one of my favorite pieces of literature on capacity planning: http://esj.com/Articles/2010/07/13/Capacity-Planning-Virtual-Environment.aspx?Page=1

Posted in Capacity Planning, Cloud Computing, IaaS, Virtualization