TheSemanticBlog

The Semantic Web, Semantic Web Services and more…

Installing MTS MBlaze USB Broadband modem on Mac OS X Lion 10.7

Posted by Aditya Thatte on April 30, 2012

The MTS USB Broadband modem uses a ZTE-based device driver and fails to work out of the box on Mac OS X Lion 10.7. Many forum reports describe the operating system throwing up an error with the screen frozen, asking you to reboot the machine. This is due to an incompatibility in the driver shipped with the modem. To fix the issue, you have to install a specific driver, ‘CrossPlatformUI-V2.1.2-SSTL.dmg’; the crossplatformui-v2.1.2-sstl.dmg file is available for download.

Please note: first uninstall or remove any previous instance of the MTS broadband software from your OS.
Once downloaded, run the installer and plug in your USB modem.

Click System Preferences –> Network –> ZTE Wireless Terminal

Apply the settings as shown in the screenshot, entering ‘MTS’ as the password.

Click Connect, and you should be able to access the internet.

Some troubleshooting:

You may come across the following error window: ‘The communication device selected for your connection does not exist. Verify your settings and try reconnecting.’

In this case, double-click the MTS application icon on the desktop and try connecting again.

ZTE and Huawei drivers for other operating systems are also available for download from the links in the references below.

References:

http://www.macandmobile.com/?p=346

http://reliancenetconnect.co.in/index.php?option=com_content&view-article&id=7&Itemid=9

Posted in Miscellaneous | 7 Comments »

Understanding The Virtualization Lifecycle In The Context Of Cloud Computing: A Beginner’s Approach

Posted by Aditya Thatte on June 9, 2011

The building blocks of Cloud Computing are Operating Systems, Virtualization, Principles of Networking, Information & Network Security, and Storage. Combined, these areas form the basis of the Infrastructure as a Service (IaaS) model of Cloud Computing. In this article we take a brief look at the virtualization lifecycle in the context of Cloud IaaS.

Virtualization is one of the key enablers of cloud computing. Virtualizing hardware and applications and consolidating them aims to reduce IT infrastructure costs (including purchasing and maintaining hardware) and to allow easier management of resources in the data center. The IaaS model of the cloud deals with provisioning of compute capacity, storage and networking resources. Infrastructure costs are reduced by virtualizing hardware, thereby avoiding under-utilization of resources: multiple applications can be virtualized over the same physical hardware, ensuring optimal usage of resources. Two other terms we often hear with respect to cloud computing are ‘on-demand’ and ‘elasticity’, and they go hand in hand: on-demand refers to a ‘pay-as-you-go’ model in which you pay only for the resources you use, while elasticity refers to scaling resources up or down at will. In the context of cloud computing, the virtualization lifecycle comprises a set of technical assessment activities governed by business and operational decisions. Technical assessment of virtualization candidates revolves around meeting end-user Service Level Agreements (SLAs), reducing IT costs, and designing an optimized data center. Every phase in the virtualization lifecycle is highly challenging, with a wide variety of complex and open problems currently being tackled.

Analysis & Discovery: When migrating from physical to virtualized environments (P2V), in-depth analysis of the virtualization candidates must be performed. This stage involves discovering the data center entities (servers, networks, storage devices) and collecting utilization profile data of applications along different dimensions (CPU, memory, network I/O, disk I/O). The main theme of P2V is to move applications from an under-utilized bare-metal environment to a virtualized/hypervisor environment to enable optimal utilization of hardware. In addition to discovering the heavy artillery in the physical environment, it is important to assess the applications deployed on it: OS characteristics and application performance footprints play a vital role in determining capacity in a virtualized environment. On completion of these assessments, capacity management models need to be developed for the virtual environments.

Implementing Capacity Models: Developing capacity models for a virtual environment is a tricky task, since it is governed by other business and operational factors. Target SLAs (performance, availability) and power consumption levels must be considered, along with the possible side effects of virtualization (hardware normalization, hypervisor overheads, I/O interference, etc.). The idea is to come up with a ‘pre-VM placement’ strategy that describes the ‘footprints’ of VMs. Determining the capacity size of virtual machines prior to migration is an extremely critical step: done accurately, it results in optimal allocation and usage of resources; over-provisioned, it leads to resource wastage; under-provisioned, it results in poor performance and violation of SLAs. Many useful P2V, V2V and capacity analysis tools can help you achieve this, viz. PlateSpin Recon, Microsoft SCVMM and VMware P2V Assistant, to name a few. Researchers are also exploring intelligent ways of doing capacity sizing in virtual environments.
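As a concrete (and heavily simplified) illustration of such a footprint calculation, here is a sketch in Python; the percentile, overhead and headroom constants are assumptions for the example, not a formula from any of the tools mentioned above.

# Toy 'pre-VM placement' footprint estimate: size a resource dimension at a
# high percentile of the collected utilization profile, then pad it for
# hypervisor overhead and SLA headroom. All constants are illustrative.
def footprint(samples, percentile=0.95, hypervisor_overhead=0.10, headroom=0.20):
    """Estimate required capacity for one dimension (e.g. CPU MHz, memory MB)
    from raw utilization samples of the physical candidate server."""
    ordered = sorted(samples)
    idx = min(int(percentile * len(ordered)), len(ordered) - 1)
    base = ordered[idx]                       # demand at the chosen percentile
    return base * (1 + hypervisor_overhead) * (1 + headroom)

cpu_mhz_samples = [810, 950, 700, 1200, 880, 1020, 760, 990]  # collected profile
print(round(footprint(cpu_mhz_samples)))      # padded CPU footprint for the VM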

VM Placement & Management: This is the most critical process from a data center administrator’s point of view. Academic and industrial groups are grappling to identify ‘best-fit’ placement strategies that enable highly optimized virtual environments. This refers to the concept of ‘packing’ VMs appropriately: VMs need to be packed so that the performance of individual VMs is not hampered by interference, while avoiding fragmentation in the data center. On-demand provisioning of virtual servers will eventually lead to server sprawl, complicating the management of virtual servers; hence efficient techniques for placement and management hold the key to a greener and well-maintained data center. Other issues include cross-data-center migration, synchronization between different servers, and enabling and managing hybrid clouds (a combination of public and private/in-house environments).
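To make the ‘packing’ idea concrete, here is a first-fit-decreasing sketch on a single CPU dimension; the host capacity and VM demands are made-up numbers, and production placement must also weigh memory, I/O and interference.

# First-fit decreasing on one dimension: sort VM demands, place each VM on
# the first host with room, and open a new host otherwise. Real placement is
# multi-dimensional and interference-aware; this shows only the skeleton.
def pack_vms(vm_demands, host_capacity):
    hosts = []                              # each host = list of placed demands
    for demand in sorted(vm_demands, reverse=True):
        for host in hosts:
            if sum(host) + demand <= host_capacity:
                host.append(demand)
                break
        else:
            hosts.append([demand])          # no host fits: provision a new one
    return hosts

# Eight made-up VM CPU footprints packed onto hosts with 16 cores each.
print(pack_vms([6, 4, 7, 3, 5, 2, 8, 1], host_capacity=16))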

Thus, the virtualization lifecycle poses many challenges in different areas. Leading cloud providers and academics are busy solving these problems, and we hope to see greener data centers soon!

Posted in Capacity Planning, Cloud Computing, IaaS, Virtualization | Leave a Comment »

Installing AutoPerf and Load Testing Web applications

Posted by Aditya Thatte on November 15, 2010

Here I will discuss AutoPerf [1] in brief, go through the installation procedure, and load test a simple Web service. AutoPerf is an automated load generator and profiler for Web applications. Its two modules, the Master and the Profiler, act together to generate load and profile applications, collecting performance metrics from minimal input given via an XML file. To load test a Web service residing on a remote machine, you install the Master module on a client machine and the prof agent on the server machine where the Web service is deployed.

AutoPerf currently works only on Linux, so here we will install it on Ubuntu 10.04 (Lucid).

To begin with, on the client machine where you will run the Master module, you need Java installed and the CLASSPATH set in your ~/.bashrc to something like: export CLASSPATH=/home/aditya/Desktop/AutoPerf-Shrirang/AutoPerf-Master/code/jar/log4j-1.2.13.jar:/home/aditya/Desktop/AutoPerf-Shrirang/AutoPerf-Master/code/class

The AutoPerf-Master folder structure should look like this:

The input.xml file is the one we will use as the input to the load generator. Ensure that the XML file is consistent in terms of tags and parameters. input.xml specifies the following:

– Transaction name (i.e. the operation / Web service to be invoked)

– Target URL address of the Web service

– Number of concurrent users

– Think time in milliseconds

– IP address of the server machine

– Port at which the prof agent is running

A sample input.xml file looks something like the sketch below.
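The original screenshot of the sample file is not available, so this is a hypothetical reconstruction from the parameters listed above; the actual tag names are defined by AutoPerf and may differ.

<!-- Hypothetical input.xml sketch; the real AutoPerf tag names may differ. -->
<loadtest>
  <transaction>complexAddition</transaction>
  <url>http://192.168.1.5:8080/axis2/services/ComplexService</url>
  <users>10</users>
  <thinktime>1000</thinktime>
  <serverip>192.168.1.5</serverip>
  <profport>2011</profport>
</loadtest>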


Now, let’s take a look at the server machine. To set up the Profiler agent on the server, begin by installing the standard ‘gcc’ package and libraries (libgcc) from the Synaptic package manager; otherwise you will run into errors while initializing the prof agent.

The folder structure for the Linux Profiler on the server should look like this:

Once the packages have been installed on both the client and the server, you are ready to start load testing, provided your web services are up and running. Also check that the input.xml file is not missing any tags, or you will run into parsing errors.

To start the load generator from the Master, you first need to initialize the prof agent on the server side. This can be done with the command ‘sudo ./prof -d 2011’. Execute this command from inside the LinuxProfiler directory shown above. It starts the prof agent on port 2011, which means the agent will pick up incoming requests at this port and profile the Web service hosted at the given address.

Next, start the Master component from the client machine with the command ‘sudo java Master input.xml’. This parses input.xml and starts load generation (load testing) based on its parameters (number of users, etc.).

Let’s now take a look at the profiling of a sample Complex Addition Web service deployed on the server side.

Make sure prof is running on the server. This can be verified using the ‘ps’ command, and will look like this:

On the client side, start the load generator; you will see output like this:

Once you get such an output, it means the prof agent has captured all the performance metrics and sent the results to the Master module. When the ‘java Master input.xml’ command is issued on the client machine, the Web service at the specified URL is invoked by 10 concurrent users with a think time of 1000 ms, so the output is generated at that load level.

[1] AutoPerf: An Automated Load Generator and Performance Measurement Tool for Multi-tier Software Systems.


Posted in Capacity Planning, Web services | 2 Comments »

Research papers on ‘Modeling/Provisioning/Profiling Virtual machine resources in Virtualized Environments’

Posted by Aditya Thatte on August 27, 2010

Hi — here I will point you to some important literature related to dynamic provisioning of VM resources, profiling VMs, modeling virtual environments, capacity planning, and so on.

Performance Models / Modeling

1. Performance Models for Virtualized Applications

2. Profiling and modeling resource usage of virtualized applications

3. Black-box performance models for virtualized web service applications

4. Probabilistic performance modeling of virtualized resource allocation

5. Automatic virtual machine configuration for database workloads

6. Towards Modeling & Analysis of Consolidated CMP Servers

7. Modeling Virtual Machine Performance

Provisioning

1. Autonomic virtual resource management for service hosting platforms

2. Virtual Putty

3. Efficient resource provisioning in compute clouds via VM multiplexing

4. On Dynamic Resource Provisioning for Consolidated Servers in Virtualized Data Centers

5. Resource Provisioning with Budget Constraints for Adaptive Applications in Cloud Environments

6. Utility Analysis of Internet Oriented Server Consolidation in VM Based Data Centers

Profiling / Interference

1. XenMon: QoS Monitoring and Performance Profiling Tool

2. An Analysis of Performance Interference Effects in Virtual Environments

3. VrtProf

Posted in Capacity Planning, Cloud Computing, IaaS, Virtualization | Leave a Comment »

Capacity Planning for Virtual Environments : Part 1

Posted by Aditya Thatte on August 18, 2010

Capacity planning for virtualized data centers in the light of cloud computing has become a highly sought-after topic. Capacity planning for traditional data centers involves developing performance models of stand-alone applications residing on bare-metal architectures, as opposed to a hypervised environment that hosts multiple applications across a shared resource pool in an isolated fashion. Sizing capacity for virtualized environments adds new dimensions in terms of the constraint variables and dependencies that must be considered while developing models.

As we all know by now, the motivation behind virtualizing applications is to ‘do more with less’: increase ROI, reduce TCO, create a greener environment, and so on. Planning the size of the virtual machines hosting these applications therefore becomes a key aspect. Server consolidation is a means to achieve higher utilization of servers that may be under-utilized in a dedicated physical environment. Placing multiple VMs across a shared resource pool is governed by target SLAs, optimizing power consumption, optimally sharing physical resources, and the workload type of the application (database, web server). Sizing and managing the capacity of these virtual entities is an important factor in the virtualization lifecycle in the context of cloud computing, and understanding VM interference (cache interference, I/O interference) and hypervisor overheads helps in sizing VMs.
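As a toy illustration of how interference and overheads eat into capacity, here is a sketch that inflates each VM’s demand by a degradation factor before checking fit; this little model and every constant in it are assumptions for the example, not a published sizing method.

# Toy consolidation check: a per-VM degradation factor models interference
# (cache, I/O) from co-located neighbours, plus a flat hypervisor overhead.
def effective_demand(vm_demands, interference=0.08, hypervisor_overhead=0.05):
    n = len(vm_demands)
    # crude assumption: each co-located neighbour inflates a VM's demand a bit
    inflation = (1 + hypervisor_overhead) * (1 + interference * (n - 1))
    return sum(d * inflation for d in vm_demands)

demands = [2.0, 1.5, 3.0, 1.0]              # normalized CPU demands of 4 VMs
host_capacity = 8.0
need = effective_demand(demands)
print(f"effective demand {need:.2f} of {host_capacity} -> "
      f"{'fits' if need <= host_capacity else 'does not fit'}")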

Analysis can be done for P2V and V2V migrations, estimating VM size and adapting it according to existing bottlenecks and future trends. Many useful P2V tools and capacity analyzers are made available by vendors:

– PlateSpin Recon

– Microsoft SCVMM

– Oracle VM Manager

– VMware P2V Assistant

– HP Capacity Advisor

– Vkernel Capacity Optimization

In this article we just scratched the surface of ‘Capacity Planning for Virtual Environments’. In the next part we shall look at detailed aspects of interference and application performance in a hypervised setup. Here is one of my favorite pieces of literature on capacity planning: http://esj.com/Articles/2010/07/13/Capacity-Planning-Virtual-Environment.aspx?Page=1

Posted in Capacity Planning, Cloud Computing, IaaS, Virtualization | Tagged: , , , | Leave a Comment »

Virtualization / Cloud Computing Blogs, Websites, Resources, Articles

Posted by Aditya Thatte on July 5, 2010

With the increasing hype and excitement around Cloud Computing, enterprises and academic groups are paying close attention to this new paradigm of computing. This has spurred the availability of information on topics like virtualization, cloud computing and service-oriented architectures. There are innumerable sources on virtualization and cloud computing out there; I would like to list a few of my favorites in this post.

David Linthicum’s blog

Cloud Switch

Software Design & Construction

Lanamark

TIBCO Silver

Thomas Bittman (Gartner)

AMD Blog

VMblog

HostedFTP

ElasticVapor

CloudTweaks

Posted in Cloud Computing, Virtualization | Leave a Comment »

QoS based Web service discovery

Posted by Aditya Thatte on September 12, 2009

With the increasing importance of Quality of Service (QoS) in computer science and IT, the need for well-performing services has become essential because of their distributed nature. The performance characteristics of Web services become key in deciding which one to use (bind to) at runtime; here we consider both enterprise services and those exposed over the web. Performance (response time) is one of the most critical parameters in determining the QoS of any software component and in maintaining Service Level Agreements (SLAs) between consumers and providers, especially in mission-critical service composition scenarios. Most software components do not come with any specification of QoS (e.g. response time, CPU utilization, availability), which makes it hard to determine their performance. These QoS characteristics and specifications form an essential part of systems that invoke and compose software services (components) dynamically, on the fly.
Many research groups have contributed, and still are contributing, to the notion of QoS-based Web service discovery, which attempts to discover services based not only on IOPEs but on QoS specifications as well. Using semantics we can describe QoS parameters within Web service descriptions, which are then useful for dynamic discovery and invocation based on those parameters. So if a requestor wishes to bind to a service with a response time under 20 ms, the semantic matchmaker can apply matchmaking that fits the requestor’s criteria. Essentially, this QoS information can form part of the ontology for Web services (OWL-S / WSMO), and the matchmaker can refer to these ontologies during discovery to enable subsequent invocation of the service. This approach can prove indispensable in building mission-critical, performant systems. One thing to keep in mind, however, is that the QoS specification is local to the provider, since the service operates within those limits in that particular target environment; it will change accordingly when the service is hosted in a new operating environment.
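To illustrate the idea, here is a toy matchmaking sketch in Python; the service records, field names and numbers are made up for the example, and a real matchmaker would read these values from OWL-S / WSMO ontologies rather than from a hard-coded list.

# A toy QoS-aware matchmaker: filter advertised services by the requestor's
# response-time constraint, then prefer the fastest match. The records and
# field names are illustrative, not part of any OWL-S / WSMO standard.
services = [
    {"name": "StockQuoteA", "endpoint": "http://a.example/quote", "resp_ms": 35.0},
    {"name": "StockQuoteB", "endpoint": "http://b.example/quote", "resp_ms": 12.5},
    {"name": "StockQuoteC", "endpoint": "http://c.example/quote", "resp_ms": 18.0},
]

def discover(services, max_resp_ms):
    """Return services whose advertised response time meets the constraint,
    fastest first -- the head of the list is the candidate to bind to."""
    matches = [s for s in services if s["resp_ms"] <= max_resp_ms]
    return sorted(matches, key=lambda s: s["resp_ms"])

# A requestor wanting a response time under 20 ms binds to StockQuoteB.
for s in discover(services, max_resp_ms=20.0):
    print(s["name"], s["resp_ms"], "ms", s["endpoint"])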

Posted in Semantic Web, Web services | Tagged: , | Leave a Comment »

Semantic Data Storage in Oracle

Posted by Aditya Thatte on January 16, 2009

Oracle 10g Release 2 / 11g offers a robust, scalable, secure platform for storing RDF and OWL data. It allows efficient storage, loading and querying of semantic data. Queries are enhanced by adding relationships (ontologies) to the data and are evaluated on the basis of semantics. Data is stored in the form of RDF triples (subject, predicate, object) and can scale up to millions of triples. The triples stored in the semantic data store are modeled as a graph. All the data is stored in a single central schema, allowing users access for loading and querying.

The subject and object are modeled as nodes, while predicates are denoted by links in the graph. Nodes are stored once and efficiently reused when required. An RDF triple in the semantic store comprises a link: a subject (start node), a predicate (relationship) and an object (end node). Inserting a new triple creates a new link, and existing nodes are reused where they match.

Two object types are defined to manage semantic data, viz. SDO_RDF_TRIPLE and SDO_RDF_TRIPLE_S. The former represents the triple data itself, while the latter (the ‘S’ stands for storage) holds references to the stored data. The nodes (subjects and objects) are stored in the RDF_NODE$ table, and each link records a START_NODE_ID and an END_NODE_ID. The RDF_LINK$ table stores a record for the link whenever a new triple is inserted. Blank nodes may also be inserted as part of any triple; these are stored in RDF_BLANK_NODE$. An RDF model stores references to all the RDF data in the database and can be created by executing the sem_apis.create_sem_model procedure.
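To make this concrete, here is a minimal sketch of inserting triples into such a model, using the python-oracledb driver purely for illustration (any SQL client works). The model name ‘articles’, its table ‘articles_rdf_data’ and the triples themselves are hypothetical, and the model is assumed to already exist (created with sem_apis.create_sem_model as described above).

import oracledb

conn = oracledb.connect(user="testuser", password="testuser", dsn="localhost/orcl")
cur = conn.cursor()

# Each insert creates a new link; existing nodes are reused, so the second
# triple below adds a link but no new subject node.
for pred, obj in [
    ("<http://purl.org/dc/elements/1.1/title>",   '"All about the Semantic Web"'),
    ("<http://purl.org/dc/elements/1.1/creator>", '"Jane Doe"'),
]:
    cur.execute(
        "INSERT INTO articles_rdf_data VALUES (SDO_RDF_TRIPLE_S("
        "'articles', '<http://example.org/Article1>', :p, :o))",
        p=pred, o=obj)
conn.commit()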

To get started with semantic data management on Windows XP, see https://thesemanticway.wordpress.com/2009/01/04/configuring-semantic-web-technology-support-in-oracle-11g-release-1-on-windows-xp to configure semantic web technology support in Oracle.

This article gives an overview of semantic data storage; for additional in-depth information on semantic data support in Oracle, here are some useful links:

http://download.oracle.com/docs/cd/B19306_01/appdev.102/b19307/sdo_rdf_concepts.htm

http://download.oracle.com/docs/cd/B28359_01/appdev.111/b28397/sdo_rdf_concepts.htm

References: RDF support in Oracle (http://www.oracle.com/technology/tech/semantic_technologies/pdf/semantic_tech_rdf_wp.pdf)

Posted in OWL, RDF, Semantic Web | Tagged: , | 1 Comment »

Configuring Semantic Web Technology Support in Oracle 11g Release 1 on Windows XP

Posted by Aditya Thatte on January 4, 2009

Oracle 11g offers a scalable, secure and robust platform for semantic web data management, supporting efficient management of RDF and OWL data through simple queries. Data can be stored in the form of RDF triples and queried easily. Here is a series of steps to set up semantic web support in Oracle 11g.

1. Install Oracle 11g and download the “JENADRV” patch from http://www.oracle.com/technology/software/tech/semantic_technologies/files/jenadrv_patch111rdf.zip.

2. Extract the JENADRV folder to some directory.
3. Go to the directory Your_Drive_Name:\Oracle_Home\product\11.0.1.0\md\admin.
4. Open SQL*Plus and connect as the SYS user.
5. Type the following command at the SQL prompt:

– SQL>@Your_Drive_Name:\Oracle_Home\product\11.0.1.0\md\admin\catsem10i.sql;

This command restores the Oracle 10g RDF data.

6. Now issue the following command at the SQL prompt:

– SQL>@Your_Drive_Name:\Oracle_Home\product\11.0.1.0\md\admin\catsem11i.sql;

This command installs the Oracle 11g RDF support; check the script output to confirm the procedure executed successfully.

7. Now, unlock the “mdsys” account with the following command:

– SQL>ALTER USER mdsys IDENTIFIED BY mdsys ACCOUNT UNLOCK;

8. Connect as user “mdsys”.

9. Now, apply the JENADRV patch to enable semantic web data support. Execute the following scripts in this exact sequence:

– SQL>@Your_Drive_Name:\Extracted_Jenadrv_folder\sdordfh.sql;

– SQL>@Your_Drive_Name:\Extracted_Jenadrv_folder\sdordfxh.sql;

– SQL>@Your_Drive_Name:\Extracted_Jenadrv_folder\sdordfa.sql;

– SQL>@Your_Drive_Name:\Extracted_Jenadrv_folder\sdordfb.plb;

– SQL>@Your_Drive_Name:\Extracted_Jenadrv_folder\sdordfxb.plb;

– SQL>@Your_Drive_Name:\Extracted_Jenadrv_folder\sdoseminfhb.plb;

– SQL>@Your_Drive_Name:\Extracted_Jenadrv_folder\sdordfai.plb;

10. Connect as SYS with the SYSDBA role.

11. Create a new “rdf” tablespace by issuing the following command:

CREATE TABLESPACE <<TABLESPACE_NAME>> DATAFILE 'Your_Drive_Name:\Oracle_Home\oradata\<SID>\<<TABLESPACE_NAME>>01.dbf' SIZE 128M REUSE AUTOEXTEND ON NEXT 64M MAXSIZE UNLIMITED SEGMENT SPACE MANAGEMENT AUTO;

12. Similarly, create a temporary tablespace:

CREATE TEMPORARY TABLESPACE <<TEMP_TABLESPACE_NAME>> TEMPFILE 'Your_Drive_Name:\Oracle_Home\oradata\<SID>\<<TEMP_TABLESPACE_NAME>>01.dbf' SIZE 128M REUSE AUTOEXTEND ON NEXT 32M MAXSIZE UNLIMITED;

13. Now, create a new “testuser” and grant it the required privileges.

14. Create a semantic network to enable semantic data management.

15. Create a table to hold the semantic data.

16. Create a semantic model to enable a semantic data environment (a hedged sketch of steps 13–16 follows below).
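The screenshots that originally showed the exact commands for steps 13–16 are not available, so here is a minimal sketch of one plausible equivalent sequence, issued through the python-oracledb driver purely for illustration (any SQL client works just as well). The user name, password, grants, tablespace, table and model names are all assumptions for the example, not the exact values from the original snapshots.

import oracledb

# Hypothetical reconstruction of steps 13-16; every name and grant here is
# an assumption for illustration.

# Step 13: as SYS, create 'testuser' with quota on the tablespace from step 11
# and grant it basic privileges.
sys_conn = oracledb.connect(user="sys", password="sys_password",
                            dsn="localhost/orcl", mode=oracledb.AUTH_MODE_SYSDBA)
sys_cur = sys_conn.cursor()
sys_cur.execute("CREATE USER testuser IDENTIFIED BY testuser "
                "DEFAULT TABLESPACE rdf_tblspace QUOTA UNLIMITED ON rdf_tblspace")
sys_cur.execute("GRANT CONNECT, RESOURCE TO testuser")

# Step 14: create the semantic network on the 'rdf' tablespace (once per database).
sys_cur.execute("BEGIN SEM_APIS.CREATE_SEM_NETWORK('rdf_tblspace'); END;")

# Steps 15-16: as testuser, create the triple table and register a model on it.
usr_conn = oracledb.connect(user="testuser", password="testuser", dsn="localhost/orcl")
usr_cur = usr_conn.cursor()
usr_cur.execute("CREATE TABLE family_rdf_data (triple SDO_RDF_TRIPLE_S)")
usr_cur.execute("BEGIN SEM_APIS.CREATE_SEM_MODEL('family', 'family_rdf_data', 'triple'); END;")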

Once this is done, you have enabled semantic web technology support in Oracle 11g Release 1. Happy semantic data management!

Posted in OWL, Semantic Web | Tagged: | 8 Comments »

Installing OWL-S IDE Eclipse Plugin

Posted by Aditya Thatte on December 15, 2008

There is a series of simple steps you need to follow to install the OWL-S IDE Eclipse plugin. The installation tasks are listed below.

1. Go to http://projects.semwebcentral.org/frs/?group_id=37&release_id=192
2. Download “Code-Lib-Feature-1.1.zip” and “OWL-SEditor-1.1.zip”.
3. Create a folder “OWL-S IDE” on your local computer, then copy and extract the downloaded files into it.
4. Start Eclipse.
5. Click “Help —-> Software Updates —-> Find and Install”.

6. Select “Search for new features to install” and click “Next”.

7. Click “New local site”

8. Browse to the “Code-library feature” directory and select it.

9. Select OK.

10. Select the checkbox next to the Code-lib feature and click Finish.

11. Select the checkbox and click Next.

12. Accept the agreement and click Next.

13. Click Finish.

14. Click Yes to restart Eclipse.

15. Follow the same procedure for the “OWL-S Editor feature” folder.

16. Click Finish.

17. Click Yes to restart Eclipse.

18. Select “Help —-> Software Updates —-> Manage Configuration” to verify that the plugin has been installed properly.

19. Check that the OWL-S feature entries appear in the configuration.

20. Select “Help —-> Help Contents”.

21. Check that the OWL-S IDE help contents appear.

Now, you will be able to create OWL-S descriptions in the OWL-S IDE.

Posted in OWL, Semantic Web | 5 Comments »