Wednesday, May 30, 2012

How to Convert WSDL to Java Using WSO2 Application Server

WSDL (Web Services Description Language) is an XML format used to describe web services and how to access them.

There are tools and commands available for converting WSDL to Java (for example, Maven plugins and Apache Ant tasks), and they let you build a web service starting from its WSDL. However, the WSO2 Application Server lets you achieve the same thing without deep Java knowledge. Two approaches can be used to develop a web service: Java-first and WSDL-first. WSDL-first starts from the design (the contract), whereas Java-first starts from the implementation code, which makes WSDL-first an easy approach to develop with.

To get started we need to install WSO2 AS, which is not a big task: download it, run it, and access the Management Console through a browser, following the steps below.

1. Download the WSO2 Application Server binary distribution.
2. Extract the zip archive to the location where you want WSO2 AS installed.
3. Set the JAVA_HOME environment variable to your Java installation, and add the Java /bin directory to the PATH environment variable.
4. Execute the WSO2 AS start script from the bin folder (Linux/Unix: sh wso2server.sh, Windows: wso2server.bat).
5. Check your WSO2 AS instance using the URL https://localhost:9443/carbon, which will take you to the WSO2 AS Management Console.
6. Log in as "admin" using the default password "admin".


In the Management Console you will find the WSDL2Java tool, along with a few other tools such as Java2WSDL, in the left-hand menu.






The WSDL2Java tool lets you enter the path of the WSDL file and a few other options. Select them as you want and click the "Generate" button.





The generated code, including the stub classes, is then automatically downloaded to your local machine; you can place it anywhere you like and carry on developing the client. From there we can write our business logic and expose the service to clients. A rough sketch of a client built on such a stub follows below.
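For example, assuming WSDL2Java produced a stub for a simple "HelloService" (HelloServiceStub, SayHello and sayHello below are hypothetical names generated from an example WSDL, not part of WSO2 AS itself), a minimal client could look like this:

public class HelloServiceClient {
    public static void main(String[] args) throws Exception {
        // Hypothetical stub generated by WSDL2Java; point it at the running service endpoint
        HelloServiceStub stub =
            new HelloServiceStub("http://localhost:9763/services/HelloService");

        // Build the request payload defined by the WSDL
        HelloServiceStub.SayHello request = new HelloServiceStub.SayHello();
        request.setName("WSO2");

        // Call the remote operation through the stub and print the result
        HelloServiceStub.SayHelloResponse response = stub.sayHello(request);
        System.out.println(response.get_return());
    }
}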


Using the WSDL Converter tool you can also easily generate a WSDL 2.0 document from an existing WSDL 1.1 document.

Saturday, May 26, 2012

All About TABLESPACES

An Oracle tablespace is a logical storage unit in an Oracle database. It is managed and used by the Oracle server to store schema objects such as tables and indexes.
An Oracle data file is a physical storage unit in the operating system's file system. One or more data files are organized together to provide the physical storage for a single Oracle tablespace.

Each tablespace in an Oracle database consists of one or more files called datafiles, which are physical structures that conform to the operating system in which Oracle is running.
A database's data is collectively stored in the datafiles that constitute each tablespace of the database. For example, the simplest Oracle database would have one tablespace and one datafile. Another database can have three tablespaces, each consisting of two datafiles (for a total of six datafiles).

If you want to get a list of all tablespaces used in the current database instance, you can query the USER_TABLESPACES (or DBA_TABLESPACES) view as shown in the following SQL script example:

SQL> connect SYSTEM/fyicenter
Connected.

SQL> SELECT TABLESPACE_NAME, STATUS, CONTENTS
  2  FROM USER_TABLESPACES;
TABLESPACE_NAME                STATUS    CONTENTS
------------------------------ --------- ---------
SYSTEM                         ONLINE    PERMANENT
UNDO                           ONLINE    UNDO
SYSAUX                         ONLINE    PERMANENT
TEMP                           ONLINE    TEMPORARY
USERS                          ONLINE    PERMANENT

If you want to create a new tablespace, you can use the CREATE TABLESPACE ... DATAFILE statement as shown in the following script:
SQL> CREATE TABLESPACE my_space
  2  DATAFILE '/temp/my_space.dbf' SIZE 10M;
Tablespace created.

SQL> SELECT TABLESPACE_NAME, STATUS, CONTENTS
  2  FROM USER_TABLESPACES;
TABLESPACE_NAME  STATUS          CONTENTS
---------------- --------------- ---------
SYSTEM           ONLINE          PERMANENT
UNDO             ONLINE          UNDO
SYSAUX           ONLINE          PERMANENT
TEMP             ONLINE          TEMPORARY
USERS            ONLINE          PERMANENT
MY_SPACE         ONLINE          PERMANENT

If you have an existing tablespace that you don't want anymore, you can delete it with the DROP TABLESPACE statement, as shown in the example below:
SQL> CREATE TABLESPACE my_space
  2  DATAFILE '/temp/my_space.dbf' SIZE 10M;
Tablespace created.

SQL> DROP TABLESPACE my_space;
Tablespace dropped.

After you have created a new tablespace, you can make it available to your users so they can create tables in it. To create a table in a specific tablespace, use the TABLESPACE clause in the CREATE TABLE statement. Here is a sample script:
SQL> connect SYSTEM/fyicenter
Connected.

SQL> CREATE TABLESPACE my_space
  2  DATAFILE '/temp/my_space.dbf' SIZE 10M;
Tablespace created.

SQL> connect HR/fyicenter
Connected.

SQL> CREATE TABLE my_team TABLESPACE my_space
  2  AS SELECT * FROM employees;
Table created.

SQL> SELECT table_name, tablespace_name, num_rows
  2  FROM USER_TABLES
  3  WHERE tablespace_name in ('USERS', 'MY_SPACE');

TABLE_NAME                     TABLESPACE_NAME    NUM_ROWS
------------------------------ ---------------- ----------
MY_TEAM                        MY_SPACE           - 
EMPLOYEES                      USERS              107
...

Suppose you created a tablespace with a single data file a month ago and 80% of that data file is now used; you should add another data file to the tablespace. This can be done with the ALTER TABLESPACE ... ADD DATAFILE statement. See the following sample script:
SQL> connect SYSTEM/fyicenter

SQL> CREATE TABLESPACE my_space
  2  DATAFILE '/temp/my_space.dbf' SIZE 10M;
Tablespace created.

SQL> ALTER TABLESPACE my_space
  2  ADD DATAFILE '/temp/my_space_2.dbf' SIZE 5M;
Tablespace altered.

SQL> SELECT TABLESPACE_NAME, FILE_NAME, BYTES
  2  FROM DBA_DATA_FILES;
TABLESPACE_NAME FILE_NAME                             BYTES
--------------- --------------------------------- ---------
USERS           C:\ORACLEXE\ORADATA\XE\USERS.DBF  104857600
SYSAUX          C:\ORACLEXE\ORADATA\XE\SYSAUX.DBF 461373440
UNDO            C:\ORACLEXE\ORADATA\XE\UNDO.DBF    94371840
SYSTEM          C:\ORACLEXE\ORADATA\XE\SYSTEM.DBF 356515840
MY_SPACE        C:\TEMP\MY_SPACE.DBF               10485760
MY_SPACE        C:\TEMP\MY_SPACE_2.DBF              5242880

SQL> SELECT TABLESPACE_NAME, FILE_ID, BYTES
  2  FROM USER_FREE_SPACE
  3  WHERE TABLESPACE_NAME IN ('MY_SPACE');
TABLESPACE_NAME                   FILE_ID      BYTES
------------------------------ ---------- ----------
MY_SPACE                                6    5177344
MY_SPACE                                5   10354688
This script created one tablespace with two data files.

ORA-00959: Tablespace 'TB001' Does Not Exist During Data Pump Import

When importing data from a different database, you sometimes get errors like:

ORA-39083: Object type TABLESPACE_QUOTA failed to create with error:
ORA-00959: tablespace 'TB001' does not exist
Failing sql is:

This means that the tablespace 'TB001' doesn't exist in the database you are importing into. To work around this, you can use the REMAP_TABLESPACE option. If more than one tablespace is missing in the target database, separate the mappings with commas, as follows:


REMAP_TABLESPACE=db01_tb001:db02_tbs,db_01_tb002:db02_tbs

To work out which mappings you need to create, check the TABLESPACE_NAME column of the DBA_SEGMENTS view in the source database to find which tablespaces your objects reside in. An example follows below.
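For example, to see which tablespaces a schema's objects occupy and then remap a missing tablespace during the import (all names below are placeholders, not taken from the error above):

SQL> SELECT DISTINCT TABLESPACE_NAME
  2  FROM DBA_SEGMENTS
  3  WHERE OWNER = 'TEST_USER';

$ impdp test_user/test123 remap_tablespace=TB001:USERS directory=imp_dir dumpfile=expdp_fulldb.dmp logfile=impdp_remap.log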

Thursday, May 24, 2012

Simply View Oracle Session Usage


Here are some SQL scripts I have put together to monitor session usage. If the database is getting slow, they make it possible to analyze the resources used by the connected users.

 SELECT
  'Currently, '
  || (SELECT COUNT(*) FROM V$SESSION)
  || ' out of '
  || DECODE(VL.SESSIONS_MAX,0,'unlimited',VL.SESSIONS_MAX)
  || ' connections are used.' AS USAGE_MESSAGE
FROM
  V$LICENSE VL










 SELECT
  'Currently, '
  || (SELECT COUNT(*) FROM V$SESSION)
  || ' out of '
  || VP.VALUE
  || ' connections are used.' AS USAGE_MESSAGE
FROM
  V$PARAMETER VP
WHERE VP.NAME = 'sessions'
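
To break the current sessions down by user, a simple query of my own over V$SESSION can be added to the above:

 SELECT
  NVL(VS.USERNAME, 'BACKGROUND') AS USERNAME,
  COUNT(*) AS SESSION_COUNT
FROM
  V$SESSION VS
GROUP BY VS.USERNAME
ORDER BY SESSION_COUNT DESC;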


 SELECT SUBSTR (df.NAME, 1, 40) file_name, df.bytes / 1024 / 1024 allocated_mb,
       ((df.bytes / 1024 / 1024) - NVL (SUM (dfs.bytes) / 1024 / 1024, 0))
       used_mb,
       NVL (SUM (dfs.bytes) / 1024 / 1024, 0) free_space_mb
       FROM v$datafile df,dba_free_space dfs
       WHERE df.file# = dfs.file_id(+)
       GROUP BY dfs.file_id, df.NAME, df.file#, df.bytes
       ORDER BY file_name;









Saturday, May 19, 2012

Oracle Data Pump Export/Import

The Oracle Data Pump utility is used for exporting data and metadata into a set of operating system files, and it is a newer, faster and more flexible alternative to the original export/import utilities.

1. Create directory object as SYS user.
SQL> create or replace directory export_dir as '/oradata/export';

2. Grant Read/Write privilege on the directory to the user, who invokes the Data pump export.
SQL> grant read,write on directory export_dir to test_user;

3. Take Data Pump Export

Click here to see Roles/privileges required for Export modes.

Oracle data pump export examples for all 5 modes.

(i) Full Database Export
$ expdp test_user/test123 full=y directory=export_dir dumpfile=expdp_fulldb.dmp logfile=expdp_fulldb.log

(ii) Schema Export
$ expdp test_user/test123 schemas=test_user directory=export_dir dumpfile=expdp_test_user.dmp logfile=expdp_test_user.log

If you want to export more than one schema, specify the schema names separated by commas.

(iii)Table Export
$ expdp test_user/test123 tables=emp,dept directory=export_dir dumpfile=expdp_tables.dmp logfile=expdp_tables.log

You can specify more than one table.

(iv) Tablespace Export
$ expdp test_user/test123 tablespaces=test_user_tbs directory=export_dir dumpfile=expdp_tbs.dmp logfile=expdp_tbs.log

You can specify more than one tablespace.

(v) Transportable tablespace
$ expdp test_user/test123 transport_tablespaces=test_user_tbs transport_full_check=y directory=export_dir dumpfile=expdp_trans_tbs.dmp logfile=expdp_trans_tbs.log

Click here to learn more on Transportable Tablespace with examples.

Oracle Data Pump Import :-
The Data Pump Import utility is used for loading export dump files into a target system; we can load one or more files.

Copy the dump file to the target system where you want to import it.

1. Create directory object as SYS user.
SQL> create directory imp_dir as '/oradata/import';

2. Grant Read/Write privilege on the Directory to the user, who invokes the Data Pump import.
SQL> grant read,write on directory imp_dir to test_user;

3. Import the data using Data Pump Import.

Oracle data pump import examples for all 5 modes.

(i) Full Database Import
$ impdp test_user/test123 full=Y directory=imp_dir dumpfile=expdp_fulldb.dmp logfile=imp_fulldb.log

(ii) Schema Import
$impdp test_user/test123 schemas=test_user directory=imp_dir dumpfile=expdp_test_user.dmp Logfile=impdp_test_user.log

(iii) Table Import
$ impdp test_user/test123 tables=emp,dept directory=imp_dir dumpfile=expdp_tables.dmp logfile=impdp_tables.log

From 11g, you can rename a table during the import:
REMAP_TABLE=[schema.]old_tablename[.partition]:new_tablename
$ impdp test_user/test123 remap_table=test_user.emp:emp1 directory=imp_dir dumpfile=expdp_tables.dmp logfile=impdp_tables.log

Tables will not be remapped if they already exist, even if TABLE_EXISTS_ACTION is set to TRUNCATE or APPEND.

(iv) Tablespace Import
$ impdp test_user/test123 tablespaces=test_user_tbs directory=imp_dir dumpfile=expdp_tbs.dmp logfile=impdp_tbs.log

The above example imports all tables that have data in the tablespace test_user_tbs, and it assumes that the tablespace already exists.

(v) Transportable Tablespace
Click here to see how to import data using the Transportable Tablespace method.

Common Errors with Data pump import (impdp) utility:-

1. ORA-31631: privileges are required
ORA-39122: Unprivileged users may not perform REMAP_SCHEMA remapping
Cause: A user attempted to remap objects during an import but lacked the IMPORT_FULL_DATABASE privilege.
Action: Retry the job from a schema that owns the IMPORT_FULL_DATABASE privilege.

2. ORA-31631: privileges are required
ORA-39161: Full database jobs require privileges
Cause: Either an attempt to perform a full database export without the EXP_FULL_DATABASE role or an attempt to perform a full database import over a network link without the IMP_FULL_DATABASE role.
Action: Retry the operation in a schema that has the required roles.

3. ORA-01950: no privileges on tablespace "string"
Cause: User does not have privileges to allocate an extent in the specified tablespace.
Action: Grant the user the appropriate system privileges or grant the user space resource on the tablespace.

Click here to learn Roles/ privileges required for Data pump Export and Import.

4. import done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
IMP-00017: following statement failed with ORACLE error 3113:
"BEGIN "
"SYS.DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE SYS.DBMS_RULE_ADM.CREATE_EVALUATIO" "N_CONTEXT_OBJ, 'SYS',TRUE);"
Cause: Import fails while executing the above statement.
Action: Login as sys and run the following scripts
$ORACLE_HOME/rdbms/admin/dbmsread.sql
$ORACLE_HOME/rdbms/admin/prvtread.plb

Wednesday, May 16, 2012

CACHE & NOCACHE Hint

When it comes to WSO2 products, all of them include Oracle database scripts as both oracle.sql and oracle_rac.sql. Of these two scripts, oracle.sql uses NOCACHE and oracle_rac.sql uses CACHE, so I intend to write this post about the CACHE and NOCACHE hints. This will also be useful when integrating a WSO2 product with Oracle.


CACHE & NOCACHE Hint

The CACHE hint instructs the optimizer to place the blocks retrieved for the table at the most recently used (MRU) end of the least recently used (LRU) list in the buffer cache when a full table scan is performed.
The buffer cache holds copies of data blocks so that Oracle can access them more quickly than by reading them off disk. Blocks within the buffer cache are ordered from MRU (most recently used) blocks to LRU (least recently used) blocks; whenever a block is accessed, it moves to the MRU end of the list. This hint is useful for small lookup tables.

In the following examples, the CACHE and NOCACHE hints override the default caching specification of the table:
SELECT /*+ FULL (hr_emp) CACHE(hr_emp) */ last_name
 FROM employees hr_emp;

SELECT /*+ FULL(hr_emp) NOCACHE(hr_emp) */ last_name
 FROM employees hr_emp;


The difference between the two hints is that CACHE keeps the blocks from a full table scan in the buffer cache, while NOCACHE does not. In an Oracle RAC environment the blocks from a full table scan do need to be cached, because several cluster nodes read them, which is why the oracle_rac.sql script uses CACHE. The CACHE and NOCACHE hints affect the system statistics "table scans (long tables)" and "table scans (short tables)", as shown in the V$SYSSTAT dynamic performance view.
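
To check those statistics yourself, you can query V$SYSSTAT directly (assuming you have SELECT access to the view):

SELECT NAME, VALUE
FROM V$SYSSTAT
WHERE NAME IN ('table scans (long tables)', 'table scans (short tables)');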


The above is a collection of notes I gathered on this subject.

Sunday, May 13, 2012

Axis2 Clustering

The clustering concept is a remarkable and noteworthy model, and I have been getting to grips with the Axis2 clustering mechanism.

A cluster consists of a group of independent but interconnected servers whose combined resources can be applied to a processing task. A common cluster feature is that it should appear to an application as though it were a single server. Apache Axis2 clustering works the same way.

The main requirements of any clustered enterprise deployment are high availability and scalability. High availability means the cluster services, monitors, and restarts all other resources as required. The scalability challenge is the ability to serve a capacity and speed of requests that keep growing, often at an exponential rate, due to factors such as a very large number of requests.

We can enable Axis2 clustering simply by setting the enable attribute to true on the clustering element in the axis2.xml file, as sketched below. The services provided by Axis2 clustering include the Axis2 clustering management API.
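A sketch of that element is shown here; the agent class below is the Tribes-based default in recent Axis2 releases, so check your own axis2.xml for the exact class name:

<clustering class="org.apache.axis2.clustering.tribes.TribesClusteringAgent" enable="true">
    <!-- agent parameters, state/node/group management and the members list go here -->
</clustering>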

The clustering configuration section of the axis2.xml file includes the main parts described below.

Clustering Agent & Clustering Agent Parameters
Agents normally have parameters that can be changed in order to tweak their initialization and performance. The following is a brief introduction to the most useful parameters.

synchronizeAll - ensures that all synchronized state is identical across the cluster members
maxRetries - the number of times to retry sending a message to the cluster members before the node is considered to have failed
localMemberHost - the IP address or host name of this member, as exposed to the other cluster members
memberDropTime - the time interval, in milliseconds, after which a member that does not respond is dropped from the group
AvoidInitiation - when set to true, the clustering agent is not initialized automatically; initialization is left to the framework that manages it

        <parameter name="synchronizeAll">true</parameter>
        <parameter name="maxRetries">10</parameter>
        <parameter name="localMemberHost">127.0.0.1</parameter>
        <parameter name="memberDropTime">3000</parameter>
        <parameter name="AvoidInitiation">true</parameter>



State Management
If we have a load-balanced cluster, state management requires synchronizing state across the members. This clustering behaviour keeps track of a user's activity across sessions. A sketch of the corresponding axis2.xml element is shown below.
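A sketch of the stateManager element in axis2.xml; the class name below is the default in recent Axis2 releases, so verify it against your own configuration:

<stateManager class="org.apache.axis2.clustering.state.DefaultStateManager"
              enable="true"/>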

Node Management
With node management you can add nodes to your cluster, monitor node status, and perform management actions on nodes:

<nodeManager class="org.apache.axis2.clustering.management.DefaultNodeManager"
            enable="true"/>

Group Management
Group management deals with managing resource groups in a cluster: adding and removing resource groups, and changing resource group attributes:

<groupManagement enable="true">
    <applicationDomain name="group1"
                      description="This is the first group"

agent="org.apache.axis2.clustering.management.DefaultGroupManagementAgent"/>
    <applicationDomain name="group2"
                      description="This is the second group"

agent="org.apache.axis2.clustering.management.DefaultGroupManagementAgent"/>
</groupManagement>

Members
A set of identifiers is used to give each cluster member an identity within the Axis2 cluster, with different IP addresses and ports, as sketched below.
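A sketch of a static (well-known) member list in axis2.xml, with placeholder hosts and ports:

<members>
    <member>
        <hostName>127.0.0.1</hostName>
        <port>4000</port>
    </member>
    <member>
        <hostName>127.0.0.1</hostName>
        <port>4001</port>
    </member>
</members>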

Saturday, May 12, 2012

WSO2 Message Broker Integrated With Zookeeper & Cassandra

WSO2 implemented its new distributed message broker by integrating the WSO2 Message Broker with ZooKeeper and Cassandra.

Distributed systems are complicated and not easy to understand, because distributed systems are hard to program. Testing a distributed system can also be quite intricate: verifying the behaviour and impact of code in the system is not easy and requires a lot of careful thought and planning during the testing phase.

In this blog post I first describe how the WSO2 Message Broker integrates with ZooKeeper and Cassandra.

Apache Zookeeper

ZooKeeper is a centralized coordination service that distributed applications use to synchronize with each other; the services it offers are in turn used by other distributed applications. When we integrate WSO2 MB with ZooKeeper, the MB cluster uses the services provided by ZooKeeper. ZooKeeper determines which nodes are alive at any given time and handles node failures. ZooKeeper is highly available, so all clients making requests receive consistent and up-to-date data.

Getting started with Apache Zookeeper

Download Apache Zookeeper (http://www.apache.org/dyn/closer.cgi/zookeeper/)

Create a zoo.cfg file in zookeeper-3.3.5/conf and include the following:

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
dataDir=/tmp/zookeeper
# the port at which the clients will connect
clientPort=2181

Then go to the bin folder and start the ZooKeeper server by executing the command "zkServer.sh start".

In the MB instance, the qpid-config.xml file located in repository/conf/advanced/ includes the ZooKeeper host and port configuration:

<coordination>
<!-- Apache Zookeeper Host name -->
<ZooKeeperHost>127.0.0.1</ZooKeeperHost>
<!-- Apache Zookeeper port -->
<ZooKeeperPort>2181</ZooKeeperPort>
<!-- Format yyyy-MM-dd HH:mm:ss -->
<ReferenceTime>2012-02-29 08:08:08</ReferenceTime>
</coordination>

The default port for ZooKeeper is 2181.

Configure Cassandra

Download Apache Cassandra from http://cassandra.apache.org/download/

Cassandra generates several data and log files, so we need to create the directories whose paths are configured in conf/cassandra.yaml: data_file_directories, commitlog_directory, and saved_caches_directory. Make sure those paths exist before starting Cassandra.

Create the system.log file location and point to it in conf/log4j.properties:
log4j.appender.R.File=/var/log/cassandra/system.log

Navigate to the bin folder and execute "sh cassandra -f" to start Cassandra.

The MB instance also needs the details of the distributed cluster, configured in the repository/conf/advanced/qpid-virtualhosts.xml file:
<virtualhost>
<name>carbon</name>
<carbon>
<store>
<class>org.wso2.andes.server.store.CassandraMessageStore</class>
<username>admin</username>
<password>admin</password>
<cluster>ClusterOne</cluster>
<host>localhost</host>
<port>9160</port>
</store>
</carbon>
</virtualhost>

Running WSO2 MB

Download the WSO2 MB.
Set the JAVA_HOME environment variable to your Java installation (for example using the export command).
Execute ./wso2server.sh in CARBON_HOME/bin.
Log in with "admin" as both the user name and password.


When we navigate to Home -> Manage -> Message Broker Clusters -> Node List, we can view the ZooKeeper ID of each node.


Now the Message Broker is running. Using a sample sender and receiver client, we can send messages to the cluster and consume messages from it. I will come back with complete client code next time; a rough sketch follows below.
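
Until then, here is a rough sketch of what such a JMS client could look like. It assumes the Apache Qpid JMS client conventions (the JNDI factory class and AMQP connection URL below are assumptions based on that client, not WSO2 MB specifics), so adjust the names, credentials and ports to match your own setup:

import java.util.Properties;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.Context;
import javax.naming.InitialContext;

public class MBQueueClient {
    public static void main(String[] args) throws Exception {
        // JNDI settings for the Qpid JMS client; the URL and queue name are placeholders
        Properties props = new Properties();
        props.put(Context.INITIAL_CONTEXT_FACTORY,
                  "org.apache.qpid.jndi.PropertiesFileInitialContextFactory");
        props.put("connectionfactory.qpidConnectionFactory",
                  "amqp://admin:admin@clientID/carbon?brokerlist='tcp://localhost:5672'");
        props.put("queue.testQueue", "testQueue");

        Context ctx = new InitialContext(props);
        ConnectionFactory factory = (ConnectionFactory) ctx.lookup("qpidConnectionFactory");
        Queue queue = (Queue) ctx.lookup("testQueue");

        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // Send a message to the queue
        MessageProducer producer = session.createProducer(queue);
        producer.send(session.createTextMessage("Hello from the MB cluster"));

        // Consume the same message back
        MessageConsumer consumer = session.createConsumer(queue);
        TextMessage received = (TextMessage) consumer.receive(5000);
        System.out.println("Received: " + (received == null ? "nothing" : received.getText()));

        connection.close();
        ctx.close();
    }
}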

Thursday, May 10, 2012

Web Security With OAuth & OpenID

I participated in a web security workshop, where I picked up two web security features called OAuth and OpenID. Below is a brief write-up on the two technologies.

Web service security is a major requirement and a crucial part of enterprise services. WS-Security supports multiple token formats, multiple trust domains, multiple signature formats, and multiple encryption technologies. OAuth and OpenID are two powerful security mechanisms that tend to live long alongside WS-Security.

OAuth

OAuth is an open protocol that allows users to share their private resources, stored on one site, with another site without having to hand out their username and password. Without it, users would have to share their credentials with potentially untrustworthy parties as their data spreads across various websites such as Flickr and Twitter; with OAuth this can be avoided. Furthermore, access rights can be granted for a limited period of time without ever exposing the user name and password. WSO2 Identity Server includes support for this kind of delegation via OAuth, allowing a third party to act on the user's behalf.

In the OAuth flow, when a user logs in to a service, the authentication process redirects the user to where the OAuth provider runs. The OAuth provider uses its own OAuth credentials (a token) to retrieve credentials for the user, and stores them along with the user's account so the user is allowed to use and access the service. OAuth matters when someone hacks all the passwords held by an OAuth provider: even though the user has lost his OAuth provider password, the attacker still does not have the user's password for the service itself.

OpenID

OpenID lets you keep control over your own identity by separating it from the individual sites you use. Currently most people use the same username and password on every site they access for authentication purposes, but this is very insecure and not a recommended practice. OpenID limits this risk by reducing the number of sites that hold your username and password: one OpenID lets you access any number of websites or online services, and the OpenID provider validates your identity over a secure channel.

In an OpenID scenario, web redirections are used for communication between the relying party and the OpenID provider. Suppose you want to log in to a web service but have no user credentials for it; if that web service supports OpenID, you can use an account from an OpenID provider to log in to the service for which you have no user ID. Web services supporting OpenID therefore eliminate the need to remember more than one user ID and password.

Next, I plan to look into and present how these two features are supported in WSO2 Identity Server.