javablogspot

Just another WordPress.com weblog

Posts Tagged ‘Springs’

Spring 3.0 source build error

Posted by damuchinni on March 14, 2009

1. The dependency jar “com.springsource.org.xmlpull”, version “1.1.3.4-O”, can’t be downloaded automatically. It needs to be downloaded manually from here and copied to \ivy-cache\repository\org.xmlpull\com.springsource.org.xmlpull\1.1.3.4-O.

2. Set ANT_OPTS before you start the build if you get a java.lang.OutOfMemoryError: PermGen space exception:

set ANT_OPTS=-Xms256m -Xmx512m -XX:PermSize=128m -XX:MaxPermSize=256m


Multithreading Spring-Batch using Concurrent API

Posted by damuchinni on March 12, 2009

One of the recurrent tasks in my job is creating batch programs in Java.

Some are extraction batches (we get data from a DB and write it to an XML file), some are imports (from XML to DB), and some are DB-only treatments. All of these batches use the Spring Batch library to do their work.

Spring Batch is a well-thought-out library that, among other things, allows you to:

Handle large data sets by splitting them into small chunks
Restart a batch exactly where it stopped
Define how many chunks are handled per commit step
Roll back / retry automatically in case of error
Etc.

A typical execution of a batch works as follows.

To simplify things, let’s say a batch is composed of steps, and a step is a Reader and a Writer communicating with each other (a simplified sketch of this loop follows the list below).

At the beginning of the step, the Reader and Writer are opened:
The Reader opens a cursor on the DB, using a query.
The Writer creates the XML file to write to.
Then, for each element read by the Reader (each row of the result set is converted into a Java object):
The element read is sent to the Writer.
The Writer converts it to XML and writes it to the file.
When the Reader returns null, there is no more data to process:
The Step tells the Writer to flush all its data.
The Step closes the Writer and the Reader.
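
This is not Spring Batch’s real internals, but a rough sketch of that loop, using hypothetical Reader and Writer types that mirror the description above:

// Hypothetical types for illustration only; Spring Batch's real interfaces differ in detail.
interface Reader { void open(); Object read() throws Exception; void close(); }
interface Writer { void open(); void write(Object item); void flush(); void close(); }

class StepRunner {
    void runStep(Reader reader, Writer writer) throws Exception {
        reader.open();                               // e.g. open a DB cursor
        writer.open();                               // e.g. create the XML file
        try {
            Object item;
            while ((item = reader.read()) != null) { // null = no more data to process
                writer.write(item);                  // convert to XML and write/buffer it
            }
            writer.flush();                          // flush any buffered output
        } finally {
            writer.close();
            reader.close();
        }
    }
}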

Recently, I was asked to optimize a slow extraction batch.

Profiling the code, I saw that the Reader had to execute 15 SQL requests to get all the data to send to the Writer. As all elements are handled sequentially, that meant 150,000 requests for 10,000 elements! A quick look at the database status while a batch was running showed me that the database was not fully utilized.

I decided to multi-thread the queries done in the Reader. By default, Spring Batch does not easily support multi-threading, and I didn’t want to change much in the existing code, so I used the Java Concurrent API.

In my multithreaded batch, the Reader doesn’t execute the queries to get data for the Writer; instead it creates a FutureTask and adds it to a multithreaded ExecutorService. This FutureTask is then sent to the Writer, which only stores it.

When writer.flush() is called, we get the data one by one from each of the FutureTasks. All the threads of the ExecutorService are busy querying the DB to fill in the data for the FutureTasks, so all the reading is multi-threaded.

Next, let’s dig a little deeper into the code:

We create a Callable class that will execute the queries; suppose we want to read contracts from the DB:

protected class ContractCreator implements Callable<Contract> {

    protected long elementId;

    public ContractCreator(long elementId) {
        this.elementId = elementId;
    }

    public Contract call() throws Exception {
        // Called by the pool threads:
        // execute all the queries here
        // and return the real data object
    }
}

The reader creates the ExecutorService with a pool of 10 threads:

protected ExecutorService executor=Executors.newFixedThreadPool(10);

Then the reader creates a ContractCreator for each element to read and submits it to the ExecutorService.

public Object read(ResultSet rs, int rowNum) throws SQLException {
    long elementId = rs.getLong("ELEMENT_ID");
    // Multithreaded execution: the query work runs in the pool; we only return the Future
    Future<Contract> result = executor.submit(new ContractCreator(elementId));
    return result;
}

The Future created is sent to the Writer which only stores it.

/**
 * We store all the contracts being built in parallel here.
 */
protected List<Future<Contract>> contracts = new ArrayList<Future<Contract>>(10);

public void write(Object output) {
    contracts.add((Future<Contract>) output);
}

It’s only when flush is called that the Writer tries to get each contract and write them to the XML file:

@Override
public void flush() throws FlushFailedException {
    for (Future<Contract> contractCreator : contracts) {
        try {
            // Get the data read by the threads;
            // will block until the data is available
            Contract contract = contractCreator.get();
            writeContractToFile(contract);
        } catch (Exception e) {
            // Wrap the checked exceptions thrown by Future.get()
            throw new RuntimeException("Unable to get contract data", e);
        }
    }
    contracts.clear();
    super.flush();
}

We now have 10 threads querying the DB in parallel with little effort! All the basic thread and synchronization handling is done by the Concurrent API, and the resulting file is exactly the same as with the single-threaded version.

I hope this Spring Batch example helps you better understand the power of the Java Concurrent API!


Migrating From Spring dm Server

Posted by damuchinni on March 11, 2009

The easiest way to develop Spring-powered OSGi applications is to use the SpringSource Tools (Eclipse plug-ins) and SpringSource dm Server combo. They enable rapid development: MANIFEST.MF validation, convenient deployment, and automatic downloading of required (and OSGi-fied) libraries. Later on, when you need to migrate to plain OSGi (just like I do), you can follow this tutorial series.

A month ago I started development of an application with SpringSource dm Server (ssdms). In the middle of the project, a requirement arose to deploy it in a “plain” OSGi container (namely Equinox). The migration was rocky, so I decided to blog it for the sake of sharing.

First I downloaded the bare-bones “Framework” from Eclipse. The “Eclipse Equinox” version (listed topmost) is the Framework plus extras, but I didn’t use it since I wanted to build everything from scratch (and it’s stuffed with an old Servlet 2.4 and Jetty 5).

The framework is a console; type ss and it will show one active bundle (which is the framework itself). Type close to shut down.
osgi> ss

Framework is launched.

id State Bundle
0 ACTIVE org.eclipse.osgi_3.4.0.v20080605-1900

If you want to enable JMX, open eclipse.ini (it’s in Eclipse.app if you’re using Mac) and add the following entries:
-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.port=6789
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.authenticate=false

Run jconsole from a Terminal or Command Prompt (assuming JDK/bin is in your path) and enter service:jmx:rmi:///jndi/rmi://localhost:6789/jmxrmi as the JMX URL. You should be able to see “inside” the framework.

The next step is to fill it with libraries. The easiest way is to copy from the ssdms distribution to our Equinox’s /plugins:

* Every jar in /lib EXCEPT:
o Jars starting with com.springsource.server
o Jars containing slf4j, if you want “unified” logging like I do (explained later)
o Jars starting with org.eclipse.osgi, because they may conflict with our OSGi framework
* Every jar in /repository/bundles/ext EXCEPT:
o Jars containing slf4j and anything related to any logging framework (commons-logging, Log4J), if you want “unified” logging
o Jars ending with -sources, unless you want to keep the source code
o Jars starting with org.springframework.osgi, because we will use the ones from the Spring Dynamic Modules distribution
* Every jar in /repository/bundles/usr EXCEPT:
o Jars containing slf4j and anything related to any logging framework (commons-logging, Log4J), if you want “unified” logging
o Jars ending with -sources, unless you want to keep the source code

What is “unified” logging? Well, you might want to unify your logging facility so that essentially everything goes to one sink, easing administration. Logging deserves its own blog entry, so I’ll skip it for now. Let’s download Spring Dynamic Modules and copy the following jars from /dist to our Equinox’s /plugins:

* spring-osgi-core-VERSION.jar
* spring-osgi-extender-VERSION.jar
* spring-osgi-io-VERSION.jar
* spring-osgi-web-VERSION.jar
* spring-osgi-web-extender-VERSION.jar

Note: For now, you can use (download and put to /plugins) any logging framework library of your choice. Later on, when you decide to use the unified approach, you must replace them with the SLF4J bridges.

You will definitely want Tomcat and Spring Dynamic Modules to run every time the framework is started. You can go to the console and start them manually (the framework remembers the active bundles upon shutdown and will start them next time), or you can list them in /configuration/config.ini. The latter is preferred, since you can “reset” the framework (delete all configuration and cache) and still have the desired bundles run on startup. To do that, put in the following entries:
osgi.bundles=org.eclipse.equinox.common@2:start, org.eclipse.update.configurator@3:start,
catalina.start.osgi-VERSION.jar@3:start, spring-osgi-extender-VERSION.jar@4:start,
spring-osgi-web-extender-VERSION.jar@4:start
eclipse.ignoreApp=true
osgi.noShutdown=true

Verify the configuration by running the framework again. You should see logs of Tomcat and Spring Dynamic Modules activity, more or less like this:
14:48 Start Thread I o.s.o.w.t.i.Activator – Starting Apache Tomcat/6.0.18 …
14:48 Start Thread I o.s.o.w.t.i.Activator – Using default XML configuration bundleresource://3/conf/default-server.xml
Mar 11, 2009 2:48:09 PM org.apache.catalina.startup.ClusterRuleSetFactory getClusterRuleSet
INFO: Unable to find a cluster rule set in the classpath. Will load the default rule set.
14:48 t Dispatcher I o.s.o.e.i.a.ContextLoaderListener – Starting [org.springframework.osgi.extender] bundle v.[1.2.0.m2]
Mar 11, 2009 2:48:09 PM org.apache.coyote.http11.Http11AprProtocol init
INFO: Initializing Coyote HTTP/1.1 on http-8080
Mar 11, 2009 2:48:09 PM org.apache.catalina.startup.Catalina load
INFO: Initialization processed in 227 ms
Mar 11, 2009 2:48:09 PM org.apache.catalina.core.StandardService start
INFO: Starting service Catalina
Mar 11, 2009 2:48:09 PM org.apache.catalina.core.StandardEngine start
INFO: Starting Servlet Engine: Apache Tomcat/6.0.18
Mar 11, 2009 2:48:09 PM org.apache.coyote.http11.Http11AprProtocol start
INFO: Starting Coyote HTTP/1.1 on http-8080
14:48 Start Thread I o.s.o.w.t.i.Activator – Succesfully started Apache Tomcat/6.0.18 @ Catalina:8080
14:48 Start Thread I o.s.o.w.t.i.Activator – Published Apache Tomcat/6.0.18 as an OSGi service
14:48 t Dispatcher I o.s.o.e.i.s.ExtenderConfiguration – No custom extender configuration detected; using defaults…
14:48 t Dispatcher I o.s.s.t.TimerTaskExecutor – Initializing Timer
14:48 t Dispatcher I o.s.s.t.TimerTaskExecutor – Initializing Timer
14:48 t Dispatcher I o.s.o.w.e.i.a.WarLoaderListener – Starting [org.springframework.osgi.web.extender] bundle v.[1.2.0.m2]
14:48 xtender-Init I o.s.o.w.e.i.a.WarListenerConfiguration – No custom extender configuration detected; using defaults…
14:48 xtender-Init I o.s.o.w.d.t.TomcatWarDeployer – No Catalina Service set; looking for one in the OSGi service registry…
14:48 xtender-Init I o.s.o.w.d.t.TomcatWarDeployer – Found service Catalina
That’s it for now; later I will write about making EclipseLink JPA work (dynamically!) and integrating BlazeDS for Flex remoting.


Tutorial: Your First Simple Spring MVC Template

Posted by damuchinni on March 10, 2009

In this post, I’m going to show you how to create your first simple Spring MVC template. Before we start, I assume that you already know the basics of Spring MVC. If you don’t, I suggest first reading some introductions to Spring. Below are some good resources to start with.

Introduction to Spring Framework 2.5
What’s new in Spring 2.5
Spring 2.0: What’s new and why it matters
Introducing the Spring Framework
Spring Framework reference manual (PDF)
Let’s start. Set up a new blank project, add all the API dependencies to your classpath, then configure your web.xml like this:

<web-app>

    <servlet>
        <servlet-name>myproject</servlet-name>
        <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
        <load-on-startup>1</load-on-startup>
    </servlet>

    <servlet-mapping>
        <servlet-name>myproject</servlet-name>
        <url-pattern>/index.html</url-pattern>
    </servlet-mapping>

    <servlet-mapping>
        <servlet-name>myproject</servlet-name>
        <url-pattern>*.html</url-pattern>
    </servlet-mapping>

    <welcome-file-list>
        <welcome-file>index.html</welcome-file>
    </welcome-file-list>

</web-app>
I set two mappings: one for the url pattern and one for the welcome page. Next is setting up the beans. In your myproject-servlet.xml, put this configuration:

<beans>

    <bean id="viewResolver" class="org.springframework.web.servlet.view.ResourceBundleViewResolver">
        <property name="basename" value="views"/>
    </bean>

    <bean id="actionMethodNameResolver" class="org.springframework.web.servlet.mvc.multiaction.PropertiesMethodNameResolver">
        <property name="mappings">
            <props>
                <prop key="/index.html">showIndex</prop>
            </props>
        </property>
    </bean>

    <bean id="urlMapping" class="org.springframework.web.servlet.handler.SimpleUrlHandlerMapping">
        <property name="mappings">
            <props>
                <prop key="/*.html">myProjectController</prop>
            </props>
        </property>
    </bean>

    <bean id="myProjectController" class="com.myproject.controllers.PageController">
        <property name="methodNameResolver" ref="actionMethodNameResolver"/>
    </bean>

</beans>
I used the ResourceBundleViewResolver to configure the views with a properties file, so that we can still edit the view configuration without recompiling the project. The actionMethodNameResolver bean is what dispatches each url request to the proper controller method. Since the prop /index.html has a value of showIndex, your PageController should contain this method:

package com.myproject.controllers;

import org.springframework.web.servlet.ModelAndView;
import org.springframework.web.servlet.mvc.multiaction.MultiActionController;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class PageController extends MultiActionController {

    public ModelAndView showIndex(HttpServletRequest request, HttpServletResponse response) {
        return new ModelAndView("index-page", "model", "Hello World!");
    }
}
The next step is to create the view index-page. Create a properties file named views.properties. It should contain the view class and the url of your jsp. Take note that you need the jstl lib in your classpath.

index-page.class=org.springframework.web.servlet.view.JstlView
index-page.url=WEB-INF/index.jsp
Create the index.jsp under WEB-INF. To get the model from our controller, we can use this code.

${model}

Set up your Tomcat server, deploy, and run the project. Access your local server (http://localhost:[port]) and you’re done. You should see the “Hello World!”. You can now add other page requests to your actionMethodNameResolver’s props and configure the matching methods in your PageController (see the sketch below). Don’t forget to add your view configurations to views.properties. You can download the source code here. I created the project using IntelliJ IDEA and included the required libraries so you don’t have to download them. You can also import the project using Apache
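
For instance, a hypothetical /about.html page (all names here are purely illustrative) would need three small additions:

<!-- in myproject-servlet.xml, inside actionMethodNameResolver's props -->
<prop key="/about.html">showAbout</prop>

// in PageController
public ModelAndView showAbout(HttpServletRequest request, HttpServletResponse response) {
    return new ModelAndView("about-page", "model", "About this project");
}

# in views.properties
about-page.class=org.springframework.web.servlet.view.JstlView
about-page.url=WEB-INF/about.jsp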


Spring and EJB Transaction strategies

Posted by damuchinni on March 10, 2009

It’s a common mistake to confuse transaction models with transaction strategies. This second article in the Transaction strategies series outlines the three transaction models supported by the Java™ platform and introduces four primary transaction strategies that use those models. Using examples from the Spring Framework and the Enterprise JavaBeans (EJB) 3.0 specification, Mark Richards explains how the transaction models work and how they can form the basis for developing transaction strategies ranging from basic transaction processing to high-speed transaction-processing systems.

All too often, developers, designers, and architects confuse transaction models with transaction strategies. I typically ask the architect or technical lead in a client engagement to describe their project’s transaction strategy. I usually get one of three responses. Sometimes it’s a quiet “Oh, well, we really don’t use transactions in our applications.” Other times I hear a confused “Um, I’m really not sure what you mean.” Usually, however, I get the confident response that “We are using declarative transactions.” But as you will see in this article, the term declarative transactions describes a transaction model, but by no means is it a transaction strategy.

About this series

Transactions improve the quality, integrity, and consistency of your data and make your applications more robust. Implementation of successful transaction processing in Java applications is not a trivial exercise, and it’s about design as much as about coding. In this new series, Mark Richards is your guide to designing an effective transaction strategy for use cases ranging from simple applications to high-performance transaction processing.

The three transaction models supported by the Java platform are:

The Local Transaction model
The Programmatic Transaction model
The Declarative Transaction model
These models describe the basics of how transactions should behave in the Java platform and how they are implemented. However, they provide only the rules and semantics for transaction processing. How the transaction model is applied is entirely up to you. For example, when should you use the REQUIRED vs. the MANDATORY transaction attribute? When and where should you specify the transaction rollback directives? When should you consider the Programmatic Transaction model vs. the Declarative Transaction model? How do you optimize transactions for high-performance systems? The transaction models themselves can’t answer these questions. Rather, you must address them either by developing your own transaction strategy or by adopting one of the four primary transaction strategies I introduce in this article.

As you saw in the first article of this series, many common transaction pitfalls can affect transactional behavior and, consequently, diminish your data’s integrity and consistency. Likewise, the lack of an effective (or any) transaction strategy will have a negative effect on your data’s integrity and consistency. The transaction models I describe in this article are the building blocks for developing an effective transaction strategy. Understanding the differences among the models and how they work is paramount to understanding the transaction strategies that use them. After describing the three transaction models, I’ll introduce four transaction strategies that apply to most business applications, ranging from simple Web applications to large high-speed transaction-processing systems. Subsequent articles in the Transaction Strategies series will describe these strategies in detail.

Local Transaction model

The Local Transaction model gets its name from the fact that transactions are managed by the underlying database resource manager, not the container or framework your application is running in. In this model, you manage connections rather than transactions. As you learned from “Understanding transaction pitfalls,” you can’t use the Local Transaction model when you make database updates using an object-relational mapping framework such as Hibernate, TopLink, or the Java Persistence API (JPA). You can still apply it when using data-access object (DAO) or JDBC-based frameworks and database stored procedures.

You can use the Local Transaction model in one of two ways: let the database manage the connection, or manage the connection programmatically. To let the database manage the connection, you set the autoCommit property on the JDBC Connection object to true (the default value), which tells the underlying database management system (DBMS) to commit the transaction after the insert, update, or delete has completed, or roll back the work if it fails. This technique is illustrated in Listing 1, which inserts a stock-trade order into a TRADE table:

Listing 1. Local transactions with a single update

public class TradingServiceImpl {
    public void processTrade(TradeData trade) throws Exception {
        Connection dbConnection = null;
        try {
            DataSource ds = (DataSource)
                (new InitialContext()).lookup("jdbc/MasterDS");
            dbConnection = ds.getConnection();
            dbConnection.setAutoCommit(true);
            Statement sql = dbConnection.createStatement();
            String stmt = "insert into TRADE ...";
            sql.executeUpdate(stmt);
        } finally {
            if (dbConnection != null)
                dbConnection.close();
        }
    }
}

Notice in Listing 1 that the autoCommit value is set to true, indicating to the DBMS that the local transaction should be committed after each database statement. This technique works fine if you have a single database maintenance activity within the logical unit of work (LUW). However, suppose that the processTrade() method shown in Listing 1 also updates the balance in the ACCT table to reflect the trade order’s value. In this case, the two database actions would be independent of each other, with the insert to the TRADE table being committed to the database before the update of the ACCT table. Should the update to the ACCT table fail, there would be no mechanism to roll back the insert to the TRADE table, resulting in inconsistent data in the database.

This scenario leads to the second technique: managing the connections programmatically. In this technique, you would set the autoCommit property on the Connection object to false and manually commit or roll back the connection. Listing 2 illustrates this technique:

Listing 2. Local transactions with multiple updates

public class TradingServiceImpl {
    public void processTrade(TradeData trade) throws Exception {
        Connection dbConnection = null;
        try {
            DataSource ds = (DataSource)
                (new InitialContext()).lookup("jdbc/MasterDS");
            dbConnection = ds.getConnection();
            dbConnection.setAutoCommit(false);
            Statement sql = dbConnection.createStatement();
            String stmt1 = "insert into TRADE ...";
            sql.executeUpdate(stmt1);
            String stmt2 = "update ACCT set balance...";
            sql.executeUpdate(stmt2);
            dbConnection.commit();
        } catch (Exception up) {
            dbConnection.rollback();
            throw up;
        } finally {
            if (dbConnection != null)
                dbConnection.close();
        }
    }
}

Notice in Listing 2 that the autoCommit property is set to false, informing the underlying DBMS that the connection will be managed in the code, not the database. In this case, you must invoke the commit() method on the Connection object if all is well; otherwise, invoke the rollback() method if an exception occurs. In this manner, you can coordinate the two database activities in the same unit of work.

Although the Local Transaction model may seem somewhat outdated in this day and age, it is an important element for one of the primary transaction strategies I’ll introduce toward the end of the article.


Programmatic Transaction model

The Programmatic Transaction model gets its name from the fact that the developer is responsible for managing the transaction. In the Programmatic Transaction model, unlike the Local Transaction model, you manage transactions and are isolated from the underlying database connections.

Similar to the example in Listing 2, with this model the developer is responsible for obtaining a transaction from the transaction manager, starting the transaction, committing the transaction, and — if an exception occurs — rolling back the transaction. As you can probably guess, this results in a lot of error-prone code that tends to get in the way of the business logic in your applications. However, some transaction strategies require the use of the Programmatic Transaction model.

Although the concepts are the same, implementation of the Programmatic Transaction model differs between the Spring Framework and the EJB 3.0 specification. I’ll illustrate the implementation of this model first using EJB 3.0, then show the same database updates using the Spring Framework.

Programmatic transactions with EJB 3.0

In EJB 3.0, you obtain a transaction from the transaction manager (in other words, the container) by doing a Java Naming and Directory Interface (JNDI) lookup on the javax.transaction.UserTransaction. Once you have a UserTransaction, you can invoke the begin() method to start the transaction, the commit() method to commit the transaction, and the rollback() method to roll back the transaction if an error occurs. In this model, the container will not automatically commit or roll back the transaction; it is up to the developer to program this behavior in the Java method performing the database updates. Listing 3 shows an example of the Programmatic Transaction model for EJB 3.0 using JPA:

Listing 3. Programmatic transactions using EJB 3.0

@Stateless
@TransactionManagement(TransactionManagementType.BEAN)
public class TradingServiceImpl implements TradingService {
    @PersistenceContext(unitName="trading") EntityManager em;

    public void processTrade(TradeData trade) throws Exception {
        InitialContext ctx = new InitialContext();
        UserTransaction txn = (UserTransaction)ctx.lookup("UserTransaction");
        try {
            txn.begin();
            em.persist(trade);
            AcctData acct = em.find(AcctData.class, trade.getAcctId());
            double tradeValue = trade.getPrice() * trade.getShares();
            double currentBalance = acct.getBalance();
            if (trade.getAction().equals("BUY")) {
                acct.setBalance(currentBalance - tradeValue);
            } else {
                acct.setBalance(currentBalance + tradeValue);
            }
            txn.commit();
        } catch (Exception up) {
            txn.rollback();
            throw up;
        }
    }
}

When using the Programmatic Transaction model in a Java Platform, Enterprise Edition (Java EE) container environment with a stateless session bean, you must tell the container you are using programmatic transactions. You do this by using the @TransactionManagement annotation and setting the transaction type to BEAN. If you don’t use this annotation, the container assumes you are using declarative transaction management (CONTAINER), which is the default transaction type for EJB 3.0. When you use programmatic transactions in the client layer outside of the context of a stateless session bean, you don’t need to set the transaction type.

Programmatic transactions with Spring

The Spring Framework has two ways of implementing the Programmatic Transaction model. One way is through the Spring TransactionTemplate, and the other is by using a Spring platform transaction manager directly. Because I am not a big fan of anonymous inner classes and hard-to-read code, I will use the second technique to illustrate the Programmatic Transaction model in Spring.
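
For completeness, a minimal sketch of the TransactionTemplate approach (not used in the rest of this article) looks roughly like this, assuming the template has been injected and configured with a transaction manager:

transactionTemplate.execute(new TransactionCallbackWithoutResult() {
    protected void doInTransactionWithoutResult(TransactionStatus status) {
        // Perform the database updates here; the template commits on
        // success and rolls back on a thrown RuntimeException or when
        // status.setRollbackOnly() is called.
    }
});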

Spring has at least nine platform transaction managers. The most common ones you will most likely use are the DataSourceTransactionManager, HibernateTransactionManager, JpaTransactionManager, and the JtaTransactionManager. My code examples use JPA, so I’ll show the configuration for the JpaTransactionManager.

To configure the JpaTransactionManager in Spring, simply define the bean in the application context XML file using the org.springframework.orm.jpa.JpaTransactionManager class and add a reference to the JPA Entity Manager Factory bean. Then, assuming the class containing your application logic is managed by Spring, inject the transaction manager into the bean, as shown in Listing 4:

Listing 4. Defining the Spring JPA transaction manager
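
A representative configuration along those lines (bean ids and the service class name are illustrative) would be:

<bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager">
    <property name="entityManagerFactory" ref="entityManagerFactory"/>
</bean>

<bean id="tradingService" class="com.trading.TradingServiceImpl">
    <property name="txnManager" ref="transactionManager"/>
</bean>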

If Spring doesn’t manage the application class, you can obtain a reference to the transaction manager in your method by using the getBean() method on the Spring context.

In the source code, you can now use the platform manager to get a transaction. Once all the updates are performed you can invoke the commit() method to commit the transaction, or the rollback() method to roll back the transaction. Listing 5 illustrates this technique:

Listing 5. Using the Spring JPA transaction manager

public class TradingServiceImpl {
    @PersistenceContext(unitName="trading") EntityManager em;

    JpaTransactionManager txnManager = null;
    public void setTxnManager(JpaTransactionManager mgr) {
        txnManager = mgr;
    }

    public void processTrade(TradeData trade) throws Exception {
        TransactionStatus status =
            txnManager.getTransaction(new DefaultTransactionDefinition());
        try {
            em.persist(trade);
            AcctData acct = em.find(AcctData.class, trade.getAcctId());
            double tradeValue = trade.getPrice() * trade.getShares();
            double currentBalance = acct.getBalance();
            if (trade.getAction().equals("BUY")) {
                acct.setBalance(currentBalance - tradeValue);
            } else {
                acct.setBalance(currentBalance + tradeValue);
            }
            txnManager.commit(status);
        } catch (Exception up) {
            txnManager.rollback(status);
            throw up;
        }
    }
}

Notice in Listing 5 the difference between the Spring Framework and EJB 3.0. In Spring, the transaction is retrieved (and consequently started) by invoking the getTransaction() method on the platform transaction manager. The anonymous DefaultTransactionDefinition class contains details about the transaction and its behavior, including the transaction name, isolation level, propagation mode (transaction attribute), and transaction timeout (if any). In this case, I am simply using the default values, which are an empty string for the name, the default isolation level for the underlying DBMS (usually READ_COMMITTED), PROPAGATION_REQUIRED for the transaction attribute, and the default timeout of the DBMS. Also notice that the commit() and rollback() methods are invoked using the platform transaction manager, not the transaction (as is the case with EJB).


Declarative Transaction model

The Declarative Transaction model, otherwise known as Container Managed Transactions (CMT), is the most common transaction model in the Java platform. In this model, the container environment takes care of starting, committing, and rolling back the transaction. The developer is responsible only for specifying the transactions’ behavior. Most of the transaction pitfalls discussed in this series’ first article are associated with the Declarative Transaction model.

Both the Spring Framework and EJB 3.0 make use of annotations to specify the transaction behavior. Spring uses the @Transactional annotation, whereas EJB 3.0 uses the @TransactionAttribute annotation. The container will not automatically roll back a transaction on a checked exception when you use the Declarative Transaction model. The developer must specify where and when to roll back a transaction when a checked exception occurs. In the Spring Framework, you specify this by using the rollbackFor property on the @Transactional annotation. In EJB, you specify it by invoking the setRollbackOnly() method on the SessionContext.

Listing 6 illustrates the use of the Declarative Transaction model for EJB:

Listing 6. Declarative transactions using EJB 3.0

@Stateless
public class TradingServiceImpl implements TradingService {
    @PersistenceContext(unitName="trading") EntityManager em;
    @Resource SessionContext ctx;

    @TransactionAttribute(TransactionAttributeType.REQUIRED)
    public void processTrade(TradeData trade) throws Exception {
        try {
            em.persist(trade);
            AcctData acct = em.find(AcctData.class, trade.getAcctId());
            double tradeValue = trade.getPrice() * trade.getShares();
            double currentBalance = acct.getBalance();
            if (trade.getAction().equals("BUY")) {
                acct.setBalance(currentBalance - tradeValue);
            } else {
                acct.setBalance(currentBalance + tradeValue);
            }
        } catch (Exception up) {
            ctx.setRollbackOnly();
            throw up;
        }
    }
}

Listing 7 illustrates the use of the Declarative Transaction model for the Spring Framework:

Listing 7. Declarative transactions using Spring

public class TradingServiceImpl {
    @PersistenceContext(unitName="trading") EntityManager em;

    @Transactional(propagation=Propagation.REQUIRED,
                   rollbackFor=Exception.class)
    public void processTrade(TradeData trade) throws Exception {
        em.persist(trade);
        AcctData acct = em.find(AcctData.class, trade.getAcctId());
        double tradeValue = trade.getPrice() * trade.getShares();
        double currentBalance = acct.getBalance();
        if (trade.getAction().equals("BUY")) {
            acct.setBalance(currentBalance - tradeValue);
        } else {
            acct.setBalance(currentBalance + tradeValue);
        }
    }
}

Transaction attributes

In addition to the rollback directives, you must also specify the transaction attribute, which defines how the transaction should behave. The Java platform supports six types of transaction attributes, regardless of whether you are using EJB or the Spring Framework:

Required
Mandatory
RequiresNew
Supports
NotSupported
Never
In describing each of these transaction attributes, I’ll use a fictitious method named methodA() that the transaction attribute is being applied to.

If the Required transaction attribute is specified for methodA() and methodA() is invoked under the scope of an existing transaction, the existing transaction scope will be used. Otherwise, methodA() will start a new transaction. If the transaction is started by methodA(), then it must also be terminated (committed or rolled back) by methodA(). This is the most commonly used transaction attribute and is the default for both EJB 3.0 and Spring. Unfortunately, in many cases, it is used incorrectly, resulting in data-integrity and consistency issues. For each of the transaction strategies I’ll cover in subsequent articles in this series, I’ll discuss use of this transaction attribute in more detail.

If the Mandatory transaction attribute is specified for methodA() and methodA() is invoked under an existing transaction’s scope, the existing transaction scope will be used. However, if methodA() is invoked without a transaction context, then a TransactionRequiredException will be thrown, indicating that a transaction must be present before methodA() is invoked. This transaction attribute is used in the Client Orchestration transaction strategy described in this article’s next section.

The RequiresNew transaction attribute is an interesting one. More often than not, I find this attribute misused or misunderstood. If the RequiresNew transaction attribute is specified for methodA() and methodA() is invoked with or without a transaction context, a new transaction will always be started (and terminated) by methodA(). This means that if methodA() is invoked within the context of another transaction (called Transaction1 for example), Transaction1 will be suspended and a new transaction (called Transaction2) will be started. Once methodA() ends, Transaction2 is then either committed or rolled back, and Transaction1 resumes. This clearly violates the ACID (atomicity, consistency, isolation, durability) properties of a transaction (specifically the atomicity property). In other words, all database updates are no longer contained within a single unit of work. If Transaction1 were to be rolled back, the changes committed by Transaction2 remain committed. If that’s the case, what good is this transaction attribute? As indicated in the first article in this series, this transaction attribute should only be used for database operations (such as auditing or logging) that are independent of the underlying transaction (in this case Transaction1).
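
As a rough illustration of that last point (the entity and method names here are made up for the example), an audit record that must survive even if the business transaction rolls back could be written from a Spring method like this:

@Transactional(propagation = Propagation.REQUIRES_NEW)
public void auditTradeAttempt(TradeData trade) {
    // Runs in its own transaction: it commits independently, so the audit
    // row remains even if the caller's transaction rolls back.
    // AuditRecord is a hypothetical entity used only for illustration.
    em.persist(new AuditRecord(trade));
}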

The Supports transaction attribute is another one that I find most developers don’t fully understand or appreciate. If the Supports transaction attribute is specified for methodA() and methodA() is invoked within the scope of an existing transaction, methodA() will execute under the scope of that transaction. However, if methodA() is invoked without a transaction context, then no transaction will be started. This attribute is primarily used for read-only operations to the database. If that’s the case, why not specify the NotSupported transaction attribute (described in the next paragraph) instead? After all, that attribute guarantees that the method will run without a transaction. The answer is simple. Invoking the query operation in the context of an existing transaction will cause data to be read from the database transaction log (in other words, updated data), whereas running without a transaction scope will cause the query to read unchanged data from the table. For example, if you were to insert a new trade order into the TRADE table and subsequently (in the same transaction) retrieve a list of all trade orders, the uncommitted trade would appear in the list. However, if you were to use something like the NotSupported transaction attribute instead, it would cause the database query to read from the table, not the transaction log. Therefore, in the previous example, you would not see the uncommitted trade. This is not necessarily a bad thing; it depends on your use case and business logic.

The NotSupported transaction attribute specifies that the method being called will not use or start a transaction, regardless of whether one is present. If the NotSupported transaction attribute is specified for methodA() and methodA() is invoked in the context of a transaction, that transaction is suspended until methodA() ends. When methodA() ends, the original transaction is then resumed. There are only a few use cases for this transaction attribute, and they primarily involve database stored procedures. If you try to invoke a database stored procedure within the scope of an existing transaction context and the database stored procedure contains a BEGIN TRANS or, in the case of Sybase, runs in unchained mode, an exception will be thrown indicating that a new transaction cannot be started if one already exists. (In other words, nested transactions are not supported.) Almost all containers use the Java Transaction Service (JTS) as the default transaction implementation for JTA. It’s JTS — not the Java platform per se — that doesn’t support nested transactions. If you cannot change the database stored procedure, you can use the NotSupported attribute to suspend the existing transaction context to avoid this fatal exception. The impact, however, is that you no longer have atomic updates to the database in the same LUW. It is a trade-off, but it can get you out of a difficult situation quickly.

The Never transaction attribute is perhaps the most interesting of all. It behaves the same as the NotSupported transaction attribute with one important difference: if a transaction context exists when a method is called using the Never transaction attribute, an exception is thrown indicating that a transaction is not allowed when you invoke that method. The only use case I have been able to come up with for this transaction attribute is for testing. It provides a quick and easy way of verifying that a transaction exists when you invoke a particular method. If you use the Never transaction attribute and receive an exception when invoking the method in question, you know a transaction was present. If the method is allowed to execute, you know a transaction was not present. This is a great way of guaranteeing that your transaction strategy is solid.


Transaction strategies

The transaction models described in this article form the basis for the transaction strategies I am about to introduce. It is important to understand fully the differences among the models and how they work before you jump into building a transaction strategy. The primary transaction strategies that can be used in most business-application scenarios are:

Client Orchestration transaction strategy
API Layer transaction strategy
High Concurrency transaction strategy
High-Speed Processing transaction strategy
I’ll summarize each of these strategies here and discuss them in detail in subsequent articles in this series.

The Client Orchestration transaction strategy is used when multiple server-based or model-based calls from the client layer fulfill a single unit of work. The client layer in this regard can refer to calls made from a Web framework, portal application, desktop system, or in some cases, a workflow product or business process management (BPM) component. In essence, the client layer owns the processing flow and “steps” needed to complete a particular request. For example, to place a trade order, assume you need to insert the trade into the database and then update the customer’s account balance to reflect the trade’s value. If the application’s API layer is too fine-grained, you have to invoke both methods from the client layer. In this scenario, the transactional unit of work must reside in the client layer to ensure an atomic unit of work.

The API Layer transaction strategy is used when you have coarse-grained methods that act as primary entry points to back-end functionality. (Call them services if you would like.) In this scenario, clients (be they Web-based, Web services based, message-based, or even desktop) make a single call to the back end to perform a particular request. Using the trade-order scenario from the preceding paragraph, in this case you would have a single entry-point method (called processTrade() for example) that the client layer calls. This single method would then contain the orchestration necessary to insert the trade order and update the account. I’ve given this strategy its name because in most cases, back-end processing functionality is exposed to client applications through the use of interfaces or an API. This is one of the most common transaction strategies.

The High Concurrency transaction strategy, a variation of the API Layer transaction strategy, is used for applications that cannot support long-running transactions from the API layer (usually because of performance or scalability needs). As the name implies, this strategy is used primarily in applications that support a high degree of concurrency from a user perspective. Transactions are fairly expensive in the Java platform. Depending on the database you are using, they can cause locks in the database, hold up resources, slow down an application from a throughput standpoint, and in some cases even cause deadlocks in the database. The main idea behind this transaction strategy is to shorten the transaction scope so that you minimize the locks in the database while still maintaining an atomic unit of work for any given client request. In some cases, you may need to refactor your application logic to support this transaction strategy.

The High-Speed Processing transaction strategy is perhaps the most extreme of the transaction strategies. You use it when you need to get the absolute fastest possible processing time (and hence throughput) from your application and still maintain some degree of transactional atomicity in your processing. Although this strategy introduces a small amount of risk from a data-integrity and consistency standpoint, if implemented correctly, it is the fastest possible transaction strategy in the Java platform. It is also perhaps the most difficult and cumbersome transaction strategy to implement out of the four introduced here.


Conclusion

As you can see from this overview, developing an effective transaction strategy is not always a straightforward task. Many considerations, options, models, frameworks, configurations, and techniques go into solving the data-integrity and consistency problem. In my many years working on applications and with transactions, I’ve found that although the combinations of models, options, settings, and configurations can seem mind-numbing and quite overwhelming, in reality only a few combinations of options and settings make sense in most use cases. The four transaction strategies I have developed and will be discussing in detail in subsequent articles in this series should cover most of the scenarios you are likely to encounter in the Java platform for business-application development. One word of caution though: These strategies are not simple “silver bullet” drop-in solutions. In some cases, source-code refactoring or application redesign may be necessary to implement some of them. When situations like that come up, you simply need to ask yourself, “How important is the integrity and consistency of my data?” In most cases, the refactoring efforts pale in comparison to the risks and costs associated with bad data.


Struts2 Spring plugin can be called multiple times at startup

Posted by damuchinni on March 8, 2009

I have been writing a web application that has a plugin framework. The plugins needed to be Struts2- and Spring-aware so that they would integrate tightly into the web application. To make things easier I have been using the struts-spring-plugin.jar that comes bundled with the Struts2 download. The problem was that the following happened when my application initialised:

The Struts2 framework would initialise and call the struts-spring-plugin to set the Spring ApplicationContext;
My PluginManager would initialise and set the Spring ApplicationContext within Struts so that my plugins’ Spring beans would be included and could be found by both Struts and Spring;
The struts-spring-plugin would be called again and overwrite the Spring ApplicationContext that I had just set.
I tried to track down that third call but could not find out where it originated from; all I could determine is that it came from a class instantiated through the Sun reflection classes. Once I had figured out the problem, the solution was pretty simple. Just create a copy of org.apache.struts2.spring.StrutsSpringObjectFactory (from the struts-spring-plugin.jar) in your project and modify it as follows:

try {
    // Try and set the SpringObjectFactory with the last child of the Spring
    // ApplicationContexts tree from the MyPluginManager.
    MyPluginManager myPluginManager = MyPluginManager.getInstance();
    setApplicationContext(myPluginManager.getPluginsXmlWebApplicationContext());

} catch (ExceptionMyPluginManagerNotInitialised empmni) {
    // We couldn't set the applicationContext with the one from the
    // MyPluginManager, so use the default
    setApplicationContext(appContext);
}

The MyPluginManager is the class that manages my plugins. When it initialises the plugins, it stores the XmlWebApplicationContext used to initialise each plugin. Each plugin gets a new XmlWebApplicationContext that has either the ROOT XmlWebApplicationContext or the previous plugin’s XmlWebApplicationContext as its parent. This means that the last plugin XmlWebApplicationContext is the last in the chain of XmlWebApplicationContexts and so can see all of the Spring beans from the root context AND the plugins. That is the XmlWebApplicationContext that is used in the above code’s call to:

setApplicationContext(myPluginManager.getPluginsXmlWebApplicationContext());

Now Struts uses my child XmlWebApplicationContext and not the ROOT XmlWebApplicationContext.
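
A rough sketch of how such a chained child context might be created inside a plugin manager (variable names and the config location are purely illustrative):

// Create a child context whose parent is the previous context in the chain,
// so beans from the root context and earlier plugins remain visible.
XmlWebApplicationContext pluginContext = new XmlWebApplicationContext();
pluginContext.setParent(previousContext);          // ROOT context or the last plugin context
pluginContext.setServletContext(servletContext);
pluginContext.setConfigLocation("/WEB-INF/plugins/some-plugin-beans.xml"); // illustrative path
pluginContext.refresh();
// This becomes the new "last" context returned by getPluginsXmlWebApplicationContext().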


parallelism in Spring

Posted by damuchinni on March 7, 2009

I’ve been wanting to post about parallelism in Spring for ages now. It’s an interesting topic that doesn’t get the attention it deserves (if any at all), probably because an IoC container, and Spring in particular, really shines at managing dependencies, a task that intuitively promotes serial processing, and also because the JEE APIs (say Servlet or EJB) hide the need for it. There’s one area where, no matter what, you’ll be looking for concurrency, and that is data retrieval. As long as a couple of different resources are involved, or even just one if the data requested is independent, there are efficiency gains in processing the several connections in parallel. A common case in today’s environments would be, for example, web services.

Standard Java does not really offer any API to manage concurrency once inside a request (though the request itself is served from a pool). In fact, it’s open for discussion whether the standard forbids opening new threads in a web context (it’s specifically banned for EJBs). WebSphere and WebLogic proposed an alternative called CommonJ, aka the WorkManager API. It’s a very good alternative when running under those application servers. Spring offers another, arguably even more powerful, option with the TaskExecutor abstraction. It’s sometimes preferable, in a Spring environment, because it can use CommonJ as the underlying API but it can also use the Java 5 Executor framework (among others), making the switch just a couple of configuration lines.

Let’s review how to use the framework. The only prerequisite is to have at least two data retrieval services already configured as dependencies of a third bean. All the data retrieval services must share a common interface; I can recommend something like the Command pattern here (beware that below this approach is not fully followed, in order to showcase inbound data processing). At this point we’re going to change the individual dependencies and transform them into a collection; we’ll add init and destroy methods and an executor (let’s start with a JDK 5 implementation):

public class ParallelService implements InitializingBean, DisposableBean {
    private List<Command> commands;
    private ExecutorService executor;
}

With our current implementation based on Java 5 executors, we need to initialize the thread pool in the initialization method and shut everything down when the Spring context is closed:

public void afterPropertiesSet() throws Exception {
    executor = Executors.newFixedThreadPool(commands.size());
}

public void destroy() throws Exception {
    executor.shutdownNow(); // Improve this as much as liked
}

We just need to handle the concurrent execution now. It’s easy to do with the Future management of asynchronous tasks. Another alternative is to submit all tasks and await termination (see ExecutorService):

public void execute(Data data) throws Exception {
    Set<Future> tasks = new HashSet<Future>(commands.size());
    for (Command command : commands)
        tasks.add(executor.submit(new RunCommand(command, data)));
    for (Future future : tasks)
        future.get(); // blocks until that command has finished
    // Other stuff to execute after all data has been retrieved
}

The code above just creates a collection of Future objects to check when the jobs have finished. The tricky part is creating the concurrent job from a custom service and passing the required data (if needed). An inner class wrapper will suffice:

private static class RunCommand implements Runnable {
    private final Data data;
    private final Command command;

    public RunCommand(Command command, Data data) {
        this.data = data;
        this.command = command;
    }

    public void run() {
        command.execute(data);
    }
}

Well, that was pretty easy indeed. Right now we have a perfectly valid way to invoke beans in parallel. This approach has pros and cons. On the pro side we have independence from Spring APIs (of course, imagine that the Spring interfaces are substituted by their matching XML attributes), but we are also limited to a Java 5 environment. If we don’t mind introducing a dependency on Spring itself, we can transform the source code to use the TaskExecutor framework:

public class ParallelService {
    private TaskExecutor taskExecutor;
    private List<Command> commands;

    public void setTaskExecutor(TaskExecutor taskExecutor) {
        this.taskExecutor = taskExecutor;
    }
}

And now the init and destroy methods are substituted by some XML configuration:
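
Something along these lines (bean names, class packages, and pool sizes are illustrative), using Spring’s ThreadPoolTaskExecutor:

<bean id="taskExecutor" class="org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor">
    <property name="corePoolSize" value="5"/>
    <property name="maxPoolSize" value="10"/>
</bean>

<bean id="parallelService" class="com.example.ParallelService">
    <property name="taskExecutor" ref="taskExecutor"/>
    <property name="commands">
        <list>
            <ref bean="firstCommand"/>
            <ref bean="secondCommand"/>
        </list>
    </property>
</bean>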

Notice that not all implementations of the TaskExecutor interface allow tracking the progress of a task once scheduled for execution!

public void execute() {
    for (Command command : commands)
        taskExecutor.execute(new RunCommand(command));
}

Posted by Jose Noheda at 2:22


Spring Framework 3.0 M2 released

Posted by damuchinni on March 2, 2009

If you aren’t following the SpringSource blog you may have missed it, but last week Juergen Hoeller announced the availability of the second milestone for Spring 3.0. Juergen’s blog post covers all of the details about the milestone including the new RestTemplate, early JPA 2.0 support, more Java 5 style API updates and other improvements.

You can always get the latest milestones, release candidates and full releases for Spring from the download center.


Spring Framework 3.0 M2 released

Posted by damuchinni on February 26, 2009

We are pleased to announce that the second Spring 3.0 milestone is finally available (download page).
This release comes with a wealth of revisions and new features:

Further Java 5 style API updates: consistent use of generic Collections and Maps, consistent use of generified FactoryBeans, and also consistent resolution of bridge methods in the Spring AOP API. Generified ApplicationListeners automatically receive specific event types only. All callback interfaces such as TransactionCallback and HibernateCallback declare a generic result value now. Overall, the Spring core codebase is now freshly revised and optimized for Java 5.

Extended concurrency support: Spring’s TaskExecutor abstraction has been updated for close integration with Java 5’s java.util.concurrent facilities. We provide first-class support for Callables and Futures now, as well as ExecutorService adapters, ThreadFactory integration, etc. This has been aligned with JSR-236 (Concurrency Utilities for Java EE 6) as far as possible. Furthermore, we provide support for asynchronous method invocations through the use of the new @Async annotation (or EJB 3.1’s @Asynchronous annotation). And in Spring 3.0 M3, we’ll be adding a scheduling namespace for convenient configuration of it all… including support for cron-style timers.
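
As a rough illustration of what an @Async method can look like (the method and types here are invented for the example, and the annotations live in org.springframework.scheduling.annotation):

@Async
public Future<Report> generateReport(long id) {
    Report report = doExpensiveWork(id);     // runs on a task executor thread
    return new AsyncResult<Report>(report);  // wrap the result as a Future for the caller
}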

OXM module in core: We moved the Object/XML Mapping module, as known from the Spring Web Services project, to the Spring core project. OXM has been updated and revised for Java 5 as well, supporting marshalling and unmarshalling through JAXB2, JiBX, Castor, XMLBeans, and XStream. There is also OXM support for Spring JMS (MarshallingMessageConverter) and Spring MVC (MarshallingView).

RestTemplate: We have brand-new client-side REST support: the long-awaited RestTemplate, with HTTP processing infrastructure that is as flexible and extensible as you would expect from a Spring solution. There are also several improvements with respect to REST support in Spring MVC… Stay tuned for Arjen’s upcoming blog post on the latest REST support features!

MVC on Portlet 2.0: Spring Portlet MVC is based on the Portlet 2.0 API (JSR-286) now. We provide specific @ActionMapping, @RenderMapping, @ResourceMapping and @EventMapping annotations for Portlet MVC handler methods, including support for specific characteristics of those request types: e.g. action names, window states, resource ids, and event names (as defined by Portlet 2.0).

Early JPA 2.0 support: Finally, we are actively tracking the JPA 2.0 specification as well as emerging JPA providers with JPA 2.0 preview support. Spring 3.0 M2 already delivers early support for the JPA 2.0 API, e.g. query timeouts within Spring-managed transactions and QueryBuilder access in Spring-managed EntityManager proxies. We’ll wrap this up for Spring 3.0 RC1, as soon as the JPA 2.0 API is stable.

Now is a good time to give Spring 3.0 an early try! Let us know how it works for you… M2 doesn’t include reference documentation yet but comes with extensive javadoc and an extensive test suite. We’ll also be showing specific examples in follow-up blog posts.

We are now working towards our final milestone already: M3 will introduce annotation-based factory methods, declarative validation (based on JSR-303 “Bean Validation”), as well as new XML configuration namespaces (orm, scheduling). Spring MVC will receive an overhaul in terms of conversation management. We are also preparing for JSF 2.0 as far as necessary, keeping up the smooth integration experience with Spring.


How to use Spring (2.5) annotations

Posted by damuchinni on February 16, 2009

Hi. Today I found a good tutorial about Spring 2.5 annotations.

1. I like how it explains things in detail
2. I like how it uses Maven 2

so, take a look at it 🙂
Spring 2.5 and annotation-based dependency injection

