
Design with Spring AOP

  • 12 min read
  • 14 Oct 2009


Designing and implementing an enterprise Java application means dealing not only with the application's core business and architecture, but also with some typical enterprise requirements.

We have to define how the application manages concurrency, so that it remains robust and does not suffer too badly from an increase in the number of requests. We have to define caching strategies for the application, because we don't want CPU- or data-intensive operations to be executed over and over.

We have to define roles and profiles, applying security policies and restricting access to parts of the application, because different kinds of users will probably have different rights and permissions. All these issues require writing additional code that clutters our business code and reduces its modularity and maintainability.

But we have a choice. We can design our enterprise Java application with AOP in mind. This will help us concentrate on our actual business code, factoring out the infrastructure issues that can otherwise be expressed as crosscutting concerns.

This article will introduce these concerns, and will show how to design and implement them with Spring 2.5 AOP support.

Concurrency with AOP


For many developers, concurrency remains a mystery.

Concurrency is a system's ability to serve several requests simultaneously, without threads corrupting the state of objects when they access them at the same time.

A number of good books have been written on this subject, such as Concurrent Programming in Java and Java Concurrency in Practice. They deserve much attention, since concurrency is hard to understand and not immediately visible to developers: problems in this area are notoriously hard to reproduce. However, it's important to keep concurrency in mind to ensure that the application is robust regardless of the number of users it serves.

If we don't take concurrency into account, and don't document when and how its problems are dealt with, we build an application on the risky assumption that the scheduler will never run threads simultaneously through parts of our application that are not thread-safe.

To build robust and scalable systems, we use proper patterns, and the JDK provides dedicated support for concurrency in the java.util.concurrent package, the result of JSR-166.

One of these patterns is the read-write lock pattern, embodied by the java.util.concurrent.locks.ReadWriteLock interface and its implementations, one of which is ReentrantReadWriteLock.

The goal of ReadWriteLock is to allow an object to be read by a virtually unlimited number of threads, while only one thread at a time can modify it. This way, the state of the object can never be corrupted: threads reading the object's state always see up-to-date data, and the thread modifying that state can act without the object being corrupted underneath it. Another requirement is that the result of one thread's action be visible to the other threads. The behavior is the same as we could achieve using synchronized, but with a read-write lock we synchronize the actions explicitly, whereas with synchronized the synchronization is implicit.
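
For comparison, here is a minimal sketch (not part of the article's example; the class name is illustrative) of the same kind of protection written with synchronized. Every reader and writer contends for the same monitor, whereas a read-write lock lets any number of readers proceed concurrently.

// A sketch for comparison only; this class is not part of the article's example.
public final class SynchronizedBalanceHolder {

    private float balance;

    // readers block each other as well as writers
    public synchronized float getBalance() {
        return balance;
    }

    // only one thread at a time may enter any synchronized method of this instance
    public synchronized void setBalance(float balance) {
        this.balance = balance;
    }
}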

Now let's see an example of ReadWriteLock in a BankAccountThreadSafe object.

Before a read operation that needs to be safe, we acquire the read lock; after the read operation, we release it.

Before a write operation that needs to be safe, we acquire the write lock; after the state modification, we release it.

package org.springaop.chapter.five.concurrent;


import java.util.Date;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public final class BankAccountThreadSafe {

public BankAccountThreadSafe(Integer id) {
this.id = id;
balance = new Float(0);
startDate = new Date();
}

public BankAccountThreadSafe(Integer id, Float balance) {
this.id = id;
this.balance = balance;
startDate = new Date();
}

public BankAccountThreadSafe(Integer id, Float balance, Date start) {
this.id = id;
this.balance = balance;
this.startDate = start;
}

public boolean debitOperation(Float debit) {

wLock.lock();
try {

float balance = getBalance();

if (balance < debit) {

return false;

} else {

setBalance(balance - debit);

return true;
}
} finally {

wLock.unlock();
}
}

public void creditOperation(Float credit) {

wLock.lock();
try {
setBalance(getBalance() + credit);

} finally {

wLock.unlock();
}
}

private void setBalance(Float balance) {

wLock.lock();
try {
// assign to the field, not to the parameter
this.balance = balance;

} finally {

wLock.unlock();
}
}

public Float getBalance() {

rLock.lock();

try {
return balance;

} finally {

rLock.unlock();
}
}

public Integer getId() {
return id;
}

public Date getStartDate() {

return (Date) startDate.clone();
}

...
private Float balance;
private final Integer id;
private final Date startDate;
private final ReadWriteLock lock = new ReentrantReadWriteLock();
private final Lock rLock = lock.readLock();
private final Lock wLock = lock.writeLock();
}


BankAccountThreadSafe is a class that doesn't allow a bank account to be overdrawn (that is, have a negative balance), and it's an example of a thread-safe class. The final fields are set in the constructors and are therefore implicitly thread-safe. The balance field, on the other hand, is managed in a thread-safe way by the setBalance, getBalance, creditOperation, and debitOperation methods.

In other words, this class is correctly programmed as far as concurrency is concerned. The problem is that wherever we want those characteristics, we have to write the same code over and over (especially the finally block that releases the lock).
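
As a quick illustration of that thread-safety (a hypothetical driver class, not from the article), several threads can credit the same BankAccountThreadSafe instance concurrently and the final balance still comes out correct, because every read-modify-write cycle runs under the write lock:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Hypothetical demo, not part of the article's code.
public class BankAccountConcurrencyDemo {

    public static void main(String[] args) throws InterruptedException {
        final BankAccountThreadSafe account = new BankAccountThreadSafe(1, new Float(0));
        ExecutorService pool = Executors.newFixedThreadPool(10);

        // 1000 concurrent credits of 1.0 each
        for (int i = 0; i < 1000; i++) {
            pool.execute(new Runnable() {
                public void run() {
                    account.creditOperation(new Float(1));
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);

        // prints 1000.0: no update is lost, because creditOperation holds the write lock
        System.out.println(account.getBalance());
    }
}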

We can solve that by writing an aspect that carries out that task for us. We just need two pointcuts to identify the operations to protect:

    • A state modification is execution(void com.mycompany.BankAccount.set*(*))
    • A safe read is execution(* com.mycompany.BankAccount.getBalance())

package org.springaop.chapter.five.concurrent;


import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

import org.aspectj.lang.annotation.After;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.aspectj.lang.annotation.Pointcut;

@Aspect
public class BankAccountAspect {

/*pointcuts*/

@Pointcut("execution(* org.springaop.chapter.five.concurrent.BankAccount.getBalance())")
public void safeRead(){}

@Pointcut("execution(* org.springaop.chapter.five.concurrent.BankAccount.set*(*))")
public void stateModification(){}

@Pointcut( "execution(* org.springaop.chapter.five.concurrent.BankAccount.getId())")
public void getId(){}

@Pointcut("execution(* org.springaop.chapter.five.concurrent.BankAccount.getStartDate()))
public void getStartDate(){}

/*advices*/

@Before("safeRead()")
public void beforeSafeRead() {

rLock.lock();
}

@After("safeRead()")
public void afterSafeRead() {

rLock.unlock();
}

@Before("stateModification()")
public void beforeSafeWrite() {

wLock.lock();
}

@After("stateModification()")
public void afterSafeWrite() {

wLock.unlock();
}

private final ReadWriteLock lock = new ReentrantReadWriteLock();
private final Lock rLock = lock.readLock();
private final Lock wLock = lock.writeLock();
}


The BankAccountAspect class applies the crosscutting functionality: calling the lock and unlock methods on the ReadLock and the WriteLock. The before methods acquire the locks through the @Before annotation, while the after methods release them, as if they were in a finally block, through the @After annotation, whose advice is always executed (after-finally advice).

In this way the BankAccount class can become much simpler, clearer, and briefer. It has no awareness that it is being executed in a thread-safe manner.

package org.springaop.chapter.five.concurrent;


import java.util.Date;

public class BankAccount {

public BankAccount(Integer id) {
this.id = id;
this.balance = new Float(0);
this.startDate = new Date();
}

public BankAccount(Integer id, Float balance) {
this.id = id;
this.balance = balance;
this.startDate = new Date();
}

public BankAccount(Integer id, Float balance, Date start) {
this.id = id;
this.balance = balance;
this.startDate = start;
}

public boolean debitOperation(Float debit) {

float balance = getBalance();

if (balance < debit) {

return false;

} else {

setBalance(balance - debit);

return true;
}
}

public void creditOperation(Float credit) {

setBalance(getBalance() + credit);
}

private void setBalance(Float balance) {

this.balance = balance;
}

public Float getBalance() {

return balance;
}

public Integer getId() {

return id;
}

public Date getStartDate() {

return (Date) startDate.clone();
}

private Float balance;
private final Integer id;
private final Date startDate;
}
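
For the aspect to be applied at runtime, both classes must be wired in a Spring application context with @AspectJ autoproxy support enabled. The article doesn't show this configuration, so the following is only a minimal sketch: the bean ids and constructor value are illustrative. Note also that proxy-based Spring AOP intercepts only calls that come in through the proxy, so internal calls such as debitOperation invoking setBalance would need AspectJ weaving to be advised.

<!-- A sketch, not taken from the article: bean ids and values are illustrative. -->
<beans xmlns="http://www.springframework.org/schema/beans"
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns:aop="http://www.springframework.org/schema/aop"
   xsi:schemaLocation="http://www.springframework.org/schema/beans
      http://www.springframework.org/schema/beans/spring-beans-2.5.xsd
      http://www.springframework.org/schema/aop
      http://www.springframework.org/schema/aop/spring-aop-2.5.xsd">

   <!-- detect @Aspect beans and proxy the beans whose methods they advise -->
   <aop:aspectj-autoproxy proxy-target-class="true"/>

   <bean id="bankAccountAspect"
      class="org.springaop.chapter.five.concurrent.BankAccountAspect"/>

   <!-- the account must be a Spring bean for the proxy to be created -->
   <bean id="bankAccount"
      class="org.springaop.chapter.five.concurrent.BankAccount">
      <constructor-arg value="1"/>
   </bean>
</beans>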


Another good design choice, together with the use of ReadWriteLock where necessary, is to use objects that are immutable once constructed, and that therefore cannot be corrupted and can be freely shared between threads.
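
As a minimal sketch of that idea (an illustrative class, not taken from the article), an immutable value object keeps only final fields set in the constructor, with defensive copies of any mutable values, so it can be shared between threads without locks:

import java.util.Date;

// Illustrative class, not part of the article's code.
public final class AccountSnapshot {

    private final Integer id;
    private final Float balance;
    private final Date takenAt;

    public AccountSnapshot(Integer id, Float balance, Date takenAt) {
        this.id = id;
        this.balance = balance;
        // defensive copy: Date is mutable, so the caller's instance is never shared
        this.takenAt = new Date(takenAt.getTime());
    }

    public Integer getId() {
        return id;
    }

    public Float getBalance() {
        return balance;
    }

    public Date getTakenAt() {
        // return a copy for the same reason
        return new Date(takenAt.getTime());
    }
}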

Transparent caching with AOP


Often, the objects that compose an application perform the same operations with the same arguments and obtain the same results. Sometimes these operations are costly in terms of CPU usage, or maybe there is a lot of I/O going on while executing them.

To get better results in terms of speed and resource usage, it's advisable to use a cache. We can store the results of method invocations in it as key-value pairs: the method and its arguments as the key, and the return object as the value.

Once you decide to use a cache, you're only halfway there. In fact, you must decide which part of the application is going to use it. Let's think about a web application backed by a database. Such a web application usually involves Data Access Objects (DAOs), which access the relational database. These objects are often a bottleneck in the application because of all the I/O going on; in other words, a cache can be used there.

The cache can also be used by the business layer, which has already aggregated and processed data retrieved from repositories; by the presentation layer, which can put formatted presentation templates in the cache; or even by the authentication system, which keeps roles associated with an authenticated username.

There are almost no limits to how you can optimize an application and make it faster. The only price you pay is the RAM dedicated to the objects kept in memory, and the attention required to manage the lifetime of the objects in the cache.

After these preliminary remarks, using a cache may seem straightforward. A cache essentially acts as a hash table into which key-value pairs are put; the keys are used to retrieve objects from the cache. Caches usually have configuration parameters that allow you to change their behavior.

Now let's have a look at an example with ehcache (http://ehcache.sourceforge.net). First of all, let's configure a cache named methodCache that holds at most 1,000 objects in memory. Objects can stay idle for a maximum of five minutes and have a maximum life of 10 minutes. The diskStore path (java.io.tmpdir) is where ehcache saves overflowing elements on the filesystem for caches that have overflowToDisk enabled.

<ehcache>
   ...
   <diskStore path="java.io.tmpdir"/>
   ...
   <defaultCache
      maxElementsInMemory="10000"
      eternal="false"
      timeToIdleSeconds="120"
      timeToLiveSeconds="120"
      overflowToDisk="true"
      diskPersistent="false"
      diskExpiryThreadIntervalSeconds="120"
   />
   ...
   <cache name="methodCache"
      maxElementsInMemory="1000"
      eternal="false"
      overflowToDisk="false"
      timeToIdleSeconds="300"
      timeToLiveSeconds="600"
   />
</ehcache>


Now let's create a CacheAspect. We define the cacheObject method, to which the ProceedingJoinPoint is passed, and recover an unambiguous key from the ProceedingJoinPoint with the getCacheKey method. We will use this key to put objects in the cache and to recover them.

Once we have obtained the key, we ask the cache for the Element with the instruction cache.get(cacheKey). The Element has to be checked, because it may be null if the cache didn't find an Element for the passed cacheKey.

If the Element is null, the advice invokes proceed() and puts the resulting Element in the cache under the key corresponding to the invocation. Otherwise, if the Element recovered from the cache is not null, the method isn't invoked on the target class, and the value taken from the cache is given back to the caller.

package org.springaop.chapter.five.cache;


import it.springaop.utils.Constants;
import net.sf.ehcache.Cache;
import net.sf.ehcache.Element;

import org.apache.log4j.Logger;
import org.aspectj.lang.ProceedingJoinPoint;

public class CacheAspect {

public Object cacheObject(ProceedingJoinPoint pjp) throws Throwable {

Object result;
String cacheKey = getCacheKey(pjp);

Element element = (Element) cache.get(cacheKey);
logger.info(new StringBuilder("CacheAspect invoke:").append("n get:")
.append(cacheKey).append(" value:").append(element).toString());

if (element == null) {

result = pjp.proceed();

element = new Element(cacheKey, result);

cache.put(element);

logger.info(new StringBuilder("n put:").append(cacheKey).append(
" value:").append(result).toString());
}
return element.getValue();
}

public void flush() {
cache.flush();
}

private String getCacheKey(ProceedingJoinPoint pjp) {

String targetName = pjp.getTarget().getClass().getSimpleName();
String methodName = pjp.getSignature().getName();
Object[] arguments = pjp.getArgs();

StringBuilder sb = new StringBuilder();
sb.append(targetName).append(".").append(methodName);
if ((arguments != null) && (arguments.length != 0)) {
for (int i = 0; i < arguments.length; i++) {
sb.append(".").append(arguments[i]);
}
}
return sb.toString();
}

public void setCache(Cache cache) {
this.cache = cache;
}

private Cache cache;
private Logger logger = Logger.getLogger(Constants.LOG_NAME);
}


Here is the applicationContext.xml:

<beans xmlns="http://www.springframework.org/schema/beans"
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns:aop="http://www.springframework.org/schema/aop"
   xsi:schemaLocation="http://www.springframework.org/schema/beans
      http://www.springframework.org/schema/beans/spring-beans-2.5.xsd
      http://www.springframework.org/schema/aop
      http://www.springframework.org/schema/aop/spring-aop-2.5.xsd">

...

<bean id="rockerCacheAspect" class="org.springaop.chapter.five.cache.CacheAspect" >
<property name="cache">
<bean id="bandCache" parent="cache">
<property name="cacheName" value="methodCache" />
</bean>
</property>
</bean>

<!-- CACHE config -->

<bean id="cache" abstract="true"
class="org.springframework.cache.ehcache.EhCacheFactoryBean">
<property name="cacheManager" ref="cacheManager" />
</bean>

<bean id="cacheManager"
class="org.springframework.cache.ehcache.
EhCacheManagerFactoryBean">
<property name="configLocation" value="classpath:org/springaop/chapter/five/cache/ehcache.xml" />
</bean>

...
</beans>


The idea behind the caching aspect is to avoid repetition in our code base and to have a consistent strategy for identifying objects (for example, using an object's hash code), so that the same object doesn't end up in the cache twice.

Employing around advice, we can use the cache to make method invocations return the cached result of a previous invocation of the same method in a totally transparent way. The methods matched by the interception rules defined in the pointcuts get their return values from the cache; if a value is not present, the method is invoked and its result inserted into the cache. In this way, the classes and methods have no awareness that the values they return may come from the cache.
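
The schema-based AOP configuration that actually binds cacheObject as around advice is elided ("...") in the applicationContext.xml above. A minimal sketch of what it could look like, assuming a hypothetical pointcut over DAO finder methods, is:

<!-- A sketch, not the article's actual configuration: the pointcut expression,
     target package, and DAO naming convention are illustrative. -->
<aop:config>
   <aop:aspect id="cacheAspect" ref="rockerCacheAspect">
      <aop:around method="cacheObject"
         pointcut="execution(* org.springaop..*Dao.find*(..))"/>
   </aop:aspect>
</aop:config>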