Thursday, January 24, 2008

Troubles with OSGi - Part 1 - Proxy Creation Issues with Hibernate

We are trying out OSGi as a container for our application, hoping to leverage its dynamic update and staged rollout features. Our system needs to be updated/rolled back with zero downtime. Yeah, we're working on *that* critical an application ;).

I have run into some pretty interesting issues using OSGi. There is not a lot of info out there on typical business applications that use OSGi. Some of the issues we faced were not documented anywhere, and we had to get a lot of help from various forums. So the next couple of posts will be stories about how OSGi was used and what pitfalls we ran into.

All our code was packaged into small(er) OSGi bundles. The following bundles were created:

  1. Bundle 1 - All entity classes,
  2. Bundle 2 - All business logic classes,
  3. Bundle 3 - All DAOs,
  4. Bundle 4 - All client classes.

In the initial stages I like to know how and why things work, so that we can troubleshoot issues easily later. This made me decide against using the DynamicImport-Package feature. So every dependency needed by a bundle had to be declared manually in its manifest file.
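
For illustration, the business logic bundle's manifest ended up spelling out every package it touched, roughly like this (the bundle name and version numbers here are made up):

    Manifest-Version: 1.0
    Bundle-SymbolicName: com.example.business
    Bundle-Version: 1.0.0
    Import-Package: com.example.entities,
     org.hibernate;version="3.2.0",
     org.springframework.beans;version="2.5.0"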

Most of the initial issues got resolved quickly. But then we started getting a NoClassDefFoundError for HibernateProxy in our main bundle. This was really weird, because the bundle that was executing the code had imported all of Spring's and Hibernate's packages.

A couple of hours were spent recreating the bundles, re-declaring all the imports, etc., but still no progress. I decided debugging was the best way forward. I got all the Spring/Hibernate sources and started stepping through. Here's what I found.

Hibernate creates a proxy for any entity which does lazy loading. This proxy is a CGLIB-based proxy that implements the HibernateProxy interface and also derives from the actual entity class. When CGLIB is called to create a proxy, it generates a new class definition and creates the byte array representing the class. This class is then loaded into the entity class's classloader.
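
Here is a rough sketch of that proxy creation - not the actual Hibernate code, but close to what its CGLIB lazy initializer does:

    import net.sf.cglib.proxy.Enhancer;
    import net.sf.cglib.proxy.MethodInterceptor;
    import org.hibernate.proxy.HibernateProxy;

    public class ProxyCreationSketch {
        public static Class createProxyClass(Class entityClass) {
            Enhancer enhancer = new Enhancer();
            // The proxy extends the entity...
            enhancer.setSuperclass(entityClass);
            // ...and implements the HibernateProxy marker interface.
            enhancer.setInterfaces(new Class[] { HibernateProxy.class });
            enhancer.setCallbackType(MethodInterceptor.class);
            // createClass() defines the generated bytes in entityClass's own
            // classloader. Under OSGi that is the entity bundle's classloader,
            // and the definition fails unless that bundle can see
            // org.hibernate.proxy.
            return enhancer.createClass();
        }
    }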

However, in OSGi each bundle has its own classloader. So the entity bundle, which contained only the POJOs, had a classloader of its own, and this bundle did not import any packages from any other bundle. The business code bundle imported the entity, Hibernate and Spring bundles, and so the business code bundle's classloader was wired to the other three classloaders. When CGLIB created the proxy and tried to define the proxy class in the entity bundle, which did not import the Hibernate packages, it threw the NoClassDefFoundError.

The fix was to import this package in the entity bundle, and things were all set.
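
So the entity bundle's manifest gained an import it never uses directly. Assuming, as in Hibernate 3, that HibernateProxy lives in org.hibernate.proxy, it looked something like this (names and versions again illustrative; the generated proxy also implements CGLIB's Factory interface, so net.sf.cglib.proxy may be needed as well):

    Bundle-SymbolicName: com.example.entities
    Import-Package: org.hibernate.proxy;version="3.2.0",
     net.sf.cglib.proxy;version="2.1.3"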

But this whole issue raised two main concerns:

1) The stack traces raised by the Equinox OSGi framework do not give detailed info on the source of an error; they just point to the initial bundle that was executing the code when the error occurred. Is this an issue with Equinox alone, or is it the same across other OSGi containers?
2) Even though a bundle may not directly use a class, you may still have to import its package (or use dynamic imports and be shielded from the problem). Either way, it's still ugly.




Monday, January 14, 2008

Hibernate, Encapsulation and OO

The current project I am working on is a typical legacy application rewrite. The existing system, written in COBOL, has been around for ages, and the code base has apparently gotten very bloated and unstructured from the many patches/fixes/what-not over the years.

One of the main issues in the old system was that the code had become unmanageable: making a small change involved poring over thousands of lines of code to find out where something was getting changed.

In the proposed Java-based system, in the tradition of all enterprise applications, we would be using Hibernate and Spring. When we reverse-engineer Hibernate/JPA entities from DB schemas, the Java objects created have public getters/setters. This seemingly innocuous feature is the one I have the biggest gripe with. It totally violates the whole notion of data abstraction in OO systems.

When the entities are exposed as such, it becomes easy for the layers above to change entity state. This is very convenient when writing code, and lets one design classes that update multiple entities. But it also leads to the same problem which caused the rewrite in the first place. By letting anyone update entities, we allow business logic to be dumped into any class and called from anywhere. No structure is needed.

I prefer having one gateway class where all business logic pertaining to one entity is located. This simplifies making changes to the system, and any impact analysis need not span the entire code base. But in a model where entities have public setters, this can never be enforced.
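
A minimal sketch of what I mean, with invented names (Account, AccountService):

    import java.math.BigDecimal;

    // The reverse-engineered entity: public setters let any layer mutate it.
    public class Account {
        private BigDecimal balance;

        public BigDecimal getBalance() { return balance; }
        public void setBalance(BigDecimal balance) { this.balance = balance; }
    }

    // The intended single gateway for all balance-changing logic. Nothing in
    // the language stops other code from calling setBalance() directly and
    // bypassing this class - which is exactly the gripe.
    class AccountService {
        public void credit(Account account, BigDecimal amount) {
            account.setBalance(account.getBalance().add(amount));
        }
    }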

What's to prevent the current rewrite from becoming as messed up as the system it's replacing? Processes like code review help to a certain extent. But when push comes to shove, cutting corners becomes the norm and out go all the best practices. This leads us back into creating our own tangled web of code to replace the older tangled web. It may seem far-fetched, but after 8 years of reading/writing all sorts of patches/fixes and features, only one thing is certain: if it can be abused, it eventually will be. Maybe even by the original developers ;)

ObjectMentor has a blog posting about this which states that JPA/Hibernate entities should be treated as data structures and not objects. They suggest adding a new layer of objects that map to the Hibernate/JPA data structures, and having the rest of the application code use these objects.

This is a good idea, in that we can create proper OO code that is not limited by the active-record-style entity objects. It can potentially even help in resolving some of the issues I had with the anemic domain model.
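
A rough sketch of that suggestion, with invented names (OrderData is the reverse-engineered entity, Order the object wrapped around it):

    enum OrderStatus { PAID, SHIPPED, CANCELLED }

    // The Hibernate/JPA entity, treated purely as a data structure.
    class OrderData {
        private OrderStatus status;

        public OrderStatus getStatus() { return status; }
        public void setStatus(OrderStatus status) { this.status = status; }
    }

    // The object the rest of the application talks to: the structure stays
    // in OrderData, the behaviour lives here.
    public class Order {
        private final OrderData data;

        public Order(OrderData data) { this.data = data; }

        public void ship() {
            if (data.getStatus() != OrderStatus.PAID) {
                throw new IllegalStateException("cannot ship an unpaid order");
            }
            data.setStatus(OrderStatus.SHIPPED);
        }
    }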


Monday, December 10, 2007

Dual booting Vista and Ubuntu

The last couple of weeks have been interesting. I got a new laptop, a Dell Inspiron 1720 to be precise. It came pre-installed with Vista Home Premium edition. But I wanted a Linux distro to play around with, and Ubuntu was the obvious choice for its ease of use.

Before I got around to doing that, I had to overclock my video card in Vista to get it to play Oblivion smoothly. The laptop has an 8400M GS, but when I started playing Oblivion I got a measly 30-35 fps. I had a 156.xx driver, but somehow 169.04 would never install, cribbing that it could not detect any suitable driver to update. However, the 169.09 from laptopvideo2go installed properly after I uninstalled all the existing video card drivers. After I installed RivaTuner and overclocked from 400MHz to 500MHz, the fps in Oblivion increased to a nice 45-50. Plus, my 3DMark06 score increased from 1276 to around 1450+.

Then it was on to bigger things: I downloaded Ubuntu and waited for a nice Saturday morning to install and configure it. Here is the list of steps I followed:

1) Created a new partition of 30GB using the Windows partition manager.
2) Burnt the live CD ISO image onto a CD.
3) Booted up the laptop using the live CD and pressed install. (Some sites say the Inspiron series should use the alternate CD, but the live CD worked just fine for me. At this point I got stuck, since the installer would not return from trying to find the partitions. I had to go to Places > Computer > OS and then retry the install to get it going past this step.)
4) Rebooted once the install was done, only to find my wireless wouldn't work.

I logged into Windows and found some nice instructions on setting the wireless up. Got that sorted out, and I was finally all set!

To finish things off, I installed all the required software using the Synaptic package manager. As a nice touch, I was even able to build Ruby 1.9 from source.

Some must-visit links when you are planning to dual boot Ubuntu with Vista on a Dell Inspiron 1720:

1) General instructions - http://apcmag.com/5046/how_to_dual_boot_vista_with_linux_vista_installed_first
2) Setting up the Broadcom wireless card - http://ubuntuforums.org/showthread.php?t=297092 (I should add that other instructions were not this clear and did not work properly.)
3) To get the sound card working, you have to install the package 'linux-backports-modules-generic'.
4) To install Ruby 1.9 and keep 1.8, follow the instructions at http://ruby.tie-rack.org/28/installing-19/


Wednesday, October 24, 2007

Unit Testing Guidelines

Ravi had blogged about the difficulties of getting people to write unit tests. I could relate to him since I have come across this problem quite often.

IMHO a developer gets turned off from writing test cases because

  1. Most developers don't have a clue how to write proper unit tests. Most end up writing integration tests instead of unit tests.
  2. Proper unit testing (not integration testing) is hard work and needs proper design.
  3. Estimates for unit tests are either not made at all or are too low, since we tend to end up with close to 2X lines of test code for X lines of production code.

Many times I have had to make developers understand what a unit test is and how to approach it. These are the general guidelines I normally give them.

A unit test, in my definition, should
  1. Test only one class - even if a method in the class under test calls methods on other dependent classes, this test is responsible only for verifying that the method works fine, provided the dependent classes return correct values.
  2. Continuing from 1, the dependent classes should have their own tests to verify all possible code flows. Doing this from a higher layer increases the number of test cases you have to write.
  3. Use mocks (and of course dependency injection) for the layers above the DAO layer. Use either jMock or EasyMock to mock out calls to other layers; if you are unit testing without mocks, you are really doing integration testing between two classes, since you verify the functionality of both. (See the sketch after this list.)
  4. Test boundary conditions: what happens if you pass in a null object, what happens if your dependent class throws an exception, etc.
  5. Test that the class throws all exceptions declared in @throws (and any runtime exceptions) exactly under the conditions documented.
  6. Test DAOs even if you are using ORM tools, by using an in-memory DB like Derby or HSQL.
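
To make guideline 3 concrete, here is a minimal sketch using JUnit 4 and EasyMock. OrderService, OrderDao and their methods are names invented for the example:

    import static org.easymock.EasyMock.createMock;
    import static org.easymock.EasyMock.expect;
    import static org.easymock.EasyMock.replay;
    import static org.easymock.EasyMock.verify;
    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class OrderServiceTest {

        @Test
        public void totalIsComputedFromDaoResult() {
            // Mock the dependency: this test verifies OrderService alone.
            OrderDao dao = createMock(OrderDao.class);
            expect(dao.findQuantity(42L)).andReturn(3);
            replay(dao);

            OrderService service = new OrderService(dao); // constructor injection
            assertEquals(30, service.totalPrice(42L, 10)); // 3 items * 10 each

            verify(dao); // the DAO was called exactly as expected
        }

        @Test(expected = IllegalArgumentException.class)
        public void rejectsNullOrderId() {
            // A boundary condition, as in guideline 4.
            new OrderService(createMock(OrderDao.class)).totalPrice(null, 10);
        }
    }

    interface OrderDao {
        int findQuantity(long orderId);
    }

    class OrderService {
        private final OrderDao dao;

        OrderService(OrderDao dao) { this.dao = dao; }

        int totalPrice(Long orderId, int unitPrice) {
            if (orderId == null) throw new IllegalArgumentException("orderId is null");
            return dao.findQuantity(orderId) * unitPrice;
        }
    }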

In addition to the above, a code coverage tool like EMMA or Clover is a must-have, to capture coverage and draw attention to the less-tested parts of the application. Configure it to generate a daily/weekly report, or better still, hook it up to your CruiseControl build. In most cases the developers themselves take it up as a challenge to get the code coverage up.


Anemic Entities - Fallout of the EJB Era?

When I first started working with EJBs (the 1.0 and 1.1 versions), there were two types of enterprise beans:

  1. Session Beans
  2. Entity Beans

We were all taught to put business logic into session beans and persist data using entity beans. No business logic was present in entity beans; they generally had only getters/setters. The only scenario in which we were encouraged to put business logic in entities was to get a performance gain - EJB tips.

According to OO principles, the definition of a class states that a class should contain both structure and behaviour. And we ended up violating this first principle of OO by splitting our structure (entity/VO) and behaviour (model/services) into two separate layers (because of our tools??). This anti-pattern has been termed the Anemic Domain Model by Martin Fowler.

This influence sort of carried on with most people. Even after EJBs lost their appeal, and with IoC/ORM tools gaining popularity, people still architected systems where entities/value objects/DTOs were a layer of objects having just get/set methods. These objects were read from the DB using DAOs and sent to a model/services layer, where all the business processing happened.

To be fair, the IoC containers of the day did not support injecting dependencies into objects read from the DB by tools like Hibernate. With such excuses, we lived on, writing procedural-style code in OO languages.

Now Spring 2.x has started supporting dependency injection on objects whose life cycle is outside its control. Using the @Configurable annotation, Hibernate can create entity/DTO objects from the database, and Spring configures these objects like normal beans and wires up their dependencies.
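
A minimal sketch of how this looks, assuming AspectJ weaving is set up (the spring-aspects jar plus <context:spring-configured/> in the application context); Order and ShippingCalculator are invented names:

    import java.math.BigDecimal;
    import org.springframework.beans.factory.annotation.Configurable;

    // A Spring-managed collaborator (hypothetical).
    interface ShippingCalculator {
        BigDecimal costFor(Order order);
    }

    @Configurable
    public class Order {
        private ShippingCalculator shippingCalculator;

        // Called by Spring even though Hibernate, not Spring,
        // instantiated this entity.
        public void setShippingCalculator(ShippingCalculator calculator) {
            this.shippingCalculator = calculator;
        }

        // Behaviour can now live on the entity itself.
        public BigDecimal shippingCost() {
            return shippingCalculator.costFor(this);
        }
    }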

Some more info regarding this can be found here and here.

To me, creating an architecture where I can tell the domain object to go take care of certain things leads to a very powerful API, and also to a system that is easy to understand.

For example, I would like to do things like the following in my APIs:

  • order.ship() instead of shippingService.ship(order)
  • movieRental.calculateLateFees() instead of feeService.getLateFees(rental)

Coupled with a FluentInterface, I think this should be the future of enterprise apps (well, at least till Erlang/Haskell become more mainstream). This would make systems cleaner and easier to maintain.
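
As a sketch of the FluentInterface part (with invented names and a made-up late-fee rule):

    import java.util.Date;

    // Each step returns the rental itself, so calls chain naturally:
    //   rental.borrowedOn(out).returnedOn(in).calculateLateFees()
    public class MovieRental {
        private static final long MS_PER_DAY = 24L * 60 * 60 * 1000;

        private Date borrowed;
        private Date returned;

        public MovieRental borrowedOn(Date date) {
            this.borrowed = date;
            return this;
        }

        public MovieRental returnedOn(Date date) {
            this.returned = date;
            return this;
        }

        // Made-up rule: one fee unit per day beyond a 3-day rental period.
        public long calculateLateFees() {
            long days = (returned.getTime() - borrowed.getTime()) / MS_PER_DAY;
            return Math.max(0, days - 3);
        }
    }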

I did not make the connection between this anemic-domain-style design and EJBs till I proposed to a co-worker that we add more domain logic into the entities. The first response was:

"This looks good, but should'nt we have all business logic in separate classes like how we did it using session beans"

And then it struck me: things are not about to change for a long while!


 