Monday, December 10, 2007

Dual booting Vista and Ubuntu

The last couple of weeks have been interesting. I got a new laptop, a Dell Inspiron 1720 to be precise, pre-installed with Vista Home Premium. But I wanted a Linux distro to play around with, and Ubuntu was the obvious choice for its ease of use.

Before I got around to doing that, I had to overclock my video card in Vista to get it to play Oblivion smoothly. The laptop has an 8400M GS, but when I started playing Oblivion I got a measly 30-35 fps. I had a 156.xx driver, but somehow 169.04 would never install, complaining that it could not detect any suitable driver to update. However, the 169.09 driver from laptopvideo2go installed properly after I uninstalled all the existing video card drivers. After I installed RivaTuner and overclocked from 400MHz to 500MHz, the fps in Oblivion increased to a nice 45-50, and my 3DMark06 score increased from 1276 to around 1450+.

Then it was on to bigger things, so I downloaded Ubuntu and waited for a nice Saturday morning to install and configure it. The list of steps I followed:

1) Created a new partition of 30GB using the Windows partition manager.
2) Burnt the live CD ISO image onto a CD.
3) Booted up the laptop using the live CD and pressed Install. Some sites say the Inspiron series should use the alternate CD, but the live CD worked just fine for me. (At this point I got stuck, since the installer would not return from trying to find the partitions. I had to go to Places > Computer > OS and then retry the install to get it past this step.)
4) Rebooted once the install was done, only to find that my wireless wouldn't work.

Logged into Windows and found some nice instructions on setting up the wireless. Got that sorted out and I was finally all set!

To finish things off, I installed all the required software using the Synaptic package manager. As a nice touch, I was even able to build Ruby 1.9 from source.

Some must-visit links when you are planning to dual boot Ubuntu with Vista on a Dell Inspiron 1720:

1) General instructions - http://apcmag.com/5046/how_to_dual_boot_vista_with_linux_vista_installed_first
2) Setting up the Broadcom wireless card - http://ubuntuforums.org/showthread.php?t=297092 (I should add that the other instructions I found were not this clear and did not work properly)
3) To get the sound card working you have to install the package 'linux-backports-modules-generic'
4) To install Ruby 1.9 alongside 1.8, follow the instructions at http://ruby.tie-rack.org/28/installing-19/


Wednesday, October 24, 2007

Unit Testing Guidelines

Ravi had blogged about the difficulties of getting people to write unit tests. I could relate to him since I have come across this problem quite often.

IMHO a developer gets turned off from writing test cases because

  1. Most developers don't have a clue how to write proper unit tests; most end up writing integration tests instead of unit tests.
  2. Proper unit testing (not integration testing) is hard work and needs proper design.
  3. Estimation for unit tests is either skipped or under-done, since we tend to have close to 2X lines of test code for X lines of production code.
Many times I have had to make developers understand what a unit test is and how to approach it. These are the general guidelines I normally give them.

A unit test, in my definition, should:
  1. Test only one class - Even if a method in the class under test calls methods on other dependent classes, this test is responsible only for verifying that this method works fine, provided the dependent classes return correct values.
  2. Continuing from 1, the dependent classes should have their own tests to verify all possible code flows. Doing this from a higher layer increases the number of test cases you have to write.
  3. Use Mocks (and of course Dependency Injection) in the unit tests for the higher layers (above the DAO layer). Use either jMock or EasyMock to mock out calls to other layers; see the sketch after this list. If you are unit testing without mocks you are really doing integration testing between two classes, since you verify the functionality of both.
  4. Test boundary conditions: what happens if you pass in a null object, what happens if your dependent class throws an exception, etc.
  5. Test that the class throws all exceptions declared in @throws (and any runtime exceptions) exactly under the conditions documented.
  6. Test DAOs even if you are using ORM tools, by using an in-memory DB like Derby or HSQL.
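To make guideline 3 concrete, here is a minimal sketch using EasyMock (the OrderService/OrderDao names are invented for illustration). The DAO is mocked out, so the test exercises the service class alone, with no database anywhere:

import static org.easymock.EasyMock.*;

import junit.framework.TestCase;

// Hypothetical collaborators, defined inline to keep the sketch self-contained.
interface OrderDao {
    double findOrderAmount(int orderId);
}

class OrderService {
    private final OrderDao dao;
    OrderService(OrderDao dao) { this.dao = dao; }   // constructor injection
    double getOrderTotal(int orderId) {
        return dao.findOrderAmount(orderId);         // delegates to the DAO
    }
}

public class OrderServiceTest extends TestCase {
    public void testGetOrderTotalDelegatesToDao() {
        OrderDao dao = createMock(OrderDao.class);           // mock the dependency
        expect(dao.findOrderAmount(42)).andReturn(100.0);    // stub its behaviour
        replay(dao);

        OrderService service = new OrderService(dao);        // inject the mock
        assertEquals(100.0, service.getOrderTotal(42), 0.001);

        verify(dao);   // fails if the expected DAO call never happened
    }
}

If OrderService misbehaved - called the DAO twice, or not at all - verify() would fail, and we have tested exactly one class.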

In addition to the above, a code coverage tool like EMMA or Clover is a must-have to capture coverage and draw attention to the less tested parts of the application. Configure it to generate a daily/weekly report, or better still hook it up to your CruiseControl build. In most cases the developers themselves take it up as a challenge to get the code coverage up.


Anemic Entities - Fallout of the EJB era?

When I first started working with EJBs - the 1.0 and 1.1 versions - there were two types of enterprise beans:

  1. Session Beans
  2. Entity Beans

We were all taught to put business logic into session beans and persist data using entity beans. No business logic went into the entity beans; they generally had only getters/setters. The only time we were encouraged to put business logic into entities was for performance gains - EJB tips.

According to OO principles, the definition of a class states that a class should contain both structure and behaviour. Yet we ended up violating this first principle of OO by splitting our structure (entity/VO) and behaviour (model/services) into two separate layers (because of our tools?). This anti-pattern has been termed the Anemic Domain Model by Martin Fowler.

This influence has sort of carried on with most people. Even after EJBs lost their appeal and IoC/ORM tools gained popularity, people still architected systems where entities/value objects/DTOs were a layer of objects with just get/set methods. These objects were read from the DB using DAOs and sent to the model/services layer, where all the business processing happened.

To be fair, the IoC containers of the day did not support injecting dependencies into objects read from the DB by tools like Hibernate. With such excuses, we went on writing procedural-style code in OO languages.

Now Spring 2.x has started supporting dependency injection on objects whose life cycle is outside its control. Using the @Configurable annotation, Hibernate can create entity/DTO objects from the database, and Spring configures each of these objects like a normal bean and wires up its dependencies.
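As a rough sketch of how that looks (the Order/ShippingService names are mine, and this assumes AspectJ weaving and <aop:spring-configured/> are set up, which @Configurable requires):

import org.springframework.beans.factory.annotation.Configurable;

// Hypothetical collaborator, to keep the sketch self-contained.
interface ShippingService {
    void ship(Order order);
}

// Hibernate instantiates Order when it loads a row; Spring notices the
// @Configurable annotation and wires dependencies into the fresh instance.
@Configurable
public class Order {
    private ShippingService shippingService;   // injected by Spring, not mapped by Hibernate

    public void setShippingService(ShippingService shippingService) {
        this.shippingService = shippingService;
    }

    public void ship() {
        shippingService.ship(this);   // the rich-domain call: order.ship()
    }
}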

Some more info regarding this can be found here and here.

To me, creating an architecture where I can tell the domain object to go take care of certain things leads to a very powerful API, and the system is easy to understand.

For example, I would like to do things like the following in my APIs:

  • order.ship() instead of shippingService.ship(order)
  • movieRental.calculateLateFees() instead of feeService.getLateFees(rental)

Coupled with a FluentInterface, I think this should be the future of enterprise apps (well, at least till Erlang/Haskell become more mainstream). It would make systems cleaner and easier to maintain.
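For instance, a fluent interface over such a domain object might read like this (purely a sketch, all names invented):

import java.util.Date;

// Each method returns the object itself, so calls chain into a sentence.
public class MovieRental {
    private String title;
    private Date dueDate;

    public MovieRental of(String title) {
        this.title = title;
        return this;            // returning 'this' is what enables the chaining
    }

    public MovieRental dueBy(Date dueDate) {
        this.dueDate = dueDate;
        return this;
    }

    public double calculateLateFees() {
        return 0.0;             // the domain logic lives here, on the object itself
    }
}

// Usage: new MovieRental().of("The Matrix").dueBy(dueDate).calculateLateFees();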

I did not make the connection between this anemic-domain-style design and EJBs till I proposed to a co-worker that we add more domain logic into the entities. The first response was:

"This looks good, but should'nt we have all business logic in separate classes like how we did it using session beans"

And then it struck me: things are not about to change for a long while!


Wednesday, May 09, 2007

Don't be Greedy, be Dynamic

If you are given an unlimited number of coins of values V1, V2, … Vn and asked to find the minimum number of coins needed to create a sum S, what solution would you come up with?

To better illustrate, take the typical example: you are given unlimited supplies of coins of values 1, 2 and 5 and asked to create a value of 8. One solution is 8 = 8 coins of value 1, another is 8 = 4 coins of value 2, and so on, but the solution that uses the minimum number of coins overall is 8 = 1 coin of 5 + 1 coin of 2 + 1 coin of 1, i.e. 3 coins.


Being Greedy


When I looked at it for the first time I thought the easiest way to solve this would be to act greedy.

Sort the coins in descending order, with the highest-valued coin first. If the number of coin types is N, then:

For c = 1 to N

  1. Take the value of the coin at index 'c' and see how many times it fits into the sum required.
  2. Find the remainder (modulo) of the sum with the value of the coin at index 'c'.
  3. Repeat calculations 1 and 2 for the next most valued coin, using the remainder obtained in step 2.

The sum of the counts obtained in step 1 is the number of coins required.

Applying this to get a value of 8, the steps would be:

Loop 1: 5 fits into 8 one time, 8 mod 5 = 3

Loop 2: 2 fits into 3 one time, 3 mod 2 = 1

Loop 3: 1 fits into 1 one time, 1 mod 1 = 0

Number of coins needed = 3!

Code in Java


private int[] coinArray = { 1, 2, 5 };

// Greedy: walk the coins from highest value to lowest, taking as many
// of each coin as still fit into the remaining amount.
private int minCoinsNeededToGetCount(int neededCount) {
    int coinCountNeeded = 0;
    int tempNeededCount = neededCount;
    for (int k = coinArray.length - 1; k >= 0; k--) {
        if (tempNeededCount >= coinArray[k]) {
            // How many coins of this value fit into what is left?
            int numCoinsOfThisTypeNeeded = tempNeededCount / coinArray[k];
            tempNeededCount = tempNeededCount - (numCoinsOfThisTypeNeeded * coinArray[k]);
            coinCountNeeded = coinCountNeeded + numCoinsOfThisTypeNeeded;
        }
    }
    return coinCountNeeded;
}

But is this the best and correct solution?


Being Dynamic


Described as one of the two sledgehammers of the algorithms craft, Dynamic Programming is very powerful and can be used to solve a wide variety of problems.

The major thing to remember in Dynamic Programming is that we break the problem into a collection of sub-problems, such that the solution to one sub-problem builds on the solutions to smaller sub-problems.

In plain recursion we solve the same sub-problems again and again. One of the main differences Dynamic Programming brings over plain recursion is that we store the results of the sub-problems and do not compute them again. This is called 'memoization'.
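To illustrate memoization (a sketch, the class and method names are mine), here is the coin problem written as a plain recursion that caches the answer for each sum, so no sub-problem is ever solved twice:

import java.util.HashMap;
import java.util.Map;

public class MemoizedCoins {
    private final int[] coins = { 1, 2, 5 };
    private final Map<Integer, Integer> memo = new HashMap<Integer, Integer>();

    public int minCoins(int sum) {
        if (sum == 0) return 0;                            // base case: nothing to make up
        if (memo.containsKey(sum)) return memo.get(sum);   // already solved, reuse it
        int best = Integer.MAX_VALUE - 1;                  // "infinity" placeholder
        for (int coin : coins) {
            if (sum - coin >= 0) {
                best = Math.min(best, minCoins(sum - coin) + 1);
            }
        }
        memo.put(sum, best);                               // remember for next time
        return best;
    }
}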


So, applying this, what would our solution look like?

  1. Coins for sum 0 = 0
  2. Coins for sum 1 = 1 coin of value 1 + coins for remaining sum 0 = 1
     -> remaining sum 0 is obtained as sum needed 1 minus coin value considered 1
  3. Coins for sum 2 = Min(1 coin of value 1 + coins for remaining sum 1, 1 coin of value 2 + coins for remaining sum 0) = Min(2, 1) = 1
     -> remaining sum 1 is obtained as sum needed 2 minus coin value considered 1
     -> remaining sum 0 is obtained as sum needed 2 minus coin value considered 2
  4. Coins for sum 3 = Min(1 coin of value 1 + coins for sum 2, 1 coin of value 2 + coins for sum 1) = Min(2, 2) = 2

So we take the sum required, subtract the various coin values from it, and get smaller sub-problems. The solutions to those smaller sub-problems are already available, and we just use them to build up bigger solutions.


import java.util.Arrays;

private int[] coinArray = { 1, 2, 5 };

// Bottom-up DP: coinCounts[i] holds the minimum number of coins needed
// to make the sum i, built up from the solutions for smaller sums.
private void findMinCoinsNeededForSum(int sum) {
    int coinCounts[] = new int[sum + 1];
    Arrays.fill(coinCounts, 999);   // 999 acts as "infinity" for sums not yet reachable
    coinCounts[0] = 0;              // zero coins are needed to make a sum of 0

    for (int i = 1; i <= sum; i++) {
        for (int j = 0; j < coinArray.length; j++) {
            int stateToCheck = i - coinArray[j];
            // Use coin j on top of the best solution for the smaller sum,
            // if that beats the best solution found so far for sum i.
            if (stateToCheck >= 0 && coinCounts[stateToCheck] + 1 < coinCounts[i]) {
                coinCounts[i] = coinCounts[stateToCheck] + 1;
            }
        }
    }

    int i = 0;
    for (int value : coinCounts) {
        System.out.println("for " + i++ + " coins needed " + value);
    }
}


Somehow, when I wrote these two, the greedy approach felt simpler to understand, and it was the first thing that came to my mind. But is it the right thing?


Given coin values of 1, 4 and 5 and asked to compute a sum of 8, greedy returns a miserable minimum coin count of 4 - one 5 and three 1's - while dynamic programming finds the true minimum of 2 - two 4's. So there you have the clear winner!
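As a quick check, here is a small self-contained harness (my own condensed versions of the two approaches above) running both on coins {1, 4, 5} for a sum of 8:

import java.util.Arrays;

public class GreedyVsDynamic {
    public static void main(String[] args) {
        int[] coins = { 1, 4, 5 };   // sorted ascending, as before
        int sum = 8;

        // Greedy: largest coin first.
        int remaining = sum, greedyCount = 0;
        for (int k = coins.length - 1; k >= 0; k--) {
            greedyCount += remaining / coins[k];
            remaining %= coins[k];
        }

        // Dynamic programming: build the minimum count for every value 0..sum.
        int[] best = new int[sum + 1];
        Arrays.fill(best, Integer.MAX_VALUE - 1);
        best[0] = 0;
        for (int i = 1; i <= sum; i++) {
            for (int coin : coins) {
                if (i - coin >= 0 && best[i - coin] + 1 < best[i]) {
                    best[i] = best[i - coin] + 1;
                }
            }
        }

        System.out.println("greedy : " + greedyCount + " coins");   // prints 4 (5+1+1+1)
        System.out.println("dynamic: " + best[sum] + " coins");     // prints 2 (4+4)
    }
}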


 