More on NCommon

Once again, I’m going to talk about NCommon.  I want to share some of Ritesh Rao’s posts on concepts that NCommon incorporates.  I’m very impressed with his architecture and it helps that he provides NCommon for NHibernate, Linq2Sql, Entity Framework (and I see he is venturing into EF v2).

My last post touched on this, but I’ll review it again:

  1. Framework for implementing a Unit of Work Pattern
  2. Framework for implementing a Repository pattern that utilizes Linq
  3. Framework for implementing Validations and Business Rules
  4. Implementation of the Specification pattern using Expressions
  5. Utility class to help store application specific data in the Thread Local Storage / Current Web Request and AppDomain Level Storage
  6. Guard class that mimics common guard statements, verifying parameter values and throwing exceptions if the values are not acceptable (see the sketch below)
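To give a feel for item 6, here is a minimal sketch of the kind of guard statements such a class typically provides – the names and signatures here are illustrative assumptions, not necessarily NCommon’s exact API:

using System;

// Illustrative guard helper – names/signatures are assumptions for the sake of
// the example, not necessarily NCommon's exact API.
public static class Guard
{
    // Throws the specified exception type when the assertion is true.
    public static void Against<TException>(bool assertion, string message)
        where TException : Exception
    {
        if (assertion)
            throw (TException)Activator.CreateInstance(typeof(TException), message);
    }
}

// Typical usage inside a method guarding its parameters:
//   Guard.Against<ArgumentNullException>(customer == null, "customer cannot be null");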

NCommon uses the Apache License 2.0

NCommon uses Microsoft.Practices.ServiceLocation (the Common Service Locator) – this allows you to plug in your inversion of control (IoC) container of choice.

Ritesh notes that he has gotten rid of the generic IoC wrapper implementation from the library, as it doesn’t seem necessary anymore with the rally behind the Common Service Locator project.

This is shown in the accompanying tests’ setup:

[SetUp]
public void SetUp()
{
    // Tell the EF unit of work factory how to create an ObjectContext.
    EFUnitOfWorkFactory.SetObjectContextProvider(() =>
    {
        var context = new TestModel();
        return context;
    });

    // Stub the Common Service Locator so it hands back the EF unit of work factory.
    var locator = MockRepository.GenerateStub<IServiceLocator>();
    locator.Stub(x => x.GetInstance<IUnitOfWorkFactory>())
           .Return(new EFUnitOfWorkFactory()).Repeat.Any();
    ServiceLocator.SetLocatorProvider(() => locator);
}

Here the test wires up the ObjectContext provider and registers the IUnitOfWorkFactory implementation (the above shows the Entity Framework flavor).

An example of how this is used can be found in the UnitOfWorkScopeTransaction class (UnitOfWorkScopeTransaction.cs – trunk/NCommon/src/Data), i.e. the ‘GetTransactionForScope’ method, which retrieves the IUnitOfWorkFactory:

 
var factory = ServiceLocator.Current.GetInstance<IUnitOfWorkFactory>();
var newTransaction = new UnitOfWorkScopeTransaction(factory, isolationLevel);
newTransaction.AttachScope(scope);
CurrentTransactions.Add(newTransaction);
return newTransaction;
 

The NCommon tests highlight the features of NCommon as well, including Fetching Strategies, Specification, Repositories, and Unit of Work.

Let’s look at some of Ritesh’s posts…

http://www.codeinsanity.com/2009/04/repository-pattern-thoughts.html

Some key items in this post to quote:

repositories should really represent a queryable data store that doesn’t abstract away queries behind methods that some unfortunate developer has to maintain and evolve, but rather allow consumers of the repository to query it directly. Hence why NCommon relies on repositories to implement the IQueryable interface to provide a query infrastructure directly on top of NCommon repositories.

I think this is an important piece to understand, and Ritesh follows up with the comment:

The approach I would take is rather than exposing the infrastructure requirements in the query object, I’d like it to take in an IQueryable and return back a IQueryable. This will allow chaining of queries by multiple of such Query objects without exposing any infrastructure concerns.

…

The problem with exposing ICriteria, or any other infrastructure component, to all layers of the application is that eventually a lot of infrastructure concern creeps into layers of the application where they don’t belong. I dislike the idea of having to expose ICriteria to the UI for adding paging and sorting on top of the query encapsulated by ICriteria.

That being said, in my opinion IQueryable is best suited for this job. It’s a framework-level member that is infrastructure agnostic, provides a very nice way to chain queries together and encapsulates our query requirements rather well.
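To make the ‘take in an IQueryable and return an IQueryable’ idea concrete, here is a small sketch of two query objects being chained – this is my own illustration, not code from NCommon, and the Customer properties are borrowed from the repository example further down:

using System.Linq;

// Illustrative query objects: each takes an IQueryable and returns an IQueryable,
// so they can be chained without leaking ICriteria or any other infrastructure type.
public static class CustomerQueries
{
    public static IQueryable<Customer> WithLastName(IQueryable<Customer> customers, string lastName)
    {
        return customers.Where(c => c.LastName == lastName);
    }

    public static IQueryable<Customer> OrderedByFirstName(IQueryable<Customer> customers)
    {
        return customers.OrderBy(c => c.FirstName);
    }
}

// Chained against a repository that implements IQueryable<Customer>:
//   var results = CustomerQueries.OrderedByFirstName(
//                     CustomerQueries.WithLastName(customerRepository, "Doe"));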

An  example query of this nature can be found in the EFRepositoryTests class:

var queryForCustomer = new Func<EFRepository<Customer>, Customer>
(
    x => (from cust in x
          where cust.FirstName == newCustomer.FirstName &&
                cust.LastName == newCustomer.LastName
          select cust).FirstOrDefault()
);

using (var scope = new UnitOfWorkScope())
{
    var customerRepository = new EFRepository<Customer>();
    var recordCheckResult = queryForCustomer(customerRepository);
    Assert.That(recordCheckResult, Is.Null);

    customerRepository.Add(newCustomer);
    scope.Commit();
}

// Starting a completely new unit of work and repository to check for existence.
using (var scope = new UnitOfWorkScope())
{
    var customerRepository = new EFRepository<Customer>();
    var recordCheckResult = queryForCustomer(customerRepository);

    Assert.That(recordCheckResult, Is.Not.Null);
    Assert.That(recordCheckResult.FirstName, Is.EqualTo(newCustomer.FirstName));
    Assert.That(recordCheckResult.LastName, Is.EqualTo(newCustomer.LastName));
    scope.Commit();
}

After implementing a DDD pattern in my last application, I strongly agree with his final statement:

Allowing ad-hoc querying across the layers of your application is a good thing, and it’s something I’d recommend to anyone. Getting caught up in the “what is a DAL layer” topic is not going to provide any value to your application, rather it’s going to end up adding complexity in your application where you don’t need to. As our technologies and frameworks have evolved allowing us to do more with less, similarly our patterns must evolve. I think we can end the rigid repository pattern for a more open, pluggable and extensible repository pattern.

 

Moving on to another great post around ‘Fetching and the adaptive domain models in NCommon’

http://www.codeinsanity.com/2009/02/fetching-strategy-and-adaptive-domain.html

Ritesh explains what a fetching strategy is:

A fetching strategy simply put allows you to define at the time of loading the aggregate root, what all associated entities will also be queried for and loaded in the object graph. The fetching strategy is an explicit pattern which tells the ORM framework to pre-fetch objects within the graph and not perform lazy loading for the entities specified in the strategy.

Defining a fetching strategy would allow us to specify that the data required for the above code to work should be fetching in one single query to the database. That allows efficient use of the database resources as well as avoiding round trips back to the database.

 

I must say, I wish I had had more experience with this on my early NHibernate projects  :)  Being aware of the fetching strategies of an ORM is extremely important.  What is nice is that NCommon supports this strategy in its framework.

Let’s look at an example in the tests (he gives examples in the post as well) where you want to control lazy/eager fetching – this is shown well in the EF tests:

IEnumerable<Order> orders;

using (new UnitOfWorkScope())
{
    var strategies = new IFetchingStrategy<Order, EFRepositoryTests>[]
    {
        new Order_OrderItems_Strategy(),
        new OrderItems_Product_Strategy()
    };

    IRepository<Order> ordersRepository = null;
    ServiceLocator.Current.Expect(x => x.GetAllInstances<IFetchingStrategy<Order, EFRepositoryTests>>())
                          .Return(strategies);

    ordersRepository = new EFRepository<Order>().For<EFRepositoryTests>();
    orders = (from o in ordersRepository select o).ToList();
}

As you can see, this approach lets the registered strategies (the first of which is shown below) determine what is included in the query for Orders:

private class Order_OrderItems_Strategy : IFetchingStrategy<Order, EFRepositoryTests>
{
    public void Define(IRepository<Order> repository)
    {
        repository.With(x => x.OrderItems);
    }
}

When the OrderRepository queries for an Order it will eager fetch the OrderItems. 

Ritesh points to several good posts by others on this fetching and the adaptive domain model here:

This is somewhat the basis of an adaptive domain model where the fetching strategy for domain entities are based on the current execution context. A lot has been talked about on Adaptive Domain models by Udi Dahan here and here and even Oren Eini (Ayende) here. I suggest you take a look at their posts, if you haven’t already.

Please read more of this post – Ritesh explains the details.  Where I also find real value is when you want to expose these object queries to a team of developers but still keep some control over how the object graphs are fetched.

Ritesh also has an update post on NCommon; I found it helpful for understanding the difference between an ORM such as NHibernate and what Linq2Sql and EF provide:

http://www.codeinsanity.com/2008/12/update-on-ncommon.html

This post is chock full of useful nuggets about some of the changes and why they were made. One part that was actually helpful to see (outside of just grokking the code) is his namespace layout and the reasoning behind it:

  • Repository and UnitOfWork are under NCommon.Data namespace
  • Specifications is under NCommon.Specifications namespace
  • Business and Validation rules are under NCommon.Rules namespace
  • Storage classes (Application, Local, Session) are under NCommon.Storage namespace. The Storage class has been renamed to Store so that there is no clash between namespace and class name.
  • New NCommon.Extensions namespace to contain common extension methods.
  • New NCommon.Expressions namespace contains the ExpressionVisitor class and future implementations for ExpressionVisitor to help process expressions.

As he explains, he has separated the patterns into their own namespaces – quite a good idea imo.

Too bad I took this long to get to what I think is one of Ritesh’s best posts – and it’s in his ‘patterns’ section.  I think reading this will shed light into the underpinnings of the NCommon framework.

http://www.codeinsanity.com/2008/08/repository-pattern.html

Martin Fowler defines the repository pattern in his P of EAA catalog:

Mediates between the domain and data mapping layers using a collection-like interface for accessing domain objects.

So, as Ritesh explains here – what makes this any different than any other data access strategy?

The first difference that I see is that a Repository exclusively deals in domain objects. The repository is not a generic data access component that uses Data Access Objects (DAO) or Data Transfer Objects (DTO) patterns. So the repository only accepts domain objects and returns domain objects.

Great, we are in a Domain Driven Design architecture, utilizing the repository pattern to retrieve and save entities.

Most people, including myself, tend to stop there, right?  :)   So we fire up Studio, code up an IRepository, get a RepositoryBase with our fancy generics, and start creating methods like

‘GetCustomerWhenSuchAndSuch’ and ‘GetCustomerWhenSuchAndSuchWithSoAndSo’.  I start to put these methods in my repository, my domain object, my service layer, etc…

and so on and so on.   Guilty as charged here at times  :)

But Ritesh doesn’t stop there – he digs deeper into what Fowler says…

So going back to the definition and write up of the Repository pattern in P of EAA, I see the following explanation of Repository:

A Repository mediates between the domain and data mapping layers, acting like an in-memory domain object collection. Client objects construct query specifications declaratively and submit them to Repository for satisfaction. Objects can be added to and removed from the Repository, as they can from a simple collection of objects, and the mapping code encapsulated by the Repository will carry out the appropriate operations behind the scenes. Conceptually, a Repository encapsulates the set of objects persisted in a data store and the operations performed over them, providing a more object-oriented view of the persistence layer. Repository also supports the objective of achieving a clean separation and one-way dependency between the domain and data mapping layers.

[Emphasis mine]

Source: http://martinfowler.com/eaaCatalog/repository.html

Something that piqued my interest is the highlighted line above. Clients construct query specifications and submit them to the Repository for satisfaction… hmmm… so I can declaratively query the repository and have it return business object instances that satisfy that query? Now that is a huge advantage over traditional data access patterns. So does that mean that I don’t have to add additional methods everytime there’s a feature change and I need to query for domain objects using a new criteria? I’M SOLD. Now, the only question is… just how do I do that huh?

 

BINGO!  Wow – read that three or four times, because now we’re going to really start writing object-oriented code.

He goes on to dig out a link that I hadn’t seen and that is a real keeper: the white paper by Martin Fowler and Eric Evans on the Specification pattern.  Wow, good stuff there – they can put that one in the hall of fame archives as far as I’m concerned.

So – why the excitement?  Ritesh explains:

The Specification pattern in my opinion helps formalizing and declaring criteria as a set of specification that encapsulates business logic.

I think understanding this is as important as seeing the value in creating a ‘rules engine’ or ‘validation engine’ – where you start to move rules and their details out into objects that are easier to find, review, etc… and let your domain objects use them.  In a recent project I had this issue with a small but important set of business rules that were tucked into the service layer along with the domain layer – eventually I refactored 5 core rules out and the result was far more readable (and when the customer asks ‘what is that rule again?’ I can find it much more easily – lol)

Take that idea, and hopefully you have seen it in action, and push it into the specification pattern.  The beauty of both, to me, comes down to a simple but highly effective understanding of the Strategy pattern:

Define a family of algorithms, encapsulate each one, and make them interchangeable. Strategy lets the algorithm vary independently from clients that use it.

I like my Head First Design Patterns book I bought years ago; it really helped me get this one.  Basically, think of a set of common objects – let’s say Vehicles – I have a Car, Motorcycle, Truck, etc… and each one has a set of different behaviors (algorithms), i.e. IVehicleBehaviors like ISteeringBehavior, IBrakingBehavior, ITurningBehavior.

The Car exposes a ‘void Stop’ method, and each vehicle type takes an IBrakingBehavior whose ‘Stop’ is unique to that type.  It encapsulates what varies, etc… (I didn’t do it justice here, so a quick sketch follows.)
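A minimal sketch of that idea, using the names from the paragraph above (my own illustration, not code from the book or from NCommon):

// Strategy pattern: the braking algorithm varies independently of the vehicle that uses it.
public interface IBrakingBehavior
{
    void Stop();
}

public class AntiLockBraking : IBrakingBehavior
{
    public void Stop() { /* pulse the brakes */ }
}

public class StandardBraking : IBrakingBehavior
{
    public void Stop() { /* apply the brakes directly */ }
}

public abstract class Vehicle
{
    private readonly IBrakingBehavior _braking;

    // Each vehicle is composed with the braking algorithm that suits it.
    protected Vehicle(IBrakingBehavior braking)
    {
        _braking = braking;
    }

    // The vehicle delegates stopping to whatever behavior it was given.
    public void Stop()
    {
        _braking.Stop();
    }
}

public class Car : Vehicle
{
    public Car(IBrakingBehavior braking) : base(braking) { }
}

// Usage:
//   var car = new Car(new AntiLockBraking());
//   car.Stop();   // the algorithm can be swapped without changing Car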

Ritesh continues talking about the specification pattern in his post Implementing Repository and Specification patterns using Linq:

http://www.codeinsanity.com/2008/08/implementing-repository-and.html

The goal of this post is to implement infrastructure for a Repository and Specification pattern that achieves the following goals:

  1. We should be able to execute Linq queries directly against the repository.
  2. Implement a specification pattern that allows us to use Expressions as predicates for the specifications.
  3. Implement a specification pattern that allows us to combine specifications (Composite Specification Pattern)
  4. We should be able to query the repository using the Specification pattern.
  5. We should be able to execute Linq queries against the repository directly and also combine the query with specifications.

 

What catches my attention right away is that Ritesh takes the typical IRepository<T>, which DDD enthusiasts will pick up on quickly, and also has it implement the IQueryable interface:

(IRepository)…implements the IQueryable interface allowing us to execute Linq queries directly on the repository. Most repository implementations also have a Load or a Get method that takes in an ID that the repository uses to load the entity. In this case I have omitted those functions since they can be easily represented by Linq queries.

As I noted above, I believe this is one of the cornerstones he is building on.

He goes on to show the RepositoryBase<T> with its abstract IQueryable<T> RepositoryQuery { get; } property…

Now we can implement members of the IQueryable interface and basically just delegate the calls for IQueryable to the IQueryable instance that the RepositoryQuery provides us
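A minimal sketch of what that delegation might look like (my simplification, not NCommon’s actual source):

using System;
using System.Collections;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Expressions;

// Simplified sketch: the real RepositoryBase<T> in NCommon has more to it.
public abstract class RepositoryBase<T> : IQueryable<T>
{
    // Each concrete repository (EF, NHibernate, Linq2Sql) supplies the underlying query.
    protected abstract IQueryable<T> RepositoryQuery { get; }

    // The IQueryable members simply delegate to whatever the concrete repository provides.
    public Expression Expression { get { return RepositoryQuery.Expression; } }
    public Type ElementType { get { return RepositoryQuery.ElementType; } }
    public IQueryProvider Provider { get { return RepositoryQuery.Provider; } }

    public IEnumerator<T> GetEnumerator() { return RepositoryQuery.GetEnumerator(); }
    IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); }
}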

I like his example, which you’ll see next, and what it provides:

IRepository<Person> repository = new ….Repository();

var results = from person in repository
              where person.LastName.Contains("Doe")
              select person;

Cool stuff.  Next, Ritesh takes us back to the Specification pattern, addressing the three specification requirements from the list above:

So he builds the ISpecification<T> interface “by defining a property ‘Predicate’ that should return the expression the specification uses”.

This looks like: Expression<Func<T, bool>> Predicate { get; }

Ritesh shows a Specification class next that uses this expression… all it does is return the expression passed into the ctor.

So what is happening here outside of me getting confused?  :)  

Well, the key is the ‘IsSatisfiedBy’ method, which takes the expression given to the specification, compiles it, and invokes it against the entity passed in.  Clever (in a good way! lol)

public bool IsSatisfiedBy(T entity)
{
    return _predicate.Compile().Invoke(entity);
}

As you can see above, the expression Func evaluates to true or false (bool) against the entity.  Ritesh explains the underlying details and gives an example that helps illustrate what is happening here:

public class CustomerSpecifications
{
    public Specification<Customer> IsPreferred
    {
        get
        {
            return new Specification<Customer>(customer => customer.AverageInvoice > 100000 …);
        }
    }
}

So the property returns a specification that encapsulates the condition; the condition itself is evaluated when ‘IsSatisfiedBy’ is called.    These specifications can also be ‘composite’ – chained together to combine conditions (see the sketch below).
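As a quick illustration, here is how such a specification might be consumed in memory, plus one simple way two specifications could be combined into an ‘And’ – this is my own sketch, assuming Specification<T> exposes the Predicate property and the constructor described above, and it is not NCommon’s actual composite implementation:

using System;
using System.Linq.Expressions;

public static class SpecificationExtensions
{
    // One simple way to compose two specifications into an 'And' specification.
    // Note: Expression.Invoke evaluates fine in memory and in LINQ to SQL, but older
    // Entity Framework versions cannot translate it – NCommon's real composite
    // specification may well be implemented differently.
    public static Specification<T> And<T>(this Specification<T> left, Specification<T> right)
    {
        var parameter = Expression.Parameter(typeof(T), "x");
        var body = Expression.AndAlso(
            Expression.Invoke(left.Predicate, parameter),
            Expression.Invoke(right.Predicate, parameter));
        return new Specification<T>(Expression.Lambda<Func<T, bool>>(body, parameter));
    }
}

// In-memory usage against a single entity:
//   var isPreferred = new CustomerSpecifications().IsPreferred;
//   bool ok = isPreferred.IsSatisfiedBy(someCustomer);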

The next step is logical, as he addresses how to integrate this with the repository base:

Now that we have a way to define re-usable business logic by the way of specifications, let’s integrate the Specification class with the RepositoryBase class to allow querying the repository using specifications. First thing is to modify the IRepository interface and add a Query method to it so that it returns an IQueryable based on executing the provided specification

Here again is the power of using the IQueryable interface:

IQueryable<T> Query(ISpecification<T> specification);    (this is in IRepository)

ok, looks good  – he goes on to show how it’s implemented (in the RepositoryBase)

public IQueryable<T> Query(ISpecification<T> specification)
{
    return RepositoryQuery.Where(specification.Predicate);
}

So, what have we done here ?

What the Query implementation does is basically uses the Where Linq extension to add the predicate provided by the specification to the IQueryable instance

           

using (new UnitOfWorkScope())
{
    var customersInPA = new Specification<Order>(x => x.Customers.State == "PA");
    var ordersRepository = new EFRepository<Order>();
    var results = from order in ordersRepository.Query(customersInPA) select order;

    Assert.That(results.Count() > 0);
}

We create the specification for an Order and give it the expression – we are looking for Orders from Customers in the state of PA.  This is then applied to the ordersRepository and queried, deferring execution until ‘results.Count()’ is invoked.
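And because the result is still an IQueryable, you can keep composing on top of it – which is exactly the paging/sorting scenario mentioned earlier.  A quick sketch (the OrderDate property is my assumption about the test model):

// Sorting and paging composed on the deferred query, without the repository or the
// specification knowing anything about them.
// OrderDate is assumed here for illustration – the actual test model may differ.
var firstPage = ordersRepository.Query(customersInPA)
                                .OrderBy(o => o.OrderDate)
                                .Skip(0)
                                .Take(10)
                                .ToList();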

Well, the last part – and I’ll include the links below – that I want to review is another fantastic post on ‘A framework for Validation and Business Rules’

http://www.codeinsanity.com/2008/12/framework-for-validation-and-business.html

Ritesh builds his case by comparing the differences between ‘Validation Rules’ and ‘Business Rules’.  I think he makes a good point and it’s important to be able to recognize the differences:

Validation Rules are re-usable parts of logic that perform validation on an entity, of which that validation can range from simple data integrity to a state validation, and whose primarily goal is to validate the entity before an action is taken on that entity.

i.e. a data entry form where Postal Code is required – making sure an Address isn’t saved if the Postal Code is null.
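Expressed with the Specification class from earlier, such a validation might look like this – my own sketch, where Address and PostalCode are just the example from the sentence above, not NCommon types:

// A simple data-integrity check expressed as a specification.
var hasPostalCode = new Specification<Address>(a => !string.IsNullOrEmpty(a.PostalCode));

// Before saving:
//   if (!hasPostalCode.IsSatisfiedBy(address)) { /* report the validation error, don't save */ }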

He continues on to describe what starts to get a bit fuzzy…

Then there is another class of validations that look very similar to Business Rules but really fall under the classification of Validations. I term these as business validations that most often validate the state of an entity. For example; If an Order entity’s customer has a preferred status of Silver, then the sum total of the Order amount along with all pending payments from the customer cannot be more than $10,000.

In short the above validation checks to make sure that the total credit on the customer’s order does not go beyond $10,000. That is classified as a state validation because when the customer submits the order the system should validate the order against the above validation rule to ensure that the Customer entity’s total pending payments does not exceed above a certain value, causing that entity to go into an invalid state. If the validation fails, then the application should report back to the customer at the time of submitting the order.

So, let’s get to his Business Rules definition:

Business Rules are re-usable parts of logic that perform actions on an entity, based on certain conditions that are evaluated against the entity.

Short and sweet. Business Rules do not perform any validations; rather they perform actions based on conditions that are defined by the rule itself.

 

Describing the difference:

This distinction between validations and business rules is important because in the business rules world, the assumption is that data and state validations have already taken place and now business actions need to be performed based on the data and state of the entity. Business rules themselves do not stop an action from happening, such as saving of the Order, like validations but rather perform actions as defined by the business

I like this way of viewing the differences.

Lastly, and hopefully tying back into the talk of specifications, Ritesh provides some more insight into the Specification’s role – he finds that some validation and business rules are (or should be) actually specifications:

Using specifications to define the rules for validation and business rules is quite powerful, which allows further re-usability and using the composite specification pattern it gives the ability to compose multiple specifications to define rules for validations and business actions.
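A rough sketch of that idea – purely illustrative, not NCommon’s actual rules API: a business rule whose condition is a specification and whose action runs only when the condition is satisfied:

using System;

// Illustrative business rule: the condition is a specification, the action is whatever
// the business says should happen when the condition is met.
public class BusinessRule<T>
{
    private readonly Specification<T> _condition;
    private readonly Action<T> _action;

    public BusinessRule(Specification<T> condition, Action<T> action)
    {
        _condition = condition;
        _action = action;
    }

    // No validation happens here – the rule assumes the entity is already valid and
    // simply performs the action when the condition holds.
    public void Apply(T entity)
    {
        if (_condition.IsSatisfiedBy(entity))
            _action(entity);
    }
}

// e.g. new BusinessRule<Customer>(new CustomerSpecifications().IsPreferred,
//                                 customer => customer.ApplyPreferredDiscount())  // hypothetical method
//      .Apply(order.Customer);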

 

Ok, time to wrap this up as I have exhausted myself.  Reading Ritesh’s thoughts along with grokking the code has really been a good experience – the ability to understand a pattern and see it applied is a great learning tool.  We are fortunate to have developers willing to share their thoughts on how they build software as well as make their code open for review and discussion.  Thanks again to Ritesh Rao for doing both for the benefit of other developers (me! lol)

Here are some of the links discussed above – they help group some of his posts together for us (I’m brushing over the Unit of Work as I have covered it in detail previously)

Update: Simon Segal has an 11 part series on the Entity Framework & Fetching Strategies (includes Specification) with demo code

Update 2: David DeWinter from the EF team has a good post on LINQ Expression Trees and the Specification Pattern.


2 thoughts on “More on NCommon”

  1. Great post!
    My question is when should the specification pattern be used vs the fetching strategy pattern?

    In a way it seems that the fetching strategy is more useful for loading up your object graph, whereas the specifications should be thought of more as business rules.

    Would this be a correct assumption?

    Regards

  2. Yes, I think that is about right.

    fetching is about eager/lazy loading of child collections for an object. ie. every time I return a ‘Company’ I get a list of their ‘Employees’ through a join.

    specification : check out http://martinfowler.com/apsupp/spec.pdf

    “The central idea of Specification is to separate the statement of how to match a candidate, from the candidate object that it is matched against”
