How YOU think about data access

Coordinator
Jul 21, 2009 at 2:12 AM
Edited Jul 21, 2009 at 2:13 AM

This question tries to get at the heart of how you make your initial data access decisions. Answer however you like, but here are some specific questions that might help.

* Do you use the same data access technology, design, and approach for all of your applications?
  * If so, are all of your applications similar enough that this makes sense?
    * Or is your technology, design, and approach flexible enough to handle a wide variety of application types?
  * If not, what are the top 3 factors you consider when making initial decisions on technology, design, etc.?
* What are the most important quality attributes of your data access layer (perf, flexibility, etc)?
* Do you use a custom-built data access layer?

Thanks.

Jul 30, 2009 at 5:01 PM

Data access isn't always the same, but the solutions we employ often build on top of each other. We have made the design decision that, in general, we don't deploy rich clients with direct access to the database. If we are building web applications, they often just leverage an ORM and run with it. They do still have to deal with change tracking when a change is posted back from the browser: most often that means we just mark the object as modified and save it to the database, although occasionally we'll handle concurrency using some kind of discriminating column (e.g., a version or timestamp).
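The mark-as-modified-plus-discriminating-column approach above can be sketched without any ORM at all; `Order`, `OrderStore`, and the `Version` counter are made-up names for illustration, not anyone's actual code:

```csharp
// Sketch of "mark modified, save, and check a discriminating column":
// the Version counter stands in for a rowversion/timestamp column.
public class Order
{
    public int Id;
    public string Status;
    public int Version; // bumped on every successful save
}

public static class OrderStore
{
    // Stand-in for "UPDATE Orders SET ... WHERE Id = @id AND Version = @version".
    // Returns false when the incoming object was edited against a stale row.
    public static bool TrySave(Order current, Order incoming)
    {
        if (incoming.Version != current.Version)
            return false; // a concurrent save won; caller must reload or merge
        current.Status = incoming.Status;
        current.Version++;
        return true;
    }
}
```

Against a real database this is the classic optimistic-concurrency UPDATE: zero rows affected means someone else saved first.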

For rich clients we go the service route, and most often today that means writing standard WCF services to interact with the data. Those services generally still use an ORM, with all the change tracking issues that come from working disconnected from the context. There are of course many frustrations when trying to expose what is fundamentally a data service through an RPC-style service: every query being its own method gets ugly. That is where something like Astoria (ADO.NET Data Services) fits in nicely, although several people in our department wonder about adequately securing it. Indeed, every Astoria sample is very light on securing the data (most just take the default that exposes everything to everyone and say "don't do this for real"), which becomes vital when you expose it RESTfully over GET.

As for the most important attributes: performance is always key, since an app without the data it needs is usually useless. Query composability becomes important as well in most data service scenarios, because writing query methods gets tedious and redundant, and it introduces maintainability issues if you are not careful to have queries build on top of each other; that was very difficult to achieve before LINQ's composability through deferred execution.
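A rough sketch of that composability point: because LINQ queries are deferred, later methods can layer on earlier ones instead of duplicating their filters. The types and names here are invented for illustration:

```csharp
using System.Linq;

// Composable queries via deferred execution: nothing runs until the
// caller enumerates, so filters can be layered without duplication.
public class Customer
{
    public string Region;
    public bool Active;
}

public static class Queries
{
    // Base query: only describes a shape, executes nothing by itself.
    public static IQueryable<Customer> ActiveCustomers(IQueryable<Customer> all)
    {
        return all.Where(c => c.Active);
    }

    // Builds on the base query rather than rewriting its WHERE clause.
    public static IQueryable<Customer> ActiveInRegion(IQueryable<Customer> all, string region)
    {
        return ActiveCustomers(all).Where(c => c.Region == region);
    }
}
```

Against an ORM-backed `IQueryable`, both filters fold into a single query at execution time instead of becoming two separate service methods.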

Aug 12, 2009 at 5:08 PM

I applaud the team for working on this much needed guidance.  I have also read the following with much interest:

http://blog.wadewegner.com/index.php/2009/06/26/architecting-your-data-access-layer-with-the-entity-framework/

Our difficulty comes from determining what the best pattern is when the application must be N-tier, with the Web tier and Application/DAL tier on different servers.  The challenges we are dealing with include:

- What is the scope of the model classes? (Limited to the DAL and BSL?)
- If a WCF layer sits between the Web tier and the Application tier, how best to translate model classes to DTOs?
- If some web services must be made available to the internet, how best to create a WCF facade in the Web tier that basically extends some or all of the services exposed by the Application tier?
- If we use EF, how best to architect things so that the DAL could be swapped out for a different implementation (say, an ADO.NET implementation)? (This became an issue when we ran into disconnected-from-context issues with EF v1.)
- The proper place for queries: does a LINQ query to select data fit into the Application layer as business logic, or does the DAL only expose methods that encapsulate the LINQ to Entities queries? (The blog above discusses this.)
- If the Application layer is allowed to contain LINQ to Entities queries, how would we implement a different DAL, say using ADO.NET? (I'm guessing not very easily.)
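On the model-class-to-DTO question above, the translation at the WCF boundary is often just explicit mapping. A minimal sketch, with all types hypothetical (not from the guidance or the blog):

```csharp
using System.Runtime.Serialization;

// Internal model class: stays in the DAL/BSL, never crosses the wire.
public class CustomerEntity
{
    public int Id;
    public string Name;
    public byte[] RowVersion; // persistence detail the Web tier shouldn't see
}

// Wire shape: only what the Web tier actually needs.
[DataContract]
public class CustomerDto
{
    [DataMember] public int Id;
    [DataMember] public string Name;
}

public static class CustomerTranslator
{
    public static CustomerDto ToDto(CustomerEntity e)
    {
        return new CustomerDto { Id = e.Id, Name = e.Name };
    }
}
```

Keeping the mapping explicit like this also helps with the swap-the-DAL concern: the Web tier only ever sees the DTO, so whether the entity came from EF or raw ADO.NET is invisible to it.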

While we are targeting a single Web tier right now, the hope is that in the future several different UI clients may be supported from the same Application tier, including rich clients and Silverlight. The Application server would typically be located inside the firewall, with no public-facing access to that set of services. Indeed, the best scenario for our application would be a flexible architecture similar to the StockTrader sample application, which allows the app to run in-process (single-server deployment) or distributed (separate Web tier and app servers), depending on configuration settings.

Sep 17, 2009 at 7:02 PM

Our last large app was built using .NET 2.0; after analyzing the existing ORM solutions we made the decision to use bltoolkit.net.

We made this choice because that project was free and not as big and complicated as the others, so we could change it a bit to fit our requirements.

The biggest question was whether we should use DTOs or not.

After some performance tests we realized that, of the options we tried, ADO.NET DataSets could be serialized over the network with the least traffic. So we built our own .NET Remoting channel to convert our objects (data entities) to DataSets and back. Another option was typed DataSets, but to me those look too complicated: the result is the same DataSet with some properties and methods layered on top so you can work with it like an object. I prefer something cleaner.
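The entity-to-DataSet conversion described above might be sketched like this; the `Product` type and column names are invented, and the real Remoting channel logic is of course more involved:

```csharp
using System.Collections.Generic;
using System.Data;

// Data entity as it lives inside the application.
public class Product
{
    public int Id;
    public string Name;
}

public static class EntityTableConverter
{
    // Flatten entities into a DataTable so the channel can ship a compact,
    // serializable payload instead of a full object graph.
    public static DataTable ToTable(IEnumerable<Product> items)
    {
        var table = new DataTable("Products");
        table.Columns.Add("Id", typeof(int));
        table.Columns.Add("Name", typeof(string));
        foreach (var p in items)
            table.Rows.Add(p.Id, p.Name);
        return table;
    }

    // Reverse direction: rebuild entities on the receiving side.
    public static List<Product> FromTable(DataTable table)
    {
        var result = new List<Product>();
        foreach (DataRow row in table.Rows)
            result.Add(new Product { Id = (int)row["Id"], Name = (string)row["Name"] });
        return result;
    }
}
```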

For other projects I used the same data access framework (with our own changes, of course) because a lot of people in other teams already had some experience with it.

Regarding the data access question: my opinion is that we have to separate data entities and business objects. The Data Access Layer must deal only with data entities; some middle-level business logic must know how to validate business objects and convert them to/from data entities. In my view, a system with this type of separation can be really scalable.

In general I know that a lot of architects agree that the best practice is to exchange different objects between layers, because it lets you keep layer-specific logic inside those objects without sharing it with other layers (e.g., the Remoting Facade pattern).

So I'd like to see in the p&p data access guidance a chapter with recommendations about DTOs and object exchange between layers.

 

In my new project I'm going to use the ADO.NET Entity Framework.