I have just posted the 5th installment in the Summer of NHibernate screencast series for general download. This latest session covers the following main topics:
- Controlling transactions in NHibernate (including the Commit() and Rollback() methods)
- NHibernate support for optimistic concurrency using the <version ...> and <timestamp ...> mapping elements
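As a quick taste of the mapping involved, a version-based concurrency setup looks roughly like this (a sketch only; the entity and column names are illustrative, not taken from the session):

```xml
<class name="Customer" table="Customer">
  <id name="CustomerId" column="CustomerId">
    <generator class="native" />
  </id>
  <!-- incremented by NHibernate on every UPDATE and used for the stale-state check -->
  <version name="Version" column="Version" type="Int32" />
  <property name="Firstname" />
  <property name="Lastname" />
</class>
```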
In addition, we further evolve the structure of our NHibernateDataProvider class to solve an insidious bug that crept into the code and resulted in some brittle tests that pass when run by themselves yet fail when run as a group. This gives us a chance to dig deeper into the proper role of the ISession and the ISessionFactory and their relationship to each other in any NHibernate-based DAL.
We also refactor most of our DAL methods to leverage C# using (...) blocks to ensure that we properly close and dispose of our ISessions and our ITransactions (sorry, you VB.NET guys will have to do without since there ain’t no such analog in VB.NET…but you will all have XML literals to keep you warm at night while we C#-heads continue to struggle through with the XML DOM model).
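Sketched in code, the refactored shape of a DAL method looks something like the following (GetSession() and Customer are assumed from the series; the method body is illustrative rather than the exact code from the screencast):

```csharp
public void UpdateCustomer(Customer customer)
{
    // both the session and the transaction are disposed automatically,
    // even if an exception escapes the block
    using (ISession session = GetSession())
    using (ITransaction tx = session.BeginTransaction())
    {
        try
        {
            session.Update(customer);
            tx.Commit();
        }
        catch (NHibernate.HibernateException)
        {
            tx.Rollback();
            throw;
        }
    }
}
```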
We also finally achieve that holy grail sought by all developers who write unit tests: 100% code coverage of our production code. In the process we demonstrate how to write unit tests that properly exercise code paths that lead to failure in addition to those that lead to success; both are equally valuable in ensuring that our tests correctly exercise our code.
As always, any comments, feedback, etc. are welcome.
I’m nice and toasty thanks
Public Sub UpdateCustomer(ByVal customer As Customer)
    Using session As ISession = GetSession()
        Using tx As ITransaction = session.BeginTransaction()
            Try
                session.Update(customer)
                tx.Commit()
            Catch generatedExceptionName As NHibernate.HibernateException
                tx.Rollback()
            End Try
        End Using
    End Using
End Sub
Excuse the nasty code formatting, dunno what happened there. But Using blocks arrived with .NET 2.0 for VB.
I’ll be darned — I stand (sit) corrected. How about that? Now we’ll just have to look longingly at the XML literals that C# is missing and keep ourselves happy about the ‘purity’ of our language even if its efficiency isn’t up there 🙂
Thanks for the clarification — I have now learned my one new thing for the day~!
If it helps you maintain a smug glow, you’ve still got anonymous delegates and yield break: two language features omitted from VB 8 & 9, much to my annoyance.
I have a basic question about what NHibernate does under the hood. Assume I have a parent object with a children collection in it, and both parent and child have their own mapping files. When I add ‘n’ new objects to my collection and save the parent object, what happens?
A] ‘n’ separate INSERT queries are executed (maybe 2*‘n’ if the id is generated through a sequence)
B] Only one batch query (PL/SQL) is executed which takes care of all the inserts.
I need to understand this because an update with a huge number of new objects will result in a lot of round-trips.
Parent and child mappings (and their relations) are actually a topic for the upcoming (next) session of the screencasts. For now, more info can be found in the NHib docs under this topic:
As for batch operations (and a number of other ‘under the hood’ questions), see this section of the docs:
This goes into several ‘under the hood’ optimization strategies as well as listing a number of limitations, etc. that you need to be aware of.
Specifically, batch updates are mentioned here:
Per this section, batch updates are only supported under MS SQL Server as of 1.2, so ORA is out of luck.
Do note though that a strategy might be employed that allowed NHib to invoke your PL/SQL sproc to do batch updates since NHib can be configured to use sprocs to do its insert, update, delete, etc.
…for more info. This section covers using sprocs for a SINGLE insert, update, delete rather than batching so some experimentation may be needed.
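As a sketch of what that configuration looks like in a mapping file (the sproc name here is hypothetical, and the positional ? parameters must line up with the mapped columns; check the docs for the exact ordering rules):

```xml
<class name="Customer" table="Customer">
  <!-- ...id and property mappings... -->
  <!-- route the single-row INSERT through your own stored procedure -->
  <sql-insert>exec sp_InsertCustomer ?, ?, ?</sql-insert>
</class>
```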
To see if anyone else using ORA (or another platform) has attempted anything like you are suggesting, take a look @ the NHib resource links on the LH side of the http://www.summerofnhibernate.com site and ask in some of the forums/newsgroups listed there.
Hope this helps.
Thanks a lot. I will dig into this information and will let you know.
A quick question about the SaveOrUpdateCustomersThrowsExceptionOnFail method. Wouldn’t it be better to move the list count assertion into the catch block and then rethrow the exception? Then you could do an Assert.Fail() after the catch block and verify that both the rollback occurred *and* the proper exception was thrown. As it stands, the test would pass if the rollback occurs but no exception is ever thrown.
[Test]
public void SaveOrUpdateCustomersThrowsExceptionOnFail()
{
    // lots of code here…
    int testListCount = // long line of code…
}
Great point; I think you’re absolutely spot-on with that observation. Score 1 pt for pair programming (or 2 pts so that we can split it and each take 1 pt apiece).
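The restructuring being suggested might look roughly like this (a sketch only; _provider, badCustomers, and GetCustomerCount() are placeholder names rather than the actual code from the session):

```csharp
[Test]
[ExpectedException(typeof(NHibernate.HibernateException))]
public void SaveOrUpdateCustomersThrowsExceptionOnFail()
{
    int countBefore = GetCustomerCount();
    try
    {
        _provider.SaveOrUpdateCustomers(badCustomers);
    }
    catch (NHibernate.HibernateException)
    {
        // verify the rollback occurred, then rethrow so the
        // expected-exception check still sees the failure
        Assert.AreEqual(countBefore, GetCustomerCount());
        throw;
    }
    Assert.Fail("Expected HibernateException was never thrown");
}
```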
About the “Coverage”: we have to have 100%. For a query you use a transaction too, so how can I have 100% coverage with a query? I have to force it to crash in another test method, but how? I do this:
Convert.ToDouble(“HELLO”); in the try section, but is it the right way? Thanks.
That’s certainly one way, but wouldn’t an easier one be to just explicitly THROW the exception you’re interested in raising…?
e.g., throw new NHibernate.HibernateException();
One question regarding concurrency.
1) In your example you used one row from the Customer table (id 1) to create 2 Customer objects.
2) You then update c1.Firstname and c2.Firstname
3) Then try to update them both. The exception is thrown because of the stale state of c2.
Question: Can concurrency checking take into account the fields being updated, and allow an update if the fields being changed are still the same?
e.g. c1.Firstname was updated BUT THEN c2.Lastname
I’m guessing the answer is to handle this in the application: catch the stale-state exception, compare the fields, etc., and maybe make another Customer object which then becomes the updated version?
NHib’s internal support for dirty-checking is limited to the ‘object’ level by default (e.g., the session knows the object/entity is ‘dirty’ but not which fields on the entity are causing it to be in a dirty state).
Because of this, when the UPDATE statement is fired off, you will note (by inspecting the SQL statement with SHOW_SQL=true, in SQL Profiler, or with another tool) that EVERY value is passed back to the DB rather than just the fields that changed (e.g., you will get UPDATE customer SET firstname=’steve’, lastname=’bohlen’, etc. even if only the firstname field has changed).
This is the default behavior, but it can be overridden to send just the known-changed field values back to the DB if that’s what you want; the exact setting in the mapping escapes my recollection right now, but it’s there; go hunt through the docs and you will find it. I’m pretty sure that for this to work NHib has to expend more overhead client-side checking each field to see if it has changed, and also hang onto a SECOND copy of the object in order to have something to compare each field to, so there is additional memory overhead on large collections; this is why the ‘send-all-values-back-to-db-on-update’ approach is the default behavior.
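For reference, the mapping setting alluded to above is most likely the dynamic-update attribute on the class element; a sketch (class and table names illustrative):

```xml
<!-- dynamic-update="true" makes NHibernate emit UPDATE statements
     that contain only the columns whose values actually changed -->
<class name="Customer" table="Customer" dynamic-update="true">
  <!-- ...id and property mappings... -->
</class>
```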
There’s also more on these trade-offs in the (excellent) ‘Hibernate in Action’ book from Manning. Even though it’s about Hibernate rather than NHibernate, it is still a tremendously valuable reference in re: real-world implementations, and until *N*Hibernate in Action is finally published it’s hands-down the best book out there for stuff like this if you cannot find good answers on the web.
Hope this helps~!
I’ve seen you open and close the session in your data provider on each method, something Ayende qualified as “worst practice” (http://www.ayende.com/Blog/archive/2008/07/24/How-to-review-NHibernate-application.aspx). So, what’s your opinion on that?
See later sessions where this pattern is changed 🙂
The intent of the screencast ‘series’ is that the sessions are all meant to be taken as a whole rather than any one taken ‘out of context’ from the middle. I am intentionally taking the viewer through an evolutionary process that starts ‘simple’ and evolves to introduce more complexity. I admit I was never clear about that, but it wasn’t my intention that a viewer would take what was done in the middle and attempt to implement it in a real-world application as-is 🙂
Specifically to your point about session lifespan, I completely agree with Oren’s recommendation. I wanted to demonstrate in the beginning a pattern (open-session-per-data-access-method) that works, and then take the viewer through understanding where it’s limiting, rather than just stating “this is bad” or “don’t do that” without first laying a proper foundation for the viewer to understand WHY something is a poor design choice.
Later on in the series we see how this pattern tends to prevent us from reaping any of the benefits of lazy-loading, etc., and so we refactor our code to introduce a longer-lived session-lifespan approach that mitigates this limitation.
Hope this makes the process more clear.
I’m learning NHibernate and have a question regarding your slide “The Real NHibernate Transaction Pattern”
You show using a try/catch block with the tx.Rollback in the catch block. The pattern used by System.Transactions is that the Dispose() of the transaction causes any transactional work not voted on to roll back, so typically you don’t need the try/catch unless you intend to do some logging or other behavior.
I used Reflector, and the NHibernate transaction simply wraps an ADO.NET transaction. Isn’t the use of the try/catch to roll back pointless, since the rollback will happen with the Dispose anyway? Also, you are catching only NHibernateException. If another kind of defect causes a different exception type, the rollback will still occur (on Dispose), but people might put logging in that catch block and then fail to log the other types of issues that can cause a rollback.
Am I off base? Thoughts?
Those are all good points (and quite correct). The reason for the try/catch pattern is indeed that you usually want to do something else in addition to just the Rollback() in the catch, although the samples I have provided don’t show that. For example, I would usually rethrow a new PersistenceException() from there that wrapped the original HibernateException, so that callers of my data access code wouldn’t need to interpret the gory details of a HibernateException.
As to catching only HibernateExceptions rather than ALL exceptions, this comes down to a philosophy about why/when to catch which types of exceptions in your code. There are about 1000 different opinions on the matter, and the one that I tend to subscribe to is “don’t bother catching something (at a low level) unless you are going to do something useful with it”. In these samples the only exception ‘worth catching’ is a HibernateException, since that’s the only meaningful exception my code can react to/correct for with a Rollback(). Any other type of exception would (and should, IMHO) be allowed to bubble up to a higher layer in the app for eventual handling (like logging, etc.) up there.
You’re not at all off-base, in either of your observations 🙂
I agree with all your comments. I don’t think most developers have internalized when to use try/catch; that is another discussion entirely! I guess in a perfect world the code in the slide would read:
catch (NHibernate.HibernateException ex)
{
    // Do something meaningful with ex here, or remove the try/catch block
}
I think developers gloss over dispose semantics because they tend to be poorly documented for almost all libraries.
Thanks for putting together this great series! Your presentation style is just the right mix of concepts and code for learning NHibernate.
Oh, after I hit submit on my last post I wanted to say one more thing.
I use a layered, policy-driven approach to catching exceptions. I put exception handling only at the architectural layers in my application and delegate all handling to an exception-policy component that is configuration-driven. That way the exact behavior is determined by the policy and not the code.
I feel this is a best practice. But again, I digress… another discussion entirely.
I agree on all points; I think the main thing that I may not have made entirely clear to everyone is that the code samples that come out of these sessions are NONE of them literal recommendations about how to EXACTLY best implement NHibernate. Instead, they are of course contrived code snippets designed to showcase/demonstrate a particular feature of the library that I’m interested in focusing in on.
I would hope that most people wouldn’t copy these in their entirety and attempt to insert this exact code or these EXACT patterns into their own work without some serious consideration as to their appropriateness for their own situation.
This screencast wasn’t intended to be necessarily an NHibernate-best-practices guidance set but rather a ‘here is how the features work, choose for yourself what implementation makes the most sense for your application architecture’.
The way the internet has spawned all these google-enabled clipboard-copy-paste developers, I suppose it’s not all that unexpected that people might try to take EXACTLY what they see in these screencasts and implement it literally in their application(s).
There are (as you point out) all kinds of best-practices guidance that I am either skipping entirely, touching only briefly upon, or intentionally conflicting with in the interest of brevity. Your approach of exception-handling at layer boundaries via configurable policies sounds very interesting, but it would obviously overcomplicate the content of these screencasts to dig into such an approach (or any other) in any detail 🙂
Thanks for the feedback; I’m glad you’re finding value in the content.
If you have parent/child classes and the IDs are populated by a sequencer, how do the children get the correct parent ID on an insert of parent and children if the parent ID hasn’t been generated by the time of the session.SaveOrUpdate(parent) call?
My child map contains the parent id as a property:
My parent map references the child as a one-to-many relationship:
Am I missing anything? The insert for the child always fails because the Parent ID is left as a 0 and referential integrity is violated in the database.
In the one-to-many relationship you describe, which end is set to inverse=”true” and which to inverse=”false”?
Although I didn’t really touch on this level of detail in the screencast, try this link for a thorough discussion of this attribute in the mapping (the link is for Hibernate, but the principle is the same in NHibernate as well)…
Also try this link (for NHib-specific view of the attribute’s use)…
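For the symptom described (the child INSERTed with a parent id of 0), the usual shape of the fix is to mark the parent’s collection inverse and map the relation on the child as a many-to-one rather than a plain property; a sketch with illustrative names:

```xml
<!-- parent side: inverse="true" tells NHibernate that the child's
     many-to-one association owns (and writes) the ParentId column -->
<bag name="Children" inverse="true" cascade="all">
  <key column="ParentId" />
  <one-to-many class="Child" />
</bag>

<!-- child side: a many-to-one reference instead of a plain ParentId property -->
<many-to-one name="Parent" column="ParentId" class="Parent" />
```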
You demonstrate some examples of optimistic concurrency usage. Are there any reasons why you chose not to use pessimistic concurrency examples?
Also, if you want to perform an “NHibernate transaction” that spans more than one database, is this possible? I’m thinking not, due to the fact that the SessionFactory-to-database relationship is 1:1.
Pessimistic concurrency is supported in NHibernate for SOME platforms, but as a concurrency strategy it scales terribly, so in nearly all cases (using NHib or not) you want to choose an optimistic concurrency strategy; that’s why I didn’t bother to provide an example of pessimistic locking. Just FYI, MS SQL Server doesn’t offer the SELECT … FOR UPDATE construct that other RDBMSs like Oracle use for this.
re: transactions that span multiple databases, you are correct that this is made difficult by the 1:1 relationship between a SessionFactory and a database (or at least a connection string). You can (with some work) develop a transaction that spans multiple databases at a layer ABOVE NHibernate, but in those cases you really have to develop your own rollback logic that (typically) involves one or more compensating transactions (a complex topic to dig into).
We are attempting to perform optimistic concurrency using .
However when we profile the database, two separate calls seem to be made:
1. RPC:Completed UPDATE is executed (actual update)
2. RPC:Completed SELECT is executed (select on the new version timestamp generated by db)
Is there a way to get these batch executed i.e. one call to the database?
Is the way that NHibernate performs this inefficient?
The pair of calls is (of course) required for NHibernate (or any data-access technology using timestamps for concurrency support) to get its own copy of the object’s timestamp field updated with the new value from the DB when the UPDATE is successful.
NHibernate *does* support batching calls to the database, but only for MS SQL Server at this time (AFAIK). Because that support is DB-specific while NHibernate is DB-agnostic, it’s turned OFF by default even if you select one of the MSSQL dialects in your config file.
For more info on BATCHing during UPDATE calls, see…
For more info on BATCHing during SELECTs, see…
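For MS SQL Server, the switch that enables batching is the adonet.batch_size property in the NHibernate configuration; a minimal sketch (the value 25 is only an example):

```xml
<property name="adonet.batch_size">25</property>
```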
Hope this helps~!
I was just wondering why we can’t use TransactionScope instead of ITransaction.
I am new to NHibernate. I was just wondering about optimistic concurrency using a version column as Int32: if the version number is incremented after every update, wouldn’t it someday overflow the limit? Pardon me if the question sounds stupid.
Is there a specific reason you leave session.Flush() in even though you have tx.Commit()? I thought committing the transaction does both.
Actually, yes there is, and good question. The tx.Commit() will *sometimes* flush the session; it depends on the setting of Session.FlushMode. It’s true that the default FlushMode for the session will flush it on tx.Commit(), but I’ve found it dangerous to make the *assumption* that it will in my data access methods. While one approach would (perhaps) be to check the FlushMode setting on the ambient ISession before ‘bothering’ to call Flush(), it’s also the case that if there aren’t any changes pending an ‘extra’ flush isn’t a perf hit, so I am in the habit of calling ISession.Flush() explicitly when I *really* want to ensure it has actually happened (even though, obviously, the changes aren’t permanent until the call to tx.Commit()).
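Sketched out, the habit described above looks like this (GetSession() and the customer entity are assumed from the series):

```csharp
using (ISession session = GetSession())
using (ITransaction tx = session.BeginTransaction())
{
    session.SaveOrUpdate(customer);
    session.Flush();  // push pending changes to the DB explicitly,
                      // rather than relying on FlushMode at Commit()
    tx.Commit();      // make the flushed changes permanent
}
```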