After a slight delay to give myself time to properly set up some of the infrastructure needed to better distribute these screencasts to everyone, I am pleased to announce the immediate availability of the fourth installment in the series (oddly enough entitled ‘Session 03’ for reasons explained here).
In this session we start to dig into patterns for using NHibernate to modify the data in our sample database by issuing INSERT, UPDATE, and DELETE commands against it.
As this session also introduces the first of our unit tests that exercise methods that actually make changes to the data in our test database, we spend some time up front on unit test infrastructure work. This work is needed to wire up some custom tooling that we use to ensure that the database is in a known state both before and after each of our tests is run. While this part of the session isn’t per se purely about NHibernate, hopefully anyone doing data-dependent unit tests will still find it of interest to see how our company has addressed the challenges of database state-management before, during, and after unit tests are run against the database.
Anyone not interested in seeing how we do this is welcome to fast-forward past the first 15-20 minutes of this screencast.
Astute long-time readers of my blog will note that the techniques presented in this screencast for dealing with data-dependencies in unit tests are a more advanced and better-integrated way of leveraging the NDbUnit framework, very similar to what I described in these prior posts of mine from way back:
- Inside the Sausage Factory: PART 17 (Approach to testing the DAL)
- Inside the Sausage Factory: PART 18 (Building Data-dependent Unit Tests)
- Inside the Sausage Factory: PART 19 (Coding the Data-dependent Unit Tests)
Thankfully, the download for this session is now available as a single AVI file from the newly-launched www.SummerOfNHibernate.com site (where you can also find an RSS feed for the videos which will further ease access to the content).
As always, comments, feedback, etc. are welcome.
Hi Steve, great series. But thought I’d mention that I get an odd unit test failure with the CRIT_CanGetCustomersOrderedByLastname test. Basically debugging it shows the first customer as SteveSUFFIX Goodbye and the second as Steve Goodbye, suggesting that the teardown didn’t clear the db of prior unit test changes. But on inspection of the db during test debug this is not the case, so I can only guess that NH’s caching is failing, or that NDbUnit is not a good way to test NHibernate as it bypasses NHibernate’s caching (i.e. NH doesn’t know a change has been rolled back). I’m new to NH so not sure how to debug further.
Would it not be easier just to wrap the tests in a transaction and roll that back to preserve the db state?
Ah yes, would I be correct in saying that you’re violating the Unit of Work principle by actually reusing the session via setting your _provider in the TestFixtureSetup rather than in Setup, and hence NHibernate’s cache state is not in a known state between tests?
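For anyone unfamiliar with the transaction-per-test idea suggested here, a minimal sketch using System.Transactions and NUnit might look like the following. The fixture name is hypothetical and this is not the approach taken in the screencast; it simply shows what "wrap the test in a transaction and roll it back" usually means in practice.

```csharp
using System.Transactions;
using NUnit.Framework;

[TestFixture]
public class TransactionPerTestSketch
{
    private TransactionScope _scope;

    [SetUp]
    public void SetUp()
    {
        // Open an ambient transaction; any connection opened during the test enlists in it.
        _scope = new TransactionScope();
    }

    [TearDown]
    public void TearDown()
    {
        // Complete() is never called, so disposing the scope rolls the transaction back
        // and discards whatever the test wrote to the database.
        _scope.Dispose();
    }
}
```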
@Michael:
Hey, no fair 😛
That ‘oversight’ was intentional: it was purposely left in place to set up some refactoring in the next session, where we can use it as a reason to begin to understand both NH’s internal session-caching and NH second-level caching via plug-ins.
You win a special-mention for pre-identifying the bug (intentional design flaw, actually) that we plan to address in the next session — nice spotting work~!
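For readers following along at home, here is a minimal NUnit-style sketch of the distinction Michael is pointing at. The NHibernateDataProvider constructor shown here is hypothetical shorthand for the provider class built in the series; the point is only where the provider (and therefore its ISession) gets created.

```csharp
using NHibernate;
using NHibernate.Cfg;
using NUnit.Framework;

[TestFixture]
public class DataProviderCachingSketch
{
    private ISessionFactory _sessionFactory;
    private NHibernateDataProvider _provider; // hypothetical provider type from the series

    [TestFixtureSetUp]
    public void TestFixtureSetUp()
    {
        // Building the (expensive) session factory once per fixture is fine.
        _sessionFactory = new Configuration().Configure().BuildSessionFactory();

        // But constructing the provider here, as in the screencast, means every test shares
        // a single ISession, so its first-level cache carries state from test to test:
        // _provider = new NHibernateDataProvider(_sessionFactory.OpenSession());
    }

    [SetUp]
    public void SetUp()
    {
        // Constructing the provider per test gives each test its own session (and cache).
        _provider = new NHibernateDataProvider(_sessionFactory.OpenSession());
    }
}
```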
Thanks for this series. It’s been great so far.
Just a small personal opinion. I find the screencasts a bit slow and repetitive sometimes. It’s a long screencast and the slowness gets me a little distracted/bored (maybe it’s just me, I don’t know).
@alberto:
This is great feedback, thanks for the suggestion.
As a teacher, I think one of the serious drawbacks to a no-audience teaching experience is that it’s a bit like a one-way conference call: no way to see from the expressions on the faces of the audience that they either ‘get it’ or are completely lost 🙂
This is further complicated by the fact that I don’t (and really can’t) have a sort of ‘prerequisite skills’ list that someone needs to be sure to have before they watch (and ‘get’) these screencasts. I’ve sort of set the bar at ‘capable C# dev with some ADO.NET experience’ but beyond that it’s kind of a crap-shoot.
For example, I’ve received comments that range from “thanks for showing me how to do unit testing in a more real-world way” to “you’re not doing TDD properly” (both of which are, frankly, accurate observations).
Thus far, nobody else has offered your specific comment (too slow/repetitive in places) but I’m more than open to ‘speeding things up’ and not belaboring the simpler points if others have similar thoughts; I’m trying to straddle a really blurry line between providing enough useful content for everyone and not boring the more advanced audience members, but this is REALLY hard when I can’t see the facial expressions on the ‘crowd’ 🙂
That said, what do others think? Should I ‘speed’ things up or stay at the present pace on the content…?
Hi, did you notice that in the first test you write in this screencast you are inserting a new object and then retrieving it, pretending it was retrieved from the database? You are not actually going to the db unless you evict that object from the current session. You should definitely have explained that. And another thing: it took you a lot of time to set up the test data with ndbunit; you could have saved time by starting a transaction in each test and then rolling it back.
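A minimal sketch of the point raised here, assuming the series’ mapped Customer entity and an already-open ISession (Evict and Clear are the standard NHibernate calls for pushing an object out of the session’s first-level cache; the class and method names below are hypothetical):

```csharp
using NHibernate;

public static class SaveThenReloadSketch
{
    public static Customer SaveAndReload(ISession session, Customer newCustomer)
    {
        object id;
        using (ITransaction tx = session.BeginTransaction())
        {
            id = session.Save(newCustomer);
            tx.Commit();
        }

        // Without this, Get() just hands back the same in-memory instance from the
        // session's first-level cache and never round-trips to the database.
        session.Evict(newCustomer); // or session.Clear();

        return session.Get<Customer>(id);
    }
}
```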
@Simone:
Good point about that first test; thanks for mentioning it — nice catch. I’m pretty sure that the subsequent refactoring of the data provider class in later sessions should have now led to that test performing the way it’s ‘expected’ to, but I will make a point to return to the code and ensure I’m recalling this correctly.
Re: using transactions to roll back DB changes during unit tests, I am generally NOT predisposed to trust such a thing as it’s both out of my direct control (e.g. relying on the DB and/or ADO.NET to enforce proper roll-back semantics) and also brittle (in my experience a test that fails in an unpredictable way will often leave me with the DB in an inconsistent state and then I’m stuck going back to restore a backup).
The other reason that I prefer the NDbUnit approach implemented in the screencast is that part of its job is to ensure that KNOWN data is present in the DB when the tests are run. Using this approach I can have all kinds of other (non-test) data in the DB when I want to run my tests, get the expected test-data loaded into the DB, run the tests, and then ‘reset’ my DB to its pre-test-run state so that all my other non-test data is returned at the end.
This approach allows us to have developers run tests against the DB on their local PC without compromising any other data that they may have in the DB to support their other development tasks on the same project. This way, if the test expects 3 orders in the DB but the dev has 22 orders in their DB because they’ve been developing the part of the UI that adds orders, NDbUnit will reset the DB to contain the expected 3 orders, run the tests, and then return the prior 22 orders to the DB so the dev can go on about their work with no ill effects.
Since NDbUnit is already involved in setting up the test environment (DB) before tests, it only makes sense to me to allow it to also reset the test environment after tests instead of relying on other methods (transactions) to achieve the same thing even if they were reliable.
Hope this helps.
“(in my experience a test that fails in an unpredictable way will often leave me with the DB in an inconsistent state)”
As far as I know transactions conform to the ACID principles; therefore I’m pretty sure that if your test fails then the transaction is not committed, since you never commit it explicitly, and the state of your db remains consistent.
About using ndbunit, I see no difference between setting up test data in a local db the (silly) way ndbunit does it and doing so via other means, but this has been discussed thoroughly in other circumstances and I guess it ends up being a matter of personal taste. Personally, I prefer the fresh-db approach, using an in-memory sqlite database.
@Simone:
I agree (conceptually) with your comment about ACID transactions being supposed to work properly — it’s just that this hasn’t always been my experience in some corner cases, and that’s how I ended up not relying on them in all unit-test-related cases. As a simple example (though admittedly a sharp corner case), I’ve had VS actually crash on me during a unit-test run.
Re: there being other methods to set up your DB than NDbUnit, I 100% agree. In the past I’ve used approaches that run the gamut from in-memory SQLite/SQLCE, etc. as you suggest to scripting a complete backup and restore before and after the tests — all work pretty well, but all of them (including my demonstrated approach using NDbUnit) represent trade-offs between pros and cons.
For example, the in-memory SQLite approach you suggest doesn’t help me at all for integration-testing against Oracle DBs or other non-MS-SQL implementations. Since we tend to do a lot of work against spatial/GIS data-stores, we run into Oracle as our back-end DB fairly often. AFAIK, there isn’t any in-memory version of Oracle that I can leverage to use the kind of approach you’re advocating, and I would rather not have one approach to unit-testing against one vendor’s DB technology and another approach to unit-testing against another’s if I can avoid it — and NDbUnit lets me use a single approach no matter what the DB target for the project is.
I think it’s *sort of* personal preference, so long as your personal preference works for *all* of your real-world conditions; it’s just that an in-memory SQLite approach doesn’t work for all of mine, and that’s how I ended up with NDbUnit.
Inline now.
I also agree that the screencasts are a bit too slow moving. Don’t go all turbo on us like Oren in the Rhino Mocks screencasts (did he just create a new class, mock, and test it with 3 keystrokes?), but speeding things up a bit couldn’t hurt.
@George:
OK, that’s a second vote for ‘faster’ so I will consider it seriously.
Don’t fret though, as I assure you that I am in no danger of out-coding Oren anytime soon. While I might aspire to be as prolific a developer as he is, the sad truth is that I spend waaaaay too much of my day-job managing projects and development teams to ever hone my skills to that level — it’s just been entirely too many years since I spent that kinda time day-in, day-out doing actual coding work.
That said, I’ll try to step it up in places a bit; thanks for the feedback.
Hello,
If I am using SQL Server and I want to make use of SqlBulkCopy, whereby I pass it a DataTable and a destination table, is it possible to achieve this using NHibernate?
TIA
Dr Y Arezki
@Yazid:
Sort of. Google on “NHibernate + BulkInsert” and you will get plenty of hits.
Also investigate this…
http://www.ayende.com/Blog/archive/2006/09/16/7269.aspx
Note that the (limited) support that NHib has for bulk ops is restricted to MSSQLServer, AFAIK.
Hello there,
I had already seen that link; the batch size he has set up is in the config and the values that he has used are very small. In fact I have seven DataTables which contain half a million records each. So I was just wondering whether, using just NHibernate, it is possible to use the SqlBulkCopy which comes with SQL Server.
TIA
Dr Y Arezki
A fantastic series and I wanted to thank you for the material — I do agree with a couple of others that the screencast could be sped up a bit.
One thing I’m not finding however, is the source code for the Microdesk.Utility.UnitTest library. Related, and something I’m also not finding, is the source code for the NDbUnitUtility class used in particular with Inside the Sausage Factory: PART 18.
I really like this approach utilizing NDbUnit for the data-dependent unit tests, but am struggling to follow along without being able to view the source of NDbUnitUtility and the Microdesk.Utility.UnitTest library, and just viewing what’s exposed via Reflector.
Am I just missing the source for these somewhere in the downloads? Ideally, an additional link to the source code for these utilities/libraries would be great!
Thanks,
–Scott Wade–
@Scott:
Thanks for the feedback; I’m glad you’re finding value in the content.
I’m not releasing source for the Microdesk.Utility.UnitTest.dll as that’s not (presently at least) open source — it’s something that I wrote explicitly for my company and so it’s not really public domain as source, even though I have said clearly in other posts here that people are free to use the BINARY version of it (as provided in the code downloads) as they see fit.
As to the NDbUnit source, that project actually IS open source and so the source for this version of it is going to be available soon. Recently, after some neglect, the project was moved to Google Code (http://code.google.com/p/ndbunit/) and I have been made a committer on the project.
Over the past year or so I have made mods to the code and also collected code updates from others who have been involved in patching the code but couldn’t merge their changes into the main line since the project went completely quiet on the ‘main’ website (www.ndbunit.org).
Now I can (finally) fold all of these changes (fixes for FK ref integrity problems, etc.) back into the main release. Once that’s done (end of the month perhaps?), you will find links to it on my blog and you can visit Google Code and retrieve the source from there.
However, it’s not at all clear to me that lacking the source for these should be preventing you from following along in the videos — you should of course be able to just use these libraries as demonstrated in the video(s) without needing to step into the source code unless you are actually experiencing some bug/issue with either of them (in which case I would encourage you to report that to me so that I can address it).
Hope this helps~!
Hi Steve,
thank you for the great screencast and I really appreciate your time.
the videos are great and it’s very, very clear, but I should say that I got lost when you started adding the base class: Microdesk.Utility.UnitTest.DatabaseUnitTestBase
I have a few questions on the unit tests:
1) what does that base class look like?
2) do you really have to have Microdesk.Utility.UnitTest.DatabaseUnitTestBase in order to call DatabaseSetup and DatabaseTearDown?
is it possible to provide the base class source code?
thanks again for your time.
@Nisar:
This is where we run into some trouble with the fact that these videos are pretty strongly targeted first for internal consumption by our own staff and only secondarily for viewing by the community at-large.
Obviously our own internal staff has already rec’d training on the use of the DatabaseUnitTestBase class to control database setup and teardown before and after test-runs whereas no other viewer would have rec’d the same training, of course.
I’ll try to explain as best I can…
1) The DatabaseUnitTestBase class is a thin wrapper around a bunch of the methods in the NDbUnit library and really doesn’t add a lot of value except insofar as it wraps this stuff neatly up into simple calls like DatabaseSetup() and DatabaseTearDown().
The links in the above post to parts 17, 18, and 19 actually show how this same technique was used by leveraging ‘naked’ NDbUnit calls without the convenience wrapper of the DatabaseUnitTestBase class around them.
2) These methods are provided in the DatabaseUnitTestBase class and so, strictly speaking, the answer to this question is ‘yes’: you need the base class to have these methods.
But as mentioned above, the actual capabilities invoked here are nothing more than a sequence of public methods provided by NDbUnit and so if you familiarize yourself with the use of that library, you could accomplish the same thing as this base class by either writing your own base class or simply calling each of the NDbUnit commands at the appropriate points in your testing. NDbUnit offers a more granular API than I think is most often needed and so where it needs three to four method calls to reset your database, I wrapped that in the single DatabaseSetup() method on our base class, for example.
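For the curious, here is a rough sketch of what such a wrapper can look like, written directly against what I understand to be the public NDbUnit API of that era (SqlDbUnitTest taking a connection string, ReadXmlSchema, ReadXml, PerformDbOperation). The class name and file paths are hypothetical, and the real Microdesk base class does more than this (for example, snapshotting and restoring any pre-existing data), so treat this only as an illustration of the idea:

```csharp
using NDbUnit.Core;
using NDbUnit.Core.SqlClient;

// A minimal stand-in for a DatabaseUnitTestBase-style wrapper (hypothetical; not the Microdesk source).
public abstract class SimpleDatabaseTestBase
{
    private INDbUnitTest _database;

    // Derived fixtures supply their own connection string, dataset schema, and test-data XML.
    protected abstract string ConnectionString { get; }
    protected abstract string SchemaFile { get; }   // e.g. "Database.xsd"
    protected abstract string TestDataFile { get; } // e.g. "TestData.xml"

    protected void DatabaseSetup()
    {
        _database = new SqlDbUnitTest(ConnectionString);
        _database.ReadXmlSchema(SchemaFile);
        _database.ReadXml(TestDataFile);

        // Wipe the tables and insert the known test data, preserving identity values.
        _database.PerformDbOperation(DbOperationFlag.CleanInsertIdentity);
    }

    protected void DatabaseTearDown()
    {
        // Remove the test rows so the database is left clean for the next run.
        _database.PerformDbOperation(DbOperationFlag.DeleteAll);
    }
}
```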
Re: the source for the base class (and the whole Microdesk.Utility.UnitTest.dll), as I’ve mentioned in other comments here that’s not entirely my choice to make (see comment 17 here for example) but I will see what my employer says to a request to do so.
Hope this helps~!
Steve,
thanks for answering my query.
what do you have to say about the ADO.NET Entity Framework? do you think NHib will be replaced in the future?
just curious to know…..about you.
do you code or teach?
how long have you been working with NHib?
thanks again, you’re doing an excellent job…..again, excellent 🙂
-N
@Nisar:
Without starting another EF v Nhib flame-war, let me offer my *opinion* on EF v1: DON’T.
The first release of EF has so many shortcomings (ranging from issues with its lazy-loading to no support for SCC merges of your EDM, etc., etc.) that are better solved by other existing software (NHib, but many others too) that unless you are in an all-MS shop (read: no OSS tools allowed), I would avoid EF until v2 (or v3), when they have promised to solve some of these issues (to varying degrees).
As for it (ever) replacing Nhib (e.g., like NHib ceases to exist), I don’t see this happening any time soon — just like the Logging Application Block in the Enterprise Application Block stuff didn’t ‘kill’ Log4Net, NLog, or others I think there is enough difference between Nhib adopters and EF adopters in their approaches, concerns, focus, and other factors that there will remain NHib adopters for quite some time to come. If I didn’t think this, I’d be doing screencasts on EF 🙂
Re: myself, I am currently a Senior Project Manager and Solutions Architect for Microdesk, Inc. in New York City (http://www.microdesk.com) where I lead teams of developers in what are typically consulting-style engagements to produce custom enterprise software solutions based on .NET, sometimes focused on general business needs but often focused on CAD and GIS technologies and how these integrate into the enterprise.
In that role, I neither code *nor* teach 🙂 but spend a good deal of time mentoring staff in development methodologies (Agile, SCRUM, etc.), software architecture concerns, and improving techniques for efficient and effective software development. I’m one of the senior developers and generally the go-to guy for issues of software architecture challenges when they arise in a team. I also spend a significant amount of time focused on how we improve our process and procedures to steadily increase our efficiency and our effectiveness as a group. I have a strong focus on the BUSINESS of software solutions consulting as well as the software side of that.
I’ve been using NHib for about three years now, but the story I tend to share is that it took me two aborted tries over a year to teach myself about it before I was successful at learning it, and that’s one of the reasons that I started this screencast series and have been making it available to the community at-large; the available resources to learn this stuff on your own tend to be scattered across the ’net, poorly organized, and often assume you are already aware of all of the CONCEPTS behind NHib when you start to try to read thru most of them.
Thanks again for the feedback.
Understood on the Utility class, and as I’ve gone through the NDbUnit source I’ve got a better understanding of how you may be “library’ng” up the functionality within it with your proprietary code (some good info around it in some ‘Sausage Factory’ posts as well). Glad to hear NDbUnit is getting some attention; I will indeed start following that and contrib if/where I may. I too failed to find any original contact information for that project.
Your further explanations here were very beneficial! Didn’t mean to imply I was stuck following the videos, just natural developer instinct to want to see code while proceeding. 😉 Very well stated on the EF front, and again thanks for this series!
@Scott:
Sorry if I misunderstood — that’s partly why I couldn’t understand how you might be ‘stuck’ 🙂
I appreciate the curiosity and DO actually expect to be able to post the source code for at least the main parts of the Microdesk.Utility.UnitTest.dll so that the curious can get a better sense of what’s going on inside it.
Question regarding deletes – if I have a large number of objects to delete, do I need to grab them from the database before I can delete them? Or is there another way to delete that relies on criteria rather than object identity without requiring me to select my data before deleting it?
@Erik:
This is indeed one of those areas (speedy mass changes) where OR/M technology in general tends not to be able to (easily) keep up with the performance of ‘raw SQL’ approaches to DML commands.
The only ‘efficient’ way to accomplish mass updates/deletes without retrieving all of the data to your local (client) computer using NHib is to pass literal SQL via the ISession.CreateSQLQuery(…) command that exists for just this purpose.
Using the .CreateSQLQuery(…) method you can just pass “DELETE CUSTOMER” and nuke the contents of your CUSTOMER table in one shot or something like “UPDATE CUSTOMER SET ACTIVE=TRUE” to make all of your CUSTOMER rows ‘active customers’. Without this ‘shortcut’ approach, you would indeed need to retrieve every CUSTOMER object, call ISession.Delete(…), etc. on each one in a foreach loop and then flush the session/commit the transaction to get these changes pushed back to the DB (clearly much less efficient).
GENERALLY you want to avoid using .CreateSQLQuery(…) for all but those things that just cannot be efficiently accomplished without it, since passing literal SQL is a non-DB-neutral way to communicate with your DB and will tend to tie you to a specific DB vendor’s ‘special SQL syntax’ (e.g., T-SQL, PL/SQL, etc.), but this is MOSTLY ok in the case of mass-deletes since nearly all DBs’ implementations of the SQL-92 DELETE syntax are mostly identical anyway.
Hope this helps.
@Erik:
Sorry — the general points I’m making in comment 25 are correct, but the API call isn’t. I think you actually need IQuery.ExecuteUpdate(…) IIRC.
Am not @ a dev PC right now, so cannot check, but am pretty certain from recollection that ISession.CreateSQLQuery(…) can only be used if/when a resultset is expected back but that IQuery.ExecuteUpdate(…) is for when you don’t want/need/expect results back (e.g., it’s like ExecuteNonQuery(…) in ADO.NET — or whatever that method is called 🙂 )
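To make the trade-off concrete, here is a rough sketch of both approaches against the series’ Customer entity. It assumes an NHibernate version where ExecuteUpdate(…) is available for DML statements; the containing class and method names are hypothetical.

```csharp
using NHibernate;

public class CustomerMaintenance
{
    // Inefficient but DB-neutral: pull every object down, then delete one at a time.
    public void DeleteAllCustomersObjectByObject(ISession session)
    {
        using (ITransaction tx = session.BeginTransaction())
        {
            foreach (Customer customer in session.CreateCriteria(typeof(Customer)).List<Customer>())
            {
                session.Delete(customer);
            }
            tx.Commit(); // flushes the session and pushes the DELETEs to the DB
        }
    }

    // Efficient mass delete: one statement, no objects hydrated on the client.
    public void DeleteAllCustomersInOneShot(ISession session)
    {
        using (ITransaction tx = session.BeginTransaction())
        {
            // HQL bulk delete; a native-SQL variant (where supported) would be
            // session.CreateSQLQuery("DELETE FROM CUSTOMER").ExecuteUpdate();
            session.CreateQuery("delete Customer").ExecuteUpdate();
            tx.Commit();
        }
    }
}
```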
Hi Steve,
Question about NDbUnit and Microdesk.Utility.UnitTest.dll: I have a number of foreign key constraints in my database and need to somehow specify the load order of the tables so that the constraints are satisfied. How can I specify the load order using your handy Microdesk.Utility.UnitTest library?
Peter
@Peter:
AFAIK, the (modified) version of the NDbUnit.Core.dll that is included with the code download should properly resolve all FK dependencies and load the data in an order that DOES satisfy all of the referential integrity demands of your database schema.
The version of NDbUnit source that is presently available on the internet for download either at its old location (http://www.ndbunit.org) or its new location (http://code.google.com/p/ndbunit/) does not properly address FK dependencies, which is why I modified it to do so.
Are you actually experiencing troubles with the tooling (in which case I would ask for a DDL script that would let me reconstruct your schema on my end and test the updated NDbUnit code against it so I can correct any bug(s)) or are you theorizing about a *potential* problem from reviewing the videos?
Let me know and if you are actually experiencing an issue please send me a DDL script that I can use to create your schema and fix the bug.
-Steve B.
Hi Steve
With the Microdesk.Utility.UnitTest library, how do you handle user-defined types in SQL Server, since the dataset designer doesn’t support them? Specifically I’m thinking about the geography type in 2k8. Do you have a pattern for supporting these?
Oh yeah forgot to mention: great series.
@Ryan:
This is an interesting point you make here (generally about UDTs and more specifically about the Geog/Geom types in MSSQL2008).
Up until this ‘core functionality’ of the platform was exposed as UDTs, I had generally shied away from UDTs in SQL Server as an avoidable evil (this is just my opinion, UDT-lovers out there, I don’t really want to hear from you about this).
But with the advent of 2008 and the core spatial functionality being encoded in this manner, I think I will have to take a look into this. As I think I’ve mentioned in perhaps the screencasts and certainly on my blog, Microdesk does a considerable amount of work in the GIS market and so this is something that we will have to come to grips with, I imagine. To date, our spatial solutions have all been Oracle-based since it was really the only commercial enterprise-ready spatial database out there (yes, I’m aware of both MySQL Spatial and PostGIS, but these don’t count in many of our clients’ eyes and so they don’t count for us either).
In Oracle situations, the SDO.Geometry datatypes are represented in datasets as BLOBS and in objects as byte[] (byte arrays); is this not the case with the geometry types in MSSQL2008…?
We don’t yet have any clients moving to SQL2008 for spatial support though looking into the product is definitely on the agenda and I have seen a number of presentations at various conferences on the spatial support in the product.
I’d be very interested to hear from you what experiences you have had with it; the limited spatial query operators, the lack of coordinate system projection support, and no concept of spatially-correlated raster data (AFAIK, would love to be proven wrong) all look like deal-breakers for us in evaluating the platform as a primary-role GIS spatial data store.
@sbohlen
Well my needs are a lot less heavyweight than yours, so I’m not really qualified enough to address a lot of those feature issues. Primarily I’m using simple ‘distance from’ types of calculations, which I’ve certainly done using just Lat and Long values in SQL 2k5. However with 2k8 and the built-in Distance function, my queries are much cleaner.
I’ve kinda made a solution…I manually create a dataset in the designer using a SQL query that doesn’t contain the geography column. The trick is that the geography column must be nullable (you’ll get a warning when running tests that the column doesn’t exist in the dataset, but it’s ok because it’s nullable). Then in the Setup method I just create a SQL connection and execute a stored procedure that updates all the geography fields with their appropriate value. Finally, in the TestFixtureTearDown method I run that stored procedure again and my database is back to normal. It’s got that oh-so-familiar aroma of code smell but it ‘works’.
This post got me started in the right direction: http://www.freddes.se/index.php/2008/09/25/net-and-spatial-data-with-sql-server-2008-part-2/
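For anyone trying to reproduce the workaround described in the previous comment, a rough sketch might look like this. The fixture name, connection string, and stored procedure name are all hypothetical, and the geography column is assumed to be nullable as described:

```csharp
using System.Data;
using System.Data.SqlClient;
using NUnit.Framework;

[TestFixture]
public class GeographyColumnFixupSketch // hypothetical fixture name
{
    private const string ConnectionString = "server=.;database=TestDb;integrated security=SSPI;";

    [SetUp]
    public void SetUp()
    {
        // ...NDbUnit loads the test data (minus the geography column) here...

        // Then back-fill the geography column the dataset designer cannot represent.
        RunGeographyFixupProcedure();
    }

    [TestFixtureTearDown]
    public void TestFixtureTearDown()
    {
        // Restore the geography values once the fixture is done.
        RunGeographyFixupProcedure();
    }

    private static void RunGeographyFixupProcedure()
    {
        using (var connection = new SqlConnection(ConnectionString))
        using (var command = new SqlCommand("usp_UpdateGeographyColumns", connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}
```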
Hi Steve,
Good work with the screencasts, I’m enjoying them and am working my way through them converting a project as I go.
I have just finished setting up the NDbUnit SetUp / TearDown methods but now all my tests fail with an error like this:
Cannot insert explicit value for identity column in table ‘tblCustomers’ when IDENTITY_INSERT is set to OFF
My PKs in each table are auto-incrementing ints, generator class=”native” in my mapping files… any ideas why I’m getting that error? It looks to me like the table is rejecting any attempt to ‘manually’ set the PK.
@Simon:
In order to successfully insert PK values when the DB has them set to auto-increment, an IDENTITY_INSERT ON statement has to be invoked first (this tells the DB to allow you to INSERT identity values and effectively turns OFF the auto-increment identities until you send the IDENTITY_INSERT OFF statement that turns auto-incrementing IDs back on).
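Purely to illustrate the mechanics, this is roughly what NDbUnit does on your behalf, shown as a plain ADO.NET sketch; the column names and test values for tblCustomers here are only examples, not the actual sample schema:

```csharp
using System.Data.SqlClient;

public static class IdentityInsertSketch
{
    // Hypothetical sketch: insert an explicit PK value into an identity column.
    public static void InsertCustomerWithExplicitId(string connectionString)
    {
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();

            // Allow explicit values in the identity column for this table...
            new SqlCommand("SET IDENTITY_INSERT tblCustomers ON", connection).ExecuteNonQuery();

            // ...insert the known test row with its pre-assigned PK value...
            new SqlCommand(
                "INSERT INTO tblCustomers (CustomerId, Firstname, Lastname) VALUES (1, 'Steve', 'Goodbye')",
                connection).ExecuteNonQuery();

            // ...and turn auto-increment behavior back on.
            new SqlCommand("SET IDENTITY_INSERT tblCustomers OFF", connection).ExecuteNonQuery();
        }
    }
}
```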
The thing I can’t quite figure out is why you are having this issue, since NDbUnit should be doing this for you automatically before it tries to load the test data into the database.
I have one idea re: what might be the issue…where did you get your copy of the NDbUnit.Core.dll that you are using? Did it come from my code sample download, from the ndbunit.org site, or from the Google Code site for NDbUnit? If from my code sample downloads, I have no idea why you would be having the issue. But if you got it from either of the other two sources, you need to throw out the dll(s) you would have gotten from there and use the one in my sample code download.
I have made multiple changes to the NDbUnit.Core.dll code to resolve several problems with the way it handled PK (and more importantly, FK) values and referential integrity issues. I am a committer for NDbUnit but haven’t had time to check my code changes back in. For now, this means that the only instance of NDbUnit whose functionality I can vouch for is the one in my code sample download. What NDbUnit are you using…? Let me know and let’s see what we can do to get you up and running.
Hi Steve,
Thanks for getting back so quickly!
I figured something should be instructing the database to allow inserts into the PK column – I hadn’t fully worked that bit through, so it’s good to know how it works there.
I am using the UnitTestUtility folder from the code download for Session 03. The error, in all cases, seems to be on the TearDown. Would there be any point in getting the code from a later episode? I wouldn’t have thought you’d change the lib folder too often, so I haven’t tried it.
Simon
@Simon:
No, you’re quite correct: there’s no point in getting either the code or the NDbUnit.Core.dll from a later episode, as that stayed stable across the whole series and the code that changes just calls into the .dll and doesn’t actually DO that work itself (e.g., the issue is with the NDbUnit util rather than any code I wrote in the series per se).
If you would like (and are allowed) could you ZIP up the whole project that you are using with the problem in it and send it to me via e-mail (my gmail addr is on this site in the RH sidebar under ‘contact me’) so that I can review it? If there is something unique about your schema that’s causing this issue, I’d like to know what it is so that I can work a fix into the present code I’m folding into the NDbUnit project so that I, you, and everyone else can benefit from the fix. Probably you would need to send me the DDL that is your DB schema too for it to be testable by myself on my end (you can use SQL Server Mgt Studio to generate the DB-creation DDL for you).
Can you do that or are you precluded from sending such a thing to me? Let me know.
@Steve
A huge thanks for your offer. I will email you in a moment. I am grateful for any help you can give.
Simon
@Steve
Did you figure out the problem that @Simon was having with SQL identity? I am experiencing the same problem. I have used the testing DLLs from your source folder.
John
@John:
Yes, I sure did, but as my request to Simon to send me an e-mail took this discussion offline (well, off-blog at least, since e-mail still counts as online to me), the solution also ended up offline.
Here’s the relevant part from my e-mail response to Simon so that you can see if your issue is the same as his was. If your issue *isn’t* the same as his, please e-mail me a ZIP file with your solution (or a smaller test solution that also evidences the problem) along with either a DB backup or (even better) a DDL sql script that I can just run to create a copy of your DB for myself to assist in my testing process, and I will attempt to dig into it for you.
Let me know.
-Steve B.
—Begin E-Mail Excerpt—
The specific error you are reporting is due to (I think) a problem in the way the .NET dataset designer reverse-engineers your DB type when you use the SQLServer ‘tinyint’ datatype for a field. The .NET dataset designer seems to want to assign this to the .NET System.Byte type. You can see this for yourself if you open your Database.xsd file, click on the pkSaleTypesID field in the tblSaleTypes table, and look @ the VS property panel.
The reason that I classify this as a ‘bug’ in the VS dataset designer is that not only do you get System.Byte for the type, you also don’t get the AutoIncrement property set to ‘true’ (I checked your DB and the PK field in the actual DB *is* set to true). This setting being wrong in the Database.xsd file effectively ‘fools’ NDbUnit into thinking it doesn’t need to set IDENTITY_INSERT ON before inserting data into the DB for the tblSaleTypes table. I actually think that we could ‘live with’ the incorrect data-type (System.Byte) being assigned in the dataset xsd file, but the fact that AutoIncrement is set to ‘false’ is confusing NDbUnit. In NDbUnit, we don’t set IDENTITY_INSERT ON for *every table* we insert values into, just those that have auto-increment PK values (because if the PK *isn’t* set to auto-increment, it’s 100% valid to just insert a PK value right into the table directly, so there’s no need to toggle IDENTITY_INSERT on and then off on either side of loading data into the table).
I’m (reasonably) sure that you selected tinyint as a database size optimization, figuring that there wouldn’t be any need to have more SaleTypes than a tinyint could hold. But the fact that (for whatever reason) the VS dataset designer fails to properly reverse-engineer (deduce) the correct settings for a SQL type of tinyint means that this optimization is really the source of this problem (even if only somewhat indirectly).
So I addressed that issue by simply changing your DB schema (and the related Database.xsd) to use int as the datatype for all the PKs in all the tables. I had to change several, since tables other than just tblSaleTypes had tinyint as the PK datatype, and as soon as I corrected tinyint in the tblSaleTypes table the next table with it threw an exception, etc., etc., so I stopped and changed them all to int and the error ‘magically’ went away.
For the record, I didn’t really bother to experiment with whether leaving the datatype as tinyint in SQL Server and as System.Byte in the dataset would still work as long as you ‘remembered’ to go into the dataset xsd and flip auto-increment to TRUE for all of these tables every time you regenerated the dataset if/when your schema changed. That might very well work too (and allow you to retain the use of tinyint in the DB), but my guess is that it’s simpler to just change to int in the DB than to have to remember to always manually adjust the dataset schema in the future. Your choice; feel free to experiment a bit to see if that also might be a viable way to solve this issue; I’d be interested to hear back if you find out one way or the other.
—End E-Mail Excerpt—
Hi Steve!
Great work on the Screencasts.
I’d like to address what alberto mentioned in the beginning of the comments.
We just started a project with NHibernate and, from what I gather, NHibernate is fairly new in this world, so documentation and ‘real-world’ examples are few. When we got the assignment to do this we started googling to the point of bleeding stumps of what once were fingers.
Now, Alberto said that it was rather slow etc.; I have to say I disagree. I think it’s just the right speed for a comprehensive tutorial of an NHibernate setup.
However, if you already know about NHibernate I can understand that it could get a bit slow at times when you already have all the answers.
Steve, thanks a lot for helping us overcome this huge problem.
Greets from Sweden!
/Magnus
Would you consider sharing the code to the base class?
Bryan:
To which base class are you referring? If the Microdesk.Domain.* hierarchy, this has been open-sourced under the Proteus Project at http://proteusproject.googlecode.com (and the namespace changed to Proteus.Domain.*).
See this post for details:
http://unhandled-exceptions.com/blog/index.php/2009/03/16/proteus-unit-test-utility-and-domain-foundation-code-goes-oss/
If you were referring to something else, please let me know and I will see what I can do for you.
-Steve B.
Steve,
I can’t begin to tell you how great this series is. Additionally, the Proteus project is another fine example of your work. This is truly excellent material. I am excited to get through the Agile series. I have a long trip coming up so hopefully I can get through the entire series on the trip. Thanks again…
Hey Steve,
Love this series. It’s a bit slow moving, but if you have the time, it’s very, very helpful.
With that said, I’m having issues getting the GetMyTestDataXMLFile test to run. I’m using VS2K8 and have the Gallio Test Runner suite installed. When trying to run the unit tests, it isn’t executing GetMyTestDataXMLFile. It sees all the other tests and fixtures, but not that one.
Thoughts?
Firstly, thanks for the tutorials. Very useful.
Secondly, after completing tutorials 1 & 2, I’ve encountered a problem in the 3rd one when testing GetMyTestDataXMLFile(). Here’s the exception:
NDbUnit.Core.NDbUnitException: SqlDbCommandBuilder.CreateSelectCommand(DataSet, string) failed for tableName = ‘Customer’ —> System.Data.SqlClient.SqlException: Invalid object name ‘Customer’.
I’ve also seen this same problem posted on a couple of other sites:
http://pastebin.com/xhkVBb1B
http://stackoverflow.com/questions/8964784/annoying-error-on-summer-of-nhibernate-screencast
but no answer has been given to solve it.
Thanks in advance,
Andrei V