For those of you who are long-time followers of my blog, you are probably aware that as far back as this post from October 2007 (originally from my blog’s former home on the Microsoft Live! Spaces site, but since ported over here so I wouldn’t lose it when I declared my independence from Live! Spaces), I have been a user of the NDbUnit utility to help me manage the state of my database content during integration tests of my data-access-layer code.
In that same post, I also indicated that I had run into a real issue with the NDbUnit 1.1 code regarding the way it handled FK relations during INSERT and DELETE operations, and that I had incorporated a fix into my own copy of the source and would make it available (in binary form) to anyone who was interested.
I’m happy to announce that I have now (finally!) gotten around to merging my changes into the main NDbUnit project repository on Google Code, so that anyone else interested can go get this fix without having to ask me directly and wait for me to e-mail the binary to you. Called v1.2, the source for this change, as well as others that various people have sent to me over the last 18 months, is now incorporated into the available source code for all to download and build.
Misc Cleanup
In addition to incorporating the specific FK-relation fixes, I also took the opportunity to do some other housekeeping on the code…
- ported the code from .NET 1.1 to the 2.0 CLR (1.1 support is hereby abandoned in the project, sorry all you laggards still on 1.1 all these years later)
- removed the very old version of NUnit from the tests and migrated them to MbUnit 3.x/Gallio (note that this has nothing to do with the NDbUnit project itself continuing to work with whatever unit test framework you choose — this is just about what unit test framework the project’s own internal tests are written against)
- introduced various bug-fixes submitted to me over the years
Roadmap, contributions, etc.
If anyone is interested, I have also posted the project roadmap for the ‘official’ upcoming 1.2 release, as well as the several planned future releases, on a wiki page so you can see for yourself what’s planned next. If anyone is more driven than I am to add support for additional database types to the project, feel free to submit a patch or two supporting your favorite DB platform.
Happy coding~!
Are you planning to continue the Agile series? Just curious 🙂
@Nisar:
If I were as single-minded in my pursuit of completing the series as you were at asking if I plan to continue it, I’d be long-since done with it by now 😀
Seriously though, the next installment is already half-complete and the other half is on tap to be recorded this evening, so it’s nearly here…!
Like others, I was hitting the FK wall with NDbUnit so I gave up on it :). Been using transactions to keep the testDB clean and it’s working ok for basic repository tests.
Maybe now I’ll look back and try it out again.
I’ve been using liquibase with some success (xml, argh!!), and it supposedly has the ability to output the dataset without having to go around and creating all that stuff in VisualStudio which is a plus.
@Jorge:
I’d encourage you to give NDbUnit a try again — the FK fixes I have incorporated go a long way to making it useable for real-world work.
The single biggest advantage that I see to NDbUnit over the transaction-rollback approach is that NDbUnit also provides a simple way to seed the DB with the expected data before each test is run. Because different data can be pre-loaded before specific tests or groups of tests, things like asserting that a SELECT really returns the proper set of records are simple and consistently pass or fail.
The transaction-rollback approach solves the “undo-the-delete-all” issue, but doesn’t provide an easy way to address the very real need to have “known data in the DB” before each test. I’m curious how you are presently handling that need; can you advise in more detail?
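For anyone new to the tool, the seed-before-each-test pattern looks roughly like the sketch below. This is illustrative, not lifted from my actual test code — the connection string and file paths are placeholders, and you should check the current source for the exact class names and signatures in your build:

```csharp
using NDbUnit.Core;
using NDbUnit.Core.SqlClient;

// Point NDbUnit at the DB and at the XSD describing the tables we care about
INDbUnitTest database = new SqlDbUnitTest(@"Data Source=.;Initial Catalog=TestDb;Integrated Security=true");
database.ReadXmlSchema(@"TestData\Orders.xsd");
database.ReadXml(@"TestData\Orders.xml");

// Before each test: wipe the covered tables and load the known records,
// so every test starts from the exact same data
database.PerformDbOperation(DbOperationFlag.CleanInsertIdentity);
```

Because the XSD scopes the operation to just the tables it contains, each testfixture can carry its own small schema/data pair rather than resetting the whole database.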
Re: using VS to create the dataset XSD file, in my screencasts you may have seen me demonstrate that approach because it’s the simplest to show others, but in the real world we actually leverage the MyGeneration open-source code-gen templating engine to create the XSD file(s) for us so that we don’t have to bother ourselves with the (significant) annoyance of the VS .NET Dataset designer’s quirks (including its seemingly unquenchable thirst to always add a default DataAdapter to every table over and over again 🙂 ).
If you do decide to take another look at NDbUnit, I’d be interested to hear your take on the update(s) and thanks for the comment!
My approach has been rather simple, although a bit error-prone. Unfortunately, all of this comes from my learning sessions at home (for which your work has been valuable), since at work we just do it cowboy style, so I can’t speak to real team efforts.
Basically I keep the development and the test databases separate. Liquibase makes keeping their schema consistent very easy for me. All the test data is included in the changesets tagged with “test”, and allows me to recreate everything from scratch or incrementally if I ever need to (and helps a lot with the CI server). Also, I can’t test any code that uses transactions by itself with my current codebase.
Have you ever looked into liquibase (http://liquibase.org/)? As I mentioned in my previous comment it’s all xml based (the IDE is old) but makes automating some tasks very easy, although it may be a little time consuming to make sure everything works (which is not something every boss likes :)). It does require you to design your DB together with the domain model, and that’s somewhat DDD unfriendly :).
NDbUnit would allow me to keep a single DB for both unit-testing and developing, as well as saving me from typing xml (http://liquibase.org/manual/insert_data) for every test row I needed, so I’ll try it again soon. 🙂
I recall you mentioning MyGeneration wrt the xsd files somewhere, was it in the autumn of agile series?
@Jorge:
No experience with Liquibase, but I googled it as soon as you mentioned it and took a look. At first glance it looks to me a lot like migrator.net and other tools that provide something similar to Ruby-on-Rails ‘migrations’, although having an IDE and a DB diff engine would seem to differentiate it a bit from those others, which seem to lack such features. Can you compare it to any experience that you might have with some of those other classes of tool?
Also, am I understanding that Liquibase also stores DB *records* in its xml files or just the DB schema? Is that how you are loading content into your DB to support your tests or are you using some other method for that?
Re: MyGeneration, I *did* actually demonstrate its use in the Summer of NHibernate series, when we explored slaving our object model to the DB schema, although that wasn’t specifically about generating XSD files.
I don’t have any experience with migrator.net, but I looked into both and had to choose one to try. Maybe my current work environment steered me away from pure code implementations, since DBAs and developers are two separate and often conflicting entities. Maybe next time 🙂 . The IDE is Eclipse-based, very out of date, and I couldn’t get the DbUnit integration working for that xsd magic I wanted (and people say bad things about dll hell on Windows…). The tool itself is very stable and works as advertised, but command-line only, which actually turned out to be fine because it’s very configurable and has many useful options. Being able to see the SQL it generates is one of them.
And yeah, liquibase allows you to add “insert” tasks that add records to the DB, and that’s the way I create the test data. And by tagging these tasks with “test” I’m able to separate them from the standard ones. With a simple script I made, I just call “syncdb update” where is “dev” or “test” and it runs the appropriate changesets automatically. The real difficulty is in typing all that markup properly in the first place 🙁 .
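To give an idea of the markup involved, a data-bearing changeset tagged for the “test” context looks something like this (the table and column names here are made up for illustration, not from my real changelog):

```xml
<databaseChangeLog xmlns="http://www.liquibase.org/xml/ns/dbchangelog">
  <!-- context="test" means this changeset only runs when the update
       is invoked with the "test" context -->
  <changeSet id="seed-customer-1" author="jorge" context="test">
    <insert tableName="Customer">
      <column name="CustomerId" valueNumeric="1"/>
      <column name="Name" value="Test Customer"/>
    </insert>
  </changeSet>
</databaseChangeLog>
```

Changesets without a context attribute run in every environment, which is how the schema changes stay shared between the dev and test databases.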
Markup problems in my previous post 🙂
should be:
“syncdb :dbserver: :context: update” where :context: is “dev” or “test”
@Jorge:
Thanks, I’ll take some time and actually try it out to see what I think. Appreciate the info!
@steve
We talked about this a while back, but… I was curious: have you yet had the pleasure of trying to work with the spatial datatypes of SQL 2008 and using them with NDbUnit? My project was cruising right along until I introduced that little bit of loveliness into the equation. The issue seems to boil down to the inability of the DataSet/Designer to handle user-defined types, which the spatial objects are… so I can’t write ’em out or read ’em in when I’m running my tests. The result is I have to run a separate method to recalculate all those fields during each import, and my fear is that I’m creating an un-maintainable process (kludge) in the long run. Any thoughts? Advice? Comforting words? A teddy bear?
Ryan
@steve,
I have just started to look at NDbUnit, and it’s certainly interesting. I’m wondering about the MyGeneration stuff. Do you use your own MyGen template to generate the XSD? Also, how do you create the XML datafiles? I assume that you don’t do this manually 🙂
@tab:
For the creation of the XSD there are two basic ways that I do this: using the .NET dataset design surface if I want to quickly select one or two tables from a larger set, or via MyGeneration templates if I want to produce XSD for the entire DB quickly. There is a template that ‘ships’ with the MyGeneration download that does dataset XSD creation quite capably, so we didn’t have any need to author our own.
For the creation of the test-data XML files, *sometimes* I will in fact create them ‘by hand’: once the XSD file is created and referenced from the XML file, the VS XML editor will provide IntelliSense based on the XSD schema definition, making this approach feasible (where it otherwise wouldn’t be) in cases where I need only a few records to support my present testing.
The other way I typically work is by using the database itself (usually via tools like the SQL Server Enterprise Mgr, etc.) to enter sample data directly into the tables. This can obviously ‘protect’ me from violating FK constraints and other rules in a way that typing in the XML directly cannot. Once the data is entered into the DB, I use NDbUnit to persist the DB contents to an XML file. You can use ‘naked’ NDbUnit for this, but in my UnitTest Utility library there is a convenience method on the DatabaseUnitTestBase class called SaveTestData() that makes this operation trivial: you just ensure the properties of the class are set to the right XSD and XML filenames, invoke the method, and it orchestrates NDbUnit as needed to persist the DB contents from the tables contained in the XSD to the XML file you specify.
Remember that NDbUnit’s operations on your DB are controlled by the tables that exist in the XSD file that you direct it to for any given test cycle. Rarely do I create XSD (and accompanying XML) files for my entire DB at once — the 90% use-case is that I create XSD files and accompanying XML test data files on a test-by-test (or at least testfixture-by-testfixture) basis ensuring that I am only interacting with the part of the DB that I ‘care’ about for any given integration test.
Hope this helps some,
-Steve B.
@Steve,
I used both methods mentioned to create XSDs, and it worked fine. What I had trouble with was creating XML files by selecting from the DB using the Ent. Mgr. via “select … FOR XML…”. I never got it right. Now, you mention a utility class, but I’m not able to find it in the NDbUnit project (from subversion). Am I missing something, or is it not part of NDbUnit?
Thx
/Tore Andre
@tab:
Interesting — I hadn’t actually even considered the “select … for XML” approach~! I actually meant entering the data into the DB tables directly and then invoking NDbUnit methods to create a dataset from that data. I don’t think the “…FOR XML” T-SQL construct produces 100% valid XML for a dataset (and that’s actually what is needed, since NDbUnit uses the dataset’s intrinsic .WriteXml(…) and .ReadXml(…) methods to get data in and out of the .XML files it uses).
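To illustrate what I mean by “dataset-compatible XML”, here is a minimal .NET sketch of the round-trip (file names are placeholders, and this assumes the dataset tables have been filled — from the live DB in the real workflow):

```csharp
using System.Data;

var ds = new DataSet();
ds.ReadXmlSchema("Orders.xsd");  // structure comes from the XSD
ds.ReadXml("Orders.xml");        // loads data written in the dataset's own XML shape

// ...or, going the other direction, after filling the dataset from the DB:
ds.WriteXml("Orders.xml");       // emits XML that ReadXml can round-trip
```

The output of a “select … FOR XML” query is shaped by the query, not by the dataset schema, which is why it doesn’t line up with what ReadXml expects.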
The utility class I mention isn’t part of NDbUnit per se (sorry!); it’s a set of utilities that I have written to abstract a lot of the messiness of dealing with NDbUnit away from test-authors by providing a base class from which you derive your testfixtures when you want them to be NDbUnit-enabled with a minimum of fuss. For details, see this post…
http://unhandled-exceptions.com/blog/index.php/2008/10/09/unittest-utility-library-update/
Hi Steve.
Tried to download the new NDbUnit v1.2 from the Google Code repository (http://code.google.com/p/ndbunit/downloads/list). I was surprised to see that not only was there no v1.2 download, but the v1.1 download is missing as well. Any idea what is happening?
@Zeljko:
Yes, there aren’t (presently) any binaries posted, because the 1.2 release isn’t ‘baked’ just yet. For the time being, this means you need to grab the code from the SVN repository (trunk) and build it yourself.
My intent was to post the 1.2 binaries for download once that release was fully baked, but your question here perhaps points out that I should really post binaries from every build/check-in, whether it represents a ‘release’ or not, just for people’s convenience… maybe I’ll take a second and do just that later today…
@Zeljko:
FYI, there is now a binary available for download from the NDbUnit site; sorry for the delay in posting it~!
-Steve B.