Monday, July 22, 2013

Fun with multi threading

This one will be short 'n sweet.

Background

It so happens that I have a project that involves decompiling a certain vendor's binary files into XML, manipulating the data in that XML, and then recompiling the XML back into the proprietary binary format.

The tool for de/re-compiling is provided by the vendor, and for our purposes let's say that it's a "black box" - we don't know how it does what it does, we just know that we give it an XML or binary file as input, and it spits out the opposite (sort of like that machine that could remove stars from the star-bellied Sneetches and vice versa).

In the cartoon version it was one machine. Use your imagination, people.

The Problem

The black box tool works OK, but takes a second or two to spin up, do its thing, and produce a file of the expected output type. On my (pretty vanilla for 2013) dev laptop, it takes about 75 seconds to process 500 files.

Can we throw more threads at it?

As we all undoubtedly know, multi-threading is both the cause of, and solution to, all software performance problems (apologies to The Simpsons). Multi-threaded processes can be exceedingly difficult to debug, performance can actually be degraded if you do it wrong, and for some things it brings only modest performance gains at best.

So, will it work for my problem?

In theory it should. All of the complexity of what happens inside the black box is encapsulated away from my code and my poor brain. I just need to process lots of files, the more/faster the better.

This is the kind of thing the .NET Threadpool was made for.

The implementation

Well, I wish I could claim that I came up with the whole thing myself. But, standing on the shoulders of giants and all that, I consulted the oracle of sublime wisdom that is The Internet (and more importantly, resources like StackOverflow) and found someone trying to solve basically the same problem.

But what about this bit?
I only used four threads in my example because that's how many cores I have. It makes little sense to be using 20 threads when only four of them can be processing at any one time. But you're free to increase the MaxThreads number if you like.
Good point. The implementation should make use of as many cores as it can, but any more than that doesn't do any good. So how do you do that? Oh StackOverflow, is there anything you don't know?
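(Side note: if logical processors are good enough for your sizing, .NET has a one-liner. A minimal check for comparison - keep in mind that on a hyper-threaded chip it reports logical processors, not the physical cores that the WMI query in the code below counts:)

    // Environment.ProcessorCount returns logical processors (e.g. 8 on a
    // hyper-threaded quad-core i7); the WMI query used below sums
    // physical cores (4 on that same chip).
    Console.WriteLine("Logical processors: " + Environment.ProcessorCount);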

Ok. So here's my completed (and heavily blog-post-ified) code, with probably the bare minimum of flair:

using System;
using System.Collections.Generic;
using System.Text;
using System.Threading;
using System.IO;
using System.Diagnostics;
 
namespace AcmeCo.BigProject
{
    /*
     * As far as the rest of the app is concerned, XmlCompiler does all the work.
     * You just tell it what XML files to process, and let it worry about the rest.
     */
    public class XmlCompiler
    {
        List<string> Files;
 
        private static int CoreCount = 0;
 
        // No zero arg constructor: this class is pointless without files to process.
        public XmlCompiler(List<string> Files)
        {
            this.Files = Files;
 
            // Stolen from http://stackoverflow.com/questions/1542213/how-to-find-the-number-of-cpu-cores-via-net-c
            // (Requires a reference to System.Management.dll.)
            // Guard so constructing a second XmlCompiler doesn't double-count
            // (CoreCount is static).
            if (CoreCount == 0)
            {
                foreach (var item in new System.Management.ManagementObjectSearcher("Select * from Win32_Processor").Get())
                {
                    CoreCount += int.Parse(item["NumberOfCores"].ToString());
                }
            }
 
        }
 
        public void Process()
        {
            DoTheWork();
        }
 
        private static int MaxThreads = 8;
        private Semaphore _sem; // created in DoTheWork(), once MaxThreads is known
 
        // Stolen from http://stackoverflow.com/questions/15120818/net-2-0-processing-very-large-lists-using-threadpool
        void DoTheWork()
        {
            // Size the semaphore to the core count; creating it earlier with a
            // different capacity would leave the drain loop below out of sync.
            MaxThreads = CoreCount;
            _sem = new Semaphore(MaxThreads, MaxThreads);
            int ItemsToProcess = this.Files.Count;
 
            Console.WriteLine("Processing " + ItemsToProcess + " on " + CoreCount + " cores");
 
            for (int i = 0; i < ItemsToProcess; ++i)
            {
                Console.WriteLine("Processing " + i + " of " + ItemsToProcess + ": " + Files[i]);
 
                _sem.WaitOne();
 
                XmlCompileTarget target = new XmlCompileTarget(Files[i]);
 
                ThreadPool.QueueUserWorkItem(Process, target);
            }
 
            // All items have been assigned threads.
            // Now, acquire the semaphore "MaxThreads" times.
            // When the counter reaches that number, we know all threads are done.
            int semCount = 0;
            while (semCount < MaxThreads)
            {
                _sem.WaitOne();
                ++semCount;
            }

            // All items are processed.
            // Clear the semaphore for next time.
            _sem.Release(semCount);
        }

        void Process(object o)
        {
            // do the processing ...
            XmlCompileTarget target = (XmlCompileTarget)o;
            target.Process();

            // release the semaphore
            _sem.Release();
        }
    }

    // A "unit" of work... this class' job is to hand the file to the processing
    // utility and raise an event when it's done.
    public class XmlCompileTarget
    {
        private string file;

        public XmlCompileTarget(string file)
        {
            this.file = file;
        }

        public void Process()
        {
            Compilefile();
        }

        public static event EventHandler<XmlProcessEventArgs> OnProgress = delegate { };

        protected virtual void Progress(XmlProcessEventArgs e)
        {
            OnProgress.Raise(this, e);
        }

        private void Compilefile()
        {
            if (!System.IO.File.Exists(file))
            {
                // Raise via the thread-safe extension (see ExtensionMethods below).
                Progress(new XmlProcessEventArgs(file, "File not found!"));
                return;
            }

            Progress(new XmlProcessEventArgs(file,
                XmlUtilities.RunTool(@"Tools\XmlComp.exe", new FileInfo(file), null)));
        }
    }

    // The processing utility runs the vendor's XML compiler and
    // returns any output from that tool as a string.
    public class XmlUtilities
    {
        public static string RunTool(string ExecutablePath, FileInfo FileInfo, string Arguments)
        {
            Process p = new Process();
            ProcessStartInfo info = new ProcessStartInfo();

            if (!File.Exists(ExecutablePath) || !FileInfo.Exists)
            {
                Console.WriteLine("Error: File path not found - \r\n"
                    + ExecutablePath + " exists == " + File.Exists(ExecutablePath) + "\r\n"
                    + FileInfo + " exists == " + FileInfo.Exists);
                return null;
            }

            Console.WriteLine(Arguments);

            info.FileName = ExecutablePath;
            info.Arguments = string.IsNullOrEmpty(Arguments) ? "\"" + FileInfo.FullName + "\"" : Arguments;
            info.UseShellExecute = false;
            info.RedirectStandardOutput = true;
            info.RedirectStandardError = true;
            info.RedirectStandardInput = true;
            info.ErrorDialog = true;
            info.Verb = "runas";
            info.WindowStyle = ProcessWindowStyle.Hidden;

            p.StartInfo = info;
            p.Start();

            string output = p.StandardOutput.ReadToEnd();
            string error = p.StandardError.ReadToEnd();
            p.WaitForExit();

            // Return stdout; stderr is captured above if you ever need it.
            return output;
        }
    }

    // Not much to see here...
    public class XmlProcessEventArgs : EventArgs
    {
        private string filename;
        private string output;

        public XmlProcessEventArgs(string filename, string output)
        {
            this.filename = filename;
            this.output = output;
        }
    }

    // Ah, why this?
    // Because it is a drag to continually have to add tons
    // of thread-safe invocations on every last UI
    // element that might need updating as a result of the event
    // that was raised.
    // Isn't it better to make the *event* notification thread-safe,
    // and let UI elements be their merry little selves on their own
    // merry little thread?
    // But we digress...
    // Stolen from: http://stackoverflow.com/a/2150359/2124709
    public static class ExtensionMethods
    {
        /// <summary>Raises the event (on the UI thread if available).</summary>
        /// <param name="multicastDelegate">The event to raise.</param>
        /// <param name="sender">The source of the event.</param>
        /// <param name="e">An EventArgs that contains the event data.</param>
        /// <returns>The return value of the event invocation or null if none.</returns>
        /// <remarks>Usage: MyEvent.Raise(this, EventArgs.Empty);</remarks>
        public static object Raise(this MulticastDelegate multicastDelegate, object sender, EventArgs e)
        {
            object retVal = null;

            MulticastDelegate threadSafeMulticastDelegate = multicastDelegate;
            if (threadSafeMulticastDelegate != null)
            {
                foreach (Delegate d in threadSafeMulticastDelegate.GetInvocationList())
                {
                    var synchronizeInvoke = d.Target as System.ComponentModel.ISynchronizeInvoke;
                    if ((synchronizeInvoke != null) && synchronizeInvoke.InvokeRequired)
                    {
                        try
                        {
                            retVal = synchronizeInvoke.EndInvoke(synchronizeInvoke.BeginInvoke(d, new[] { sender, e }));
                        }
                        catch (Exception ex)
                        {
                            Console.WriteLine(ex.Message + Environment.NewLine + ex.StackTrace);
                        }
                    }
                    else
                    {
                        retVal = d.DynamicInvoke(new[] { sender, e });
                    }
                }
            }

            return retVal;
        }
    }
}
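For completeness, here's roughly how the class above gets driven. This is a blog-post-ified sketch, not the production harness - the directory path and the do-nothing progress handler are stand-ins:

    using System;
    using System.Collections.Generic;
    using System.IO;

    namespace AcmeCo.BigProject
    {
        class Driver
        {
            static void Main(string[] args)
            {
                // Hypothetical input folder full of the vendor's XML files.
                var files = new List<string>(Directory.GetFiles(@"C:\data\xml", "*.xml"));

                // Subscribe before processing if something (UI, logger) cares about progress.
                XmlCompileTarget.OnProgress += (sender, e) => { /* update UI, log, etc. */ };

                new XmlCompiler(files).Process();
            }
        }
    }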

Benchmarking


What kind of impact did all of that have? Let's compare:

   Operating System: Windows 7 Professional 64-bit (6.1, Build 7601) Service Pack 1 (7601.win7sp1_gdr.130318-1533)
          Processor: Intel(R) Core(TM) i7 CPU       Q 720  @ 1.60GHz (8 CPUs), ~1.6GHz
             Memory: 8192MB RAM
Available OS Memory: 8124MB RAM
          Page File: 8020MB used, 8224MB available

 * No multi-threading (serial processing of each XML file), 500 files. Processing time: 75 seconds. 
 * Using code above (multi-threading on 4 cores), 500 files. Processing time: 15 seconds.

Well, we've cut our processing time to 20% of what it was originally. What happens if we add more cores? I happen to have access to a bigger box here somewhere...

   Operating System: Windows 7 Professional 64-bit (6.1, Build 7601) Service Pack 1 (7601.win7sp1_gdr.110408-1631)
          Processor: Intel(R) Core(TM) i7 CPU       X 990  @ 3.47GHz (12 CPUs), ~3.5GHz
             Memory: 6144MB RAM
Available OS Memory: 6136MB RAM
          Page File: 1932MB used, 12392MB available

 * Using code above (multi-threading on 6 cores), 500 files. Processing time: 5 seconds.

Summary and Conclusions

I started with a bottleneck that required my code to run an external tool to process many thousands of files. By running multiple instances of the external tool in parallel via multi-threading (and using hardware better suited to the job at hand), I was able to decrease the net runtime to roughly 7% of what it was running sequentially in a single thread.

I can live with that.

Tuesday, July 9, 2013

Jumping into Micro ORM, Ctd

As you may remember from our last installment, I'm going to experiment with cobbling together a small proof of concept project using C# .NET, a Micro ORM (PetaPoco), and a lightweight database (Sqlite).

I've already got a Visual Studio 2010 project (targeted at the .NET Framework 4.0 runtime), so it's time to get PetaPoco.

Getting PetaPoco

At this point it appears that I have two options for getting PetaPoco - making it a project dependency with NuGet, or cloning the source with Git. For now, I'll go with the latter. (For a nice quick 'n easy intro to Git, you can't get much simpler than this.)

After cloning to a local repository, I see a new solution and associated files in the new PetaPoco directory. I open the solution file in Visual Studio 2010, and it builds a .dll on the first try with a couple of (hopefully minor) warnings.

------ Build started: Project: PetaPoco, Configuration: Debug Any CPU ------
  PetaPoco -> C:\Users\bricej\repos\PetaPoco\PetaPoco\bin\Debug\PetaPoco.dll
------ Build started: Project: PetaPoco.Tests, Configuration: Debug Any CPU ------
c:\Windows\Microsoft.NET\Framework\v4.0.30319\Microsoft.Common.targets(1605,5): warning MSB3245: Could not resolve this reference. Could not locate the assembly "Npgsql". Check to make sure the assembly exists on disk. If this reference is required by your code, you may get compilation errors.
  PetaPoco.Tests -> C:\Users\bricej\repos\PetaPoco\PetaPoco.Tests\bin\Debug\PetaPoco.Tests.exe
------ Build started: Project: PetaPoco.DevBed, Configuration: Debug Any CPU ------
  PetaPoco.DevBed -> C:\Users\bricej\repos\PetaPoco\PetaPoco.DevBed\bin\Debug\PetaPoco.DevBed.exe
========== Build: 3 succeeded or up-to-date, 0 failed, 0 skipped ==========

Setting up PetaPoco with Sqlite

At this point I don't have Sqlite downloaded/installed/configured/etc.

PetaPoco doesn't seem to say anything about Sqlite. But, reading from the comments in PetaPoco.Core.ttinclude, we see some examples of setting up DbProviders, and this helpful nugget:
Also, the providers and their dependencies need to be installed to GAC
That's good info, because it doesn't look like Sqlite is intended to live in the GAC (the Global Assembly Cache, where the .NET runtime may go looking for referenced dlls) by default:
All the "bundle" packages contain the "System.Data.SQLite.dll" mixed-mode assembly. These packages should only be used in cases where the assembly binary must be deployed to the Global Assembly Cache for some reason (e.g. to support some legacy application on customer machines). 
So let's download the bundle version. (If we already knew what we were doing, we might have gotten away with downloading just the binaries and using gacutil...)

Next, we're going to need a .NET wrapper for Sqlite so that we can set it up as a datasource in Visual Studio. There appear to be multiple options, but let's try ADO.NET 2.0 Provider for SQLite.

After installing the Sqlite wrapper, I can now set up a connection to an existing database, or create a new Sqlite one. Let's create one:

[screenshot: creating a new Sqlite database in Visual Studio]

Next let's add a table (using the sqlite command line tool) to see if our database is working:

[screenshot: creating a table from the sqlite command line]

Getting somewhere...

[screenshot: the new database and table in Visual Studio]

Running the built-in unit tests for PetaPoco didn't work out so hot: Permission to Rock - DENIED.

Hmm. Lots of invalid cast exceptions. It *seems* like the unit tests are able to connect to the database and are performing the setup/teardown steps. But clearly something fundamental is wrong.

Let's try and rule out the unit tests themselves as the problem. Since we already created a table 'foo' with two columns (int/varchar), let's create a Poco and see if we can query it:

    public class foo
    {
        public int id { get; set; }
        public string name { get; set; }
    }

Testing the query:

    // Create a PetaPoco database object
    var db = new PetaPoco.Database("sqlite");

    try
    {
        // Show all foo
        foreach (var a in db.Query<foo>("SELECT * FROM foo"))
        {
            Console.WriteLine("{0} - {1}", a.id, a.name);
        }
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.Message + Environment.NewLine + ex.StackTrace);
    }

Outputs the following:

[screenshot: console output listing the rows in foo]

So our stack (C# .NET 4.0 / PetaPoco / Sqlite) clearly works, but there's something unsettling about those unit tests failing. I'm not committed to PetaPoco yet. It should be easy to try out another Micro ORM framework since 1) in theory ORM frameworks try to be as implementation agnostic as they can get away with, and 2) we haven't really written any code yet - so why not try something else?
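(One housekeeping note: the "sqlite" argument passed to PetaPoco.Database above is the name of a connection string entry in App.config. If you'd rather skip the config file, PetaPoco also has a constructor overload that takes the connection string and provider name directly - a minimal sketch, with a made-up database path:)

    // Equivalent to the named-connection version, assuming the
    // System.Data.SQLite ADO.NET provider is installed:
    var db = new PetaPoco.Database(
        @"Data Source=C:\data\test.db;Version=3;",  // hypothetical path
        "System.Data.SQLite");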

Trying a different Micro ORM

Let's look for something that's actively maintained, and has a bit more friendly support/documentation for Sqlite.

MicroLite ORM? Actively maintained, lots of documentation, Sqlite examples...

Getting MicroLite ORM

Although you can clone the project from Github, the docs say to install from NuGet. Ok then. For those of you new to NuGet, start here. (If you've ever used a package manager on a *nix distro, this will be a pretty familiar concept.)

After you install MicroLite with NuGet (depending on which packages you choose) you should see something like this when you right-click on your solution and choose "Manage NuGet Packages for Solution."

[screenshot: the installed MicroLite packages in the NuGet package manager]

Ok, now let's see if we can reproduce our simple SELECT query.

Our new foo, with MicroLite attributes:

    [Table("foo")]
    public class foo
    {
        [Column("id")]
        [Identifier(IdentifierStrategy.DbGenerated)]
        public int id { getset; }
 
        [Column("name")]
        public string name { getset; }
    }

And the code to do a simple query:

    var sessionFactory = Configure
        .Fluently()
        .ForConnection(connectionName: "sqlite",
            sqlDialect: "MicroLite.Dialect.SQLiteDialect")
        .CreateSessionFactory();

    using (var session = sessionFactory.OpenSession())
    {
        var query = new SqlQuery("SELECT * from foo");
        var foos = session.Fetch<foo>(query); // foos will be an IList<foo>
        foreach (foo a in foos)
        {
            Console.WriteLine("{0} - {1}", a.id, a.name);
        }
    }

That's a bit more coding than the simpler PetaPoco version. In fact, it's starting to look a lot like Hibernate! (Well, that's unfair, but one of the original design goals was a much simpler code base than a full ORM would require.)

Let's see what happens if we use MicroLite's built-in conventions (vs. configurations for table/column mappings). Refactoring the above code we get:

    //[Table("foo")]
    public class foo
    {
        //[Column("id")]
        //[Identifier(IdentifierStrategy.DbGenerated)]
        public int Id { getset; }
 
        //[Column("name")]
        public string name { getset; }
    }

And:

    #region new_code!

    Configure.Extensions() // If used, load any logging extension first.
        .WithConventionBasedMapping(new ConventionMappingSettings
        {
            // default is DbGenerated if not specified.
            IdentifierStrategy = IdentifierStrategy.DbGenerated,
            // default is true if not specified.
            UsePluralClassNameForTableName = false
        });

    #endregion
 
    var sessionFactory = Configure
        .Fluently()
        .ForConnection(connectionName: "sqlite",
            sqlDialect: "MicroLite.Dialect.SQLiteDialect")
        .CreateSessionFactory();

    using (var session = sessionFactory.OpenSession())
    {
        var query = new SqlQuery("SELECT * from foo");
        var foos = session.Fetch<foo>(query); // foos will be an IList<foo>
        foreach (foo a in foos)
        {
            Console.WriteLine("{0} - {1}", a.Id, a.name);
        }
    }

There's one small change I made that might be hard to notice. I made the foo.Id property Pascal-cased (it was foo.id previously). Why? Because MicroLite's convention-based mapping strategy requires that
"The class must have a property which is either class name + Id or just Id (e.g. class Customer should have a property called CustomerId or Id)." 
(Side note - I also had to rename the column in the foo table to "Id" - yuck! I assume this is because MicroLite's convention-based mapping is case-sensitive on both ends - code and database objects.)

This post is starting to get long... so I'll snip out the part where I try full CRUD operations using both MicroLite and PetaPoco - but both work fine as expected.
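(If you're curious about the shape of those CRUD tests, the PetaPoco round-trip was essentially the following - a sketch of the API's usage, not the snipped code itself:)

    var db = new PetaPoco.Database("sqlite");

    // Create
    var f = new foo();
    f.name = "crud test";
    db.Insert("foo", "Id", f);  // populates f.Id with the new row's key

    // Read
    var fetched = db.SingleOrDefault<foo>("SELECT * FROM foo WHERE Id = @0", f.Id);

    // Update
    fetched.name = "crud test (edited)";
    db.Update("foo", "Id", fetched);

    // Delete
    db.Delete("foo", "Id", fetched);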

We need a tie-breaker

Both MicroLite and PetaPoco will serve as nice smallish ORM frameworks. MicroLite appears to be more recently maintained and has lots of features, but PetaPoco is super simple to use and requires (at least in simple CRUD examples) a lot less code.

Let's have a race and see what happens. How fast can each of these insert 10,000 rows? (Note: times are in milliseconds)

First, MicroLite:

    DateTime start = System.DateTime.Now;

    using (var session = GetFactory().OpenSession())
    {
        using (var transaction = session.BeginTransaction())
        {
            for (int i = 0; i < 10000; i++)
            {
                var foo = new foo();
                foo.name = "MicroLite Insert Test " + i;
                session.Insert(foo);
            }
            transaction.Commit();
        }
    }
    Console.WriteLine("Elapsed: " + (System.DateTime.Now - start).TotalMilliseconds);

Outputs: Elapsed: 1811.1036

Next up, PetaPoco:
    DateTime start = System.DateTime.Now;

    // Create a PetaPoco database object
    var db = new PetaPoco.Database("sqlite");

    for (int i = 0; i < 10000; i++)
    {
        foo foo = new foo();
        foo.name = "PetaPoco Insert Test " + i;
        db.Insert("foo", "Id", foo);
    }
    Console.WriteLine("Elapsed: " + (System.DateTime.Now - start).TotalMilliseconds);

Outputs: Elapsed: 115555.6095

WOW! PetaPoco took nearly 2 minutes to insert what MicroLite did in under 2 seconds. (NOTE: See updates below regarding performance improvements)

What about read operations? I created similar code that reads the 10,000 rows created in the bulk insert test and adds them to a Dictionary<int, string>.
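The PetaPoco flavor looked roughly like this (the MicroLite version is the same loop, fetching via session.Fetch with a SqlQuery instead):

    DateTime start = System.DateTime.Now;

    var db = new PetaPoco.Database("sqlite");
    var dict = new Dictionary<int, string>();
    foreach (var a in db.Query<foo>("SELECT * FROM foo"))
    {
        dict.Add(a.Id, a.name);
    }
    Console.WriteLine("Elapsed: " + (System.DateTime.Now - start).TotalMilliseconds);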

PetaPoco:
Run #1 Elapsed: 219.0125
Run #2 Elapsed: 125.0072
Run #3 Elapsed: 101.0057
Run #4 Elapsed: 96.0055
Run #5 Elapsed: 98.0056
Run #6 Elapsed: 121.0069
Run #7 Elapsed: 118.0068

MicroLite:
Run #1 Elapsed: 926.0529
Run #2 Elapsed: 355.0203
Run #3 Elapsed: 398.0228
Run #4 Elapsed: 351.0201
Run #5 Elapsed: 483.0276
Run #6 Elapsed: 347.0199
Run #7 Elapsed: 357.0204

Hmm. PetaPoco is actually quite a bit faster than MicroLite. Both appear to be doing some caching since the first run is significantly slower than subsequent runs.

Are these fair tests? Is there a way to make them more Apples-to-Apples? Let me know in the comments...

Summary and Conclusions


I started out looking for a stack that would let me run a .NET WinForms app on top of a Micro ORM for simple CRUD operations on a Sqlite database.

I found at least two that would be reasonably serviceable: PetaPoco (the original choice), and MicroLite ORM.

PetaPoco just "feels" simpler to use and less like the clunky Spring/Hibernate syntax I wanted to avoid. But it doesn't appear to be actively maintained. Then again, there are 500+ hits on Stackoverflow (always a useful measure ;-) for PetaPoco, and 6 for MicroLite.

Which one to use? Originally, I was leaning MicroLite due to PetaPoco's apparent performance issues - until I realized they weren't issues at all. So for now, I'm leaning towards PetaPoco. EDIT: see updates below, including adding Dapper to the mix.

Downloads

If you'd like to steal the tech in this example, a Visual Studio 2010 solution is here.

UPDATE - Now with Dapper

Well, leave it to me to forget the Granddaddy of them all, Dapper. If you don't know what Dapper is, it's the Micro ORM that StackOverflow uses. If you don't know what StackOverflow is, welcome to Earth, and please enjoy the show.

I didn't originally include Dapper because although I had heard of it, I didn't think of it as a Micro ORM (not for any particular reason - just my unfamiliarity with it).

I've added Dapper to the solution you can download (see Downloads), and you can judge for yourself how it compares. I, for one, was hoping for the magical unicorn combination of MicroLite speed and PetaPoco simplicity. Alas, not so (caveat: I am sure that I could be "doing it wrong," but as far as out-of-the-box and less than 1 hour of Googling goes... Edit: Yes, I was doing it wrong).

Dapper took over 77 seconds to insert 10,000 rows. Also, compare the syntax for inserting a new record.

Here's PetaPoco:
    foo foo = new foo();
    foo.name = "Created by PetaPoco";
    db.Insert("foo", "Id", foo);

Here's Dapper:
    string sqlQuery = "INSERT INTO foo(name) VALUES (@name)";
    conn.Execute(sqlQuery, new { name = "Created by Dapper" });

I don't say this to start a Holy War, but how would you like to be the gal who has to refactor all the strings in the queries that use the 'foo' class when the business requirements come along to split the 'name' property into 'FirstName' and 'LastName'? Or when the foo class grows to a few dozen (hundred?) properties? (Yes, I know it's probably not good design if a class has that many properties, but we've all been there.)

I am sure Dapper has many pluses going for it, but my expectations were (probably too) high.

Maybe in a future project I'll write a PetaPoco-like wrapper for Dapper, and finally be happy :-).

UPDATE #2 - Massive Performance Improvements

I just couldn't believe that StackOverflow's Micro ORM was so slow, and was convinced I was doing it wrong. Well, I was. For the kind of work I was doing (lots of inserts), it turns out you are supposed to wrap the work in a TransactionScope.

So, adding the System.Transactions library to my project, and wrapping the inserts with the following:
    using (var transactionScope = new TransactionScope())
    {
        // do lots of inserts
        transactionScope.Complete();
    }
Dapper was able to insert 10,000 rows in a stunning 493 milliseconds. Folks, we have a clear winner on performance. Now if only Dapper had cleaner insert/update syntax!
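Put together, the Dapper bulk insert looks roughly like this (conn is assumed to be a SQLiteConnection built from your connection string; it's opened inside the scope so it enlists in the ambient transaction):

    using (var transactionScope = new TransactionScope())
    using (var conn = new SQLiteConnection(connectionString))
    {
        conn.Open();  // opened inside the scope, so it enlists automatically
        for (int i = 0; i < 10000; i++)
        {
            conn.Execute("INSERT INTO foo(name) VALUES (@name)",
                new { name = "Dapper Insert Test " + i });
        }
        transactionScope.Complete();
    }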

Likewise, PetaPoco has a similar mechanism:
 var db = new PetaPoco.Database("sqlite");
 using (var transaction = db.GetTransaction())
 {
    // do lots of transactions
    transaction.Complete();
  }

Which brings the PetaPoco bulk insert time down to right around 1 second (1087 ms).

So now we're back where we started. PetaPoco's speed is comparable to the others, and it's got very nice, simple syntax. And if I hit a gnarly roadblock, I know I at least have a couple of decent alternatives to fall back on.

Wednesday, July 3, 2013

Jumping into Micro ORM

I've known about (and worked with multiple forms of) standard Object-Relational Mapping (ORM) frameworks in the past (e.g., Hibernate - with and without Spring wrappers - for Java, NHibernate for .NET, SQLAlchemy for Python), but usually only on fairly big enterprise systems that justified the setup and overhead.

But what if you want something simple, small, fast (performance and development), and portable? More on that in a minute...

Recently I've needed to track a medium-sized relational data set that is stored in (ugh) a few hundred different XML files. This is data that needs to be queried quickly, updated automatically, and viewed using some kind of light-ish weight GUI.

So the cocktail napkin sketch of the data flow looks like this:

XML Data -> (deserialized) -> domain objects -> (stored) -> DB

The data is only manipulated once it's in the DB, then back out again to be used with the 3rd party system that needs the data in XML.

DB -> (queried) -> domain objects -> (serialized) -> XML Data

My experiences with the various flavors of Hibernate are my comfort zone. But this seems like a good time to experiment with a Micro ORM. I don't have a lot (or really any) multi-phase commits or concurrent transactions to worry about, no database clusters, no complex legacy data structures, none of the other stuff you tend to need to worry about in a Very Big Enterprise App(TM).

I already have a fair amount of the domain objects representing the data model in an existing C# .NET project, so here's what attempt #1 will look like:

  • A fairly bare-bones (for now) C# .NET 4.0 WinForms app that will
    • house the library of domain objects, 
    • do the serialization/deserialization to XML,
    • perform unit tests,
    • interact with the Micro ORM library
  • A Micro ORM Library
    • PetaPoco for now, we'll see how it works!
  • A database
    • Sqlite for now, we'll see how well it plays with PetaPoco and/or other Micro ORMs.

The next post will be initial setup, maybe some code snippets, and a thumbs up or down whether I can quickly and easily test storing and querying with the stack I've chosen.

Stay tuned...

fr1st p0st w00t.

Hi everyone, welcome to Steal This Tech.

The intent of this blog is to capture some of the process I go through experimenting with new languages, tools, technologies, etc., document any neat tricks/ideas, and hopefully help fellow techies in some small way.

I have a full-time job, so my posting frequency and volume will not rival people who do it for a living. Also, I will be shocked if anyone ever reads it, even by accident. But then here you are.

Oh yeah, and if you see something here you like - steal it!