North America and Europe are parting faster than the time left in my contract is passing.
In my final days here, I am acting as a knowledge resource for a service I wrote, one that should truly be replaced by a well-known open-source solution. Having said that, it does its job reasonably well.
The person taking over the code has a looming deadline of her own, so we have only had a couple of conversations about it. She is debating whether to use the component I built or to write her own.
She plans to take an approach that will invalidate the design I was asked to build. I was asked to make a service that could be configured to handle different situations without deploying a code change, so the complexity in my solution lies in the configuration. Her solution is to consume my service but put the complexity into a controller class, which I admit will make for much simpler configuration, but will involve the creation of new controllers to handle new situations.
This frustration epitomizes the contract, but it has given me some valuable insight into what I do value. For example, being busy and challenged at work matters more to me than monetary compensation. I’ve also learned that I’ve come to expect a certain level of influence at work. This position has really provided neither.
My manager wants to conduct an exit interview tomorrow, and I hope he will afford me the opportunity to relate my observations. Otherwise, I fear future consultants will face the same alienation I felt.
Last time, we talked about accepting file input, the last feature added to the Object Mapper. Now it’s time to document the module.
I use ReSharper to help me code in Visual Studio, but it still doesn’t generate XML comments as well as I’d like, so I also use GhostDoc, which turns code into better English than ReSharper does. There is still editing to do, though: a literal English description of the code is only marginally useful, because GhostDoc cannot supply context.
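For anyone who hasn’t worked with them, XML comments are the triple-slash blocks the C# compiler can export to a file alongside the assembly. A representative example (the member itself is invented for illustration):

```csharp
using System;

public static class UriChecks
{
    /// <summary>
    /// Determines whether the given string is a well-formed absolute URI.
    /// </summary>
    /// <param name="value">The candidate string to inspect.</param>
    /// <returns>True if the string parses as an absolute URI; otherwise false.</returns>
    public static bool IsWellFormedUri(string value)
    {
        return Uri.IsWellFormedUriString(value, UriKind.Absolute);
    }
}
```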
Finding missing documentation was trial and error until I installed the AgentSmith ReSharper plug-in; after I turned solution-wide analysis on and marked missing documentation as a ReSharper error, finding undocumented members became a breeze. Once I had all the classes, properties, and methods documented, I set out in search of a tool to convert those XML comments into something nice.
The first tool I tried was Microsoft’s Sandcastle. Sandcastle itself is tough to use, so I went to CodePlex and also downloaded the Sandcastle Help File Builder. It does a nice job of creating HTML and compressed help files (CHM).
I did have to alter a few things. I wanted namespace comments; after digging around a little, I discovered the Project Properties -> Comments expansion button on the right and typed them in. I wanted to avoid outputting VB.NET and C++ contextual references, so I went into the Syntax Filters dialog and disabled them. I also customized the root namespaces, help title, HTML help name, and copyright text, and I was ready to go.
But the requirement was to generate the documentation in Word. As good as Sandcastle is, it doesn’t seem to produce anything but HTML and CHM.
The next tool I tried was doxygen. It’s a UNIX tool with Windows binaries, so it took a little more cajoling. At first, it wouldn’t generate any content at all, but I got it working in short order after I saved its configuration file prior to rendering.
doxygen also supports more formats: LaTeX, RTF, and MAN pages, though it won’t do CHM. RTF is a close cousin to Word document formatting, so I had high hopes. However, I found the generated RTF document disappointing, and later found a forum where the author admitted as much.
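For reference, switching doxygen’s output formats is just a matter of flipping switches in the Doxyfile. A minimal sketch, where the project name and input path are illustrative rather than my actual configuration:

```
# Minimal Doxyfile sketch for a C# project targeting RTF output.
PROJECT_NAME   = "Object Mapper"
INPUT          = src
FILE_PATTERNS  = *.cs
EXTRACT_ALL    = YES
GENERATE_HTML  = NO
GENERATE_LATEX = YES
GENERATE_RTF   = YES
GENERATE_MAN   = YES
```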
LaTeX is a great layout specification language, so I thought it might be a decent stepping stone to a Word doc. I spent a while monkeying with LaTeX2RTF, a free converter on SourceForge, but the results were disappointing: it couldn’t understand doxygen’s custom elements.
I’m still looking for a better solution. Any ideas?
Picking up from where we left off with the conversion engine, the work on the Object Mapper is nearly complete.
I did enhance the parsing engine, though. My co-worker was trying to consume the engine, so he was reading in his document and extracting the text himself. We decided this was a common enough activity that it would be worth adding to the parsing engine.
I didn’t want to pass in the filename as a string, because it would mean that the parsing engine would also become responsible for determining if a string contained a filename or an XML string. One rule of thumb is to avoid using primitives like strings to represent concepts – things with rules – like filenames. Instead, create a class to be the custodian of that knowledge.
The .NET Framework already has such a class, System.Uri. The nice thing about using a URI is that any network location can also be used, not just a filename, and I get some simple validation capability.
Integrating the URI was easy. The parsing engine wraps the input string into a StringReader and creates an XmlReader from that with which to do its work. And really, any TextReader would do, which made the enhancement easy. I created an overload that accepts a URI and wraps it in a StreamReader, which is also a TextReader. Problem solved!
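Here’s a minimal sketch of the shape of that change, assuming the core already works against a TextReader. The class and member names are illustrative, and I’ve used WebRequest to open the stream, which handles file://, http://, and ftp:// URIs alike:

```csharp
using System;
using System.IO;
using System.Net;
using System.Xml;

public class ParsingEngine
{
    // Original entry point: the raw XML arrives as a string.
    public void Parse(string xml)
    {
        Parse(new StringReader(xml));
    }

    // New overload: a URI covers local files and network locations alike.
    public void Parse(Uri source)
    {
        using (Stream stream = WebRequest.Create(source).GetResponse().GetResponseStream())
        using (StreamReader reader = new StreamReader(stream))
        {
            Parse(reader);
        }
    }

    // Both overloads funnel into the TextReader-based core.
    private void Parse(TextReader input)
    {
        using (XmlReader reader = XmlReader.Create(input))
        {
            // ... walk the document and build the mapping here ...
        }
    }
}
```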
The other component, the Conversion Engine, is the workhorse of the Object Mapper. Its ObjectConversionEngine class is the public interface to conversion functionality, and it exposes methods with the following signatures:
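(The original listing is missing from this copy; judging from the description below, the signatures were plausibly along these lines, with invented method and parameter names:)

```csharp
// Construct a new target instance of the given runtime type.
public object Convert(object source, Type targetType);

// Generic overload: the extra type information enables a fallback
// to .NET type converters when all else fails.
public TTarget Convert<TTarget>(object source);

// Update an existing target object in place.
public void Convert(object source, object existingTarget);
```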
The first two signatures are used when constructing a new instance of the target object. The generic overload makes use of the extra type information to attempt to use .NET type converters when all else fails. The final signature is used when updating an existing target object.
When asked to convert an object, the engine first tries to locate a mapping that matches the source and target object types. If it finds a mapping, it invokes the mapping’s Command against the source object. Because the Command is a Composite, the invocation cascades down to each individual leaf element.
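As a rough sketch of that Composite shape (the interface and class names are my guesses, not the project’s actual ones):

```csharp
using System.Collections.Generic;

// Leaf commands copy a single value; composites merely delegate.
public interface IMappingCommand
{
    void Execute(object source, object target);
}

public class CompositeCommand : IMappingCommand
{
    private readonly List<IMappingCommand> children = new List<IMappingCommand>();

    public void Add(IMappingCommand child)
    {
        children.Add(child);
    }

    public void Execute(object source, object target)
    {
        // The invocation cascades: each child runs, and the leaves
        // do the actual copying.
        foreach (IMappingCommand child in children)
        {
            child.Execute(source, target);
        }
    }
}
```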
I wanted to mention two things about this process. Because we may need to create an object in a multi-step process, a Source can have multiple Targets. After each Target’s command is executed, the result is cached in the ObjectFactory so it can be injected by a later command.
However, it’s easy to imagine a situation where I may want to do something conditionally. For example, if I have an array of addresses on my source type, my destination type may have a PrimaryAddress property and a SecondaryAddresses array. In this case, I want the first address of the source to map to the PrimaryAddress property and the rest to go into the array.
For this, I need a new type of command, a ConditionalCommand. The XML looks like this:
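(The original markup didn’t survive in this copy; a hypothetical reconstruction for the address example, with invented element and attribute names, might read:)

```xml
<ConditionalCommand>
  <Condition left="index" operation="Equals" right="0">
    <Map source="Addresses[index]" target="PrimaryAddress" />
  </Condition>
  <Else>
    <Map source="Addresses[index]" target="SecondaryAddresses" />
  </Else>
</ConditionalCommand>
```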
I thought about how to implement this for a while. When I specified this feature, I’d envisioned using expression trees. When I found out I was restricted to C# 2.0, I even spiked a version using an IL generator to create methods programmatically using Reflection.Emit!
IL generation is not for the faint of heart, and I told myself there’s got to be a better way! What I wanted originally was a lambda, which, thanks to J.P. Boodhoo’s Nothin’ But .NET course, I knew was just syntactic sugar for delegates.
I defined a Condition delegate type:
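(The definition itself is missing from this copy; a plausible reconstruction, consistent with the closure-based predicates described below:)

```csharp
// A condition examines a source object and reports whether a command applies.
public delegate bool Condition(object source);

// Inside a method body, a C# 2.0 anonymous method stands in for the lambda:
// Condition isNull = delegate(object source) { return source == null; };
```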
And then I created a ConditionalCommandBuilder to create them. It’s really a Factory class, but I preferred the name “builder” here. It uses a fluent interface, so the If methods return a ConditionalCommandBuilder for chaining.
The GetPredicate() method returns a Predicate, which is a delegate defined in the .NET Framework. The operation variable defines what type of predicate to retrieve, such as Equals, Exists, or GreaterThan. The operand value given to the delegate is captured for later use.
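Putting the builder and the capture together, here is a compact sketch. Only the If chaining and the GetPredicate name come from my actual code; the other member names and the operation set are simplified for illustration:

```csharp
using System;

public class ConditionalCommandBuilder
{
    private string operation;
    private object operand;

    // Each If-style method records its piece of state and returns the
    // builder itself, which is what makes the chained calls possible.
    public ConditionalCommandBuilder If(string op)
    {
        operation = op;
        return this;
    }

    public ConditionalCommandBuilder WithOperand(object value)
    {
        operand = value;
        return this;
    }

    public Condition GetCondition()
    {
        // Resolve the predicate once; the Condition delegate closes over it.
        Predicate<object> test = GetPredicate(operation, operand);
        return delegate(object source) { return test(source); };
    }

    // Comparison operations such as GreaterThan are handled by a separate
    // CompareTo-based overload (next listing); only the simple cases appear here.
    private Predicate<object> GetPredicate(string operation, object operand)
    {
        switch (operation)
        {
            case "Equals":
                // "operand" is captured by the anonymous method, so it is
                // still on hand whenever the predicate eventually runs.
                return delegate(object value) { return object.Equals(value, operand); };
            case "Exists":
                return delegate(object value) { return value != null; };
            default:
                throw new ArgumentException("Unsupported operation: " + operation);
        }
    }
}
```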
Perhaps the most interesting GetPredicate() method is the one that retrieves CompareTo() results:
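(The listing itself is missing from this copy; reconstructed in spirit, it presumably looked something like this, with the expected sign of the comparison captured alongside the operand:)

```csharp
// One code path serves GreaterThan, LessThan, and the like: the caller
// supplies the sign it expects back from CompareTo().
private Predicate<object> GetPredicate(object operand, int expectedSign)
{
    return delegate(object value)
    {
        IComparable comparable = (IComparable)value;
        // Math.Sign normalizes CompareTo's result to -1, 0, or 1.
        return Math.Sign(comparable.CompareTo(operand)) == expectedSign;
    };
}
```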
The TypeResolver class is analogous to the MethodResolver class I talked about in a previous post, but it finds types by name instead of methods.
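A rough sketch of what such a resolver does; the fallback scan over loaded assemblies is my assumption about its mechanics:

```csharp
using System;
using System.Reflection;

public static class TypeResolver
{
    public static Type Resolve(string typeName)
    {
        // Type.GetType handles mscorlib types and assembly-qualified names.
        Type type = Type.GetType(typeName);
        if (type != null)
        {
            return type;
        }

        // Otherwise, probe every assembly loaded into the current AppDomain.
        foreach (Assembly assembly in AppDomain.CurrentDomain.GetAssemblies())
        {
            type = assembly.GetType(typeName);
            if (type != null)
            {
                return type;
            }
        }
        return null;
    }
}
```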
One advantage of coding with design patterns is that they isolate concerns. Once ConditionalCommands were being created correctly, the rest of the conversion engine worked like a charm!
When running some unit tests for the Object Mapper project, I found the test suite’s execution time had slowed down noticeably. I had been spoiled at my previous employer, where I had access to ANTS Profiler. So, I set about finding a performance profiling tool.
While there are some free profilers out there, the reviews on the web led me to believe they wouldn’t be up to the task. I was drawn to dotTrace because of its ReSharper integration. I decided to give it a go with their 30-day trial.
What you will read in forums is absolutely true: ANTS provides more data, but with its ReSharper integration, dotTrace is much easier to use. And I have learned that more data is not necessarily better.
It seemed that dotTrace could not trace execution once the code had been loaded into ReSharper’s test runner thread, so I had to write a quick console app to exercise the code I wanted to check.
dotTrace was able to pinpoint the area of code that plagued me, where I was doing some repetitive MethodInfo lookups. Some caching brought the performance back in line with what it had been.
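The fix was along these lines; this is a sketch, and the actual cache key and shape in the project may differ:

```csharp
using System;
using System.Collections.Generic;
using System.Reflection;

public static class MethodCache
{
    // Not thread-safe; fine for a single-threaded test run.
    private static readonly Dictionary<string, MethodInfo> cache =
        new Dictionary<string, MethodInfo>();

    public static MethodInfo GetMethod(Type type, string name)
    {
        string key = type.FullName + "." + name;
        MethodInfo method;
        if (!cache.TryGetValue(key, out method))
        {
            // The reflective lookup is the expensive part; do it only once.
            method = type.GetMethod(name);
            cache[key] = method;
        }
        return method;
    }
}
```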
During this tweaking process, I found dotTrace’s ability to compare test runs invaluable! I could see in percentage terms how much the caching improved things. I also removed the caching and watched the performance degrade as expected, so I could be sure it wasn’t coincidence. And I am happy to report that the Console.WriteLine statements in the console app were the most time-consuming piece of the process, so my engine ought to be able to withstand the load I understand it will receive in production.
I am strongly considering buying dotTrace when my trial expires. It’s a good tool to have in the toolbelt.