I left off last time talking about array transformation Commands, and mentioned they highlighted the troubles with inferring type information.
Remember that the parser can take the signature of a conversion Function from the mapping XML.
The reason all of the Convert methods do not take generics is for Java interoperability. Because the method can take any object, I need to lean heavily on the Reflection library to infer type information. (This is a situation where the new dynamic typing in C# 4.0 would have been invaluable!)
For example, let’s take the ElementSetter command. In IronRuby or C# 4.0, this would be a piece of cake: dynamic typing would let me read and set a property by name in a single line.
Instead, I have to use type metadata to do the work:
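```csharp
// A minimal sketch of the shape of ElementSetter (a reconstruction;
// the ElementInfo adapter it leans on is explained next):
public class ElementSetter
{
    // _source and _target are ElementInfo adapters, described below;
    // their ID property holds the name of the property to map.
    private ElementInfo _source;
    private ElementInfo _target;

    public void Execute(object sourceObject, object targetObject)
    {
        PropertyInfo sourceProperty =
            sourceObject.GetType().GetProperty(_source.ID);
        PropertyInfo targetProperty =
            targetObject.GetType().GetProperty(_target.ID);

        object value = sourceProperty.GetValue(sourceObject, null);
        targetProperty.SetValue(targetObject, value, null);
    }
}
```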
The ElementInfo class serves as an Adapter for the Reflection library’s PropertyInfo and FieldInfo classes, so that I can treat properties and fields the same throughout the rest of my code. The ID properties of the _source and _target variables contain the name of the property. Under the covers, the ElementInfo class just defers to the appropriate instance of either a PropertyInfo or FieldInfo class.
The logic behind finding the right MethodInfo object to represent a conversion function is a little more challenging. I wrote a MethodResolver class to handle the lookup of a method by its name. It can find both instance and static methods.
Another responsibility of the MethodResolver class is handling generic types. Using the Type.GetMethod() method, I can get a MethodInfo object. MethodInfo exposes a ContainsGenericParameters property; while it is true, the method has open generic parameters that need to be bound.
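Here’s a sketch of that binding step (the MethodResolver internals aren’t shown here, and these names are illustrative):

```csharp
public static MethodInfo Resolve(Type type, string methodName, object argument)
{
    // Finds both instance and static methods by name.
    MethodInfo method = type.GetMethod(methodName,
        BindingFlags.Public | BindingFlags.Instance | BindingFlags.Static);

    // While ContainsGenericParameters is true, the method has open
    // generic parameters; close them with the argument's runtime type.
    if (method.ContainsGenericParameters)
    {
        method = method.MakeGenericMethod(argument.GetType());
    }

    return method;
}
```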
The last challenge to mention in this post is the creation of objects. How can one create a new instance of an arbitrary type? Generics provide some useful constructs like default(T), but there is no convenient way to invoke this language feature outside of a generic method. And, as it turns out, default(T) doesn’t always give me the answer I want (for reference types, it’s just null).
I stumbled upon this in writing the array Command Decorator. I have an array of strings. I want to fill an array of integers by converting each member of the string array to an integer. But where do I get the new array from?
I tried a few casting solutions, but found that if I used an object[], for example, the objects inside the array would appear to lose their type information and be just objects, which caused problems later in the process.
I also tried to write something like:
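```csharp
// Roughly the shape of what I tried (a reconstruction):
public static T Create<T>() where T : new()
{
    // new T() is only legal because of the new() constraint above.
    return new T();
}
```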
I have an issue, though. This generic method must now have the new() constraint, which an array can’t satisfy (constructing an array requires a length), so I have to handle arrays as a special case.
I found that creating an ObjectFactory class simplified other areas of the code. The ObjectFactory, being a Factory, knows how to create new objects. It also holds a cache of previously created objects.
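A minimal sketch of the factory (the caching is omitted, and these names are illustrative):

```csharp
public class ObjectFactory
{
    public object Create(Type type, int length)
    {
        // Arrays have no parameterless constructor, so Activator
        // can't build them; they need an element type and a length.
        if (type.IsArray)
        {
            return Array.CreateInstance(type.GetElementType(), length);
        }

        return Activator.CreateInstance(type);
    }
}
```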
Activator lives in the System namespace, and it’s what the .NET Framework uses for instantiating objects in AppDomains. It works for my purposes, but as you can see, I did end up with some special handling of arrays in the end.
Having talked about some of the challenges of type inference, next post will discuss the conversion engine itself.
Recently, I talked about the generation gap I faced when considering elements at different levels of the hierarchy. I have a partial solution.
Recall I have the following hierarchy of objects: a Person has a Name, a Parent is a Person with an array of Children, and a Grandparent is a Parent with an array of Grandchildren.
So, I created the following test classes:
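```csharp
// A sketch reconstructed from the mappings discussed in this post;
// the exact member lists may differ from the real test classes.
public class Person
{
    public string Name;
}

public class Parent : Person
{
    public Person[] Children;
    public Person Firstborn;
    public string FirstbornsName;
    public Person[] Parents;
}

public class Grandparent : Parent
{
    public Person[] Grandchildren;
}
```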
Obviously, these test classes will never form the basis of a top-notch genealogy program, but they are adequate to serve my purpose.
And, with the changes I’m about to describe, I’m able to write a passing test like this:
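```csharp
// NUnit-style; the mapper field (_mapper) and its Convert API are
// illustrative stand-ins, not the post's actual code.
[Test]
public void ConvertsGrandparentToParent()
{
    Grandparent source = new Grandparent();
    source.Grandchildren = new Person[] { new Person() };
    source.Grandchildren[0].Name = "Walter";

    Parent target = (Parent)_mapper.Convert(source);

    // Multiple checks crammed into one test, as noted below.
    Assert.AreSame(source.Grandchildren[0], target.Firstborn);
    Assert.AreEqual("Walter", target.FirstbornsName);
}
```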
And, yes, I used some of my own family names for testing. And I crammed multiple tests into one for brevity.
The part of the design that made this painful before was my ElementInfo class. In the Reflection library, the PropertyInfo and FieldInfo classes are very similar, but their method signatures are slightly different to accommodate indexed properties. I wanted to avoid making this distinction throughout my code, so I created an Adapter class I called ElementInfo to provide the rest of my code a unified interface, exposing GetValue(), SetValue(), and a Type property.
However, it turns out my ElementInfo class had two responsibilities:
1. Make a distinction between a property and a field
2. Act as an accessor for a type (that is, to get and set values)
As a result, it was hard to extend. So, I decided to separate the ElementInfo class into two. During this exercise, it dawned on me that properties and fields are collectively called accessors, which lit the way for me to redesign this part of the domain model.
It was the ElementInfo constructor that was determining whether an accessor was a property or a field, so its logic got moved to an AccessorFactory. I re-ran my test suite, and everything passed (it was all green).
Once I did that, ElementInfo just had that decision logic in its own methods. Remembering that my goal was to extend this class to handle more types of accessors, I extracted an IAccessor interface and made ElementInfo implement it. (I use ReSharper, so refactorings like this are largely automated.) I then used the “Use Base Type Where Possible…” refactoring, so that as much as possible, I wasn’t using my old ElementInfo class.
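The extracted interface mirrors what ElementInfo already exposed (the exact signatures here are my reconstruction):

```csharp
public interface IAccessor
{
    Type Type { get; }
    object GetValue(object target);
    void SetValue(object target, object value);
}
```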
I then created Property and Field classes that implemented IAccessor, and reprogrammed the AccessorFactory to return Property and Field objects instead of ElementInfo objects. Re-ran the tests; all were green except the indexed property tests. So I deleted the ElementInfo class.
Now, I was in the position to create some new IAccessor implementations. The first I did was IndexedProperty, which you may recall I had put into ElementInfo last time. As I hoped, the only thing I needed to do to integrate it was add it to the Create() method on the AccessorFactory. Ran the tests; the suite was back to all green.
The first new implementation I needed was something I started out calling IndexableProperty, the idea being that it was a property whose type could be accessed with an indexer. However, it quickly became an ArrayProperty, because in my use case, the only examples of this are arrays. The analogous ArrayField followed shortly thereafter. With these in place, the Grandchildren[0] -> Firstborn mapping works.
What about Grandchildren[0].Name? The engine doesn’t understand dot notation yet. So I created an AccessorComposite implementation of IAccessor that chains multiple accessors together. It splits the accessor string on the dots and calls the AccessorFactory on each fragment. Now the Grandchildren[0].Name -> FirstbornsName mapping works.
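A sketch of the composite (a reconstruction, using the IAccessor shape from above):

```csharp
public class AccessorComposite : IAccessor
{
    private readonly IAccessor[] _accessors;

    public AccessorComposite(IAccessor[] accessors)
    {
        _accessors = accessors;
    }

    // The composite's type is the type of the last accessor in the chain.
    public Type Type
    {
        get { return _accessors[_accessors.Length - 1].Type; }
    }

    public object GetValue(object target)
    {
        // Grandchildren[0].Name reads Grandchildren[0], then Name on the result.
        object current = target;
        foreach (IAccessor accessor in _accessors)
        {
            current = accessor.GetValue(current);
        }
        return current;
    }

    public void SetValue(object target, object value)
    {
        // Walk to the next-to-last accessor, then set on the last one.
        object current = target;
        for (int i = 0; i < _accessors.Length - 1; i++)
        {
            current = _accessors[i].GetValue(current);
        }
        _accessors[_accessors.Length - 1].SetValue(current, value);
    }
}
```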
I still have a challenge ahead. The commented-out mapping, Name -> Parents[0].Name, still won’t work. When the AccessorComposite tries to set the value of Parents[0].Name, it fails because Parents hasn’t been initialized. I could create some code that would initialize the Parents array. If I did, the array of Parents would be { null }, and trying to get the value of null.Name doesn’t compute. I would need to have the array of Parents equal to { new Person() }, and then try to set that new Person’s name. For me, this is beyond the call of duty of an accessor representation!
I need to decide whether the user should explicitly populate Parents[0] before trying to set the name in the mapping, and if so how, or whether some other part of the engine should handle it. I’m leaning towards making it an explicit initialization, because having an array be null or empty might have meaning to the consumer of a converted object, and I don’t want to prevent the ability to return those special values.
Transforming arrays is a nice segue into some of the challenges of having to infer type information, which will be discussed in the next post.
Recently, I’ve described the object mapper’s domain model and illustrated that it’s still evolving by discussing the “generation gap”. In this post, I talk about the components of the object mapper.
There are two main components to the object mapper. Since the mapper is configured via XML, clearly I need something to read in the XML and initialize the domain model. Because I’m constrained to C# 2.0, I used an XmlReader. To handle malformed mapping XML documents, I wrote an XSD schema and validate incoming XML before parsing.
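Schema validation with XmlReader looks something like this (the file names are illustrative):

```csharp
XmlReaderSettings settings = new XmlReaderSettings();
settings.ValidationType = ValidationType.Schema;
settings.Schemas.Add(null, "ObjectMapper.xsd");

using (XmlReader reader = XmlReader.Create("mappings.xml", settings))
{
    // A validation error in the mapping document throws here,
    // before any domain objects get built.
    while (reader.Read())
    {
        // ...create domain objects from each node...
    }
}
```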
Each node triggers the creation of a domain object. Each Source and Target is implemented with the Composite pattern as an Element. Sources are composed of Targets, and Targets are composed of leaf elements like property mappings.
Each component of the Element Composite exposes an Execute method that calls a Command. There are two sets of commands: one working on objects, the other on elements (properties or fields). For example, there is an ObjectSetter, which simply sets the target to the source. Correspondingly, an ElementSetter sets the target property value to the source property’s value.
Other types of Commands include an ObjectConverter, which invokes the conversion Function on the source and sets the target object equal to the result, and an ElementInjector, which sets a target element equal to a value specified in the mapping XML.
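The common shape of these Commands, as I’ve described them (the interface and its signature are my reconstruction):

```csharp
public interface ICommand
{
    object Execute(object source, object target);
}

// The simplest Command: the result is just the source object itself.
public class ObjectSetter : ICommand
{
    public object Execute(object source, object target)
    {
        return source;
    }
}
```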
Array transformations are handled by way of a Decorator, which handles the invocation of the decorated Command on each element in the array.
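A sketch of that Decorator (the class name and details are illustrative):

```csharp
public class ArrayCommandDecorator : ICommand
{
    private readonly ICommand _inner;

    public ArrayCommandDecorator(ICommand inner)
    {
        _inner = inner;
    }

    public object Execute(object source, object target)
    {
        Array sourceArray = (Array)source;
        Array targetArray = (Array)target;

        // Apply the decorated Command to each element of the array.
        for (int i = 0; i < sourceArray.Length; i++)
        {
            object result = _inner.Execute(
                sourceArray.GetValue(i), targetArray.GetValue(i));
            targetArray.SetValue(result, i);
        }

        return targetArray;
    }
}
```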
Next post, I’ll talk about how I made some progress on bridging that generation gap.
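Say I have a hierarchy of objects shaped like this (a compact sketch):

```csharp
public class Person      { public string Name; }
public class Parent      : Person { public Person[] Children; }
public class Grandparent : Parent { public Person[] Grandchildren; }
```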
Hopefully, that notation isn’t too hard to read. By that, I mean to say that a Person has a Name, a Parent is a Person with an array of type Person called Children, and a Grandparent is a Parent with Grandchildren.
Let’s further say that I want to use my Object Mapper to convert a Grandparent object into a Parent object.
To probe the problem, I created a new test class and started to write tests. I determined I’d need something like the following XML:
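```xml
<!-- A sketch using the mapping vocabulary; [TBD] marks the open question. -->
<Source type="Grandparent">
  <Target type="Parent">
    <Element source="[TBD]" target="Name" />
  </Target>
</Source>
```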
The fly in the ointment is what to put in the [TBD] section. I want the name of, say, the first child.
However, even with all of the elements I introduced yesterday, I have no way of asking for elements of different “generations”, at different levels of the hierarchy. I have commands that can copy objects to objects and properties to properties, but not properties to objects or objects to properties.
Back to the TBD piece. It would be nice if it could read:
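```xml
<Element source="Children[0].Name" target="Name" />
```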
Children[0] is a C#-ish way of saying the first item in the array of Children, and the number is called the index of the array.
Of course, the test is red (i.e., doesn’t pass), because the Parent object doesn’t have a property called “Children[0].Name”. So far, so good.
This is really two problems in one. I decided to defer the issue of calling a property on a property (nested properties) by ignoring the Name part of the puzzle. So, I created an extra property on Parent called Firstborn, which I’ll try to populate like this:
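```xml
<!-- Defer the Name part: map the first child straight to Firstborn. -->
<Element source="Children[0]" target="Firstborn" />
```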
Failing test in hand, I applied myself to getting the test to pass. As I’ll explain more in a future post, I have an Adapter class called ElementInfo that allows me to treat fields and properties the same. It’s responsible for resolving properties, so I added support to the ElementInfo object to resolve indexed properties as well.
I re-ran my test. Still red.
After some debugging and head-scratching, I realized that what I really need is not support for indexed properties, but support for a property of a type that itself supports indexing, like Children which is an array.
To wit:
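```csharp
// An indexed property: the indexer is declared on the type itself
// (this class is purely illustrative):
public class Brood
{
    private Person[] _members;

    public Person this[int index]
    {
        get { return _members[index]; }
        set { _members[index] = value; }
    }
}

// ...versus what I actually have: an ordinary member like Children,
// whose type (Person[]) is the thing that supports indexing.
```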
So, once more into the breach, dear friends!
I now return you to the scheduled posts in the series while I resolve this. The next post will talk some about the XML parser.
The request and response objects that require mapping are plain-old CLR objects with properties for the most part, though there are some arrays to contend with. The challenge is that the request and response objects do not have the same structure.
Many of the properties have a one-to-one correspondence. For example, a Name field on one request object may align with a Name property on the DataStore request. In some cases, though, the property names are different.
Other properties share a one-to-many association, so the contents of the source object must be copied to several target properties. And yet other pesky properties share a many-to-one association. For example, a Java application’s request might have address, city, state, and ZIP code as separate fields, but they must be combined into a single property of the DataStore request.
Here’s an example of a simple mapping document for a Foo object with Name and Age properties:
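```xml
<!-- A sketch; the target type name "Bar" is illustrative. -->
<Source type="Foo">
  <Target type="Bar">
    <Element source="Name" target="Name" />
    <Element source="Age" target="Age" />
  </Target>
</Source>
```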
There is a Source, which describes a source object. That Source object can be converted into a Target object. Objects have Elements, which describe either properties or fields. Each element has its own source and target attributes to describe what property of the source corresponds to what property of the target.
So far, so good. Let’s say, though, that the source request object doesn’t define an Age property, but it’s required on the target request. For cases like this, I need the capability to inject a value:
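```xml
<!-- The Inject node's attribute names here are illustrative. -->
<Source type="Foo">
  <Target type="Bar">
    <Element source="Name" target="Name" />
    <Inject target="Age" value="37" />
  </Target>
</Source>
```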
The Inject node allows me to specify the value I’d like to inject, in this case any target object would have an Age of 37.
This poses a challenge, because that “37” is a string by virtue of being XML. .NET doesn’t offer a way to implicitly make that “37” into the number 37. So, I need a way to convert primitive objects (objects with no properties):
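```xml
<!-- A sketch; the Function node's attributes are illustrative. -->
<Inject target="Age" value="37">
  <Function type="System.Int32" name="Parse" />
</Inject>
```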
The Function node defines a method to invoke when converting the object. It’s a valid child of the Element node as well, so a property can also be converted by a function.
Functions can refer to instance methods, which are called against an instance of a type. They can also refer to static methods, which belong to the type itself. One can even specify arguments, for cases like Substring where I might only want the first few letters of a string:
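```xml
<!-- The argument syntax here is illustrative. -->
<Element source="Name" target="ShortName">
  <Function name="Substring">
    <Argument value="0" />
    <Argument value="3" />
  </Function>
</Element>
```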
Sometimes, I may want to convert an array of one type to an array of another by converting each of its members individually by use of the ApplyToEachElement flag:
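```xml
<!-- A sketch: each string in Codes is parsed into the Numbers array. -->
<Element source="Codes" target="Numbers">
  <Function type="System.Int32" name="Parse" ApplyToEachElement="true" />
</Element>
```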
Here, the conversion engine will take an array of strings and convert it to an array of integers by calling int.Parse with each string as an argument.
That introduces most of the domain concepts of the object mapper. Next post will delve into the components that comprise the object mapper itself.